\section{Introduction} \label{sec:intro} Eclipsing binaries (EBs) are key sources for determining stellar parameters with high precision. One interesting class of EBs is that of contact binary stars (CBs). These are low-temperature systems whose components share a common convective envelope. Due to the contact geometry, the component temperatures are almost the same and the light curves mostly show primary and secondary minima of nearly equal depth \citep{1941ApJ....93..133K, 1967PASP...79..395K, 1967AJ.....72S.309L, 1968ApJ...151.1123L}. The W UMa-type CBs (EWs) are particularly interesting, first because they are more abundant than other types of EBs \citep{1948HarMo...7..249S}, and secondly because the closeness of their components allows us to directly observe the interaction between the components and their atmospheres. Their orbital period is less than a day, and both components of an EW are located on or just above the main sequence, with spectral type later than F \citep{1967PASP...79..395K, 1970PASJ...22..317O, 1972MNRAS.157..433M, 2005MNRAS.357..497B}. In most EWs, the deeper primary minimum occurs when the larger, more massive component passes in front of the smaller, less massive one; however, the reverse can also occur in some cases. EWs are further divided into A- and W-subtypes \citep{1970VA.....12..217B}. The A-subtype systems are of earlier spectral type and have higher mass and luminosity than the W-subtypes. In A-subtype systems, the mass ratio ($M_2/M_1$) is generally less than 0.5 and only moderate or insignificant activity is observed. In W-subtype systems, the less massive component is hotter and the period changes continuously with time \citep{1970VA.....12..217B, 1973AcA....23...79R}. Many previous studies explain the origin of CBs from short-period detached EBs (DEBs) \citep[e.g.,][]{1989A&A...220..128V, 2004MNRAS.355.1383L}. The loss of angular momentum (AML) due to magnetic braking is assumed to be the leading formation mechanism for CBs \citep{2007ApJ...662..596L}. The ejection of mass due to magnetic activity can decrease the orbital or spin angular momentum, which can bring the two components closer to each other \citep{1966AnAp...29..331H, 1970PASJ...22..317O, 1982A&A...109...17V}. If AML continues even after the contact phase, it can result in mass transfer between the components. The evolution of EWs depends upon AML, mass loss and mass transfer between the two components \citep{2012AcA....62..153S, 2013MNRAS.430.2029Y}. Analysing the LAMOST data for 7938 EWs, \cite{2017RAA....17...87Q} determined the parameters of CBs, e.g., surface gravity (log~g), metallicity, temperature and radial velocity, and found that about 80\% of EWs have metallicity below zero, which implies that EWs are old-population systems. Many EWs are found to be magnetically active due to the dynamo mechanism, and the presence of a magnetic field affects their evolution \citep{1967PASP...79..395K, 2008MNRAS.389.1722E}. Most EWs show asymmetric light curves (LCs), i.e., a difference in brightness between phases 0.25 and 0.75. This is generally explained by the presence of cool or hot spots on the stellar surface and is known as the O'Connell effect \citep{1951PRCO....2...85O}. However, the amount of this asymmetry can change over the course of time due to the evolution and migration of spots on the stellar surface. In this work, we present a multi-band photometric and low-resolution spectroscopic analysis of four EWs. The targets were chosen from the Catalina Real-Time Transient Survey (CRTS), which provides a catalog of $\sim47,000$ periodic variables \citep{2014yCat..22130009D}.
Out of these variables, $\sim31,000$ are classified as contact or ellipsoidal binaries. The systems J0158 ($\alpha_{2000}=01^{h}58^{m}29^{s}.5$, $\delta_{2000}=+26^{\circ}03^{\prime}33^{\prime\prime}$), J0305 ($\alpha_{2000}=03^{h}05^{m}05^{s}.1$, $\delta_{2000}=+29^{\circ}34^{\prime}43^{\prime\prime}$), J1022 ($\alpha_{2000}=10^{h}22^{m}11^{s}.7$, $\delta_{2000}=+31^{\circ}00^{\prime}22^{\prime\prime}$) and KW Psc ($\alpha_{2000}=22^{h}58^{m}31^{s}.7$, $\delta_{2000}=+05^{\circ}52^{\prime}23^{\prime\prime}$) are EWs with approximate periods of 0.227665, 0.246984, 0.2584680 and 0.234276 day, respectively, as reported in the CRTS catalog. The list of targets and related information is given in Table~\ref{tar_info}. \input{tab01.tex}

The paper is structured as follows: information about the photometric and spectroscopic observations is given in Section~\ref{Data}. The period estimation and period change are discussed in Section~\ref{orpe}, followed by the photometric analysis in Section~\ref{Ana}. The procedure used to determine the physical parameters is described in Section~\ref{phy_para}. The spectroscopic analysis of these EWs is presented in Section~\ref{ch_ac}. The final results are discussed in Section~\ref{discu}.

\section{Observations}\label{Data} \subsection{Photometry}\label{Photo} The photometric observations of these targets were acquired with the 1.3-m DFOT, Nainital, employing a $2k\times2k$ CCD detector with a field of view of $\sim 18^{\arcmin}\times18^{\arcmin}$. As the observations were carried out on different nights with varying moon illumination, the exposure time varied across the frames. The total numbers of frames collected for J0158, J0305, J1022 and KW Psc were around 140, 200, 85 and 130, respectively, in each band ($VR_{c}I_{c}$). The observing log of the photometric observations is given in Table~\ref{log_phot}. \input{tab02.tex} \input{tab03.tex} \begin{figure*}[!ht] \begin{center} \includegraphics[width=17cm,height=7cm]{fig01.pdf} \caption{The $VRI$-band observed LCs of the sources. The different symbols show different dates of observation.} \label{lc_obs} \end{center} \end{figure*} All the pre-processing steps, such as bias subtraction, flat fielding and cosmic-ray removal, were carried out using IRAF routines. The instrumental magnitudes of the target and comparison stars were computed by aperture photometry using DAOPHOT \citep{1992ASPC...25..297S}. Initially, five nearby field stars with brightness similar to that of our targets were selected for preparing differential LCs. On the basis of the differential LCs (target star $-$ comparison star and comparison star $-$ check star), the most appropriate comparison and check stars were selected. For J0158, J0305, J1022 and KW Psc, we used TYC 1760-1359-1, TYC 1795-913-1, TYC 2510-242-1 and TYC 575-86-1 as comparison stars, respectively. The observed LCs in the $VRI$ bands are shown in Figure~\ref{lc_obs}.

\subsection{Spectroscopy}\label{Spec} The Large sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST) is a 4-m aperture telescope with a field of view (FoV) of $5^{\circ} \times 5^{\circ}$. Such a large FoV, combined with 4000 fibers, makes it a highly efficient tool for spectroscopy. It covers a spectral range of 370 nm to $\sim$900 nm with a spectral resolution of 1 nm to $\sim$0.25 nm. The use of different gratings and camera positions can give a resolution (R) in the range 1000--5000 \citep{2015RAA....15.1095L}. All four targets were observed in the LAMOST survey.
The data were downloaded from the LAMOST website\footnote{http://dr5.lamost.org/} for J0158 (one spectrum from DR3), J0305 (three spectra from DR3 at different epochs), J1022 (three spectra, one each from DR1, DR2 and DR3) and KW Psc (one spectrum from DR1). The parameters listed in the LAMOST database for these sources are given in Table~\ref{tar_lamost}. The spectral types were re-estimated using PyHammer, which uses an empirical stellar spectral library with spectral types ranging from O5 to L3 and metallicities ranging from $-2.0$ dex to $+1.0$ dex, covering a spectral range of 365 to 1020 nm \citep{2017ApJS..230...16K, 2020ascl.soft02011K}. In addition to the LAMOST spectroscopic data, the Himalaya Faint Object Spectrograph Camera (HFOSC) on the 2-m Himalayan Chandra Telescope (HCT) was also used for observations. The observing log for these observations is given in Table~\ref{log_spec}. A combination of the Gr7 and Gr8 grisms was used for the observations: Gr7 has a spectral range of 380--684 nm and a resolution of 1330, while Gr8 provides a wavelength range of 580--835 nm with a resolution of 2190. FeAr and FeNe arc lamps were used for the wavelength calibration of the Gr7 and Gr8 spectra, respectively. The spectroscopic data were reduced with the IRAF package, and the reduced, wavelength-calibrated spectra were normalized for further analysis. \input{tab04.tex} \begin{figure*}[!ht] \begin{center} \includegraphics[width=15cm, height=7cm]{fig02.pdf} \caption{Power spectra of the four binary systems obtained using Period04. The power spectra obtained using SuperWASP data (for J0158, J0305 and J1022) and CRTS data (for KW Psc) are over-plotted.} \label{periodo} \end{center} \end{figure*}

\section{Orbital Period}\label{orpe} The temporal variation in the orbital period of CBs provides useful information about the mass transfer rate, the presence of a third body and other characteristics. Although \cite{2014yCat..22130009D} give the approximate periods of these systems, the periods were re-determined with the present data using the Period04 software \citep{2004IAUS..224..786L}. Figure~\ref{periodo} shows the power spectra corresponding to all four sources \textbf{using the present data (green color) and archival data (black color)}. The phase-folded LCs corresponding to these peaks were plotted and visually inspected. While for the systems J0305 and KW Psc the best phase-folded LCs were achieved with the highest peaks of the power spectra, for J0158 and J1022 it was peaks lying close to the maximum peak that gave the best phase-folded LCs. As the LC of a CB is represented by two sine waves per orbital cycle, the actual period of the system is twice the period obtained from the periodogram. The periods for J0158, J0305, J1022 and KW Psc are therefore found to be 0.447273, 0.246982, 0.258484 and 0.234298 days, respectively. \textbf{Since the power spectra shown in Fig.~\ref{periodo} for each star are affected by strong side-lobes due to our short observing runs, we also obtained periodograms using SuperWASP data for J0158, J0305 and J1022 and CRTS data for KW Psc, which are over-plotted in Fig.~\ref{periodo}. These periodograms show that the periods obtained with the present data are very close to those determined from the archival data. We further verified our estimated periods from Period04 using a Python periodogram based on the Lomb-Scargle method \citep{1976Ap&SS..39..447L, 1982ApJ...263..835S} and found similar values}.
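For reference, the Lomb-Scargle cross-check mentioned above can be reproduced with a few lines of Python; the sketch below uses \texttt{astropy}, with the input file name a placeholder for an observed time series rather than our actual data files.
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

# HJD and differential magnitude; the file name is a placeholder.
time, mag = np.loadtxt("lightcurve.dat", unpack=True)

# EW light curves show two minima per orbit, so the strongest
# periodogram peak corresponds to half the orbital period.
frequency, power = LombScargle(time, mag).autopower(
    minimum_frequency=1.0, maximum_frequency=12.0)  # cycles/day

p_phot = 1.0 / frequency[np.argmax(power)]
print("photometric period:", p_phot, "d")
print("orbital period    :", 2.0 * p_phot, "d")
\end{verbatim}
Folding the data on twice the best photometric period and inspecting the resulting LC, as done above, guards against the half-period alias.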
While for the latter three systems the newly estimated periods are close to the earlier periods given by \cite{2014yCat..22130009D}, for J0158 the newly estimated period is almost twice that reported by \cite{2014yCat..22130009D}. The estimated period of J0158 is, however, a good match to those reported by \cite{2018yCat..22370028C} and \cite{2019yCat..51560241H}. We reanalyzed the CRTS time-series data used by \cite{2014yCat..22130009D} and found that, when the LC is represented by two sine waves, the power spectrum of J0158 indeed gives a period of 0.45 day. The times of minima (TOMs) of the primary and secondary eclipses were estimated with the help of the Minima27 software\footnote{R.H. Nelson, www.variablestarssouth.org/resources/bob-nelson/} using the method of \cite{1956BAN....12..327K}. To examine the period change, we searched the literature for multi-epoch photometric data or any available TOM information for these sources. Surveys like the Catalina Sky Survey (CSS; \citealt{2014yCat..22130009D}), the Wide Angle Search for Planets (SuperWASP; \citealt{2010A&A...520L..10B}), the Northern Sky Variability Survey (NSVS; \citealt{2004AJ....127.2436W}), the All-Sky Automated Survey for Supernovae (ASAS-SN; \citealt{2018MNRAS.477.3145J}) and others provide a good database of photometric data. Three of the sources (J0158, J0305 and J1022) were observed in these surveys, but with poor cadence. For the systems J0305 and J1022, we were able to find half or complete LCs on different days in the SuperWASP data, as their periods are around 0.25 day, whereas for J0158 we could get only half LCs on different days, as its period is $\sim$10 hr. We also constructed the LCs of these three sources from the CSS multi-epoch data, although the CSS time resolution is lower than that of SuperWASP. The system KW Psc was not observed in any of the above surveys, although we found 19 TOMs for this system on the O-C gateway\footnote{http://var2.astro.cz/ocgate/}. In the following subsections, we analyze the four sources individually using their accumulated data.

\subsection{J0158}\label{J0158_per_stu} \begin{figure*}[!ht] \begin{center} \subfigure{\includegraphics[width=8cm,height=7cm]{fig03a.pdf}} \subfigure{\includegraphics[width=8cm,height=7cm]{fig03b.pdf}} \caption{O-C diagrams for (a) J0158 and (b) J0305 with quadratic regression. The lower panels show the residuals of the fits.} \label{oc_0158_0305} \end{center} \end{figure*} For J0158, a total of 27 TOMs (21 from the SuperWASP data, 4 from the ASAS data and 2 from our data) were determined; a sample is given in Table~\ref{OC_info}. The updated linear ephemeris is estimated as: \begin{equation} \label{li_58} HJD_{o}=2453229.6847(\pm0.0012)+0.4553331(\pm0.0000002)\times E \end{equation} Here, $HJD_{o}$ represents the TOM corresponding to the primary minimum and $E$ is the epoch number. The quadratic fit to the $(O-C)_{1}$ values, shown in Figure~\ref{oc_0158_0305} (a), displays an upward parabolic variation that can be represented by the following equation: \begin{equation} \label{qu_58_oc} \begin{aligned} (O-C)_{1} &=0.00104(\pm0.00118)-1.46736(\pm0.90607)\times 10^{-6} \times E \\ &+ 1.28313(\pm0.77219) \times 10^{-10} \times E^{2} \end{aligned} \end{equation} This trend suggests a continuous increase in the period of J0158.
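The quadratic regressions used for the O-C diagrams in this section can be reproduced with standard tools; the following minimal Python sketch (the file name is a placeholder for the epochs and $(O-C)_{1}$ values of Table~\ref{OC_info}) fits the parabola and returns the coefficient uncertainties.
\begin{verbatim}
import numpy as np

# Epoch numbers and (O-C)_1 values in days; placeholder file name.
epoch, oc = np.loadtxt("oc_table.dat", unpack=True)

# Least-squares quadratic fit (O-C)_1 = c0 + c1*E + c2*E^2, with
# the parameter covariance matrix for the uncertainties.
coeffs, cov = np.polyfit(epoch, oc, deg=2, cov=True)
c2, c1, c0 = coeffs              # highest power is returned first
errs = np.sqrt(np.diag(cov))

print("quadratic coefficient Q =", c2, "+/-", errs[0], "d/cycle^2")
\end{verbatim}
The coefficient of $E^{2}$ is the quantity that enters the period-change rates derived below.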
Accordingly, the modified quadratic ephemeris can be expressed as: \begin{equation} \label{qu_58} \begin{aligned} HJD_{o} &=2453229.6859(\pm0.0015)+0.455329(\pm0.000001) \times E \\ &+ 3.28(\pm1.07) \times 10^{-10} \times E^{2} \end{aligned} \end{equation} On the basis of the above equation, the rate of period increase is estimated as $5.26(\pm1.72)\times10^{-7}~days~yr^{-1}$ for the system J0158. The change in the orbital period of contact binaries is normally due to mass transfer or mass loss from one component to the other, which can be calculated from the relation given by \cite{1958BAN....14..131K}: \begin{equation} \label{matr} \frac{1}{M_{1}}\dfrac{dM_{1}}{dt}=\dfrac{q}{3(1-q)}\dfrac{1}{P}\dfrac{dP}{dt} \end{equation} Here, $q$ is the mass ratio defined by $M_2/M_1$. The above equation implies that, for a system with increasing period, $dM_{1}$ will be negative if $q>1$ and positive if $q<1$; if the period of the system is decreasing, then $q>1$ results in a positive $dM_{1}$ and vice-versa. A negative $dM_{1}$ corresponds to mass transfer from the primary component to the secondary component. The positive period change rate of J0158, together with $q<1$ as determined in Section~\ref{Ana}, suggests that mass transfer is taking place from the secondary to the primary component. Using the above equation, the mass transfer rate for J0158 is estimated as $9.866\times10^{-7}~M_{\odot}~yr^{-1}$. The $M_{1}$ used in the above equation is determined in Section~\ref{phy_para}. \input{tab05.tex} \begin{figure*}[!ht] \begin{center} \subfigure{\includegraphics[width=8cm, height=7cm]{fig04a.pdf}} \subfigure{\includegraphics[width=8cm, height=7cm]{fig04b.pdf}} \caption{O-C diagrams for (a) J1022 and (b) KW Psc with quadratic and linear regression, respectively. The lower panels show the residuals.} \label{oc_1022_Psc} \end{center} \end{figure*}

\subsection{J0305}\label{J0305_per_stu} For J0305, we were able to estimate 41 TOMs, comprising 32 from SuperWASP, 1 from CSS, 4 from ASAS and 4 from our data. The corresponding (O-C) diagram with a quadratic fit is shown in Figure~\ref{oc_0158_0305} (b). Like J0158, this system also shows an upward parabolic trend. The updated linear ephemeris for J0305 is given by: \begin{equation} \label{li_05} HJD_{o}=2454085.436(\pm0.017)+0.246983(\pm0.000002)\times E \end{equation} The modified quadratic ephemeris for J0305 was determined as: \begin{equation} \label{qu_05} \begin{aligned} HJD_{o} &=2454085.418(\pm0.023)+0.246974(\pm0.000008) \times E \\ &+ 6.02(\pm5.14) \times 10^{-10} \times E^{2} \end{aligned} \end{equation} Similarly, the second-order polynomial fitted to $(O-C)_{1}$, shown in Figure~\ref{oc_0158_0305} (b), is: \begin{equation} \label{qu_05_oc} \begin{aligned} (O-C)_{1} &=0.02323(\pm0.00020)-1.14030(\pm0.07074)\times 10^{-6} \times E \\ &+ 3.01937(\pm0.45643) \times 10^{-11} \times E^{2} \end{aligned} \end{equation} Using the quadratic ephemeris, we find that the rate of period change for J0305 is $1.78(\pm1.52)\times10^{-6}~days~yr^{-1}$. We used Equation~\ref{matr} to determine the mass transfer rate in J0305, which is found to be $1.001\times10^{-6}~M_{\odot}~yr^{-1}$. The increasing period and $q<1$ for J0305 indicate mass transfer from the secondary to the primary component.

\subsection{J1022}\label{J1022_per_stu} For J1022, we estimated 22 TOMs, which include 16 from SuperWASP, 4 from ASAS and 2 from our data. The (O-C) diagram for J1022 is shown in Figure~\ref{oc_1022_Psc} (a). The quadratic fit shows a downward parabolic variation (the conversion from quadratic coefficient to period-change rate is made explicit below).
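For transparency, we spell out the conversion from a quadratic ephemeris to the period-change rate, a standard step that is implicit in the rates quoted in this section:
\begin{equation*}
HJD_{o}=T_{0}+P_{0}\,E+Q\,E^{2}\;\Longrightarrow\;P(E)=\frac{d(HJD_{o})}{dE}=P_{0}+2QE\;\Longrightarrow\;\frac{dP}{dt}\simeq\frac{2Q}{P_{0}}\times365.25~\mathrm{days~yr^{-1}},
\end{equation*}
since $1/P_{0}$ orbital cycles elapse per day. As a check, for J0158 this gives $2\times3.28\times10^{-10}/0.455329\times365.25\simeq5.3\times10^{-7}~\mathrm{days~yr^{-1}}$, in agreement with the rate quoted above; the same conversion reproduces the rates quoted for J0305 above and for J1022 below. Inserting the resulting rate into Equation~\ref{matr}, with $q$ from Section~\ref{Ana} and $M_{1}$ from Section~\ref{phy_para}, likewise reproduces the quoted mass-transfer rates.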
This means that the period of J1022 is decreasing with time. The updated linear ephemeris for J1022 is found to be: \begin{equation} \label{li_22} HJD_{o}=2458564.233(\pm0.033)+0.258484(\pm0.000002)\times E \end{equation} From the (O-C) diagram, the updated quadratic ephemeris for J1022 is: \begin{equation} \label{qu_22} \begin{aligned} HJD_{o} &=2458564.170(\pm0.039)+0.258455(\pm0.000011) \times E \\ &-1.494(\pm0.593) \times 10^{-9} \times E^{2} \end{aligned} \end{equation} The rate of period decrease, according to the above equation, is found to be $4.22(\pm1.67)\times10^{-6}~days~yr^{-1}$. The decrease in period can be attributed to AML via magnetic braking, gravitational wave radiation (GWR) or mass loss/transfer. We estimated the period decay rates due to GWR and magnetic braking using the equations given by \cite{1962ApJ...136..312K} and \cite{1988ASIC..241..345G}, respectively. The period decrease rate due to GWR corresponds to $2.486\times 10^{-16}~days~yr^{-1}$, which is very small in comparison to the observed rate. The period decrease due to magnetic braking is found to be $7.005\times 10^{-8}~days~yr^{-1}$, which is $\sim2\%$ of the observed period decay rate. Therefore, the most plausible mechanism behind the observed period change is mass transfer between the two components. We obtained a mass transfer rate of $2.467\times10^{-6}~M_{\odot}~yr^{-1}$ from the secondary to the primary, which can explain the period change in the system J1022.

\subsection{KW Psc}\label{KWPsc_per_stu} As mentioned earlier, KW Psc was not observed in any of the above surveys except ASAS. From previous studies \citep{2010IBVS.5922....1G, 2010IBVS.5920....1D, 2011IBVS.5960....1D, 2012IBVS.6011....1D, 2013IBVS.6042....1D}, 23 TOMs were collected for KW Psc. From these values, the updated linear ephemeris is estimated as: \begin{equation} \label{li_Psc} HJD_{o}=2455014.845(\pm0.008)+0.234278(\pm0.000001)\times E \end{equation} The (O-C) diagram for KW Psc is shown in Figure~\ref{oc_1022_Psc} (b), with the residuals of the fit in the lower panel. The $(O-C)_{1}$ for KW Psc can be written as: \begin{equation} \label{qu_Psc} (O-C)_{1} =-0.00195(\pm0.00030)-2.47017(\pm0.04911)\times 10^{-6} \times E \end{equation} The (O-C) diagram is a straight line with a small negative slope, which means that the period has remained almost constant over at least 12 years, from 2007 to 2019.

\section{Photometric Analysis}\label{Ana} For the photometric analysis of the LCs in different bands, we used the PHOEBE-1.0 (PHysics Of Eclipsing BinariEs) package \citep{2005ApJ...628..426P}. It is an open-source modeling program based on the Wilson-Devinney code \citep{1971ApJ...166..605W} for computing theoretical photometric and radial velocity curves of binary systems. It can work with two different minimization algorithms, namely differential corrections and the Nelder \& Mead simplex. In the present analysis, the differential-corrections minimization algorithm was used. As the present systems are reported to be EWs, the ``overcontact binary not in thermal contact'' mode was used during the photometric analysis.

\subsection{Effective Temperature}\label{teff} Although these sources were selected from \cite{2014yCat..22130009D}, they were also observed in other surveys such as SuperWASP, ASAS, KELT, 2MASS and NSVS, as discussed in Section~\ref{orpe}. Their magnitudes in the $B$, $V$, $J$, $H$ and $K$ bands were collected from the available archival catalogs.
We calculated the effective temperature ($T_{eff}$) using the $(J-H)$--$T_{eff}$ relation from \cite{2007MNRAS.380.1230C}, given below: \begin{equation} T_{eff}=-4369.5(J-H)+7188.2 \end{equation} Here, the $(J-H)$ color index is taken from 2MASS and $T_{eff}$ represents the effective temperature of the star. For J0158, J0305, J1022 and KW Psc, the $T_{eff}$ determined using the above equation is 6140 ($\pm$105), 4829 ($\pm$105), 5440 ($\pm$118) and 5047 ($\pm$138) K, respectively. The $T_{eff}$ was also calculated with the $(B-V)_{o}$--$T_{eff}$ relations given by \cite{1994ApJ...434..277W} and \cite{2010AJ....140.1158T}. The $T_{eff}$ values obtained from the different relations, as well as those provided by the LAMOST survey, are listed in Table~\ref{all_temp}. It can be seen from the table that the $T_{eff}$ values obtained using different methods are very similar for all sources except J0305. Finally, we calculated the average temperature and used it as the $T_{eff}$ of the primary component during the LC model fitting. \input{tab06.tex} \begin{figure*}[!ht] \begin{center} \includegraphics[height=7cm,width=14cm]{fig05.pdf} \caption{The estimation of the q-parameter for the EWs marked at the top right corner of each panel.} \label{qsearch} \end{center} \end{figure*}

\subsection{q-search and Modeling}\label{q_param} The accurate determination of the mass ratio requires multi-epoch radial velocity (RV) information for each component. In the absence of RV data, the q-search technique was used to estimate the $q$ parameter from the photometric data \citep[e.g.,][]{2016RAA....16...63J, 2017RAA....17..115J}. In this process, we fixed the gravity darkening coefficients as $g_{1}=g_{2}=0.32$ and the bolometric albedos as $A_{1}=A_{2}=0.5$, assuming that these EWs have convective envelopes. The limb darkening coefficients were interpolated automatically by the program from the tables of \cite{1993AJ....106.2096V} using the square-root limb-darkening law. We set the $T_{eff}$ of the primary as determined in Section~\ref{teff}. We then varied the $q$ parameter from 0.1 to higher values in steps of 0.02--0.05 and ran the PHOEBE program for each value of $q$ (a schematic sketch of this loop is given below). In this process, the other parameters, i.e., the secondary $T_{eff}$, the primary component surface potential ($\Omega_{1}$), the primary component luminosity ($L_{1}$) and the inclination ($i$), were set as free parameters. The sum of squared residuals ($\Sigma res^2$) of the best fit at each $q$ is plotted versus the corresponding $q$ in Figure~\ref{qsearch}, which shows that the solution converges at a specific value of $q$, corresponding to the minimum $\Sigma res^2$, for each system. The $q$ is estimated as 0.55($\pm$1), 0.44($\pm$2), 3.40($\pm$1) and 0.44($\pm$1) for J0158, J0305, J1022 and KW Psc, respectively. The best $q$ and the corresponding parameters obtained in the q-search are initial estimates. Figure~\ref{qsearch} shows that the q-search gives a wide range of nearly equiprobable $q$ values for J0305, J1022 and KW Psc. The final parameters, their associated errors and the uniqueness of these solutions were therefore explored with the help of the PHOEBE scripter. The scripter was run for 15000 iterations with differential-corrections minimization. All the parameters, e.g., $i$, $q$, the secondary $T_{eff}$, $\Omega_{1}$, $\Omega_{2}$ and $L$, were set free, with initial values obtained during the q-search process. After every 50 iterations, a kick of $\pm5\%$ was applied to all parameters; the fit re-converged to the minimum within 5--10 iterations after each kick. The output was saved after each iteration.
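Schematically, the q-search stage described above proceeds as in the following Python sketch. The function \texttt{model\_residuals} is a hypothetical stand-in, not a PHOEBE API call, for one differential-corrections run with $q$ fixed and the secondary $T_{eff}$, $\Omega_{1}$, $L_{1}$ and $i$ free; it is replaced here by a toy parabola so that the sketch is self-contained.
\begin{verbatim}
import numpy as np

def model_residuals(q):
    """Hypothetical stand-in for a PHOEBE run at fixed mass ratio q,
    returning the sum of squared residuals of the converged fit.
    A toy parabola keeps this sketch runnable end to end."""
    return (q - 0.55)**2 + 0.02

# Trial mass ratios, stepped as described in the text.
q_grid = np.arange(0.10, 5.00, 0.05)
res = np.array([model_residuals(q) for q in q_grid])

q_best = q_grid[np.argmin(res)]   # initial estimate, refined later
print("q-search minimum at q =", round(float(q_best), 2))
\end{verbatim}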
The final values were determined by Gaussian fitting of the histograms of these iteration results; Figure~\ref{uniq} shows some of the Gaussian-fitted histograms. For all four systems, the estimated parameters were very similar to the best-fit parameters obtained during the q-search process. After the Gaussian fitting, the final $q$ values for J0158, J0305, J1022 and KW Psc are found to be 0.67, 0.31, 3.23 and 0.42, respectively. \begin{figure*}[!ht] \begin{center} \subfigure{\includegraphics[width=6cm, height=4cm]{fig06a.pdf}} \subfigure{\includegraphics[width=6cm, height=4cm]{fig06b.pdf}}\vspace{-0.3cm} \subfigure{\includegraphics[width=6cm, height=4cm]{fig06c.pdf}} \subfigure{\includegraphics[width=6cm, height=4cm]{fig06d.pdf}} \caption{Histograms obtained for four parameters using heuristic scanning and parameter kicking.} \label{uniq} \end{center} \end{figure*} Heuristic scanning was used to check the stability of the adopted solution in the nearby parameter space. About 50--60 values of $q$ and $i$ within $\pm5\%$ of the values obtained above were used to generate a grid of $\sim$2500--3000 perturbed models. Figure~\ref{scan} shows the color map of $q$ versus $i$ for these models: a 2-D histogram representing the variation of $\chi^{2}$ in the $q$--$i$ parameter space obtained by the heuristic scanning. The blue end of the color scale represents the minimum $\chi^{2}$, and the ``+'' sign indicates the position of the finally adopted model in the $q$--$i$ space. It can be seen that the adopted models lie in the bluer regions, which correspond to better-fitting models. \begin{figure*}[!ht] \begin{center} \subfigure{\includegraphics[width=7cm, height=4cm]{fig07a.pdf}} \subfigure{\includegraphics[width=7cm, height=4cm]{fig07b.pdf}}\vspace{-0.3cm} \subfigure{\includegraphics[width=7cm, height=4cm]{fig07c.pdf}} \subfigure{\includegraphics[width=7cm, height=4cm]{fig07d.pdf}} \caption{The $q$--$i$ parameter space mapping for (a) J0158, (b) J0305, (c) J1022 and (d) KW Psc. The + sign represents the final model $q$--$i$. Low chi-square regions are shown in blue and high chi-square regions in red.} \label{scan} \end{center} \end{figure*} \begin{figure*}[!ht] \begin{center} \includegraphics[width=16.5cm, height=7cm]{fig08.pdf} \caption{The observed and model-fitted LCs in the $VRI$ bands, shown by red, green and blue open circles, respectively. The lower panels of each plot show the residuals of the fitted model.} \label{mo_fit} \end{center} \end{figure*} For J0158, the input estimate of $q$ is 0.67($\pm$0.12) and the primary $T_{eff}$ is 6156 ($\pm$35) K. The final photometric solutions show that the secondary $T_{eff}$ is lower than the primary $T_{eff}$ by $\sim160$ K. The fill-out factors of the primary and secondary components ($f_{1}$, $f_{2}$) were both determined as 0.282. The J0158 LCs show a small asymmetry between phases 0.25 and 0.75. This is a well-known feature of CBs, known as the O'Connell effect \citep{1951PRCO....2...85O}. To account for this asymmetry in the LCs of J0158, we included a spot on the primary while modeling it. It is not possible to identify on which component a spot is present without the Doppler imaging technique, and two different sets of spot parameters can generate similar LCs. The non-uniqueness of spot parameters obtained from photometric data alone has been discussed previously by many authors.
According to \cite{1999TJPh...23..357E}, reasonable accuracy in the spot parameters can be achieved only if the photometric data accuracy is better than 0.0001 mag. We arbitrarily adopted a cool spot for all the systems. The position and other spot parameters were decided on the basis of the minimum cost function. For J0158, the best-fit model placed the spot at a co-latitude of $90^{o}$ and a longitude of $145^{o}$. The position of the spot was then fixed while determining its radius and temperature ratio; the angular radius and $T_{spot}/T_{star}$ were estimated as $17^{o}$ and 0.93, respectively. The observed LCs of J0305 have nearly equal primary and secondary minima. The primary and secondary $T_{eff}$ are determined as 5125 ($\pm$41) and 5112 ($\pm$3) K, respectively. The temperature difference between the components is $\sim$10 K, which shows that they are in good thermal contact. J0305 also shows asymmetry in the observed LCs; ($Max_{1}-Max_{2}$) for J0305 is about 0.04 mag. The fill-out factor for J0305 is 0.105. A cool spot on the secondary was used while modeling the system J0305. The spot was initially fixed at a co-latitude of $90^{o}$ and a longitude of $90^{o}$, but it was then moved towards the pole to obtain a better fit. The best fit placed the spot at a co-latitude of $69^{o}$ and a longitude of $75^{o}$, with an angular radius and $T_{spot}/T_{star}$ of $23^{o}$ and 0.88, respectively. The rest of the parameters are summarized in Table~\ref{mod_para}. \begin{figure*}[!ht] \begin{center} \includegraphics[width=16.0cm, height=10.0cm]{fig09.pdf} \caption{Spot distribution on the surfaces of the eclipsing binaries. The first, second, third and fourth rows from the top show the geometry of J0158, J0305, J1022 and KW Psc, respectively, with spots, at phases 0, 0.25, 0.50 and 0.75.} \label{spots} \end{center} \end{figure*} For J1022, it can be seen in Figure~\ref{lc_obs} that the primary and secondary minima are at different levels. The photometric solutions show that the secondary $T_{eff}$ is lower than the primary by $\sim300$ K. The fill-out factors were found to be 0.177 and 0.194 for the primary and secondary, respectively. Although the LCs show only a very small asymmetry, we still applied a cool spot on the primary to improve the fit and determined the best-fit parameters; the sum of squared residuals ($\Sigma~res^2$) reduced from 0.024 to 0.02 after including the spot. The angular radius and $T_{spot}/T_{star}$ are estimated as $19^{o}$ and 0.95, and the spot is located at a co-latitude of $94^{o}$ and a longitude of $235^{o}$, as shown in Figure~\ref{spots}. For KW Psc, we obtained a secondary $T_{eff}$ of 4830 ($\pm$3) K. It is about 90 K lower than the primary $T_{eff}$, so both components are in good thermal contact. The fill-out factors were calculated as 0.192 ($f_{1}$) and 0.231 ($f_{2}$). The ($Max_{1}-Max_{2}$) for KW Psc is about $-0.02$ mag. For this asymmetry, we used a cool spot on the secondary at a co-latitude of $76^{o}$ and a longitude of $120^{o}$, with an angular radius and $T_{spot}/T_{star}$ of $31^{o}$ and 0.96, respectively. The other parameters obtained from the LC fitting are given in Table~\ref{mod_para}. Figure~\ref{spots} illustrates the geometrical representation of the systems with spots at the specified positions. \input{tab07.tex}

\section{Physical Parameters}\label{phy_para} Parameters such as $q$, $i$, $f$, $L_{1}/(L_{1}+L_{2})$, $r_{1}$ and $r_{2}$ were estimated by modeling the observed LCs. All four sources were observed by GAIA.
The GAIA parallaxes ($\pi$) given in Table~\ref{tar_info} are used to determine the absolute magnitude using: \begin{equation} M_{V}=m_{V}-5\log(1000/\pi)+5-A_{V} \end{equation} where $M_{V}$, $m_{V}$, $\pi$ and $A_{V}$ represent the absolute magnitude in the V band, the apparent magnitude in the V band, the parallax (in milliarcseconds) and the extinction in the V band, respectively. The $A_{V}$ values were taken from \cite{2011ApJ...737..103S} and the average $m_{V}$ from \cite{2014yCat..22130009D}. The absolute V-band magnitudes are found to be 3.284, 5.581, 5.309 and 6.232 mag for J0158, J0305, J1022 and KW Psc, respectively. To estimate the absolute bolometric magnitude ($M_{bol}$) from the absolute magnitude, bolometric corrections from \cite{2011yCat..21930001W}, corresponding to the $T_{eff}$, metallicity and surface gravity of each system, were used ($-0.04$, $-0.24$, $-0.16$ and $-0.31$ for J0158, J0305, J1022 and KW Psc, respectively). The $M_{bol}$ for J0158, J0305, J1022 and KW Psc is found to be 3.244, 5.341, 5.149 and 5.922 mag, respectively. \begin{figure*}[!ht] \begin{center} \includegraphics[width=17cm, height=8cm]{fig10.pdf} \caption{The positions of our systems, together with previously studied systems \citep{2013MNRAS.430.2029Y}, on the mass-luminosity and mass-radius planes. The continuous lines are the ZAMS taken from \cite{2012yCat..35410041M}, corresponding to z=0.014.} \label{evol} \end{center} \end{figure*} The total luminosity ($L_{1}+L_{2}$) was determined using: \begin{equation} \label{lu_eq} L_{T}(L_{\odot}) = L_{1}+L_{2}=10^{-0.4(M_{bol}-M_{bol}{_\odot})} \end{equation} Here, $M_{bol}{_\odot}$ is taken as 4.73 mag \citep{2010AJ....140.1158T}. Using the above equation, the total luminosity is calculated as 4.311, 0.625, 0.745 and 0.366 $L_{\odot}$ for J0158, J0305, J1022 and KW Psc, respectively. The luminosity of each individual component is determined using the ratio $L_{1}/(L_{1}+L_{2})$ obtained from the LC fitting in PHOEBE. For J0158, J0305, J1022 and KW Psc, $L_{1}$ is calculated as 2.655, 0.466, 0.238 and 0.258 $L_{\odot}$, respectively. The total luminosity in terms of the $T_{eff}$, the relative radius of the primary ($r_{1}$), the relative radius of the secondary ($r_{2}$) and the separation of the components ($A$) is given by: \begin{equation} \label{sm_eq} L_{T}=T_{1}^{4}(Ar_{1})^{2}+T_{2}^{4}(Ar_{2})^{2} \end{equation} Here, $T_{1}$ and $T_{2}$ are in solar temperature units ($T_{\odot}$ = 5770 K) and the separation $A$ is in solar radius units. The relative radius of each component is determined as the geometric mean \begin{equation} \label{rel_ra} r_{i}=(r_{pole} \times r_{side} \times r_{back})^{1/3} \end{equation} where $r_{pole}$, $r_{side}$ and $r_{back}$ are obtained from the photometric LC modeling, and $i$ is 1 for the primary and 2 for the secondary. The $T_{1}$ and $L_{T}$ are already determined for all the systems, while $T_{2}$, $r_{1}$ and $r_{2}$ come from the LC-fitting solutions. Finally, Equation~\ref{sm_eq} was used to calculate the separation between the components, which is found to be 3.165, 1.772, 1.881 and 1.483 $R_{\odot}$ for J0158, J0305, J1022 and KW Psc, respectively. \input{tab08.tex} To determine the total mass ($M_{1}+M_{2}$) of each system, we used Kepler's third law, with the constant factor expressed in terms of $R_{\odot}$, days and $M_{\odot}$: \begin{equation} \dfrac{A^{3}}{P^{2}}=74.94(M_{1}+M_{2}) \end{equation} Here, $A$, $P$, $M_{1}$ and $M_{2}$ are in units of $R_{\odot}$, days, $M_{\odot}$ and $M_{\odot}$, respectively.
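The chain of relations in this section is compact enough to be expressed as a short Python sketch; all numerical inputs below are placeholders rather than the measured values, while $M_{bol,\odot}=4.73$ and the constant 74.94 follow the text.
\begin{verbatim}
import numpy as np

# --- placeholder inputs; substitute the measured values ---
m_V, plx_mas, A_V, BC = 13.0, 2.0, 0.10, -0.20
T1, T2 = 6156.0/5770.0, 5996.0/5770.0   # temperatures in T_sun
r1, r2 = 0.50, 0.30                     # mean fractional radii
q, P   = 0.67, 0.455329                 # mass ratio, period (days)

M_V  = m_V - 5.0*np.log10(1000.0/plx_mas) + 5.0 - A_V
Mbol = M_V + BC
L_T  = 10.0**(-0.4*(Mbol - 4.73))            # L1 + L2 in L_sun

# Separation from L_T = T1^4 (A r1)^2 + T2^4 (A r2)^2 (solar units)
A = np.sqrt(L_T/(T1**4*r1**2 + T2**4*r2**2))  # in R_sun

M_tot  = A**3/(74.94*P**2)                    # Kepler's third law
M1, M2 = M_tot/(1.0 + q), q*M_tot/(1.0 + q)   # q = M2/M1
R1, R2 = A*r1, A*r2                           # radii in R_sun
\end{verbatim}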
Using the values estimated earlier, we determined $M_{1}+M_{2}$ as 2.108, 1.214, 1.325 and 0.790 $M_{\odot}$ for J0158, J0305, J1022 and KW Psc, respectively. The masses of the individual components ($M_{1}$ and $M_{2}$) were then determined using the $q$ value obtained through the LC fitting. The radii of the primary ($R_{1}$) and secondary ($R_{2}$) are determined from the mean relative radii ($r_{i}$) and the separation $A$ using the following relation: \begin{equation} R_{i}=Ar_{i} \end{equation} where $R_{i}$ and $A$ are in $R_{\odot}$ units. Using the $r_i$ estimated through Equation~\ref{rel_ra}, we calculated $R_i$ for each system. In Table~\ref{abs_para}, we give all the physical parameters determined for the four binary systems. The errors in these parameters, given in parentheses, result from propagating the uncertainties of the individual input parameters through the equations used to determine the physical parameters. The positions of these systems in the $M$--$L$ and $M$--$R$ diagrams are shown in Figure~\ref{evol}, along with other previously studied EWs \citep[e.g.,][]{2013MNRAS.430.2029Y}. It can be seen that the systems J0305, J1022 and KW Psc lie near the group of W-subtype EWs. The primary components of all the systems are closer to the ZAMS than the secondaries. Using the masses and radii of previously studied cool contact binaries, \citet{1988MNRAS.231..341H} and \citet{1996A&A...311..523M} also noted a similar trend in EWs. The secondary components lie above the ZAMS, which indicates that they have larger radii than main-sequence stars of similar mass. As suggested by \cite{2004IAUS..219..967S}, this cannot be due to energy transfer from the primary to the secondary component alone, so the secondary components must be more evolved, with hydrogen-depleted cores. \begin{figure*}[!ht] \begin{center} \subfigure{\includegraphics[width=8.5cm,height=5.8cm]{fig11a.pdf}} \subfigure{\includegraphics[width=8.5cm,height=5.8cm]{fig11b.pdf}} \caption{The LAMOST spectra of (a) J0158 and (b) J0305, over-plotted with the synthetic spectra. The black continuous line shows the subtracted spectrum with excess emission lines.} \label{spec1} \end{center} \end{figure*} \begin{figure*}[!ht] \begin{center} \subfigure{\includegraphics[width=8.5cm,height=5.8cm]{fig12a.pdf}} \subfigure{\includegraphics[width=8.5cm,height=5.8cm]{fig12b.pdf}} \caption{Same as Figure~\ref{spec1}, but for (a) J1022 and (b) KW Psc.} \label{spec2} \end{center} \end{figure*} \begin{figure*}[!ht] \begin{center} \includegraphics[width=15cm,height=8cm]{fig13.pdf} \caption{The spectra of the target sources obtained from the HCT (red) and the synthesized spectra (blue).} \label{hct_sp} \end{center} \end{figure*}

\section{Chromospheric Activities}\label{ch_ac} Magnetic activity is often seen in late-type rotating stars with convective envelopes, where it results in the formation of star spots, flares or plages. The surface chromospheric activity depends on the stellar rotation rate. The spectral emission lines $H_{\alpha}$, $H_{\beta}$, the Mg $I~b$ triplet, Na $I~D_{1}~D_{2}$, $Ca~II~H~\&~K$, $Ca~II~IRT$, etc., are optical and near-infrared indicators of chromospheric activity \citep{1984BAAS...16..893B, 1995A&AS..114..287M}, and the equivalent widths of these lines provide a good measure of the activity level in late-type rotating stars. In the case of binary stars, the total flux in the spectrum at a given time contains contributions from the chromospheric and photospheric fluxes of both stars.
The reconstruction of absorption profiles and the spectral subtraction technique are commonly used for studying the chromospheric activity of stars; the latter is widely used in the case of binary systems. The spectral subtraction technique is based on the assumption that the level of photospheric flux is almost the same in stars of similar spectral type. This means that an inactive star of similar spectral type can be used to estimate the photospheric flux contribution of an active star \citep{1984BAAS...16..893B, 1995A&AS..114..287M}; the chromospheric contribution then appears as excess emission in the subtracted spectrum. The LAMOST spectra (for J0158, J0305 and KW Psc) and the HCT spectra (for J0158, J0305, J1022 and KW Psc) were analysed for chromospheric activity signatures. Noise in the spectra can also produce emission-like features in the subtracted spectra, so only high-SNR spectra of the objects and of the inactive stars were selected. Inactive stars with small rotational velocities are appropriate candidates for the template spectra because of their smaller rotational broadening. Here, the synthetic spectrum is constructed using the STARMOD program \citep{1984AJ.....89..549H, 1984BAAS...16..893B}, which takes inactive template spectra for the two components of an EW and generates a composite spectrum after introducing rotational broadening and radial-velocity shifts. The stars HD 233641 \citep{2004yCat..21520261W}, HD 238130, HD 77712, HD 219829 \citep{2000yCat..41420275S} and BD+43 2328 \citep{2004ApJS..152..251V} were used for the preparation of the composite synthetic spectra. The spectra obtained after subtracting the synthetic spectra from the observed spectra are shown in Figures~\ref{spec1} and \ref{spec2}. These spectra show emission in the $H_{\alpha}$, $H_{\beta}$, $Ca~II~H~\&~K$ and $Ca~II~IRT$ lines. The spectra from the HCT are shown in red in Figure~\ref{hct_sp}, while the synthetic spectra generated from the LAMOST data are shown in blue in the same figure. It should be noted that the spectral subtraction technique was not applied to the HCT spectra because of the unavailability of comparison (template) stars in those observations. We earlier noticed asymmetry in the LCs of these systems, which might result from magnetic activity; the excess emission found in the subtracted spectra seems to confirm this notion. As the SNR at both ends of the spectra was poor, we could calculate the equivalent width of the $H_{\alpha}$ line only. We used the spectral range from 653.5 nm to 659.5 nm and fitted a Gaussian profile to determine the equivalent width. For J0158 and J0305, the equivalent widths were found to be $0.369\pm0.017$ and $1.031\pm0.018$, while for J0305 and KW Psc we determined equivalent widths of $1.236\pm0.608$ and $1.206\pm0.042$. For the other spectra, the equivalent widths could not be determined because of poor Gaussian fits.

\section{Discussion and conclusion}\label{discu} The detailed study of EWs is useful for understanding their formation mechanism and their different evolutionary stages. A long-term photometric and spectroscopic study can thus throw light on their period changes and on associated processes like mass transfer, third bodies and spot evolution. In this study, we present a multi-band photometric and low-resolution spectroscopic analysis of four EWs. In the absence of radial velocity curves of these systems, the mass ratios of the binary components were determined from the photometric LCs with the $q$-search method.
For all the systems, $q$ is found to be less than 0.7, except for J1022, for which a higher value of 3.23 is found. As the components of EWs are close to each other, interaction between them is quite common in these systems; this can result in a period change through mass transfer/loss between the components. The presence of an additional companion or long-term cyclic magnetic activity is also prevalent among contact binaries and can cause cyclic variations in the (O-C) diagram. For the four EBs studied here, we were able to collect TOM information covering only the last 13--15 years, and with this limited time span it was very difficult to retrieve any specific information about long-term cyclic variations. However, a preliminary (O-C) analysis of these systems shows a change in period for three systems (J0158, J0305 and J1022), while no such variation was noticed in the case of KW Psc. Mass transfer or mass loss can be the reason for the change in their periods; hence, we also calculated the mass-transfer rates for these systems. Asymmetry is observed in the LCs of all the systems, and the level of asymmetry changes from the $V$ to the $I$ band, being maximum in the $V$ band. The LCs from the SuperWASP data show that these systems exhibit variation from a positive to a negative O'Connell effect with the passage of time, and even the depths of the eclipse minima are seen to change in two systems: the analysis of the present observations and the SuperWASP observations shows that $Max_{1}-Max_{2}$ varies from 0.06 to $-0.06$ for J0305 and from 0.04 to $-0.2$ for J1022. This behaviour indicates that the spots are not fixed but form and move with time. Different empirical relations are available in the literature for determining parameters like mass, radius and luminosity. However, these relations can be biased by the specific EW samples used in their formulation. Therefore, in the present analysis, we followed the procedure adopted by \cite{2020Ap&SS.365...71L} for calculating the physical parameters. Recently, \cite{2020ApJS..247...50S} derived the parameters of 2335 late-type contact binaries from the CSS survey, including the system J0305. They found a mass ratio of 0.19 for J0305, which is smaller than the value of 0.31 estimated in the present study. However, it should be noted that \citet{2020ApJS..247...50S} used only $V$-band data in their analysis and a primary component cooler by 150 K than the present value, \textbf{hence some disagreement between the two estimates could be expected}. The total mass ($M_{t}$) determined for all the systems is above the minimum total mass limit for EWs, except for KW Psc. This indicates that a significant amount of mass loss took place in the KW Psc system in the past. As we have not found any period change in KW Psc during the last 12 years, we believe that the observed low mass of this system could be due to earlier magnetic activity, for instance in the form of a burst. Nevertheless, a more detailed study is required to find the exact cause of the low total mass of this system. As W UMa-type systems with $q>0.5$ are usually W-subtype, J0158 can be classified as W-subtype, but its spectral class and the high temperature of its primary suggest that it could be an A-subtype system. Similarly, J0305 and KW Psc can be classified as W-subtypes on the basis of their spectral types, but their $q<0.5$ and the high temperatures of their primaries place them in the A-subtype category. J1022 is found to be a W-subtype EW. All the systems are shallow-contact W UMa systems.
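As a concrete illustration of the equivalent-width measurement described in Section~\ref{ch_ac}, the following Python sketch fits a Gaussian to the $H_{\alpha}$ feature of a subtracted, normalized spectrum and integrates it; the file name and initial guesses are placeholders, while the 653.5--659.5 nm window follows the text.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    return amp*np.exp(-0.5*((x - mu)/sigma)**2)

# Subtracted (observed - synthetic) spectrum; placeholder file name.
wave, flux = np.loadtxt("subtracted_spectrum.dat", unpack=True)

# Restrict to the H-alpha window used in the text (wavelengths in nm).
m = (wave > 653.5) & (wave < 659.5)
popt, pcov = curve_fit(gauss, wave[m], flux[m], p0=[0.5, 656.3, 0.2])

# For a normalized subtracted spectrum, the equivalent width is the
# integral of the Gaussian: amp * sigma * sqrt(2*pi).
ew = popt[0]*popt[2]*np.sqrt(2.0*np.pi)
print("H-alpha equivalent width:", ew, "nm")
\end{verbatim}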
The low-resolution spectra from LAMOST and the HCT for all the sources were compared with synthetic spectra using the spectral subtraction technique. The subtracted spectra of these systems show a small excess emission in the $H_{\alpha}$, $H_{\beta}$ and Ca II IRT regions. Small emission is also visible in the Ca II H \& K region, but the considerable noise in the blue region makes it difficult to analyse. Although there is a 3 to 4 year gap between the LAMOST spectroscopic observations and our photometric observations, the presence of spots in the LC modeling can still be taken as indirect evidence of their activity. The equivalent widths of different lines in the subtracted spectra can give a measure of the magnetic activity in these systems; however, further spectroscopic observations with better resolution at different phases will be more useful for the study of chromospheric activities in EBs.

\section{ACKNOWLEDGEMENTS} The work presented here has been carried out under the DST project ``INT/AUSTRIA/BMWF/P-14''. We thank the staff of IAO, Hanle, and CREST, Hosakote, who made these observations possible. The facilities at IAO and CREST are operated by the Indian Institute of Astrophysics, Bangalore. The Guoshoujing Telescope (the Large Sky Area Multi-Object Fibre Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. In this work we have also used data from the European Space Agency (ESA) mission GAIA, processed by the GAIA Data Processing and Analysis Consortium (DPAC). This work also makes use of the Two Micron All Sky Survey and the SIMBAD database. \bibliographystyle{yahapj}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Introduction}\label{intro} Fourier analysis on locally compact Abelian groups is by now a classical subject that goes back to the first half of the 20th century (see e.g. \cite{rudin} for a self-contained presentation). Consider a locally compact Abelian group $(G,+)$ endowed with a Haar measure $\mu,$ and denote by $(\wh G,\cdot)$ the dual group of $(G,+)$, that is the set of characters on $G$ endowed with the standard multiplication of functions. By definition, the Fourier transform of an integrable function $f:G\to\mathop{\mathbb C\kern 0pt}\nolimits$ is the continuous and bounded function $\wh f:\wh G\to\mathop{\mathbb C\kern 0pt}\nolimits$ (also denoted by ${\mathcal F} f$) defined by \begin{equation}\label{def:FG} \forall \g\in\wh G,\; \wh f(\g) = {\mathcal F} f(\g)\buildrel\hbox{\footnotesize def}\over =\int_G f(x)\,\overline{\gamma(x)}\,d\mu(x). \end{equation} Being also a locally compact Abelian group, the `frequency space' $\wh G$ may be endowed with a Haar measure $\wh\mu.$ It turns out to be possible to normalize $\wh\mu$ so that the following \emph{Fourier inversion formula} holds true for, say, all functions $f$ in $L^1(G)$ with $\wh f$ in $L^1(\wh G)$: \begin{equation}\label{eq:inversionG} \forall x\in G,\; f(x)=\int_{\wh G} \wh f(\gamma)\,\gamma(x)\,d\wh\mu(\gamma). \end{equation} As a consequence, we get the Fourier-Plancherel identity \begin{equation}\label{eq:FPG} \int_G|f(x)|^2\,d\mu(x)=\int_{\wh G} |\wh f(\g)|^2\,d\wh\mu(\g) \end{equation} for all $f$ in $L^1(G)\cap L^2(G).$ \medbreak The Fourier transform on locally compact Abelian groups has a number of other interesting properties that we do not wish to enumerate here. Let us just recall that it changes convolution products into products of functions, namely \begin{equation}\label{eq:convG} \forall f\in L^1(G),\;\forall g\in L^1(G),\;{\mathcal F}(f\star g)={\mathcal F} f\cdot{\mathcal F} g. \end{equation} In the Euclidean case of ${\mathop{\mathbb R\kern 0pt}\nolimits}^n$, the dual group may be identified with~$({\mathop{\mathbb R\kern 0pt}\nolimits}^n)^\star$ through the map~$\xi \mapsto e^{i\langle \xi ,\cdot\rangle}$ (where $\langle \cdot,\cdot\rangle$ stands for the duality bracket between~$({\mathop{\mathbb R\kern 0pt}\nolimits}^n)^\star $ and~${\mathop{\mathbb R\kern 0pt}\nolimits}^n$), and the Fourier transform of an integrable function~$f$ may thus be seen as the function on $({\mathop{\mathbb R\kern 0pt}\nolimits}^n)^\star$ (usually identified with ${\mathop{\mathbb R\kern 0pt}\nolimits}^n$) given by \begin{equation} \label {definFourierclassic} {\mathcal F} (f) (\xi) = \wh f(\xi)\buildrel\hbox{\footnotesize def}\over = \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}^n} e^{-i\langle \xi,x\rangle } f(x)\, dx. \end{equation} Of course, we have \eqref{eq:convG} and, as is well known, if one endows the frequency space $({\mathop{\mathbb R\kern 0pt}\nolimits}^n)^\star$ with the measure $\frac1{(2\pi)^n}d\xi$ then the inversion and Fourier-Plancherel formulae \eqref{eq:inversionG} and \eqref{eq:FPG} hold true. Among the numerous additional properties of the Fourier transform on ${\mathop{\mathbb R\kern 0pt}\nolimits}^n,$ let us just underline that it allows one to `diagonalize' the Laplace operator: for all smooth compactly supported functions, we have \begin{equation} \label {diagDeltaRd} {\mathcal F}(\Delta f) (\xi) =- |\xi|^2 \wh f(\xi).
\end{equation} For noncommutative groups, Fourier theory gets wilder, for the dual group is too `small' to keep the definition of the Fourier transform given in \eqref{def:FG} and still have the inversion formula\refeq{eq:inversionG}. Nevertheless, if the group has `nice' properties (that we wish not to list here) then one can work out a consistent Fourier theory with properties analogous to \eqref{eq:inversionG}, \eqref{eq:FPG} and \eqref{eq:convG} (see e.g.\ccite{astengo2,crs,corwingreenleaf,hula,RS,stein2,taylor1,thangavelu} and the references therein for the case of nilpotent Lie groups). In that context, the classical definition of the Fourier transform amounts to replacing characters in \eqref{def:FG} with suitable families of irreducible representations that are valued in Hilbert spaces (see e.g. \cite{corwingreenleaf,folland} for a detailed presentation). Consequently, the Fourier transform is no longer a complex valued function but rather a family of bounded operators on suitable Hilbert spaces. It goes without saying that within this approach, the notion of `frequency space' becomes unclear, which makes Fourier theory much more cumbersome than in the Abelian case. In the present paper, we want to focus on the Heisenberg group which, to some extent, is the simplest noncommutative nilpotent Lie group and comes into play in various areas of mathematics, ranging from complex analysis to geometry or number theory, probability theory, quantum mechanics and partial differential equations (see e.g. \cite{bfg, farautharzallah, stein2,taylor1}). As several equivalent definitions coexist in the literature, let us specify the one that we shall adopt throughout. \begin{definition} {\sl Let~$\s(Y,Y') =\langle \eta,y'\rangle -\langle \eta',y\rangle$ be the canonical symplectic form on~$T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d.$ The Heisenberg group~${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ is the set~$T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d \times{\mathop{\mathbb R\kern 0pt}\nolimits}$ equipped with the product law $$ w\cdot w'\buildrel\hbox{\footnotesize def}\over = \bigl(Y+Y' , s+s'+ 2\s(Y,Y')\bigr) = \bigl(y+y', \eta+\eta' , s+s'+2 \langle \eta,y'\rangle -2\langle \eta',y\rangle\bigr) $$ where~$w=(Y,s)=(y,\eta,s)$ and $w'=(Y',s')=(y',\eta',s')$ are generic elements of~${\mathop {\mathbb H\kern 0pt}\nolimits}^d.$} \end{definition} As regards topology and measure theory on the Heisenberg group, we shall look at~${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ as the set ${\mathop{\mathbb R\kern 0pt}\nolimits}^{2d+1},$ after identifying $(Y,s)$ in ${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ to $(y,\eta,s)$ in ${\mathop{\mathbb R\kern 0pt}\nolimits}^{2d+1}.$ With this viewpoint, the \emph{Haar measure} on ${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ is just the Lebesgue measure on ${\mathop{\mathbb R\kern 0pt}\nolimits}^{2d+1}.$ In particular, one can define the following convolution product for any two integrable functions~$f$ and~$g$: \begin{equation} \label {definConvolH} f \star g ( w ) \buildrel\hbox{\footnotesize def}\over = \int_{{\mathop {\mathbb H\kern 0pt}\nolimits}^d} f ( w \cdot v^{-1} ) g( v)\, dv = \int_{{\mathop {\mathbb H\kern 0pt}\nolimits}^d} f ( v ) g( v^{-1} \cdot w)\, dv. 
\end{equation} Even though convolution on the Heisenberg group is noncommutative, if one defines the \emph{Lebesgue spaces} $L^p({\mathop {\mathbb H\kern 0pt}\nolimits}^d)$ to be just $L^p({\mathop{\mathbb R\kern 0pt}\nolimits}^{2d+1}),$ then one still gets the classical Young inequalities in that context. \smallbreak As already explained above, as ${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ is noncommutative, in order to have a good Fourier theory, one has to resort to more elaborate irreducible representations than characters. In fact, the group of characters on~${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ is isomorphic to the group of characters on~$T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d.$ Hence, if one defines the Fourier transform according to \eqref{def:FG} then the information pertaining to the vertical variable~$s$ is lost. There are essentially two (equivalent) approaches. They are based either on the \emph{Bargmann representation} or on the \emph{Schr\"odinger representation} (see \cite{corwingreenleaf}). For simplicity, let us just recall the second one, which is the family of group homomorphisms $w\mapsto U^\lam_w$ (with $\lambda\in{\mathop{\mathbb R\kern 0pt}\nolimits}\setminus\{0\}$) between~${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ and the unitary group~${\mathcal U}(L^2({\mathop{\mathbb R\kern 0pt}\nolimits}^d))$ of~$L^2({\mathop{\mathbb R\kern 0pt}\nolimits}^d),$ defined for all~$w=(y,\eta, s)$ in~${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ and $u$ in $L^2({\mathop{\mathbb R\kern 0pt}\nolimits}^d)$ by $$ U^\lam _w u(x)\buildrel\hbox{\footnotesize def}\over = e^{-i\lam (s+2\langle \eta, x-y\rangle)} u(x-2y). $$ The classical definition of the Fourier transform of integrable functions on ${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ reads as follows: \begin{definition} \label {definFourierSchrodinger} {\sl The \emph{Fourier transform} \index{Fourier!transform} of an integrable function~$f$ on~${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ is the family $({\mathcal F}^{\mathop {\mathbb H\kern 0pt}\nolimits}(f)(\lambda))_{\lambda\in{\mathop{\mathbb R\kern 0pt}\nolimits}\setminus\{0\}}$ of bounded operators on $L^2({\mathop{\mathbb R\kern 0pt}\nolimits}^d)$ given by $$ {\mathcal F}^{{\mathop {\mathbb H\kern 0pt}\nolimits}} (f)(\lam) \buildrel\hbox{\footnotesize def}\over = \int_{{\mathop {\mathbb H\kern 0pt}\nolimits}^d} f(w) U^\lam_w\, dw. $$ } \end{definition} In the present paper, we strive for another definition of the Fourier transform that is as similar as possible to the one for locally compact Abelian groups given in \eqref{def:FG}. In particular, we want the Fourier transform to be a complex valued function defined on some explicit `frequency space' that may be endowed with the structure of a locally compact and complete metric space, and to get formulae similar to \eqref{eq:inversionG}, \eqref{eq:FPG} and \eqref{eq:convG}, together with a diagonalization of the Laplace operator (for the Heisenberg group, of course) analogous to \eqref{diagDeltaRd}. There are a number of motivations for our approach. An important one is that having an explicit frequency space will allow us to get elementary proofs of the basic results involving the Fourier transform, just by mimicking the corresponding ones of the Euclidean setting. In particular, we expect our setting to open the way to new results for partial differential equations on the Heisenberg group.
Furthermore, our definition will enable us to get an explicit (and comprehensible) description of the range of the Schwartz space by the Fourier transform. As a consequence, extending the Fourier transform to the set of tempered distributions will become rather elementary (see more details in our forthcoming paper \cite{bcdh}). In the present paper, we will give two concrete applications of our approach. First, in Theorem \ref{FourierL1basic}, we will provide an explicit asymptotic description of the Fourier transform when (what plays the role of) the vertical frequency parameter tends to $0.$ Our second application is the extension (also explicit) of the Fourier transform to functions depending only on the horizontal variable (this is Theorem \ref{Fourierhorizontal}). \bigbreak\noindent{\bf Acknowledgments:} The authors wish to thank very warmly Fulvio Ricci for enlightening suggestions that played a decisive role in the construction of this text. They are also indebted to Nicolas Lerner for enriching discussions. A determining part of this work has been carried out in the exceptional environment of the \emph{Centre International de Rencontres Math\'ematiques} in Luminy. The third author has been partially supported by the \emph{Institut Universitaire de France}. \section{Results}\label{s:results} Before presenting the main results of the paper, let us recall how, with the standard definition of the Fourier transform in ${\mathop {\mathbb H\kern 0pt}\nolimits}^d,$ Properties \eqref{eq:inversionG}, \eqref{eq:FPG} and \eqref{eq:convG} may be stated (the reader may refer to e.g.\ccite{bfg, Beals, farautharzallah, fisher, folland, geller2, hula, thangavelu2, stein2, taylor1, thangavelu} for more details). \begin{theorem} \label {RecallClassicla FourierH} {\sl Let~$f$ be an integrable function on~${\mathop {\mathbb H\kern 0pt}\nolimits}^d$. Then we have \begin{equation} \label {L1LinftyFourierbasic} \forall\lam \in {\mathop{\mathbb R\kern 0pt}\nolimits}\setminus\{0\}\,,\ \|{\mathcal F}^{{\mathop {\mathbb H\kern 0pt}\nolimits}}(f)(\lam)\|_{{\mathcal L}(L^2)} \leq \|f\|_{L^1({\mathop {\mathbb H\kern 0pt}\nolimits}^d)} \end{equation} and, for any function~$u$ in~$L^2({\mathop{\mathbb R\kern 0pt}\nolimits}^d)$, the map $\lam \mapsto {\mathcal F}^{\mathop {\mathbb H\kern 0pt}\nolimits}(f) (\lam)(u)$ is continuous from ${\mathop{\mathbb R\kern 0pt}\nolimits}\setminus\{0\}$ to~$L^2({\mathop{\mathbb R\kern 0pt}\nolimits}^d).$ \medbreak For any function~$f$ in the Schwartz space ${\mathcal S}({\mathop {\mathbb H\kern 0pt}\nolimits}^d)$ (which is the classical Schwartz space on~${\mathop{\mathbb R\kern 0pt}\nolimits}^{2d+1}$), we have the inversion formula: \begin{equation} \label {inversionHclassical} \forall w\in{\mathop {\mathbb H\kern 0pt}\nolimits}^d,\; f(w) = \frac {2^{d-1}} {\pi^{d+1} } \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}} {\rm tr}\bigl( U^\lam_{w^{-1}}{\mathcal F}^{\mathop {\mathbb H\kern 0pt}\nolimits} f(\lam)\bigr) |\lam|^d d\lam\,, \end{equation} where~${\rm tr}(A)$ denotes the trace of the operator~$A$.
\medbreak Moreover, if~$f$ belongs to~$L^1({\mathop {\mathbb H\kern 0pt}\nolimits}^d)\cap L^2({\mathop {\mathbb H\kern 0pt}\nolimits}^d)$ then for any~$\lam$ in~${\mathop{\mathbb R\kern 0pt}\nolimits}\setminus\{0\}$,~${\mathcal F}^{\mathop {\mathbb H\kern 0pt}\nolimits}(f)(\lam)$ is a Hilbert-Schmidt operator, and we have \begin{equation} \label {FourierPlancherelHclassical} \|f\|_{L^2({\mathop {\mathbb H\kern 0pt}\nolimits}^d)}^2= \frac {2^{d-1}} {\pi^{d+1}} \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}}\| {\mathcal F}^{\mathop {\mathbb H\kern 0pt}\nolimits}(f)(\lam)\|_{HS} ^2 \,|\lam|^dd\lam \end{equation} where~$\|\cdot\|_{HS}$ stands for the Hilbert-Schmidt norm. } \end{theorem} We also have an analogue of the convolution identity \eqref{eq:convG}. Indeed, as the map~$w\mapsto U^\lam_w$ is a homomorphism between~${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ and~${\mathcal U}(L^2({\mathop{\mathbb R\kern 0pt}\nolimits}^d))$, we get for any integrable functions $f$ and~$g,$ \begin{equation} \label {FourierConvol} {\mathcal F}^{{\mathop {\mathbb H\kern 0pt}\nolimits}} (f\star g) (\lam) = {\mathcal F}^{{\mathop {\mathbb H\kern 0pt}\nolimits}} (f)(\lam)\circ {\mathcal F}^{{\mathop {\mathbb H\kern 0pt}\nolimits}}(g)(\lam). \end{equation} Let us next recall the definition of the (sub-elliptic) Laplacian on the Heisenberg group, which will play a fundamental role in our approach. Being a real Lie group, the Heisenberg group may be equipped with a linear space of \emph{left invariant} vector fields, that is vector fields commuting with any left translation~$\tau_w(w') \buildrel\hbox{\footnotesize def}\over = w\cdot w'$. It is well known that this linear space has dimension $2d+1$ and is generated by the vector fields $$ S\buildrel\hbox{\footnotesize def}\over =\partial_s\,,\ \ {\mathcal X}_j\buildrel\hbox{\footnotesize def}\over =\partial_{y_j} +2\eta_j\partial_s\quad\hbox{and}\quad \Xi_j\buildrel\hbox{\footnotesize def}\over = \partial_{\eta_j} -2y_j\partial_s\,,\ 1\leq j\leq d. $$ The \emph{Laplacian} \index{Laplacian} associated to the vector fields~$({\mathcal X}_j)_{1\leq j\leq d}$ and~$(\Xi_j)_{1\leq j\leq d}$ reads \begin{equation} \label{defLaplace} \D_{{\mathop {\mathbb H\kern 0pt}\nolimits}} \buildrel\hbox{\footnotesize def}\over = \sum_{j=1} ^d ({\mathcal X}_j^2+\Xi_j^2). \end{equation} As in the Euclidean case (see Identity\refeq{diagDeltaRd}), the Fourier transform allows one to diagonalize the operator $\D_{\mathop {\mathbb H\kern 0pt}\nolimits}$: this relies on the following relation, which holds true for all functions $f$ and $u$ in ${\mathcal S}({\mathop {\mathbb H\kern 0pt}\nolimits}^d)$ and ${\mathcal S}({\mathop{\mathbb R\kern 0pt}\nolimits}^d),$ respectively (see e.g.\ccite{huet, O}): \begin{equation} \label {FourierEtLaplace} {\mathcal F}^{{\mathop {\mathbb H\kern 0pt}\nolimits}}(\D_{\mathop {\mathbb H\kern 0pt}\nolimits} f) (\lam) = 4{\mathcal F}^{{\mathop {\mathbb H\kern 0pt}\nolimits}}(f)(\lam) \circ \D_{\rm osc} ^\lam \quad\hbox{with}\quad \D_{\rm osc}^\lam u (x) \buildrel\hbox{\footnotesize def}\over =\sum_{j=1}^d \partial_j^2 u(x) - \lam^2|x|^2 u(x). \end{equation} This prompts us to take advantage of the spectral structure of the harmonic oscillator to get an analog of Formula\refeq {diagDeltaRd}.
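Before exploiting this structure, let us observe that the noncommutativity at stake is entirely encoded in the relations $[{\mathcal X}_j,\Xi_j]=-4S$ (all other commutators among the above fields vanish), which the reader may check by hand or symbolically. Here is a minimal symbolic sketch for $d=1$, written in Python with the sympy package (the naming is ours, and the computation is of course elementary):
\begin{verbatim}
import sympy as sp

y, eta, s = sp.symbols('y eta s')
f = sp.Function('f')(y, eta, s)

X  = lambda g: sp.diff(g, y)   + 2 * eta * sp.diff(g, s)  # X  = d_y   + 2 eta d_s
Xi = lambda g: sp.diff(g, eta) - 2 * y   * sp.diff(g, s)  # Xi = d_eta - 2 y   d_s

# prints -4*Derivative(f(y, eta, s), s), i.e. [X, Xi] = -4 S
print(sp.simplify(X(Xi(f)) - Xi(X(f))))
\end{verbatim}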
Returning to the diagonalization of~$\D_{\mathop {\mathbb H\kern 0pt}\nolimits}$, we need to introduce the family of Hermite functions~$(H_n)_{n\in\mathop{\mathbb N\kern 0pt}\nolimits^d}$ defined by \begin{equation} \label{Hermite functions} H_n \buildrel\hbox{\footnotesize def}\over = \Bigl(\frac 1 {2^{|n|} n!}\Bigr) ^{\frac 12}C^n H_0 \quad\hbox{with}\quad C^n \buildrel\hbox{\footnotesize def}\over = \prod_{j=1}^d C_j^{n_j} \quad\hbox{and}\quad H_0(x)\buildrel\hbox{\footnotesize def}\over = \pi^{-\frac d 4} e^{-\frac {|x|^2} 2}, \end{equation} where~$C_j\buildrel\hbox{\footnotesize def}\over = -\partial_j +M_j$ stands for the \emph{creation operator} with respect to the $j$-th variable and~$M_j$ is the multiplication operator defined by~$M_ju(x)\buildrel\hbox{\footnotesize def}\over = x_ju(x).$ As usual, $n!\buildrel\hbox{\footnotesize def}\over = n_1!\dotsm n_d!$ and $|n|\buildrel\hbox{\footnotesize def}\over = n_1+\cdots+n_d$. \medbreak It is well known that the family~$ \suite H n {\mathop{\mathbb N\kern 0pt}\nolimits^d}$ is an orthonormal basis of~$L^2({\mathop{\mathbb R\kern 0pt}\nolimits}^d).$ In particular, \begin{equation}\label{def:kro} \forall (n,m)\in\mathop{\mathbb N\kern 0pt}\nolimits^d\times\mathop{\mathbb N\kern 0pt}\nolimits^d \,,\ (H_n|H_m)_{L^2}=\delta_{n,m} , \end{equation} where $\delta_{n,m}=1$ if $n=m,$ and $\delta_{n,m}=0$ if $n\not=m.$ \medbreak Besides, we have \begin{equation} \label {relationsHHermite} ( -\partial_j^2+M_j^2) H_n =( 2n_j+1) H_n \quad\hbox{and thus}\quad -\D_{\rm osc}^1 H_n = (2|n|+d) H_n. \end{equation} For~$\lam$ in~${\mathop{\mathbb R\kern 0pt}\nolimits}\setminus\{0\},$ we further introduce the rescaled Hermite function~$H_{n,\lam} (x)\buildrel\hbox{\footnotesize def}\over = |\lam|^{\frac d 4} H_n(|\lam|^{\frac 12} x)$. It is obvious that~$(H_{n,\lam})_{n\in \mathop{\mathbb N\kern 0pt}\nolimits^d}$ is still an orthonormal basis of~$L^2({\mathop{\mathbb R\kern 0pt}\nolimits}^d)$ and that \begin{equation} \label {relationsHHermiteD} ( -\partial_j^2+\lam^2M_j^2) H_{n,\lam} =( 2n_j+1)|\lam| H_{n,\lam} \quad\hbox{and thus}\quad -\D_{\rm osc}^\lam H_{n,\lam} = (2|n|+d)|\lam| H_{n,\lam}. \end{equation} We are now ready to give `our' definition of the Fourier transform on ${\mathop {\mathbb H\kern 0pt}\nolimits}^d.$ \begin{definition} \label {definFouriercoeffH} {\sl Let~$\wt {\mathop {\mathbb H\kern 0pt}\nolimits} ^d\buildrel\hbox{\footnotesize def}\over = \mathop{\mathbb N\kern 0pt}\nolimits^{d}\times\mathop{\mathbb N\kern 0pt}\nolimits^d\times {\mathop{\mathbb R\kern 0pt}\nolimits}\setminus\{0\}.$ We denote by~$\wh w=(n,m,\lam)$ a generic point of~$\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d$. For~$f$ in~$L^1({\mathop {\mathbb H\kern 0pt}\nolimits}^d)$, we define the map~$\cF_\H f$ (also denoted by~$\wh f_{\mathop {\mathbb H\kern 0pt}\nolimits}$) to be $$ \cF_\H f: \ \left\{ \begin{array}{ccl} \wt {\mathop {\mathbb H\kern 0pt}\nolimits} ^d & \longrightarrow & \mathop{\mathbb C\kern 0pt}\nolimits\\[1ex] \wh w & \longmapsto & \bigl({\mathcal F}^{{\mathop {\mathbb H\kern 0pt}\nolimits}}(f)(\lam) H_{m,\lam} |H_{n,\lam}\bigr)_{L^2}. \end{array} \right.
$$ } \end{definition} {}From now on, we shall use only that definition of the Fourier transform, which amounts to considering the `infinite matrix' of~${\mathcal F}^{\mathop {\mathbb H\kern 0pt}\nolimits} f(\lam)$ in the orthonormal basis of~$L^2({\mathop{\mathbb R\kern 0pt}\nolimits}^d)$ given by~$(H_{n,\lam})_{n\in \mathop{\mathbb N\kern 0pt}\nolimits^d}.$ For later purposes, it is in fact much more convenient to rewrite~${\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits} f$ in terms of the mean value of~$f$ \emph{modulated by some oscillatory functions} which may be seen as suitable Wigner distribution functions of the family~$(H_{n,\lam})_{n\in\mathop{\mathbb N\kern 0pt}\nolimits^d,\lam\not=0},$ and will play the same role as the characters~$e^{i\langle \xi, \cdot\rangle}$ in the Euclidean case. Indeed, by definition, we have $$ {\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits} f(\wh w)= \int_{{\mathop {\mathbb H\kern 0pt}\nolimits}^d\times {\mathop{\mathbb R\kern 0pt}\nolimits}^d} f(w) e^{-is\lam} e^{-2i\lam\langle \eta, x-y\rangle} H_{m,\lam} (x-2y) H_{n,\lam} (x) \,dw \,dx. $$ Therefore, making an obvious change of variable, we discover that \begin{eqnarray}\label {definFourierWigner} {\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits} f(\wh w) &&\!\!\!\!\!\!\!= \int_{{\mathop {\mathbb H\kern 0pt}\nolimits}^d} \overline{e^{is\lam} {\mathcal W}(\wh w,Y)}\, f(Y,s) \,dY\,ds \quad\hbox{with}\quad\\\label{definWigner} \displaystyle {\mathcal W}(\wh w,Y) && \!\!\!\!\!\!\!\! \buildrel\hbox{\footnotesize def}\over = \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}^d} e^{2i\lam\langle \eta,z\rangle} H_{n,\lam} (y+z) H_{m,\lam} (-y+z) \,dz. \end{eqnarray} At this stage, looking at the action of the Laplace operator on functions~$e^{is\lam} {\mathcal W}(\wh w,Y)$ is illuminating. Indeed, easy computations (carried out in the Appendix) give \begin{equation} \label {LaplacianHWignerj} ({\mathcal X}_j^2+\Xi_j^2) \bigl(e^{is\lam} {\mathcal W}(\wh w,Y)\bigr )= -4|\lam| (2m_j+1) e^{is\lam} {\mathcal W}(\wh w,Y). \end{equation} By summation on $j\in\{1,\cdots,d\},$ we get \begin{equation} \label {DeltaWignerHermite} \Delta_{\mathop {\mathbb H\kern 0pt}\nolimits} \bigl (e^{is\lam} {\mathcal W}(\wh w,Y) \bigr) = -4|\lam|(2|m|+d) e^{is\lam} {\mathcal W}(\wh w,Y), \end{equation} from which one may deduce that, whenever $f$ is in ${\mathcal S}({\mathop {\mathbb H\kern 0pt}\nolimits}^d)$ (again, refer to the Appendix), \begin{equation} \label {FourierdiagDeltaHfond} {\mathcal F}_{{\mathop {\mathbb H\kern 0pt}\nolimits}}(\D_{\mathop {\mathbb H\kern 0pt}\nolimits} f) (\wh w) = -4|\lam|(2|m|+d) \wh f_{\mathop {\mathbb H\kern 0pt}\nolimits}(\wh w). \end{equation} Let us underline the similarity with Relation\refeq {diagDeltaRd} pertaining to the Fourier transform in ${\mathop{\mathbb R\kern 0pt}\nolimits}^n.$ \medbreak One of the basic principles of the Fourier transform on ${\mathop{\mathbb R\kern 0pt}\nolimits}^n$ is that `{\it regularity implies decay}'. It remains true in the Heisenberg framework, as stated in the following lemma.
\begin{lemma} \label {decaylambdan} {\sl For any nonnegative integer~$p$, there exist an integer~$N_p$ and a positive constant~$C_p$ such that for any~$\wh w$ in~$\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d$ and any~$f$ in~${\mathcal S}({\mathop {\mathbb H\kern 0pt}\nolimits}^d),$ we have \begin{equation} \label {eq:decay} \bigl(1+ |\lam|( |n| + |m|+ d) +|n-m| \bigr)^p |\wh f_{\mathop {\mathbb H\kern 0pt}\nolimits}(n,m,\lam)| \leq C_p \|f\|_{N_p,{\mathcal S}}, \end{equation} where~$\|\cdot\|_{N,{\mathcal S}}$ denotes the classical family of semi-norms of~${\mathcal S}({\mathop{\mathbb R\kern 0pt}\nolimits}^{2d+1})$, namely $$\|f\|_{N,{\mathcal S}}\buildrel\hbox{\footnotesize def}\over = \sup_{ |\al|\leq N} \bigl\|(1+|Y|^2+s^2)^{N/2}\,\partial_{Y,s}^\al f \bigr\|_{L^\infty}.$$ } \end{lemma} As may be easily checked by the reader, in our setting, there are very simple formulae corresponding to \refeq {inversionHclassical} and \refeq {FourierPlancherelHclassical}, if the set~$\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d$ is endowed with the measure $d\wh w$ defined by: \begin{equation} \label {definmeasurewhH} \int_{\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d} \theta (\wh w)\, d\wh w\buildrel\hbox{\footnotesize def}\over = \sum_{(n,m)\in \mathop{\mathbb N\kern 0pt}\nolimits^{2d}} \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}} \theta (n,m,\lam) |\lam|^d d\lam. \end{equation} Then Theorem\refer{RecallClassicla FourierH} may be recast as follows: \begin{theorem} \label {inverseFourier-Plancherel} {\sl Let~$f$ be a function in~${\mathcal S}({\mathop {\mathbb H\kern 0pt}\nolimits}^d)$. Then the following inversion formula holds true:\begin{equation} \label {MappingofPHdemoeq1} f(w) = \frac {2^{d-1}} {\pi^{d+1} } \int_{\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d} e^{is\lam} {\mathcal W}(\wh w, Y)\wh f_{\mathop {\mathbb H\kern 0pt}\nolimits} (\wh w) \, d\wh w. \end{equation} Moreover, for any function $f$ in~$L^1({\mathop {\mathbb H\kern 0pt}\nolimits}^d)\cap L^2({\mathop {\mathbb H\kern 0pt}\nolimits}^d),$ we have \begin{equation} \label {inverseFouriereq2} \|\wh f_{\mathop {\mathbb H\kern 0pt}\nolimits}\|_{L^2(\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d)}^2 = \frac {\pi^{d+1}} {2^{d-1}} \|f\|_{L^2({\mathop {\mathbb H\kern 0pt}\nolimits}^d)}^2. \end{equation} } \end{theorem} In this new setting, the convolution identity\refeq{FourierConvol} may be rewritten as follows for all integrable functions $f$ and $g$: \begin{multline} \label {newFourierconvoleq1} \cF_\H (f\star g) (n,m,\lam) = ( \wh f_{\mathop {\mathbb H\kern 0pt}\nolimits} \cdot \wh g_{\mathop {\mathbb H\kern 0pt}\nolimits})(n,m,\lam)\\ \quad\hbox{with}\quad ( \wh f_{\mathop {\mathbb H\kern 0pt}\nolimits} \cdot \wh g_{\mathop {\mathbb H\kern 0pt}\nolimits})(n,m,\lam) \buildrel\hbox{\footnotesize def}\over = \sum_{\ell\in \mathop{\mathbb N\kern 0pt}\nolimits^{d}} \wh f_{\mathop {\mathbb H\kern 0pt}\nolimits}(n,\ell,\lam)\wh g_{\mathop {\mathbb H\kern 0pt}\nolimits}(\ell,m,\lam). \end{multline} The reader is referred to the appendix for the proof.
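As a quick numerical sanity check of the Hermite relations \eqref{def:kro} and \eqref{relationsHHermiteD} on which the above formulae rest, one may run the following Python sketch (for $d=1$; the grid, the truncation order and the values of $\lam$ and $n$ are arbitrary choices of ours, and the snippet is of course not part of any proof):
\begin{verbatim}
import numpy as np

x, h = np.linspace(-25.0, 25.0, 4001, retstep=True)

def hermite_fns(nmax, x):
    """Normalized Hermite functions H_0, ..., H_nmax of the definition
    above (d = 1), via the stable three-term recurrence."""
    H = np.empty((nmax + 1, x.size))
    H[0] = np.pi**-0.25 * np.exp(-x**2 / 2)
    if nmax >= 1:
        H[1] = np.sqrt(2.0) * x * H[0]
    for n in range(1, nmax):
        H[n + 1] = (np.sqrt(2.0 / (n + 1)) * x * H[n]
                    - np.sqrt(n / (n + 1)) * H[n - 1])
    return H

# orthonormality (def:kro), by quadrature on the grid:
H = hermite_fns(12, x)
print(np.max(np.abs(H @ H.T * h - np.eye(13))))   # ~ 1e-15

# rescaled eigenrelation (relationsHHermiteD), by finite differences:
lam, n = 0.3, 7
Hnl = abs(lam)**0.25 * hermite_fns(n, np.sqrt(abs(lam)) * x)[n]
lhs = -np.gradient(np.gradient(Hnl, x), x) + lam**2 * x**2 * Hnl
print(np.max(np.abs(lhs - (2*n + 1) * abs(lam) * Hnl)))
# small; dominated by the finite-difference error
\end{verbatim}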
Next, we aim at endowing the set~$\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d$ with a structure of metric space. According to the decay inequality\refeq {eq:decay}, it is natural to introduce the following distance~$\wh d$: \begin{equation} \label {defindistancewtH} \wh d(\wh w,\wh w') \buildrel\hbox{\footnotesize def}\over = \bigl|\lam(n+m)-\lam'(n'+m')\bigr|_1 +\bigl |(n-m)-(n'-m')|_1+|\lam-\lam'|, \end{equation} where~$|\cdot|_1$ denotes the~$\ell^1$ norm on~${\mathop{\mathbb R\kern 0pt}\nolimits}^d$. \medbreak At first glance, the metric space~$(\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d,\wh d)$ seems to be the natural frequency space within our approach. However, it fails to be complete, which may be a source of difficulties for further development. We thus propose to work with its completion, which is described in the following proposition. \begin{proposition} \label {completionHtilde} {\sl The completion of the set~$\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d$ for the distance~$\wh d$ is the set~$\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d$ defined by $$ \wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d\buildrel\hbox{\footnotesize def}\over = \bigl(\mathop{\mathbb N\kern 0pt}\nolimits^{2d} \times{\mathop{\mathbb R\kern 0pt}\nolimits}\setminus\{0\}\bigr) \cup \wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0 \quad\hbox{with}\quad \wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0 \buildrel\hbox{\footnotesize def}\over = {{\mathop{\mathbb R\kern 0pt}\nolimits}_{\mp}^d}\times {\mathop{\mathbb Z\kern 0pt}\nolimits}^d \quad\hbox{and}\quad {{\mathop{\mathbb R\kern 0pt}\nolimits}_{\mp}^d}\buildrel\hbox{\footnotesize def}\over = (({\mathop{\mathbb R\kern 0pt}\nolimits}_-)^d\cup ({\mathop{\mathbb R\kern 0pt}\nolimits}_+)^d). $$ On $\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d,$ the extended distance (still denoted by~$\wh d$) is given by $$ \begin{aligned} &\wh d((n,m,\lam),(n',m',\lam')) = \bigl|\lam(n+m)-\lam'(n'+m')\bigr|_1 +\bigl |(m-n)-(m'-n')|_1+|\lam-\lam'| \\&\hspace{11.5cm} \hbox{if }\ \lam\not=0\ \hbox{ and }\ \lam'\not=0,\\ &\wh d\bigl ((n,m,\lam), (\dot x, k)\bigr ) = \wh d\bigl ((\dot x, k), (n,m,\lam) \bigr ) \buildrel\hbox{\footnotesize def}\over = |\lam(n+m)-\dot x|_1+ |m-n-k|_1+|\lam| \ \hbox{ if }\ \lam\not=0, \\ &\wh d\bigl ((\dot x,k), (\dot x', k')\bigr ) = |\dot x-\dot x'|_1+|k-k'|_1. \end{aligned}$$ } \end{proposition} \begin{proof} Consider a Cauchy sequence~$(n_p,m_p,\lam_p)_{p\in \mathop{\mathbb N\kern 0pt}\nolimits}$ in~$(\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d,\wh d)$. If~$p$ and~$p'$ are large enough, then~$|(m_p-n_p)-(m_{p'}-n_{p'})|$ is less than~$1$, and thus~$ m_p-n_p$ is eventually constant; we denote this constant by~$k$. Next, we see that~$\suite \lam p \mathop{\mathbb N\kern 0pt}\nolimits$ is a Cauchy sequence of real numbers, and thus converges to some $\lambda$ in ${\mathop{\mathbb R\kern 0pt}\nolimits}.$ If $\lambda\not=0$ then our definition of $\wh d$ implies that the sequence~$\suite n p \mathop{\mathbb N\kern 0pt}\nolimits$ is constant after a certain index, and thus converges to some $n$ in~$\mathop{\mathbb N\kern 0pt}\nolimits^d.$ Therefore we have~$(n_p,m_p,\lam_p)\to(n,n+k,\lam).$ If $\lam=0$ then the Cauchy sequence~$\bigl(\lam_p(n_p+m_p)\bigr)_{p\in \mathop{\mathbb N\kern 0pt}\nolimits}$ has to converge to some $\dot x$ in ${\mathop{\mathbb R\kern 0pt}\nolimits}^d.$ By definition of the extended distance, it is clear that~$(n_p,m_p, \lam_p)_{p\in \mathop{\mathbb N\kern 0pt}\nolimits}$ converges to~$(\dot x,k)$ in~$\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d$. Now, if~$\dot x\not=0$ then there exists some index~$j$ such that~$\dot x_j\not=0$.
Because the sequence~$\bigl (\lam_p(n_{j,p}+m_{j,p})\bigr)_{p\in \mathop{\mathbb N\kern 0pt}\nolimits}$ tends to~$\dot x_j$ and~$n_{j,p}+m_{j,p}$ is positive (for large enough $p$), we must have~$\mathop{\rm sgn}\nolimits(\lam_p)=\mathop{\rm sgn}\nolimits(\dot x_j).$ Therefore, all the components of~$\dot x$ have the same sign. \smallbreak Conversely, let us prove that any point of~${\mathop{\mathbb R\kern 0pt}\nolimits}_+^d\times {\mathop{\mathbb Z\kern 0pt}\nolimits}^d$ (the case of~${\mathop{\mathbb R\kern 0pt}\nolimits}_-^d\times {\mathop{\mathbb Z\kern 0pt}\nolimits}^d$ being similar) is the limit in the sense of~$\wh d$ of some sequence~$(n_p,m_p,\lam_p)_{p\in \mathop{\mathbb N\kern 0pt}\nolimits}. $ As~$\mathop{\mathbb Q\kern 0pt}\nolimits$ is dense in~${\mathop{\mathbb R\kern 0pt}\nolimits}$, there exist two families of sequences of positive integers~$(a_{j,p} )_{p\in \mathop{\mathbb N\kern 0pt}\nolimits}$ and~$(b_{j,p} )_{p\in \mathop{\mathbb N\kern 0pt}\nolimits}$ such that $$ \forall j \in \{1,\cdots,d\}\,,\ \dot x_j = \lim_{p\rightarrow \infty} \dot x_{j,p} \quad\hbox{with}\quad \dot x_{j,p} \buildrel\hbox{\footnotesize def}\over = \frac {a_{j,p} } {b_{j,p} } \quad\hbox{and}\quad \lim_{p\rightarrow \infty} b_{j,p} =+\infty. $$ Let us write that $$ \dot x_{p} = 2\lam_p n_p \quad\hbox{with}\quad \lam_p \buildrel\hbox{\footnotesize def}\over = \Bigl(2 \prod_{j=1}^d b_{j,p}\Bigr)^{-1} \,,\ n_p \buildrel\hbox{\footnotesize def}\over = \Bigl(a_{j,p} \prod_{j'\not = j} b_{j',p}\Bigr)_{1\leq j\leq d} \quad\hbox{and}\quad m_p\buildrel\hbox{\footnotesize def}\over = n_p+k. $$ As~$\suite \lam p \mathop{\mathbb N\kern 0pt}\nolimits$ tends to~$0$, we conclude that~$\wh d\bigl ((n_p, n_p+k,\lam_p), (\dot x,k)\bigr)$ tends to~$0$. \end{proof} \begin{remark} {\sl It is not difficult to check that the closed bounded subsets of~$\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d$ (for the distance~$\wh d$) are compact. The details are left to the reader.} \end{remark} The above proposition prompts us to extend the Fourier transform of an integrable function to the frequency set~$\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d,$ which will play the same role as~$({\mathop{\mathbb R\kern 0pt}\nolimits}^n)^\star$ in the case of~${\mathop{\mathbb R\kern 0pt}\nolimits}^n$. With this new point of view, we expect the Fourier transform of any integrable function to be continuous on the whole $\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d.$ This is exactly what is stated in the following theorem. \begin{theorem} \label {FourierL1basic} {\sl The Fourier transform $\wh f_{\mathop {\mathbb H\kern 0pt}\nolimits}$ of any integrable function on ${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ may be extended continuously to the whole set $\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d.$ Still denoting by $\wh f_{\mathop {\mathbb H\kern 0pt}\nolimits}$ (or ${\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits} f$) that extension, the linear map~${\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits}: f\mapsto \wh f_{\mathop {\mathbb H\kern 0pt}\nolimits}$ is continuous from the space $L^1({\mathop {\mathbb H\kern 0pt}\nolimits}^d)$ to the space ${\mathcal C}_0(\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d)$ of continuous functions on~$\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d$ tending to~$0$ at infinity.
Moreover, we have for all~$(\dot x,k)$ in~$\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0$, \begin{eqnarray} \label {FourierL1basiceq2} {\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits} f(\dot x,k) &= & \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d} \ov{\mathcal K}_d( \dot x ,k,Y) f(Y,s)\, dYds \quad\hbox{with}\quad\\ \nonumber {\mathcal K}_d (\dot x, k,Y) & = & \bigotimes_{j=1}^d {\mathcal K} (\dot x_j, k_j,Y_j)\quad\hbox{and}\quad \\ \label {FourierL1basiceq2b}{\mathcal K}(\dot x, k,y,\eta) &\buildrel\hbox{\footnotesize def}\over = & \frac1{2\pi} \!\int_{-\pi}^{\pi} e^{i \left(2|\dot x|^{\frac 12} (y\sin z + \eta \mathop{\rm sgn}\nolimits(\dot x) \cos z) +kz\right)}\, dz. \end{eqnarray} } \end{theorem} In other words, for any sequence~$(n_p, \lam_p)_{p\in \mathop{\mathbb N\kern 0pt}\nolimits}$ in~$\mathop{\mathbb N\kern 0pt}\nolimits^d\times ({\mathop{\mathbb R\kern 0pt}\nolimits}\setminus\{0\})$ such that $$ \lim_{p\rightarrow \infty } \lam_p=0\quad\hbox{and}\quad \lim_{p\rightarrow \infty } {\lam_p n_p} =\frac {\dot x} 2 \,\raise 2pt\hbox{,} $$ we have $$ \lim_{p\rightarrow \infty } \wh f_{\mathop {\mathbb H\kern 0pt}\nolimits} (n_p,n_p+k,\lam_p) = \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d} \ov{\mathcal K}_d( \dot x ,k,Y) f(Y,s)\, dYds . $$ With the above result at hand, one can propose a natural extension of the Fourier transform to (smooth) functions on ${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ \emph{independent of the vertical variable~$s.$} This will come up as a consequence of the following theorem. \begin{theorem} \label {Fourierhorizontal} {\sl Let us define the following operator ${\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits}$ on~$L^1(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)$: $$ {\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g (\dot x,k) \buildrel\hbox{\footnotesize def}\over = \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d} \ov{\mathcal K}_d( \dot x ,k,Y) g(Y)\, dY. $$ Then, for any function $\chi$ in~${\mathcal S}({\mathop{\mathbb R\kern 0pt}\nolimits})$ with value~$1$ at~$0$ and compactly supported Fourier transform, and any function~$g$ in~$L^1(T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d)$, we have \begin{equation}\label{eq:horizontal} \lim_{\e\rightarrow 0} {\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits}(g\otimes \chi(\e\cdot)) = 2\pi ({\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g) \mu_{\wh{\mathop {\mathbb H\kern 0pt}\nolimits}_0^d} \end{equation} in the sense of measures on~$\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d,$ where~$\mu_{\wh{\mathop {\mathbb H\kern 0pt}\nolimits}_0^d}$ is the measure (supported in $\wh {\mathop {\mathbb H\kern 0pt}\nolimits}_0^d$) defined for all continuous compactly supported functions~$\theta$ on~$\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d$ by \begin{equation} \label {limitmeasureeq1} \langle \mu_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0} ,\theta \rangle = \int_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0} \theta (\dot x,k) \,d\mu_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0}(\dot x,k) \eqdefa2^{-d}\sum_{k\in {\mathop{\mathbb Z\kern 0pt}\nolimits}^d} \biggl( \int_{({\mathop{\mathbb R\kern 0pt}\nolimits}_{-})^d} \theta(\dot x,k)\,d\dot x + \int_{({\mathop{\mathbb R\kern 0pt}\nolimits}_{+})^d} \theta(\dot x,k)\,d\dot x\biggr)\cdotp \end{equation} } \end{theorem} The above theorem allows us to give a meaning to the Fourier transform of a smooth function that does not depend on the vertical variable.
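Theorem \ref{FourierL1basic} can also be illustrated numerically. The Python sketch below (for $d=1$; every numerical parameter is an arbitrary choice of ours) computes ${\mathcal W}\bigl((n,n+k,\lam),Y\bigr)$ of \eqref{definWigner} by direct quadrature and compares it with the kernel \eqref{FourierL1basiceq2b}, letting $\lam\to0^+$ with $2\lam n=\dot x$ fixed. The printed gaps decrease roughly like $\sqrt\lam$, in accordance with the estimates established in Section\refer {ProofCFL1} below; needless to say, this sketch proves nothing by itself.
\begin{verbatim}
import numpy as np

def hermite_fns(nmax, x):
    """Normalized Hermite functions H_0, ..., H_nmax (stable recurrence)."""
    H = np.empty((nmax + 1, x.size))
    H[0] = np.pi**-0.25 * np.exp(-x**2 / 2)
    if nmax >= 1:
        H[1] = np.sqrt(2.0) * x * H[0]
    for j in range(1, nmax):
        H[j + 1] = (np.sqrt(2.0 / (j + 1)) * x * H[j]
                    - np.sqrt(j / (j + 1)) * H[j - 1])
    return H

def W(n, m, lam, y, eta, L=300.0, N=60001):
    """W((n,m,lam),(y,eta)) of (definWigner), by quadrature, with
    H_{n,lam}(x) = lam^{1/4} H_n(sqrt(lam) x)."""
    z = np.linspace(-L, L, N)
    Hn = hermite_fns(n, np.sqrt(lam) * (y + z))[n]
    Hm = hermite_fns(m, np.sqrt(lam) * (-y + z))[m]
    vals = np.exp(2j * lam * eta * z) * np.sqrt(lam) * Hn * Hm
    return np.sum(vals) * (z[1] - z[0])

def K(xdot, k, y, eta, N=4096):
    """Kernel (FourierL1basiceq2b), by the rectangle rule
    (spectrally accurate: the integrand is 2 pi periodic in z)."""
    z = np.linspace(-np.pi, np.pi, N, endpoint=False)
    phase = (2 * np.sqrt(abs(xdot))
             * (y * np.sin(z) + eta * np.sign(xdot) * np.cos(z)) + k * z)
    return np.mean(np.exp(1j * phase))

xdot, k, y, eta = 1.0, 1, 0.3, -0.2
for lam in (0.05, 0.025, 0.0125):
    n = int(round(xdot / (2 * lam)))   # chosen so that 2*lam*n = xdot exactly
    print(lam, abs(W(n, n + k, lam, y, eta) - K(xdot, k, y, eta)))
\end{verbatim}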
The next step would be to study whether our approach allows us, as in the Euclidean case, to extend the definition of the Fourier transform to a much larger set of functions, or even to tempered distributions. This requires a fine characterization of~${\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits}({\mathcal S}({\mathop {\mathbb H\kern 0pt}\nolimits}^d)),$ the range of~${\mathcal S}({\mathop {\mathbb H\kern 0pt}\nolimits}^d)$ under~${\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits},$ which will be the purpose of a forthcoming paper \cite{bcdh}. \medbreak We end this section with a short description of the structure of the rest of the paper, and of the main ideas of the proofs. Section\refer {ProofCFL1} is devoted to the proof of the first part of Theorem\refer {FourierL1basic}. It relies on the fact that the function~${\mathcal W}(\cdot,Y)$ is uniformly continuous (for the distance~$\wh d$) on bounded sets of~$\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d,$ and can thus be extended to the closure~$\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d$ of~$\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d$. Establishing that property requires using an explicit asymptotic expansion of~${\mathcal W}.$ Proving Theorem\refer {Fourierhorizontal} is the purpose of Section\refer {FourierHorizontal}. The two main ingredients are the following. First, we show that if~$\psi$ is an integrable function on~${\mathop{\mathbb R\kern 0pt}\nolimits}$ with integral~$1,$ then we have $$ \lim_{\e\rightarrow 0} \frac 1 \e \psi \Bigl(\frac \lam \e\Bigr) d\wh w= \mu_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0}, $$ in the sense of measures on $\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d.$ That is to say, for any continuous compactly supported function $\theta$ on $\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d,$ we have \begin{equation} \label {limsimplecoucheH_0} \lim_{\e\rightarrow 0} \int_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d} \frac 1 \e \psi \Bigl(\frac \lam \e\Bigr) \theta (\wh w)\, d\wh w = \langle \mu_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0} ,\theta_{|\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0} \rangle. \end{equation} Then by a density argument, the proof of Theorem\refer{Fourierhorizontal} reduces to the case when~$g$ is in ${\mathcal S}(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)$. \smallbreak Section\refer {proofFormulacK} is devoted to computing~${\mathcal K}$.
This will be based on the following properties (that we shall first establish): \begin{itemize} \item ${\mathcal K}(0,k,Y)=\delta_{0,k}$ for all $Y$ in $T^\star{\mathop{\mathbb R\kern 0pt}\nolimits};$\smallbreak \item The symmetry identities: \begin{equation}\label{eq:Ksym} \begin{aligned} &{\mathcal K}(\dot x,-k,-Y)=\overline{{\mathcal K}(\dot x,k,Y)},\qquad {\mathcal K}(-\dot x,-k,Y)=(-1)^k{\mathcal K}(\dot x,k,Y)\\ &\quad\hbox{and}\quad {\mathcal K}(-\dot x,k,Y)=\overline{{\mathcal K}(\dot x,k,Y)};\end{aligned} \end{equation} \item The identity \begin{equation} \label {eq:KLap} \D_{Y} {\mathcal K}(\dot x,k,Y) = -4|\dot x| {\mathcal K}(\dot x,k,Y); \end{equation} \item The relation \begin{equation} \label {eq:Kk} ik {\mathcal K} (\dot x,k,Y) = \bigl(\eta\partial_y{\mathcal K}(\dot x,k,Y) -y \partial_\eta {\mathcal K}(\dot x,k,Y)\bigr)\,\mathop{\rm sgn}\nolimits(\dot x); \end{equation} \item The convolution property \begin{equation} \label {Convollam=0} {\mathcal K}(\dot x,k,Y_1+Y_2) = \sum_{k'\in \mathop{\mathbb Z\kern 0pt}\nolimits} {\mathcal K}(\dot x,k-k',Y_1){\mathcal K}(\dot x,k',Y_2); \end{equation} \item And finally, the following relation for $\dot x > 0$, which comes from the study of~${\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits} (|Y|^2f)$: \begin{equation} \label {Y2FouriercK} |Y|^2{\mathcal K} + \dot x\partial_{\dot x} ^2 {\mathcal K} +\partial_{\dot x} {\mathcal K} -\frac {k^2} {4\dot x} {\mathcal K}=0. \end{equation} \end{itemize} Let us emphasize that proving\refeq {limsimplecoucheH_0} first is essential to rigorously justify\refeq {FourierL1basiceq2b}. \medbreak Finally, Section\refer {StutyofcGH} is devoted to the proof of an inversion formula involving Operator~${\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits}.$ Some basic properties of Hermite functions and of the Wigner transform of Hermite functions are recalled in the Appendix. There, we also prove the decay result stated in Lemma\refer{decaylambdan}. \section {The uniform continuity of the Fourier transform of an~$L^1$ function } \label {ProofCFL1} The key to the proof of Theorem \ref{FourierL1basic} is a refined study of the behavior of the functions~${\mathcal W}(\cdot,Y)$ defined by\refeq{definWigner} on the set~$\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d.$ Of course, particular attention will be paid to the neighborhood of $\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0.$ This is the aim of the following proposition. \begin{proposition} \label {ProofCFL1_Prop1} {\sl Let~$R_0$ be a positive real number, and let $$ {\mathcal B}(R_0)\buildrel\hbox{\footnotesize def}\over =\Bigl\{(n,m,\lambda) \in\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d,\ |\lambda|(|n+m|+d) +|n-m| \leq R_0\Bigr\} \times\Bigl\{ Y\in T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d,\ |Y|\leq R_0\Bigr\}\cdotp $$ The function~${\mathcal W}(\cdot,Y)$ restricted to ${\mathcal B}(R_0)$ is uniformly continuous with respect to~$\wh w,$ that is $$ \forall \e>0\,,\ \exists \al_\e>0\,,\ \forall (\wh w_j,Y)\in {\mathcal B}(R_0)\,, \ \wh d(\wh w_1,\wh w_2)<\al_\e\Longrightarrow \ \bigl | {\mathcal W}(\wh w_1,Y) -{\mathcal W}(\wh w_2,Y)\bigr| <\e.
$$ Furthermore, for any $(\dot x,k)$ in~$\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0,$ we have $$ \lim_{\wh w\to(\dot x,k)} {\mathcal W}(\wh w,Y)={\mathcal K}_d(\dot x,k,Y) $$ where the function ${\mathcal K}_d$ is defined on $\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0\times T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d$ by \begin{equation} \label {definPhaseFlambda=0} \begin{aligned} & {\mathcal K}_d(\dot x, k,Y) \buildrel\hbox{\footnotesize def}\over = \sum_{(\ell_1,\ell_2)\in\mathop{\mathbb N\kern 0pt}\nolimits^d\!\times\!\mathop{\mathbb N\kern 0pt}\nolimits^d}\frac {(i\eta)^{\ell_1}} {\ell_1! }\, \frac {y^{\ell_2}} {\ell_2!} \, F_{\ell_1,\ell_2} (k) \, (\mathop{\rm sgn}\nolimits\dot x)^{\ell_1} |\dot x|^{\frac{\ell_1+\ell_2}2} \quad\hbox{with}\quad \\ &F_{\ell_1,\ell_2} (k) \buildrel\hbox{\footnotesize def}\over = \sumetage{\ell_1'\leq \ell_1, \ell'_2\leq \ell_2} {k+\ell_1-2\ell'_1= \ell_2-2\ell'_2} (-1)^{\ell_2-\ell'_2} \begin{pmatrix} \ell_1 \\ \ell' _1\end{pmatrix} \begin{pmatrix} \ell_2 \\ \ell' _2\end{pmatrix}. \end{aligned} \end{equation} Above, $\mathop{\rm sgn}\nolimits \dot x$ designates the (common) sign of all components of $\dot x,$ and $|\dot x|\buildrel\hbox{\footnotesize def}\over =(|\dot x_1|,\cdots,|\dot x_d|).$} \end{proposition} \begin{proof} Let us first perform the change of variable~$z'= -y+z$ in\refeq{definWigner} so as to get \begin{equation} \label {ProofCFL1_Prop1demoeq1} \begin{aligned} {\mathcal W}(\wh w,Y) & = e^{2i\lam\langle \eta,y\rangle} \wt {\mathcal W}(\wh w,Y) \quad\hbox{with}\quad \\ \wt {\mathcal W}(\wh w,Y) & \buildrel\hbox{\footnotesize def}\over = \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}^d} e^{2i\lam\langle \eta,z'\rangle} H_{n,\lam} (2y+z') H_{m,\lam} (z') \,dz'. \end{aligned} \end{equation} Obviously, the uniform continuity of~${\mathcal W}$ reduces to that of~$\wt{\mathcal W}$. Moreover, as the integral defining~$\wt {\mathcal W}$ is a product of~$d$ integrals on~${\mathop{\mathbb R\kern 0pt}\nolimits}$ (of modulus bounded by $1$), it is enough to study the one-dimensional case. Let us start with the case where both $\wh w_1=(n_1,m_1,\lam_1)$ and $\wh w_2=(n_2,m_2,\lam_2)$ are relatively far away from~$\wh{\mathop {\mathbb H\kern 0pt}\nolimits}_0^1$. As we need only to consider the situation where $\wh w_1$ and $\wh w_2$ are close to one another, one may assume that $(n_1,m_1)=(n_2,m_2)=(n,m).$ Then we can write that \begin{equation} \label {ProofCFL1_Prop1demoeq2} \begin{aligned} \wt {\mathcal W}(\wh w_1,Y)- \wt {\mathcal W}(\wh w_2,Y)& = \int_{\mathop{\mathbb R\kern 0pt}\nolimits} \bigl( e^{2i\lam_1 \eta z} - e^{2i\lam_2 \eta z}\bigr) H_{n,\lam_1} (2y+z) H_{m,\lam_1} (z) \,dz\\ & {} + \int_{\mathop{\mathbb R\kern 0pt}\nolimits} e^{2i\lam_2 \eta z} \bigl( H_{n,\lam_1} (2y+z)- H_{n,\lam_2} (2y+z)\bigr) H_{m,\lam_1} (z) \,dz\\ & {} +\int_{\mathop{\mathbb R\kern 0pt}\nolimits} e^{2i\lam_2 \eta z} H_{n,\lam_2} (2y+z) \bigl( H_{m,\lam_1} (z)-H_{m,\lam_2} (z)\bigr) \,dz. \end{aligned} \end{equation} Clearly, we have \begin{equation}\label{eq:Wligne1} \biggl|\int_{\mathop{\mathbb R\kern 0pt}\nolimits} \bigl( e^{2i\lam_1 \eta z} - e^{2i\lam_2 \eta z}\bigr) H_{n,\lam_1} (2y+z) H_{m,\lam_1} (z) \,dz\biggr| \leq2|\lam_1|^{-\frac12} R_0|\lam_2-\lam_1|\,\|MH_m\|_{L^2}.
\end{equation} Next, let us study the continuity of the map $\lam \longmapsto H_{n, \lam}$ in the case $d=1.$ One may write \begin{eqnarray*} H_{n,\lam_1} (x) -H_{n,\lam_2} (x) & = & \bigl(|\lam_1|^{\frac 14} -|\lam_2|^{\frac 1 4}\bigr) H_n (|\lam_1|^{\frac 12} x) +|\lam_2|^{\frac 1 4}\bigl(H_n (|\lam_1|^{\frac 12}x) -H_n(|\lam_2|^{\frac 12}x) \bigr)\\ & = & \bigl(|\lam_1|^{\frac 14} -|\lam_2|^{\frac 1 4}\bigr) H_n (|\lam_1|^{\frac 12} x) \\ && \qquad{}+ |\lam_2|^{\frac 1 4}(|\lam_1|^{\frac 12} -|\lam_2|^{\frac 12}) \, x\!\int_0^1 H'_n\bigl(( |\lam_2|^{\frac 12} +t(|\lam_1|^{\frac 12} -|\lam_2|^{\frac 12} ))x\bigr)dt. \end{eqnarray*} If~$\bigl| |\lam_1|^{\frac 12} -|\lam_2|^{\frac 12}\bigr|\leq \displaystyle \frac 1 2 |\lam_2|^{\frac12},$ then the changes of variable $ x'=|\lam_1|^{\frac 12} x$ and $ x'= ( |\lam_2|^{\frac 12} +t(|\lam_1|^{\frac 12} -|\lam_2|^{\frac 12} ))x,$ respectively, together with the fact that the Hermite functions have~$L^2$ norms equal to~$1$ ensure that \begin{equation} \label {ProofCFL1_Prop1demoeq3} \|H_{n,\lam_1} -H_{n,\lam_2} \|_{L^2} \leq \frac {\bigl| |\lam_1|^{\frac 14} -|\lam_2|^{\frac 1 4}\bigr|} {|\lam_1|^{\frac 14}} + 4 \frac {\bigl| |\lam_1|^{\frac 12} -|\lam_2|^{\frac 12} \bigr|}{|\lam_2|^{\frac 12}} \,\|MH_n'\|_{L^2}. \end{equation} Using\refeq {relationsHHermiteCAb}, we get that $$ MH'_n = \frac 12 \bigl( \sqrt {n(n-1)}H_{n-2} -H_n-\sqrt { (n+1)(n+2)} H_{n+2}\bigr). $$ As the family of Hermite functions is an orthonormal basis of~$L^2$, one can write that $$ 4 \|MH_n'\|_{L^2}^2 = n(n-1) +1+(n+1)(n+2) = 2n^2+2n+3. $$ Combining with\refeq {ProofCFL1_Prop1demoeq2} and\refeq {ProofCFL1_Prop1demoeq3}, we conclude that if $\bigl| |\lam_1| -|\lam_2|\bigr|\leq \frac 1 2 \,|\lam_2|^{\frac12} \bigl( |\lam_1|^{\frac 12} +|\lam_2|^{\frac 12}\bigr)$ and $(|\lam_1|+|\lam_2|) |n+m|+|\lam_1|+|\lam_2|+|Y|\leq R_0$ then \begin{equation}\label {ProofCFL1_Prop1demoeq4} \bigl |\wt {\mathcal W}(\wh w_1,Y)- \wt {\mathcal W}(\wh w_2,Y)\bigr| \leq C(R_0)|\lam_1-\lam_2|\biggl(\frac1{|\lam_1|} +\frac 1{\lam_1^2}\biggr)\cdotp \end{equation} That estimate fails if the above condition on~$\bigl| |\lam_1| -|\lam_2|\bigr|$ is not satisfied. To overcome that difficulty, we need the following lemma. \begin{lemma} \label {Wignerconvnormally} {\sl The series $$ \sum_{\ell_1,\ell_2}(\mathop{\rm sgn}\nolimits \lam)^{\ell_1} |\lam|^{\frac {\ell_1+\ell_2} 2} \, \frac{(2i\eta)^{\ell_1}(2y)^{\ell_2}} {\ell_1! \ell_2!} \bigl(M^{\ell_1}H_{m} | \partial^{\ell_2}H_{n}\bigr)_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits}^d)} $$ converges normally to~$\wt {\mathcal W}$ on ${\mathcal B}(R_{0})$. } \end{lemma} \begin{proof} Again, as Hermite functions in dimension~$d$ are tensor products of one-dimensional Hermite functions, it is enough to prove the above lemma in dimension~$1$. Now, using the series expansion of the exponential function and the Lebesgue dominated convergence theorem, we get that for any fixed~$(\wh w,Y)$ in~$\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^1\times T^\star {\mathop{\mathbb R\kern 0pt}\nolimits},$ \begin{eqnarray} \nonumber \wt {\mathcal W}(\wh w,Y) & = & \sum_{\ell_1=0}^\infty \frac 1 {\ell_1!} (2i\lam\eta)^{\ell_1} \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}} H_{n,\lam} (2y+z) z^{\ell_1} H_{m,\lam} (z) \,dz\\ \label {Wignerconvnormallydemoeq2} & = & \sum_{\ell_1=0}^\infty (\mathop{\rm sgn}\nolimits\lambda)^{\ell_1} \frac{(2i\eta)^{\ell_1}}{\ell_1!} \, |\lam|^{\frac {\ell_1}2} \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}} H_{n,\lam} (2y+z) (M^{\ell_1} H_m)_\lam(z) \,dz.
\end{eqnarray} Let us prove that the series converges for the supremum norm on~${\mathcal B}(R_0)$. Clearly,\refeq {relationsHHermiteCAb} implies that for all integers $\ell\geq1$ and $m$ in~$\mathop{\mathbb N\kern 0pt}\nolimits,$ $$ \|(\sqrt2M)^\ell H_m\|_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits})}\leq\sqrt m\,\|(\sqrt2M)^{\ell-1} H_{m-1}\|_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits})} +\sqrt{m+1}\,\|(\sqrt2M)^{\ell-1} H_{m+1}\|_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits})}, $$ which, by an obvious induction yields for all $(\ell_1,m)$ in~$\mathop{\mathbb N\kern 0pt}\nolimits^2,$ \begin{equation} \label {Wignerconvnormallydemoeq3} \|M^{\ell_1}H_{m}\|_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits})} \leq (2m+2\ell_1)^{\frac{\ell_1}2}. \end{equation} Hence the generic term of the series of\refeq {Wignerconvnormallydemoeq2} can be bounded by: $$ W_{\ell_1} (\wh w) \buildrel\hbox{\footnotesize def}\over = \frac { (2\sqrt 2R_0)^{\ell_1} } {\ell_1!} |\lam|^{\frac {\ell_1} 2} (m+\ell_1)^{\frac{\ell_1}2}. $$ Let us observe that, because~$|\lam| m$ and~$|\lam|$ are less than~$R_0$, we have \begin{eqnarray*} \frac { W_{\ell_1+1} (\wh w) } {W_{\ell_1} (\wh w) } &= & \frac{2\sqrt2 R_0}{\ell_1+1} \sqrt{|\lam|(m+\ell_1+1)}\biggl(1+\frac{1}{m+\ell_1}\biggr)^{\frac{\ell_1}2} \\ & \leq & \frac {2\sqrt{2e}\,R_0 } {\ell_1+1} \sqrt {R_0} \bigl( 1 +\sqrt {\ell_1+1}\bigr). \end{eqnarray*} This implies that the series converges with respect to the supremum norm on~${\mathcal B}(R_0)$. \smallbreak Next, for fixed~$\ell_1,$ we want to expand $$ |\lam|^{\frac {\ell_1}2}\int_{{\mathop{\mathbb R\kern 0pt}\nolimits}} H_{n,\lam} (2y+z) (M^{\ell_1} H_m)_\lam(z) \,dz $$ as a series with respect to the variable $y.$ To this end, we just have to expand the real analytic Hermite functions as follows: $$ H_{n,\lam} (z+2y) = \sum_{\ell_2=0} ^\infty \frac { (2y)^{\ell_2} } {\ell_2! } |\lam|^{\frac {\ell_2} 2} ( H_n^{(\ell_2)})_\lam(z). $$ Then we have to study (for fixed~$\ell_1$) the convergence of the series with general term, $$ W_{\ell_1,\ell_2}(\wh w,Y) \buildrel\hbox{\footnotesize def}\over = \frac { (2y)^{\ell_2} } {\ell_2! } |\lam|^{\frac {\ell_2} 2} \bigl(H_n^{(\ell_2)} | M^{\ell_1} H_m\bigr)_{L^2}. $$ Using again\refeq {relationsHHermiteCAb}, we see that $$ \|H_{n}^{(\ell_2)}\|_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits})} \leq (2n+2\ell_2)^{\frac{\ell_2}2}. $$ Hence, arguing as above, we get for any~$(\wh w,Y)$ in~${\mathcal B}(R_0)$, $$ W_{\ell_1,\ell_2}(\wh w,Y) \leq 2^{\frac {\ell_1} 2} (m\!+\!\ell_1)^{\frac {\ell_1} 2} \wt W_{\ell_2}(\wh w,Y) \quad\hbox{with}\quad \wt W_{\ell_2}(\wh w,Y) \buildrel\hbox{\footnotesize def}\over = \frac { (2\sqrt 2R_0)^{\ell_2} } {\ell_2! } |\lam|^{\frac {\ell_2} 2} (n\!+\!\ell_2)^{\frac{\ell_2}2}, $$ and it is now easy to complete the proof of the lemma. \end{proof} \medbreak Reverting to the proof of the continuity of $\wt{\mathcal W}$ in the neighborhood of $\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0,$ the problem now consists in investigating the behavior of the function $$ {\mathcal H}_{\ell_1,\ell_2}:\quad \left\{ \begin{array} {rcl} \wt {\mathop {\mathbb H\kern 0pt}\nolimits}^1 & \longrightarrow &{\mathop{\mathbb R\kern 0pt}\nolimits}\\ \wh w=(n,m,\lam) & \longmapsto & |\lam|^{\frac{\ell_1+\ell_2}2 } \, \bigl(M^{\ell_1}H_{m} |H_{n}^{(\ell_2)}\bigr)_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits})} \end{array} \right. 
$$ when~$\lam$ tends to~$0$ and $\lam(n+m)\to\dot x$ for fixed $k\buildrel\hbox{\footnotesize def}\over = m-n.$ \medbreak {}From Relations \eqref{Mjdj}, we infer that $$ {\mathcal H}_{\ell_1,\ell_2}(\wh w) = 2^{-(\ell_1+\ell_2)}\, |\lam|^{\frac{\ell_1+\ell_2}2 } \bigl((A+C)^{\ell_1}H_{m} | (A-C)^{\ell_2}H_{n} \bigr)_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits})}. $$ The explicit computation of~$(A\pm C)^{\ell}$ is doable but tedious and fortunately turns out to be useless when~$\lam$ tends to~$0$. Indeed, we have the following lemma: \begin{lemma} \label {ProofCFL1lemma2} {\sl A constant~$C_\ell(R_0)$ (depending only on~$R_0$ and~$\ell$) exists such that, for any~$(n,\lam)$ with $\lam>0$ and~$\lam n \leq R_0$, we have (with the convention that $H_p=0$ if $p<0$): $$ \Bigl \| \lam^{\frac \ell 2 } \Bigl(\frac{A \pm C}2\Bigr)^{\ell } H_n - \Bigl(\frac{\lam n}2\Bigr)^{\frac \ell 2} \sum_{\ell'=0} ^\ell (\pm1)^{\ell-\ell'} \begin{pmatrix} \ell \\ \ell' \end{pmatrix} H_{n+\ell-2\ell'} \Bigr\|_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits})} \leq C_\ell(R_0)\lam^{\frac 12}. $$ } \end{lemma} \begin{proof} Let~${\mathcal V}_{n,\ell}$ be the vector space generated by~$(H_{n+\ell'})_{-\ell\leq \ell'\leq \ell},$ equipped with the~$L^2({\mathop{\mathbb R\kern 0pt}\nolimits})$-norm. Let $$ R_{n,\ell} \buildrel\hbox{\footnotesize def}\over = \lam^{\frac \ell 2 } (A \pm C)^{\ell } H_n - (2\lam n)^{\frac \ell 2} \sum_{\ell'=0} ^\ell (\pm 1)^{\ell-\ell'} \begin{pmatrix} \ell \\ \ell' \end{pmatrix} H_{n+\ell-2\ell'}. $$ Formulae \eqref{relationsHHermiteCA} guarantee that $R_{n,\ell}$ is in~${\mathcal V}_{n,\ell}.$ Let us now prove by induction on $\ell$ that \begin{equation} \label {ProofCFL1lemma2demoeq1} \|R_{n,\ell}\|_{{\mathcal V}_{n,\ell}} \leq C_\ell(R_0)\lam^{\frac 12}. \end{equation} In the case when~$\ell$ equals~$1$, by definition of~$A$ and~$C$, we have \begin{eqnarray*} \lam^{\frac 12} (A\pm C) H_n &= & \lam^{\frac 12} \bigl( \sqrt {2n} H_{n-1} \pm \sqrt {2n+2} H_{n+1} \bigr)\\ & =& \sqrt {2\lam n} (H_{n-1} \pm H_{n+1}) \pm \frac {2\sqrt \lam} {\sqrt {2n+2} +\sqrt {2n}} H_{n+1} \end{eqnarray*} and\refeq {ProofCFL1lemma2demoeq1} is thus obvious. \smallbreak Let us now observe that, for any~$\ell'$ in~$\{-\ell,\cdots ,\ell\}$, we have $$\begin{aligned} \lam^{\frac 12} \|A H_{n+\ell'} \|_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits})} &= \sqrt {2\lam(n+\ell')}\, \|H_{n+\ell'-1}\|_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits})} \quad\hbox{and}\quad\\ \lam^{\frac 12} \|C H_{n+\ell'} \|_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits})} & = \sqrt {2\lam(n+\ell'+1)} \, \|H_{n+\ell'+1}\|_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits})}. \end{aligned} $$ This gives that for all $\lam(n+1)\leq R_0,$ \begin{equation} \label {ProofCFL1lemma2demoeq2} \bigl \| \lam^{\frac 12} (A\pm C)\bigr\|_{{\mathcal L}({\mathcal V}_{n,\ell} ; {\mathcal V}_{n,\ell+1})} \leq C_\ell(R_0). \end{equation} Let us assume that\refeq {ProofCFL1lemma2demoeq1} holds for some~$\ell$. Inequality\refeq {ProofCFL1lemma2demoeq2} implies that \begin{equation} \label {ProofCFL1lemma2demoeq3} \bigl \| \lam^{\frac 12} (A\pm C)R_{n,\ell} \bigr\|_{{\mathcal V}_{n,\ell+1}} \leq \lam^{\frac 12}C_\ell(R_0). 
\end{equation} Then, for any~$\ell'$ in~$\{0,\cdots,\ell\}$, we have $$ \begin{aligned} \lam^{\frac 12} (A\pm C) H_{n+\ell-2\ell'} &= \lam^{\frac 12} \bigl( \sqrt {2n+2\ell-4\ell'}\, H_{n+\ell-2\ell'-1} \pm \sqrt {2n+2\ell-4\ell'+2}\, H_{n+\ell-2\ell'+1} \bigr)\\ & = \sqrt {2\lam n} \bigl(H_{n+\ell+1-2(\ell'+1)} \pm H_{n+\ell+1-2\ell'} \bigr) \\ &\hspace{-2cm}+\frac {2\lam^{\frac 12}(\ell-2\ell')} {\sqrt {2n+2\ell-4\ell'}+ \sqrt {2n}}H_{n+\ell-2\ell'-1} \pm \frac {2\lam^{\frac 12}(\ell-2\ell'+1)} {\sqrt {2n+2\ell-4\ell'+2}+ \sqrt {2n}}H_{n+\ell-2\ell'+1}\,\cdotp \end{aligned} $$ We deduce that for any~$\ell'$ in~$\{0, \cdots, \ell\}$, $$ \bigl\| \lam^{\frac 12} (A\pm C) H_{n+\ell-2\ell'} - \sqrt {2\lam n} \bigl(H_{n+\ell+1-2(\ell'+1)} \pm H_{n+\ell+1-2\ell'} \bigr)\bigr\|_{{\mathcal V}_{n,\ell+1} } \leq C_{\ell+1} (R_0)\lam^{\frac 12} . $$ Using\refeq{ProofCFL1lemma2demoeq3} gives $$\displaylines{\quad \bigl \| \lam^{\frac {\ell+1} 2 } (A \pm C)^{\ell +1 } H_n - (2\lam n)^{\frac { \ell+1} 2} \Sigma_{n,\ell} \bigr\|_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits})} \leq C_{\ell+1} (R_0)\lam^{\frac 12}\hfill\cr\hfill \quad\hbox{with}\quad \Sigma_{n,\ell} \buildrel\hbox{\footnotesize def}\over = \sum_{\ell'=0} ^\ell (\pm1)^{\ell-\ell'} \begin{pmatrix} \ell \\ \ell' \end{pmatrix} \bigl(H_{n+\ell+1-2(\ell'+1)} \pm H_{n+\ell+1-2\ell'} \bigr).\quad} $$ Now, Pascal's rule ensures that $$\begin{aligned} \Sigma_{n,\ell} = & \sum_{\ell'=1} ^{\ell+1} (\pm1)^{\ell+1-\ell'} \begin{pmatrix} \ell \\ \ell'-1 \end{pmatrix} H_{n+\ell+1-2\ell'} +\sum_{\ell'=0} ^\ell (\pm1)^{\ell+1-\ell'} \begin{pmatrix} \ell \\ \ell' \end{pmatrix} H_{n+\ell+1-2\ell'} \\ = & \sum_{\ell'=0} ^{\ell+1} (\pm1)^{\ell+1-\ell'} \begin{pmatrix} \ell+1 \\ \ell' \end{pmatrix} H_{n+\ell+1-2\ell'}. \end{aligned}$$ The lemma is proved. \end{proof} \medbreak From this lemma, we can deduce the following corollary. \begin{cor} \label {ProofCFL1_Coroll1} {\sl For any $(\ell_1,\ell_2)$ in $\mathop{\mathbb N\kern 0pt}\nolimits^2$ and $R_0>0,$ there exists a constant~$C_{\ell_1,\ell_2} (R_0)$ such that for all~$(n,n+k,\lam)$ in $\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^1$ with~$|\lam n| +|k|+ |\lam|\leq R_0$, we have $$\displaylines{\qquad \Bigl|{\mathcal H}_{\ell_1,\ell_2}(\wh w) - F_{\ell_1,\ell_2} (k) \Bigl(\frac{|\lam| n}2\Bigr)^{\frac {\ell_1+\ell_2}2}\Bigr| \leq C_{\ell_1,\ell_2}(R_0) |\lam| ^{\frac 12}\hfill\cr\hfill \quad\hbox{with}\quad F_{\ell_1,\ell_2} (k) \buildrel\hbox{\footnotesize def}\over = \sumetage{\ell_1'\leq \ell_1, \ell'_2\leq \ell_2} {k+\ell_1-2\ell'_1= \ell_2-2\ell'_2} (-1)^{\ell_2-\ell'_2} \begin{pmatrix} \ell_1 \\ \ell' _1\end{pmatrix} \begin{pmatrix} \ell_2 \\ \ell' _2\end{pmatrix}\cdotp\qquad}$$ } \end{cor} \begin{proof} Lemma\refer {ProofCFL1lemma2} implies that $$ \displaylines{ \Bigl|{\mathcal H}_{\ell_1,\ell_2}(\wh w) - \Bigl(\frac{|\lam| n}2\Bigr)^{\frac {\ell_2}2} \Bigl(\frac{|\lam| (n+k)}2\Bigr)^{\frac { \ell_1} 2} \sumetage{\ell_1'\leq \ell_1} {\ell'_2\leq \ell_2} (-1)^{\ell_2-\ell'_2} \begin{pmatrix} \ell_1 \\ \ell' _1\end{pmatrix} \begin{pmatrix} \ell_2 \\ \ell' _2\end{pmatrix} \bigl(H_{n+k+\ell_1-2\ell'_1} |H_{n+\ell_2-2\ell'_2}\bigr )_{L^2}\Bigr|\cr {} \leq C_{\ell_1,\ell_2}(R_0) |\lam| ^{\frac 12}. } $$ Now, let us notice that $$ (|\lam|(n+k))^{\frac{\ell_1}2}-(|\lam|n)^{\frac{\ell_1}2}=\frac{|\lam| k}{\sqrt{|\lam| n}+\sqrt{|\lam|(n+k)}} \sum_{\ell_1'=0}^{\ell_1-1}\sqrt{|\lam| n}^{\ell_1'}\sqrt{|\lam|(n+k)}^{\ell_1-1-\ell_1'}.
$$ Hence it is clear that for fixed~$k$ in~${\mathop{\mathbb Z\kern 0pt}\nolimits}$ such that $|k|\leq R_0$, we have, for~$|\lam|\leq R_0$ and~$|n\lam|\leq R_0$, $$ \bigl | (|\lam| n)^{\frac {\ell_2}2} (|\lam| (n+k))^{\frac { \ell_1} 2} - |\lam n|^{\frac {\ell_1+\ell_2}2} \bigr| \leq C_{\ell_1,\ell_2} (R_0) |\lam|^{\frac 12}. $$ Thanks to \eqref{def:kro}, we conclude the proof. \end{proof} \medbreak \noindent {\it Conclusion of the proof of Proposition\refer {ProofCFL1_Prop1}.} Consider a positive real number~$\e$. Recall that $$ \wt{\mathcal W}(\wh w,Y)=\sum_{\ell_1,\ell_2}(\mathop{\rm sgn}\nolimits\lambda)^{\ell_1} \frac{(2i\eta)^{\ell_1}(2y)^{\ell_2}}{\ell_1!\ell_2!} {\mathcal H}_{\ell_1,\ell_2}(\wh w). $$ Clearly, it suffices to prove the uniform continuity of $\wt{\mathcal W}$ on each subset of $\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d$ corresponding to some \emph{fixed} value $k$ of $m-n.$ Now, considering $\wh w_1=(n_1,n_1+k,\lam_1)$ and $\wh w_2=(n_2,n_2+k,\lam_2),$ Lemma\refer {Wignerconvnormally} implies that for all $\ep>0,$ there exist two integers~$L_ {1,\e}$ and $L_{2,\e}$ such that \begin{equation} \label {ProofCFL1_Prop1demoeq6} \begin{aligned} &\bigl |\wt {\mathcal W}(\wh w_1,Y)- \wt {\mathcal W}(\wh w_2,Y)\bigr| \leq \frac \e 4 +\sumetage {\ell_1\leq L_{1,\e}} {\ell_2\leq L_{2,\e}} \frac {(2|\eta|)^{\ell_1}(2|y|)^{\ell_2}} {\ell_1!\ell_2!} \\ &\qquad{} \times \bigl | (\mathop{\rm sgn}\nolimits \lam_1)^{\ell_1} {\mathcal H}_{\ell_1,\ell_2} (n_1,n_1+k,\lam_1) - (\mathop{\rm sgn}\nolimits \lam_2)^{\ell_1} {\mathcal H}_{\ell_1,\ell_2} (n_2,n_2+k,\lam_2) \bigr| \,. \end{aligned} \end{equation} Let~$C_\e(R_0)$ be the supremum for~$\ell_1\leq L_{1,\e}$ and $\ell_2\leq L_{2,\e}$ of all constants~$C_{\ell_1,\ell_2} (R_0)$ which appear in Corollary\refer {ProofCFL1_Coroll1}. Then we have \begin{equation} \label {ProofCFL1_Prop1demoeq6b} \begin{aligned} &|\lam_1|\!+\!|\lam_2|\leq A(\e,R_0) \Longrightarrow \bigl |\wt {\mathcal W}(\wh w_1,Y)- \wt {\mathcal W}(\wh w_2,Y)\bigr| \leq \frac \e 2 +\!\!\!\sumetage {\ell_1\leq L_{1,\e}} {\ell_2\leq L_{2,\e}} \!\!\!\!\frac {(2\,R_0)^{\ell_1+\ell_2}} {\ell_1!\ell_2!}|F_{\ell_1,\ell_2} (k) | \\ &\qquad\qquad\qquad\qquad\qquad\qquad{} \times \Bigl |(\mathop{\rm sgn}\nolimits \lam_1)^{\ell_1} \Big|\frac{\lam_1 n_1}2\Big|^{\frac {\ell_1+\ell_2}2} - (\mathop{\rm sgn}\nolimits \lam_2)^{\ell_1}\Big|\frac{\lam_2 n_2}2\Big|^{\frac {\ell_1+\ell_2}2} \Bigr|\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \quad\hbox{with}\quad A(\e,R_0)\buildrel\hbox{\footnotesize def}\over = \displaystyle \frac {e^{-8R_0} \e^2} {32 C_\e^2(R_0)}\,\cdotp \end{aligned} \end{equation} If~$\ell_1+\ell_2=0$ then the last term of the above inequality is~$0$. If~$\ell_1+\ell_2$ is positive, as~$|F_{\ell_1,\ell_2} (k)|$ is less than~$2^{\ell_1+\ell_2}$, we have, using\refeq {ProofCFL1_Prop1demoeq6b}, \begin{equation} \label {ProofCFL1_Prop1demoeq7} \begin{aligned} & |\lam_1|+|\lam_2|\leq A(\e,R_0) \quad\hbox{and}\quad |\lam_1 n_1|+|\lam_2 n_2| \leq \frac 1{16} \e^2 e^{-8R_0}\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \Longrightarrow \bigl |\wt {\mathcal W}(\wh w_1,Y)- \wt {\mathcal W}(\wh w_2,Y)\bigr| \leq \e. \end{aligned} \end{equation} If, instead, $|\lam_1 n_1|+|\lam_2 n_2|$ is greater than~$\displaystyle \frac 1{16}\e^2 e^{-8R_0}$ and $$ \bigl| \lam_1n_1-\lam_2 n_2\bigr| \leq \frac 1{32}\e^2 e^{-8R_0}, $$ then~$\lam_1$ and~$\lam_2$ have the same sign.
The sum on the right-hand side of\refeq {ProofCFL1_Prop1demoeq6b} is finite, and it is clear that each of its terms converges uniformly to $0$ as $\lambda_2n_2$ tends to $\lambda_1 n_1.$ Thus a positive real number~$\eta_\e$ exists such that \begin{equation} \label {ProofCFL1_Prop1demoeq8} |\lam_1|+|\lam_2|\leq A(\e,R_0) \quad\hbox{and}\quad |\lam_1n_1-\lam_2n_2| \leq \eta_\e \Longrightarrow \bigl |\wt {\mathcal W}(\wh w_1,Y)- \wt {\mathcal W}(\wh w_2,Y)\bigr| \leq \e. \end{equation} Finally, we have to consider the case where~$|\lam_1|+|\lam_2|\geq A(\e,R_0)$. With no loss of generality, one can assume that~$|\lam_2| \geq \displaystyle \frac 12 A(\e,R_0)$. Thus, if~$|\lam_1-\lam_2|$ is less than~$\displaystyle \frac 14 A(\e,R_0)$ we have~$|\lam_1| \geq \displaystyle \frac 14 A(\e,R_0)$ and we can apply Inequality\refeq {ProofCFL1_Prop1demoeq4} which gives (supposing that $A(\ep,R_0)\leq1$): $$ \bigl |\wt {\mathcal W}(\wh w_1,Y)- \wt {\mathcal W}(\wh w_2,Y)\bigr| \leq 2C(R_0) \Bigl( \frac 1 4 A(\e,R_0)\Bigr)^{-2} |\lam_1-\lam_2|. $$ Together with\refeq {ProofCFL1_Prop1demoeq7}, this gives, if~$(n_j,m_j,\lam_j)$ are in~${\mathcal B}(R_0)$, $$ |\lam_1-\lam_2| \leq \frac {\e A^2(\e,R_0)} {32C(R_0)} \quad\hbox{and}\quad |\lam_1n_1-\lam_2 n_2| <\eta_\e \Longrightarrow \bigl |\wt {\mathcal W}(\wh w_1,Y)- \wt {\mathcal W}(\wh w_2,Y)\bigr| <\e. $$ The proposition is proved. \end{proof} \medbreak \begin{proof} [End of the proof of the first part of Theorem\refer {FourierL1basic}] Because of the integrability of~$f$, Proposition\refer {ProofCFL1_Prop1} implies that~$\wh f_{\mathop {\mathbb H\kern 0pt}\nolimits}$ is uniformly continuous on~$\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d,$ and can thus be extended to a uniformly continuous function on the complete metric space~$\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d.$ Let us finally establish that $\wh f_{\mathop {\mathbb H\kern 0pt}\nolimits}(\wh w) \to0$ when $\wh w$ goes to infinity. In the case where $f$ is in ${\mathcal S}({\mathop {\mathbb H\kern 0pt}\nolimits}^d),$ this is an obvious consequence of Lemma\refer {decaylambdan}. The general case of an integrable function on ${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ follows by density as, obviously, Formula \eqref{definFourierWigner} implies that the map $f\mapsto \wh f_{\mathop {\mathbb H\kern 0pt}\nolimits}$ is continuous from $L^1({\mathop {\mathbb H\kern 0pt}\nolimits}^d)$ to $L^\infty(\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d).$ \end{proof} \medbreak We are now ready to establish Formula\refeq {FourierL1basiceq2} for any integrable function $f$ on ${\mathop {\mathbb H\kern 0pt}\nolimits}^d.$ So let us fix some~$(\dot x,k)$ in~$\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0$, and consider a sequence~$\suite {\wh w } p \mathop{\mathbb N\kern 0pt}\nolimits = (n_p,n_p+k,\lam_p)_{p\in \mathop{\mathbb N\kern 0pt}\nolimits}$ such that $$ \lim_{p\rightarrow\infty} \wh w_p = (\dot x, k) \ \hbox{in the sense of~$\wh d$}.
$$ According to Proposition\refer {ProofCFL1_Prop1}, if we set $$ {\mathcal K}_d(\dot x, k,Y) \buildrel\hbox{\footnotesize def}\over = \lim_{p\rightarrow\infty} {\mathcal W}(\wh w_p,Y) $$ then the definition of~$\wh f_{\mathop {\mathbb H\kern 0pt}\nolimits}$ on~$\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d$ and the Lebesgue dominated convergence theorem imply that $$\wh f_{\mathop {\mathbb H\kern 0pt}\nolimits} (\dot x, k) = \lim_{p\rightarrow\infty}\wh f_{\mathop {\mathbb H\kern 0pt}\nolimits} ( \wh w_p) =\int_{{\mathop {\mathbb H\kern 0pt}\nolimits}^d} \ov{\mathcal K}_d (\dot x, k,Y) f(Y,s) \,dY\,ds\,.$$ Now, Lemma\refer {Wignerconvnormally} gives $$\displaylines{\quad {\mathcal K}_d (\dot x, k,Y) = \sum_{\ell_1,\ell_2} \frac {(2i\eta)^{\ell_1}} {\ell_1! } \frac {(2y)^{\ell_2}} {\ell_2!} \lim_{p\rightarrow\infty} {\mathcal H}_{\ell_1,\ell_2} (\wh w_p)(\mathop{\rm sgn}\nolimits\lam_p)^{\ell_1}\hfill\cr\hfill\quad\quad\hbox{with}\quad {\mathcal H}_{\ell_1,\ell_2}(\wh w)\buildrel\hbox{\footnotesize def}\over =|\lam|^{\frac{\ell_1+\ell_2}2} \bigl(M^{\ell_1} H_m|\partial^{\ell_2} H_n\bigr)_{L^2({\mathop{\mathbb R\kern 0pt}\nolimits}^d)}.} $$ If $d=1$ then Corollary\refer {ProofCFL1_Coroll1} implies that $$ \lim_{p\rightarrow\infty} {\mathcal H}_{\ell_1,\ell_2} (\wh w_p) = F_{\ell_1,\ell_2} (k) \,\biggl(\frac{|\dot x|}4\biggr)^{\frac{\ell_1+\ell_2}2} $$ and, because $\mathop{\rm sgn}\nolimits(\lambda_p)=\mathop{\rm sgn}\nolimits \dot x$ for large enough $p,$ this guarantees \refeq {FourierL1basiceq2} and Formula \eqref {definPhaseFlambda=0}. \smallbreak Once again, as in general dimension $d\geq1$ the term ${\mathcal H}_{\ell_1,\ell_2}$ may be written as the product of $d$ terms involving only one-dimensional Hermite functions, the above formula still holds true (with the notation convention given in Proposition\refer{ProofCFL1_Prop1} of course). This concludes the proof of the first part of Theorem\refer {FourierL1basic} and of Identity\refeq {FourierL1basiceq2}. \qed \begin{remark}\label{rk:K0}{\sl The computation of~${\mathcal K}_d$ will be carried out later on, in Section\refer{proofFormulacK}. For the time being, let us just underline that the expression of $F_{\ell_1,\ell_2}(k)$ which appears in\refeq{definPhaseFlambda=0} ensures that $F_{0,0}(k)=\delta_{0,k}.$ We thus have \begin{equation} \label {eq:K0} {\mathcal K}_d(\dot x, k,0) = {\mathcal K}_d(0, k,Y) =F_{0,0}(k)=\delta_{0,k}. \end{equation} Let us also notice that, denoting by~$\wh 0$ the point~$(0,0)$ of~$\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0$, we recover the following property: \begin{equation} \label {Fourier0=int} \wh f_{\mathop {\mathbb H\kern 0pt}\nolimits} (\wh 0) =\int_{{\mathop {\mathbb H\kern 0pt}\nolimits}^d} f(w)\, dw. \end{equation} } \end{remark} \section {The case of functions that do not depend on the vertical variable} \label {FourierHorizontal} The purpose of this section is to prove Theorem\refer {Fourierhorizontal}. As already pointed out in the introduction, a key issue is to study the limit (in the sense of weak convergence of measures) of functions which concentrate near the set~$\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0$. This is the aim of the following lemma.
\begin{lemma} \label {convergesimplecouchH_0} {\sl Let $\wh\chi:{\mathop{\mathbb R\kern 0pt}\nolimits}\to{\mathop{\mathbb R\kern 0pt}\nolimits}$ be integrable, compactly supported and with integral $1.$ Then for any continuous function $\theta$ from~$\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d$ to~$\mathop{\mathbb C\kern 0pt}\nolimits$ satisfying \begin{equation}\label{eq:condtheta} \sup_{(n,m,\lam)\in\wt{\mathop {\mathbb H\kern 0pt}\nolimits}^d}\bigl( 1+|\lam|( |n+m|+d) +|n-m|\bigr) ^{2d+1} | \theta (n,m,\lam)| <\infty, \end{equation} we have $$ \lim_{\ep\to0} \int_{\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d} \ep^{-1}\wh\chi(\ep^{-1}\lam) \theta(n,m,\lam)\,d\wh w = \langle \mu_{\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0},\theta\rangle $$ where the measure in the right-hand side has been defined in \eqref{limitmeasureeq1}. } \end{lemma} \begin{proof} Let us first prove the result if the function $\theta$ is supported in the closure of $$ {\mathcal B}_K\buildrel\hbox{\footnotesize def}\over =\bigl\{(n,m,\lambda)\in \wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d\,:\, |\lam|(2|n|+d)\leq K\ \hbox{ and }\ |m-n|\leq K\bigr\} $$ for some positive~$K$. Then we have \begin{eqnarray*} {\mathcal I}_\e &\buildrel\hbox{\footnotesize def}\over = & \int_{\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d} \ep^{-1}\wh\chi(\ep^{-1}\lam) \theta(n,m,\lam)\,d\wh w= \sum_{|k|\leq K}\bigl({\mathcal I}_\e^-(k)+ {\mathcal I}_\e^+(k)\bigr)\quad\hbox{with}\quad\\ {\mathcal I}_\ep^\pm(k) & \buildrel\hbox{\footnotesize def}\over = &\int_{{\mathop{\mathbb R\kern 0pt}\nolimits}_{\pm}} \ep^{-1}\wh \chi(\ep^{-1}\lam) \biggl(\sum_{n\in\mathop{\mathbb N\kern 0pt}\nolimits^d} \theta(n,n+k,\lam)\biggr)|\lambda|^dd\lambda. \end{eqnarray*} Above, we agreed that~$\theta(n,n+k,\lam) =0$ whenever at least one component of~$n+k$ is negative. Then the idea is to use Riemann type sums. More concretely, for all $n$ in~$\mathop{\mathbb N\kern 0pt}\nolimits^d$ and $\lambda$ in~${\mathop{\mathbb R\kern 0pt}\nolimits}\setminus\{0\},$ let us define the family of cubes~$Q_{n,\lam} \buildrel\hbox{\footnotesize def}\over = 2\lam n + 2\lam[0,1[^d$. It is obvious that \begin{equation} \label {convergesimplecouchH_0demoeq0} {\rm Vol} (Q_{n,\lam}) = (2|\lam|)^{d}\quad\hbox{and}\quad \sum_{n\in\mathop{\mathbb N\kern 0pt}\nolimits^d} {\bf 1}_{Q_{n,\lam}}=1\ \hbox{ on }\ ({\mathop{\mathbb R\kern 0pt}\nolimits}_{\mathop{\rm sgn}\nolimits\lam})^d. \end{equation} {}From the volume property and the definition of ${\mathcal I}_\e^+(k),$ we readily get $$ {\mathcal I}_\e^+(k) = 2^{-d} \int_{\mathop{\mathbb R\kern 0pt}\nolimits}\int_{({\mathop{\mathbb R\kern 0pt}\nolimits}_+)^d} \sum_{n\in\mathop{\mathbb N\kern 0pt}\nolimits^d} \ep^{-1}\wh \chi(\ep^{-1}\lam) \theta(n,n+k,\lam) {\bf 1}_{Q_{n,\lam}}(\dot x) \,d\dot x\,d\lambda. $$ Let us write that $$ \longformule{ 2^d{\mathcal I}_\e^+(k)=\int_{\mathop{\mathbb R\kern 0pt}\nolimits}\int_{({\mathop{\mathbb R\kern 0pt}\nolimits}_+)^d}\sum_{n\in\mathop{\mathbb N\kern 0pt}\nolimits^d}\e^{-1}\wh\chi(\e^{-1}\lam)\theta(\dot x,k) {\bf 1}_{Q_{n,\lam}}(\dot x) \,d\dot x \,d\lam } { {} + \int_{\mathop{\mathbb R\kern 0pt}\nolimits}\int_{({\mathop{\mathbb R\kern 0pt}\nolimits}_+)^d} \ep^{-1}\wh \chi(\ep^{-1}\lam) \sum_{n\in\mathop{\mathbb N\kern 0pt}\nolimits^d} \bigl(\theta(n,n+k,\lam)-\theta(\dot x,k)\bigr) {\bf 1}_{Q_{n,\lam}}(\dot x) \,d\dot x\,d\lambda\,. 
} $$ Using the second property of \eqref{convergesimplecouchH_0demoeq0}, the fact that~$\wh \chi$ is of integral~$1,$ and that the summation may be restricted to those indices $n$ in~$\mathop{\mathbb N\kern 0pt}\nolimits^d$ such that~$|\lam n|\leq K$ (because $\theta$ is supported in~${\mathcal B}_K$), we end up with $$ 2^d{\mathcal I}_\e^+(k)-\int_{({\mathop{\mathbb R\kern 0pt}\nolimits}_+)^d}\!\theta(\dot x,k)\,d\dot x =\int_{\mathop{\mathbb R\kern 0pt}\nolimits}\!\int_{({\mathop{\mathbb R\kern 0pt}\nolimits}_+)^d}\! \ep^{-1}\wh \chi(\ep^{-1}\lam) \! \sum_{|n\lam|\leq K} \bigl(\theta(n,n+k,\lam)-\theta(\dot x,k)\bigr) {\bf 1}_{Q_{n,\lam}}(\dot x) \,d\dot x\,d\lambda. $$ As~$\theta$ is uniformly continuous on~$\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d$ (being compactly supported), we have $$ \forall \eta>0\,,\ \exists \alpha>0,\ |2\lam n-\dot x|+|\lam| <\alpha \Longrightarrow \bigl | \theta(n,n+k,\lam)-\theta(\dot x,k)\bigr | <\eta. $$ One can thus conclude that for any $\eta>0,$ if $\e$ is small enough then we have $$ \biggl|2^d{\mathcal I}_\e^+(k)-\int_{({\mathop{\mathbb R\kern 0pt}\nolimits}_+)^d}\theta(\dot x,k)\,d\dot x\biggr| \leq \eta \int_{\mathop{\mathbb R\kern 0pt}\nolimits} \ep^{-1}\wh \chi(\ep^{-1}\lam) \biggl( \sum_{|n\lam|\leq K} \int_{({\mathop{\mathbb R\kern 0pt}\nolimits}_+)^d} {\bf 1}_{Q_{n,\lam}}(\dot x) \,d\dot x\biggr) d\lambda. $$ Using once again that the measure of $Q_{n,\lam}$ is $(2|\lam|)^d$ and noting that the set of indices $n$ in~$\mathop{\mathbb N\kern 0pt}\nolimits^d$ for which $|n\lam|\leq K$ is bounded by $C_d K^d|\lam|^{-d}$ for some constant $C_d$ depending only on $d,$ we conclude that for small enough $\e,$ we have \begin{equation} \label {convergesimplecouchH_0demoeq1} \Bigl | {\mathcal I}_\e^+(k) -2^{-d} \int_{({\mathop{\mathbb R\kern 0pt}\nolimits}_+)^d} \theta(\dot x,k)\, d\dot x\Bigr | \leq C_d\eta K^d. \end{equation} Of course, handling ${\mathcal I}_\e^-(k)$ is strictly similar. Because the set of $k$ in~${\mathop{\mathbb Z\kern 0pt}\nolimits}^d$ with $|k|\leq K$ is finite (and independent of $\e$), this proves the lemma in the case where $\theta$ is compactly supported. \medbreak To handle the general case, one may fix some cut-off function $\psi:{\mathop{\mathbb R\kern 0pt}\nolimits}_+\to{\mathop{\mathbb R\kern 0pt}\nolimits}_+$ with value $1$ on $[0,1]$ and supported in $[0,2],$ and, for all $K>0,$ decompose $\theta$ into $$\theta= \theta_K+\theta^K\quad\hbox{with}\quad\theta_K(\wh w)\buildrel\hbox{\footnotesize def}\over = \psi(K^{-1}(|\lam|(2|n|+d)+|m-n|))\theta(\wh w).$$ The first part of the proof applies to $\theta_K$ and for any positive real number~$\eta$, one may thus find some $\ep_{K,\eta}$ so that \begin{equation}\label{eq:un} \biggl|\int_{\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d}\ep^{-1}\wh\chi(\ep^{-1}\lam)\theta_K(\wh w)\,d\wh w-\langle \mu_{\wh{\mathop {\mathbb H\kern 0pt}\nolimits}_0^d},\theta_K\rangle\biggr| \leq\eta\ \hbox{ for }\ \ep<\ep_{K,\eta}. \end{equation} To bound the term corresponding to $\theta^K,$ we shall use the fact that Condition \eqref{eq:condtheta} ensures that there exists some constant $C$ so that \begin{equation}\label{eq:thetadotx} \forall (\dot x,k)\in\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0\,,\ |\theta(\dot x,k)|\leq C(1+|\dot x|+|k|)^{-2d-1}.
\end{equation} Now, we have, denoting ${\mathop{\mathbb R\kern 0pt}\nolimits}^d_{\mp}\buildrel\hbox{\footnotesize def}\over = ({\mathop{\mathbb R\kern 0pt}\nolimits}_-)^d\cup({\mathop{\mathbb R\kern 0pt}\nolimits}_+)^d,$ $$ \int_{\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0} |\theta^K(\dot x,k)|\,d\mu_{\wh{\mathop {\mathbb H\kern 0pt}\nolimits}_0^d}(\dot x,k)\leq 2^{-d}\biggl(\sum_{|k|\geq K} \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}^d_{\mp}} |\theta(\dot x,k)|\,d\dot x +\sum_{k\in{\mathop{\mathbb Z\kern 0pt}\nolimits}^d}\int_{|\dot x|\geq K} |\theta(\dot x,k)|\,d\dot x\biggr)\cdotp $$ In light of \eqref{eq:thetadotx} and making an obvious change of variables, we get $$\begin{aligned} \sum_{|k|\geq K} \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}^d_{\mp}} |\theta(\dot x,k)|\,d\dot x&\leq C\sum_{|k|\geq K} \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}_+^d} (1+|\dot x|+|k|)^{-2d-1}\,d\dot x\\ &\leq C\sum_{|k|\geq K} (1+|k|)^{-d-1} \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}^d_+} (1+|\dot y|)^{-2d-1}\,d\dot y \leq C K^{-1}.\end{aligned} $$ Likewise, $$\begin{aligned} \sum_{k\in{\mathop{\mathbb Z\kern 0pt}\nolimits}^d}\int_{|\dot x|\geq K} |\theta(\dot x,k)|\,d\dot x&\leq C\sum_{k\in{\mathop{\mathbb Z\kern 0pt}\nolimits}^d}\int_{|\dot x|\geq K}(1+|\dot x|+|k|)^{-2d-1}\,d\dot x\\ &\leq C \sum_{k\in{\mathop{\mathbb Z\kern 0pt}\nolimits}^d} \frac1{(1+|k|)^{d+1}}\int_{|\dot y|>K/(1+|k|)} \frac{d\dot y}{(1+|\dot y|)^{2d+1}}\\ &\leq C \sum_{k\in{\mathop{\mathbb Z\kern 0pt}\nolimits}^d} \frac1{(1+|k|)^{d+1}} \frac1{(1+K/(1+|k|))^{d+1}}\\ &\leq C K^{-1}.\end{aligned} $$ Therefore, if we take $K$ large enough then one may ensure that \begin{equation}\label{eq:deux} \bigl|\langle \mu_{\wh{\mathop {\mathbb H\kern 0pt}\nolimits}_0^d},\theta^K\rangle\bigr|\leq\eta. \end{equation} Finally, $$\begin{aligned} \biggl|\int_{\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d}\ep^{-1}\wh\chi(\ep^{-1}\lam)\theta^K(\wh w)\,d\wh w\biggr| &\leq {\mathcal J}_K^1(\ep)+{\mathcal J}_K^2(\ep)\quad\hbox{with}\quad\\ {\mathcal J}_K^1(\ep)&\buildrel\hbox{\footnotesize def}\over =\ep^{-1}\int_{\mathop{\mathbb R\kern 0pt}\nolimits} \sum_{|k|\geq K} \sum_{n\in\mathop{\mathbb N\kern 0pt}\nolimits^d} \wh\chi(\ep^{-1}\lam)|\theta(\wh w)|\,|\lam|^dd\lam \quad\hbox{and}\quad\\ {\mathcal J}_K^2(\ep)&\buildrel\hbox{\footnotesize def}\over =\e^{-1} \int_{\mathop{\mathbb R\kern 0pt}\nolimits} \sum_{k\in{\mathop{\mathbb Z\kern 0pt}\nolimits}^d} \sum_{|n\lam|\geq K} \wh\chi(\ep^{-1}\lam)|\theta(\wh w)|\,|\lam|^dd\lam. \end{aligned} $$ Because $\theta$ satisfies \eqref{eq:condtheta}, we have $$ {\mathcal J}_K^1(\ep)\leq C \sum_{|k|\geq K} \sum_{n\in\mathop{\mathbb N\kern 0pt}\nolimits^d} \int_{\mathop{\mathbb R\kern 0pt}\nolimits}\ep^{-1}\wh\chi(\ep^{-1}\lam) (1+|k|+|\lam n|)^{-2d-1}\,|\lam|^dd\lam. $$ Clearly, because the sum below has ${\mathcal O}(|k|/|\lam|)^d$ terms, we may write $$\int_{\mathop{\mathbb R\kern 0pt}\nolimits} \ep^{-1}\wh\chi(\ep^{-1}\lam) \sum_{|n\lam|\leq|k|} (1+|k|+|\lam n|)^{-2d-1}\,|\lam|^dd\lam \lesssim (1+|k|)^{-d-1} $$ and, similarly, because $$\sum_{|n\lam|\geq|k|} |\lam|^d(1+|k|+|\lam n|)^{-2d-1} \lesssim \sum_{|n\lam|\geq|k|} |\lam|^d(1+|\lam n|)^{-2d-1}\lesssim (1+|k|)^{-d-1},$$ we get $$\int_{\mathop{\mathbb R\kern 0pt}\nolimits} \ep^{-1}\wh\chi(\ep^{-1}\lam) \sum_{|n\lam|\geq|k|} (1+|k|+|\lam n|)^{-2d-1}\,|\lam|^dd\lam \lesssim (1+|k|)^{-d-1}. $$ Therefore $$ {\mathcal J}_K^1(\ep)\lesssim K^{-1}. $$ Proving that ${\mathcal J}_K^2(\ep)\lesssim K^{-1}$ relies on similar arguments. 
Putting together with\refeq{eq:un} and\refeq{eq:deux}, it is now easy to conclude the proof of the lemma. \end{proof} \begin{proof} [Proof of Theorem\refer {Fourierhorizontal}] Let~$\chi$ in~${\mathcal S}({\mathop{\mathbb R\kern 0pt}\nolimits})$ have a compactly supported Fourier transform, and value $1$ at $0$ (hence the integral of $\wh\chi$ is~$2\pi$). Let~$\theta:\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d\to\mathop{\mathbb C\kern 0pt}\nolimits$ be continuous and compactly supported, and set $$ {\mathcal I}_\ep(g,\theta)\buildrel\hbox{\footnotesize def}\over =\langle {\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits}(g\otimes\chi(\e\cdot)), \theta\rangle. $$ By definition of the Fourier transform of~$L^1$ functions, one may write: \begin{eqnarray*} {\mathcal I}_\e(g, \theta) & = & \int_{{\mathop {\mathbb H\kern 0pt}\nolimits}^d\times \wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d}e^{-is\lam} \chi(\e s) \ov{\mathcal W}(\wh w,Y) g(Y) \theta(\wh w) \,dY\,ds\,d\wh w\\ & = &\int_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d} \frac 1 {\e} \wh \chi\Bigl(\frac \lam {\e}\Bigr) G(\wh w) \theta(\wh w) \,d\wh w\quad\quad\hbox{with}\quad G(\wh w) \buildrel\hbox{\footnotesize def}\over = \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d} \ov{\mathcal W}(\wh w,Y) g(Y) \,dY. \end{eqnarray*} As the function~$g$ is integrable on~$T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d$, Proposition\refer {ProofCFL1_Prop1} implies that the (numerical) product~$G\theta$ is a continuous compactly supported function on~$\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d$. Lemma\refer {convergesimplecouchH_0} applied to this function~$G\theta$ implies that $$ \lim_{\e\rightarrow0} {\mathcal I}_\e(g,\theta) = 2\pi \int_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0} {\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g(\dot x,k) \theta(\dot x,k)d\mu_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0} (\dot x,k). $$ This means that the measure~${\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits} (g\otimes\chi(\e\cdot)) d\wh w$ converges weakly to~$2\pi({\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g) d\mu_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0},$ which is exactly Theorem\refer {Fourierhorizontal}. \end{proof} \section{Computing the kernel~${\mathcal K}$} \label {proofFormulacK} We have already seen in Remark \ref{rk:K0} that ${\mathcal K}_d(0,k,Y)=\delta_{0,k}$ for all $Y$ in $T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d,$ so let us now prove the symmetry identities pointed out in the introduction. The first relation in \eqref{eq:Ksym} stems from the observation that for all $(n,m,\lam)$ in $\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d$ and $Y$ in~$T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d,$ we have $$ {\mathcal W}(m,n,\lam,-Y)=\overline{{\mathcal W}(n,m,\lam,Y)}. $$ Therefore, for any $(\dot x,k)$ in $\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0$, passing to the limit $(n,m,\lam)\to(\dot x,k)$ yields \begin{equation}\label{eq:Kconj} {\mathcal K}_d(\dot x,-k,-Y)=\overline{{\mathcal K}_d(\dot x,k,Y)}. \end{equation} In order to establish the second symmetry relation for ${\mathcal K}_d,$ it suffices to notice that \begin{equation}\label{eq:symW} \forall (n,m,\lam,Y) \in \wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d\times T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d\,,\ {\mathcal W}(n,m,\lam,Y)=(-1)^{|n+m|}{\mathcal W}(m,n,-\lam,Y)
\end{equation} and to pass to the limit $(n,m,\lam)\to(\dot x,k).$ \smallbreak The last relation in \eqref{eq:Ksym} just follows from passing to the limit $(n,m,\lam)\to(\dot x,k)$ in \begin{equation}\label{eq:WWW} {\mathcal W}(n,m,-\lam,Y) =\ov{\mathcal W}(n,m,\lambda,Y). \end{equation} Identity \eqref{eq:KLap} is a consequence of Relation\refeq {DeltaWignerHermite}. Indeed, observe that for any smooth function~$f:T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d\to\mathop{\mathbb C\kern 0pt}\nolimits$, we have $$ e^{-is\lam} \D_{\mathop {\mathbb H\kern 0pt}\nolimits} \bigl (e^{is\lam} f(Y)\bigr) = \Delta_Y f(Y) + 4i\lam\sum_{j=1}^d {\mathcal T}_j f(Y) -4\lam^2 |Y|^2f(Y)\quad\hbox{with}\quad {\mathcal T}_j \buildrel\hbox{\footnotesize def}\over = \eta_j\partial_{y_j} -y_j\partial_{\eta_j}.$$ Taking~$f(Y)= {\mathcal W}(\wh w,Y),$ using\refeq {DeltaWignerHermite} and having~$(n,m,\lam)$ tend to~$(\dot x,k)$ yields \begin{equation} \label {computcKdemoeq1} \D_Y {\mathcal K}_d (\dot x,k,Y) = -4 |\dot x| {\mathcal K}_d (\dot x,k,Y). \end{equation} Relation\refeq{eq:Kk} is a consequence of\refeq {Fourierhorizontaldemoeq112} which implies in particular that $$ |\lam| (n_j-m_j) {\mathcal W}(\wh w,Y) = i\lam {\mathcal T}_j {\mathcal W} (\wh w,Y). $$ Passing to the limit when~$(n,m,\lam)$ tends to~$(\dot x,k)$ ensures \begin{equation} \label {computcKdemoeq2} ik_j{\mathcal K}_d(\dot x,k,Y) = {\rm sgn}(\dot x) {\mathcal T}_j {\mathcal K}_d(\dot x,k,Y) \end{equation} which is exactly\refeq {eq:Kk}. \smallbreak Proving Identity\refeq{Convollam=0} is a bit more involved. To achieve it, let us fix some function $\al$ of~${\mathcal S}({\mathop{\mathbb R\kern 0pt}\nolimits})$ and two functions~$g_1$ and~$g_2$ of~${\mathcal S}(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)$. By definition of convolution and Fourier transform, we have $$ \longformule{ {\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits}\bigl( (g_1\otimes \al)\star (g_2\otimes \al) \bigr) (\wh w) } { {} =\int_{{\mathop {\mathbb H\kern 0pt}\nolimits}^d\times {\mathop {\mathbb H\kern 0pt}\nolimits}^d} e^{-is\lam} \ov{\mathcal W}(\wh w,Y) g_1(Y-Y') \al\bigl(s-s'-2\s(Y',Y)\bigr) g_2(Y') \al(s')\, dw \, dw'. } $$ Integrating first with respect to~$s$ and next with respect to $s'$ yields $$ {\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits}\bigl( (g_1\otimes \al)\star (g_2\otimes \al) \bigr) (\wh w) =\wh \al^2(\lam) \int_{(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)^2} e^{2i\lam \s(Y,Y')} \ov{\mathcal W}(\wh w,Y) g_1(Y-Y') g_2(Y') \,dY\,dY'. $$ {}From the fact that~$\s$ is symplectic, we infer that \begin{equation} \label {FormulaconvWgene} \begin{aligned} & {\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits}\bigl( (g_1\otimes \al)\star (g_2\otimes \al) \bigr) (\wh w)\\ & \qquad\qquad\qquad{} =\wh \al^2(\lam) \int_{(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)^2} e^{2i\lam \s(Y_1,Y_2)} \ov{\mathcal W}(\wh w,Y_1+Y_2)g_1(Y_1) g_2(Y_2)\,dY_1\,dY_2.
\end{aligned} \end{equation} Of course, because both $g_1\otimes \al$ and $g_2\otimes \al$ are in ${\mathcal S}({\mathop {\mathbb H\kern 0pt}\nolimits}^d),$ we are guaranteed, thanks to the convolution formula \refeq {newFourierconvoleq1}, that $$ {\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits}\bigl( (g_1\otimes \al)\star (g_2\otimes \al) \bigr) (n,n+k,\lambda)= G_{12}\quad\hbox{with}\quad G_{12} \buildrel\hbox{\footnotesize def}\over = \bigl({\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits}(g_1\otimes \al)\cdot {\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits}(g_2\otimes \al)\bigr) (n,n+k,\lam). $$ Now, we have, setting $k'=n+k-\ell$ in the second line, $$\begin{aligned} G_{12} & = \sum_{\ell\in\mathop{\mathbb N\kern 0pt}\nolimits^d} {\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits}(g_1\otimes \al) (n,\ell,\lam) {\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits}(g_2\otimes \al) (\ell,n+k,\lam) \\ &= \wh \al^2(\lam) \int_{(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)^2} \sum_{k'\leq n+k} \ov{\mathcal W}(n,n+k-k',\lam,Y_1) \ov{\mathcal W}(n+k-k',n+k,\lam,Y_2)\\ &\hspace{10cm}\times g_1(Y_1)g_2(Y_2)\, dY_1\,dY_2. \end{aligned} $$ Hence, reverting to Relation\refeq {FormulaconvWgene} and keeping in mind that the above computations hold true for any functions $\alpha,$ $g_1$ and $g_2$ in the Schwartz class, one may conclude that $$ e^{-2i\lam \s(Y_1,Y_2)} {\mathcal W}(n,n+k,\lam,Y_1+Y_2) = \sum_{k'\in{\mathop{\mathbb Z\kern 0pt}\nolimits}^d} {\mathcal W}(n,n+k-k',\lam,Y_1) {\mathcal W}(n+k-k',n+k,\lam,Y_2). $$ Taking advantage of the decay of~${\mathcal W}$ with respect to the variable~$k$ (for~$Y_1$ and~$Y_2$ in a given compact subset of~$T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d$, by virtue of\refeq {Fourierhorizontaldemoeq112}), we can pass to the limit for~$2\lam n$ tending to~$\dot x$ and~$\lam$ tending to~$0$. This gives \begin{equation}\label{eq:Kd} {\mathcal K}_d( \dot x,k,Y_1+Y_2) = \sum_{k'\in {\mathop{\mathbb Z\kern 0pt}\nolimits}^d} {\mathcal K}_d( \dot x,k-k',Y_1)\,{\mathcal K}_d( \dot x,k',Y_2) \end{equation} which is the generalization of Formula\refeq {Convollam=0} in any dimension. \medbreak In order to fully benefit from Relations\refeq {eq:KLap},\refeq{eq:Kk} and\refeq {Convollam=0} so as to eventually compute ${\mathcal K},$ it is wise to introduce the following function~$\wt {\mathcal K}$ on ${\mathop{\mathbb R\kern 0pt}\nolimits}\times\mathop{\mathbb T\kern 0pt}\nolimits\times T^\star{\mathop{\mathbb R\kern 0pt}\nolimits},$ where $\mathop{\mathbb T\kern 0pt}\nolimits$ denotes the one-dimensional torus: \begin{equation} \label {wtKdef} \wt {\mathcal K}(\dot x ,z,Y) \buildrel\hbox{\footnotesize def}\over = \sum_{k\in \mathop{\mathbb Z\kern 0pt}\nolimits} {\mathcal K}(\dot x, k,Y) e^{ikz}. \end{equation} {}From Relation\refeq {Fourierhorizontaldemoeq112} (after having~$(n,m,\lam)$ tend to~$(\dot x,k)$), we infer that if~$(\dot x,Y)$ lies in any given bounded set ${\mathcal B},$ then \begin{equation}\label{eq:fastdecayK} \forall N \in \mathop{\mathbb N\kern 0pt}\nolimits\,,\ \sup_{(\dot x,Y)\in{\mathcal B},\,k\in\mathop{\mathbb Z\kern 0pt}\nolimits} (1+|k|)^N | {\mathcal K}(\dot x,k,Y) |<\infty.
\end{equation} Thus the series\refeq {wtKdef} defines a function~$\wt{\mathcal K}$ on ${\mathop{\mathbb R\kern 0pt}\nolimits}\times\mathop{\mathbb T\kern 0pt}\nolimits\times T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}.$ \medbreak {}Furthermore, from \refeq {Convollam=0} we infer immediately that \begin{equation} \label {computcKdemoeq4} \wt {\mathcal K}(\dot x ,z,Y_1+Y_2) = \wt {\mathcal K}(\dot x ,z,Y_1) \, \wt {\mathcal K}(\dot x ,z,Y_2), \end{equation} and, in light of \eqref{eq:Kconj}, we discover that for any $(\dot x,z,Y)$ in ${\mathop{\mathbb R\kern 0pt}\nolimits}\times\mathop{\mathbb T\kern 0pt}\nolimits\times T^\star{\mathop{\mathbb R\kern 0pt}\nolimits},$ \begin{equation} \wt{\mathcal K}(\dot x,z,-Y)=\overline{\wt{\mathcal K}(\dot x,z,Y)}. \end{equation} Combined with \eqref{eq:K0} and \eqref{computcKdemoeq4}, this implies that for any couple~$(\dot x,z)$ in ${\mathop{\mathbb R\kern 0pt}\nolimits}\times\mathop{\mathbb T\kern 0pt}\nolimits,$ the function~$Y\mapsto \wt {\mathcal K}( \dot x ,z,Y)$ is a character of ${\mathop{\mathbb R\kern 0pt}\nolimits}^2.$ Identifying $T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}$ with ${\mathop{\mathbb R\kern 0pt}\nolimits}^2,$ we thus conclude that there exists a function $\Phi=(\Phi_y,\Phi_\eta)$ from~${\mathop{\mathbb R\kern 0pt}\nolimits}\times \mathop{\mathbb T\kern 0pt}\nolimits$ to~${\mathop{\mathbb R\kern 0pt}\nolimits}^2$ such that $$ \wt{\mathcal K} (\dot x,z,Y)= e^{iY\cdot \Phi(\dot x,z)}= e^{i(y\Phi_y(\dot x,z)+\eta\Phi_\eta (\dot x,z))}. $$ Taking advantage of\refeq {computcKdemoeq1} which implies that~${\mathcal K}$ is a smooth function of~$Y$ and arguing as above, we find out that for any multi-index $\alpha= (\alpha_1, \alpha_2)$ in $\mathop{\mathbb N\kern 0pt}\nolimits^2$ and any bounded set~${\mathcal B}$ of couples~$(\dot x,Y),$ we have $$ \forall N \in \mathop{\mathbb N\kern 0pt}\nolimits\,,\ \sup_{(\dot x,Y)\in{\mathcal B},\,k\in\mathop{\mathbb Z\kern 0pt}\nolimits} (1+|k|)^N |\partial^\alpha_{\dot x,Y} {\mathcal K}(\dot x,k,Y) |<\infty. $$ Therefore, invoking Relation\refeq {eq:Kk}, we deduce that for any positive $\dot x$ $$ \partial_z \wt {\mathcal K}(\dot x,z,Y)= \eta\partial_y \wt {\mathcal K} (\dot x,z,Y) - y \partial_\eta \wt {\mathcal K} (\dot x,z,Y) $$ which entails that $\partial_z\Phi(\dot x, z) = R \Phi(\dot x, z)$ where~$R$ denotes the rotation of angle~$\pi/ 2$. Hence $$ \Phi(\dot x, z) = R(z)\wt \Phi(\dot x) $$ where~$R(z)$ denotes the rotation of angle~$z$. Now, Relation\refeq {computcKdemoeq1} ensures that~$|\wt \Phi(\dot x) | = 2|\dot x|^{\frac 12},$ and thus there exists a function $\phi$ from ${\mathop{\mathbb R\kern 0pt}\nolimits}$ to the unit circle of~${\mathop{\mathbb R\kern 0pt}\nolimits}^2$ so that for positive~$\dot x$ \begin{equation} \label {computcKdemoeq5} \wt{\mathcal K}(\dot x,z,Y) = e^{2i|\dot x|^{\frac 12} Y\cdot (R(z) \phi(\dot x))}. \end{equation} Let us finally establish Identity\refeq {Y2FouriercK}. It relies on the study of the action of the Fourier transform on the \emph{weight function} $M^2$ defined by $$ (M^2f)(Y,s)\buildrel\hbox{\footnotesize def}\over =|Y|^2f(Y,s).
$$ For any functions~$g$ in~${\mathcal S}(T^\star {\mathop{\mathbb R\kern 0pt}\nolimits})$ and~$\psi:{\mathop{\mathbb R\kern 0pt}\nolimits}_+\times{\mathop{\mathbb Z\kern 0pt}\nolimits}\to{\mathop{\mathbb R\kern 0pt}\nolimits},$ smooth and compactly supported in~$[r_0,\infty[\times \mathop{\mathbb Z\kern 0pt}\nolimits$ for some positive real number~$r_0$, let us define \begin{eqnarray*} \Theta_\psi (\wh w) &\buildrel\hbox{\footnotesize def}\over = & \psi\bigl(|\lam|(n+m+1),m-n\bigr) \quad\hbox{and}\quad\\ {\mathcal B}(g,\psi) & \buildrel\hbox{\footnotesize def}\over = & \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}\times \wh {\mathop {\mathbb H\kern 0pt}\nolimits}_0^1} |Y|^2 {\mathcal K}(\dot x,k,Y) g(Y) \psi(\dot x,k)\, dY d \mu_{\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^1_0} (\dot x,k). \end{eqnarray*} Lemma\refer {convergesimplecouchH_0} implies that if $\wh\chi:{\mathop{\mathbb R\kern 0pt}\nolimits}\to{\mathop{\mathbb R\kern 0pt}\nolimits}$ is integrable, supported in $[-1,1]$ and with integral~$1$,~then \begin{eqnarray*} {\mathcal B}(g,\psi) & = & \lim_{\e\rightarrow 0} {\mathcal B}_\e(g,\psi) \quad\hbox{with}\quad \\ {\mathcal B}_\e(g,\psi) & \buildrel\hbox{\footnotesize def}\over = & \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}} g(Y) \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}} \sum_{(n,m)\in \mathop{\mathbb N\kern 0pt}\nolimits^2} |Y|^2 {\mathcal W}(n,m,\lam,Y) \Theta_\psi (n,m,\lam) \frac 1 \e \wh \chi\Bigl(\frac \lam \e\Bigr) \,|\lam| \,d\lam\,dY. \end{eqnarray*} The following lemma gives a formula for~$ |Y|^2 {\mathcal W}(\wh w,Y)$. \begin{lemma} \label {Y2WignerHermite} {\sl For all~$\wh w$ in~$\wt{\mathop {\mathbb H\kern 0pt}\nolimits}^d$ and $Y$ in~$ T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d,$ we have $$|Y|^2{\mathcal W}(\wh w,Y) = -\wh\D {\mathcal W}(\cdot ,Y) (\wh w) \quad\hbox{with}\quad $$ \vspace{-8mm} \begin{multline} \label {decayWignerHermiteeq1} \wh \D \theta(\wh w) \buildrel\hbox{\footnotesize def}\over = - \frac 1{2|\lam|} ( |n+m| +d) \theta(\wh w) \\[-1ex]+\frac 1 {2|\lam|} \sum_{j=1} ^d \Bigl\{ \sqrt {(n_j+1) (m_j+1)}\, \theta(\wh w^+_j) +\sqrt {n_jm_j}\, \theta(\wh w^-_j)\Bigr\} \end{multline} where~$\wh w^\pm_j \buildrel\hbox{\footnotesize def}\over = (n\pm \d_j, m\pm\d_j, \lam)$. } \end{lemma} \begin{proof} {}From the definition of~${\mathcal W}$ and integrations by parts, we get \begin{eqnarray*} |Y|^2{\mathcal W}(\wh w,Y) & = &\int_{{\mathop{\mathbb R\kern 0pt}\nolimits}^d} \Bigl(|y|^2-\frac 1 {4\lam^2} \D_z\Bigr) \bigl(e^{2i\lambda\langle \eta,z\rangle}\bigr) H_{n,\lam} (y+z) H_{m,\lam} (-y+z)\, dz \\ &= & \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}^d} e^{2i\lambda\langle \eta,z\rangle} |\lam|^{\frac d2} {\mathcal I}(\wh w,y,z) \,dz \\\quad\hbox{with}\quad {\mathcal I}(\wh w,y,z) &\buildrel\hbox{\footnotesize def}\over = & \Bigl(|y|^2-\frac 1 {4\lam^2} \D_z\Bigr)\bigl (H_{n} (|\lam|^{\frac 12} (y+z)) H_{m} (|\lam|^{\frac 12} (-y+z))\bigr). 
\end{eqnarray*} Using the Leibniz formula, the chain rule and $4|y|^2=|y+z|^2+|y-z|^2+2(y+z)\cdot(y-z),$ we get \begin{eqnarray*} {\mathcal I}(\wh w,y,z) & = & -\frac 1 {4\lam^2} \bigl( (\D_z-\lam^2 |y+z|^2) H_{n} (|\lam|^{\frac 12} (y+z)) \bigr) H_{m} (|\lam|^{\frac 12} (-y+z)) \\ &&{} -\frac 1 {4\lam^2} \bigl( (\D_z-\lam^2 |y-z|^2) H_{m} (|\lam|^{\frac 12} (-y+z)) \bigr) H_{n} (|\lam|^{\frac 12} (y+z)) \\ &&{}-\frac 1 {2|\lam|}\sum_{j=1}^d(\partial_j H_{n}) (|\lam|^{\frac 12} (y+z)) (\partial_j H_{m}) (|\lam|^{\frac 12} (-y+z))\\ &&{}-\frac 12 (z+y)\cdot(z-y) H_{n} (|\lam|^{\frac 12} (y+z)) H_{m} (|\lam|^{\frac 12} (-y+z)). \end{eqnarray*} Using\refeq{relationsHHermiteD}, we end up with \begin{eqnarray*} {\mathcal I}(\wh w,y,z) &= & \frac 1{2|\lam|} (|n+m|+d) H_{n} (|\lam|^{\frac 12} (y+z)) H_{m} (|\lam|^{\frac 12} (-y+z)) \\ &&\qquad \qquad{} -\frac 1{2|\lam|} \sum_{j=1}^d\Bigl\{ (\partial_j H_{n}) (|\lam|^{\frac 12} (y+z)) (\partial_j H_{m}) (|\lam|^{\frac 12} (-y+z))\\ &&\qquad \qquad\qquad \qquad\qquad \qquad{} +(M_j H_{n}) (|\lam|^{\frac 12} (y+z)) (M_j H_{m}) (|\lam|^{\frac 12} (-y+z))\Bigr\}\cdotp \end{eqnarray*} Then, taking advantage of\refeq{relationsHHermiteCAb}, we get Identity\refeq {decayWignerHermiteeq1}. \end{proof} \medbreak Let us now return to the proof of Identity\refeq {Y2FouriercK}. Using the above lemma for $d=1$ and performing obvious changes of variable in the sum give \begin{eqnarray*} {\mathcal B}_\e(g,\psi) & = & - \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}} g(Y) \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}} \sum_{(n,m)\in \mathop{\mathbb N\kern 0pt}\nolimits^2} (\wh\D {\mathcal W}(\cdot,Y) ) (n,m,\lam) \Theta_\psi (n,m,\lam) \frac 1 \e \wh \chi\Bigl(\frac \lam \e\Bigr)|\lam|\, d\lam\,dY\\ & = & -\int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}} g(Y) \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}} \sum_{(n,m)\in \mathop{\mathbb N\kern 0pt}\nolimits^2} {\mathcal W}(n,m,\lam, Y) (\wh \D\Theta_\psi) (n,m,\lam) \frac 1 \e \wh \chi\Bigl(\frac \lam \e\Bigr) |\lam|\,d\lam\,dY\,. \end{eqnarray*} The key to proving the convergence of ${\mathcal B}_\e$ as $\e\to0$ is the asymptotic description of the operator~$\wh\D$ when~$\lam$ tends to~$0,$ given in the following lemma: \begin{lemma} \label {whDeltaoverlim0} {\sl Let~$\psi$ be a smooth function compactly supported in~$[r_0,\infty[\times \mathop{\mathbb Z\kern 0pt}\nolimits$ for some positive real number~$r_0$. Then $$ \wh\D \Theta_\psi (n,m,\lam) \simH 1 \Theta_{L\psi} (n,m,\lam) \quad\hbox{with}\quad (L\psi) (\dot x,k) \buildrel\hbox{\footnotesize def}\over = \dot x \psi'' (\dot x,k) +\psi'(\dot x,k)-\frac {k^2}{4\dot x} \psi(\dot x, k) $$ where the notation~$\Theta_1 \simH p \Theta_2$ means that for any positive integer~$N$, there is a constant~$C_{N, p}$ such that for all~$(n,m,\lam)$ in~$\mathop{\mathbb N\kern 0pt}\nolimits^2\times ]0,\infty[ $ satisfying $$ \lam (n+m)\geq \frac {r_0} 2\quad\hbox{and}\quad \lam\leq \lam_0/(1+|n-m|), $$ with a sufficiently small positive real number $\lam_0$ depending only on $r_0$, we have $$ \bigl|\Theta_1(n,m,\lam) -\Theta_2(n,m, \lam)\bigr|\leq C_{N, p} \, \lam^p \, \bigl(1+|\lam|(|n+m|+1)+|m-n|\bigr)^{-N}.
$$ } \end{lemma} \begin{proof} By definition of the operator~$\wh \D$, and for~$\lam>0,$ we have, denoting $k\buildrel\hbox{\footnotesize def}\over = m-n$ and~$y\buildrel\hbox{\footnotesize def}\over = \lam(n+m),$ $$-2\lam^2 \wh\D \Theta_\psi (\wh w) = (y\!+\!\lam) \psi (y\!+\!\lam,k) - \lam\sqrt {(n+1)(m+1)} \, \psi(y\!+\!3\lam,k)- \lam\sqrt {nm} \, \psi (y-\lam,k).$$ Using that $$ \lam ^2 n m = \frac {\lam^2} 4 (n+m)^2 - \frac {\lam^2} 4 (m-n)^2 = \frac {y^2} 4 - \frac {\lam^2} 4 k^2\,, $$ we get that $$ \lam\sqrt {(n+1)(m+1)} = \frac y 2 \sqrt {1+\frac {4\lam} y +\Bigl(\frac {4- k^2} { y^2}\Bigr) \lam^2 } \simH 3 \frac y 2 +\lam -\frac {k^2} {4y} {\lam^2} \quad\hbox{and}\quad \lam\sqrt {nm} \simH 3 \frac y 2 -\frac {k^2} {4y} {\lam^2}. $$ Writing the Taylor expansion for~$\psi$ gives (omitting the dependency with respect to $k$ for notational simplicity), \begin{eqnarray*} (y+\lam) \psi (y+\lam) & \simH 3 & y\psi(y) +\bigl( \psi(y)+y\psi'(y) ) \lam + \Bigl( \psi'(y) +\frac y 2 \psi''(y)\Bigr)\lam^2 \,, \\ - \lam\sqrt {(n+1)(m+1)} \, \psi(y+3\lam) & \simH 3 & -\frac y 2 \psi(y) -\Bigl( \psi(y) +\frac 3 2 y\psi'(y)\Bigr) \lam \\ &&\!\!\!\!\!\!\!\!{} - \Bigl(\frac 9 4 y\psi''(y)+ 3\psi'(y) - \frac {k^2} {4y}\psi(y)\Bigr) \lam^2\quad\hbox{and}\quad\\ - \lam\sqrt {nm} \, \psi(y- \lam) & \simH 3 & -\frac y 2 \psi(y) +\frac 12 y\psi'(y)\lam - \Bigl( \frac y {4} \psi''(y) - \frac {k^2} {4y} \psi(y)\Bigr) {\lam^2}. \end{eqnarray*} By summation of these three identities, we get $$ - 2 \lam^2 \wh\D \Theta_\psi (\wh w) \simH 3 - \Bigl( 2y\psi''(y) +2\psi'(y) - \frac {k^2} {2y} \psi(y)\Bigr) \lam^2, $$ whence the lemma. \end{proof} \medbreak From the above lemma, it is easy to complete the proof of Identity\refeq {Y2FouriercK}. Indeed, we get $$ \displaylines{ {\mathcal B}_\e(g,\psi) = - \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}} g(Y)\!\! \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}} \!\sum_{(n,m)\in \mathop{\mathbb N\kern 0pt}\nolimits^2} \!\! {\mathcal W}(n,m,\lam, Y) L\psi\bigl(|\lam|(n+m), m-n\bigr) \frac 1 \e \wh \chi\Bigl(\frac \lam \e\Bigr)|\lam| \, d\lam\,dY \hfill\cr\hfill+ {\mathcal R}_\e(g,\psi),} $$ where the remainder ${\mathcal R}_\ep$ is such that for all $N\in\mathop{\mathbb N\kern 0pt}\nolimits$ there exists $C_N$ so that $$\bigl|{\mathcal R}_\e(g,\psi)\bigr| \leq C_N \|g\|_{L^1(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)} \sum_{(n,m)\in \mathop{\mathbb N\kern 0pt}\nolimits^2} \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}} |\lam| \bigl(1+|\lam|(|n+m|+1)+|m-n|\bigr)^{-N} \frac 1 \e |\wh \chi| \Bigl(\frac \lam \e\Bigr) |\lam| d\lam \,. $$ Taking~$N$ large enough, we find out that $$ \sum_{(n,m)\in \mathop{\mathbb N\kern 0pt}\nolimits^2} \int_{{\mathop{\mathbb R\kern 0pt}\nolimits}} |\lam| \bigl(1+|\lam|(|n+m|+1)+|m-n|\bigr)^{-N} \frac 1 \e |\wh \chi| \Bigl(\frac \lam \e\Bigr) |\lam| \,d\lam \leq C_N\int_{\mathop{\mathbb R\kern 0pt}\nolimits} \frac {|\lam|} \e |\wh \chi| \Bigl(\frac \lam \e\Bigr) d\lam\leq C_N \e. $$ Then Lemma\refer {convergesimplecouchH_0} ensures $$ {\mathcal B}(g,\psi) = - \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}\times \wh {\mathop {\mathbb H\kern 0pt}\nolimits}_0^1} {\mathcal K}(\dot x,k,Y) g(Y) \Bigl(\dot x \psi'' (\dot x,k) +\psi'(\dot x,k)-\frac {k^2}{4\dot x} \psi(\dot x, k)\Bigr) dY \,d \mu_{\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^1_0} (\dot x,k). 
$$ Integration by parts yields $$ {\mathcal B}(g,\psi) = \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}\times \wh {\mathop {\mathbb H\kern 0pt}\nolimits}_0^1} g(Y) \Bigl(\frac {k^2}{4\dot x} {\mathcal K}(\dot x, k,Y)-\partial_{\dot x}{\mathcal K}(\dot x,k,Y)-\dot x \partial_{\dot x} ^2 {\mathcal K} (\dot x,k,Y)\Bigr) \psi(\dot x, k) \,dY \,d \mu_{\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^1_0} (\dot x,k). $$ Using the fact that the above equality holds true for all $g$ in ${\mathcal S}(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits})$ and for functions~$\psi$ smooth and compactly supported in~$[r_0,\infty[\times{\mathop{\mathbb Z\kern 0pt}\nolimits}$ for some $r_0>0,$ and combining with a density argument, one can conclude that Identity\refeq {Y2FouriercK} holds for all positive $\dot x$ and all~$k$ in~${\mathop{\mathbb Z\kern 0pt}\nolimits}.$ \medbreak In order to complete the proof of\refeq{FourierL1basiceq2b}, let us translate\refeq {Y2FouriercK} in terms of $\wt{\mathcal K}.$ We have $$ \frac1{4\dot x}\partial_z^2\wt{\mathcal K} +\partial_{\dot x}(\dot x\partial_{\dot x}\wt{\mathcal K})+|Y|^2\wt{\mathcal K}=0. $$ Now, plugging the ansatz\refeq{computcKdemoeq5} into the above relation yields for any positive $\dot x$, any~$z$ in~$\mathop{\mathbb T\kern 0pt}\nolimits$ and any~$Y$ in~$ T^\star{\mathop{\mathbb R\kern 0pt}\nolimits},$ $$\displaylines{\quad |Y|^2 = \bigl(Y\cdot (R'(z)\phi(\dot x))\bigr)^2 +\bigl(Y\cdot(R(z)\phi(\dot x))+2\dot x Y\cdot(R(z)\phi'(\dot x))\bigr)^2 \hfill\cr\hfil -4i\sqrt{\dot x} Y\cdot(R(z)\phi'(\dot x))-2i\dot x^{3/2}Y\cdot(R(z)\phi''(\dot x)).\quad} $$ Taking the imaginary part implies that $\phi$ satisfies $$\dot x\phi''(\dot x)+2\phi'(\dot x)=0\quad\hbox{for }\ \dot x>0. $$ Now, as $\phi$ is valued in the unit circle, this implies that $\phi$ is a constant. Therefore there exists some number $z_0$ in~$(-\pi,\pi]$ so that for any positive~$\dot x$, any~$z$ in~${\mathop{\mathbb R\kern 0pt}\nolimits}$ and any~$Y$ in~$T^\star{\mathop{\mathbb R\kern 0pt}\nolimits},$ we have $$ \wt{\mathcal K}(\dot x,z,Y) = e^{2i|\dot x|^{\frac 12}(y\cos(z+z_0)+\eta\sin(z+z_0))}. $$ The inverse Fourier theorem for periodic functions implies that $$ {\mathcal K}(\dot x,k,Y)=\frac1{2\pi}\int_{-\pi}^\pi e^{2i|\dot x|^{\frac 12}(y\cos(z+z_0)+\eta\sin(z+z_0))} e^{-ikz}\,dz. $$ In order to compute the value of $z_0,$ one may take advantage of the symmetry relations in\refeq{eq:Ksym} that imply \begin{equation}\label{eq:ksym2} {\mathcal K}(\dot x,-k,y,-\eta)=(-1)^k{\mathcal K}(\dot x,k,y,\eta). \end{equation} Now, the above formula for ${\mathcal K}$ and an obvious change of variable give $$ \begin{aligned} 2\pi{\mathcal K}(\dot x,-k,y,-\eta)&= \int_{-\pi}^\pi e^{ikz} e^{2i|\dot x|^{\frac 12}(y\cos(z+z_0)-\eta\sin(z+z_0))}\,dz\\ &= \int_{-\pi}^\pi e^{ik(\pi-z)} e^{2i|\dot x|^{\frac 12}(y\cos(\pi-z+z_0)-\eta\sin(\pi-z+z_0))}\,dz\\ &=(-1)^k \int_{-\pi}^\pi e^{-ikz} e^{-2i|\dot x|^{\frac 12}(y\cos(z-z_0)+\eta\sin(z-z_0))}\,dz. \end{aligned} $$ Hence \eqref{eq:ksym2} is fulfilled for all positive~$\dot x$,~$k$ in~${\mathop{\mathbb Z\kern 0pt}\nolimits}$ and~$(y,\eta)$ in~$ T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}$ if and only if $$ \forall z\in(-\pi,\pi)\,,\ \cos(z+z_0)=-\cos(z-z_0)\quad\hbox{and} \quad\sin(z+z_0)=-\sin(z-z_0) $$ which is equivalent to $z_0\equiv \frac\pi2 [\pi].$ Hence there exists $\e\in\{-1,1\}$ so that $$ \wt{\mathcal K}(\dot x,z,Y) = e^{2i\e\sqrt{\dot x}(y\sin z-\eta\cos z)}.
$$ To determine the value of $\ep,$ one may use the fact that for all positive~$\dot x$ and~$\eta$ in~${\mathop{\mathbb R\kern 0pt}\nolimits},$ the above formula implies that $$ \sum_{k\in{\mathop{\mathbb Z\kern 0pt}\nolimits}} {\mathcal K}(\dot x,k,(0,\eta))=\wt{\mathcal K}(\dot x,0,(0,\eta))=e^{-2i\ep\sqrt{\dot x}\,\eta} =\cos(2\sqrt{\dot x}\,\eta)-i\ep\sin(2\sqrt{\dot x}\,\eta). $$ Now, from the expansion of ${\mathcal K}$ given in\refeq{definPhaseFlambda=0}, we infer that for all $\eta\in{\mathop{\mathbb R\kern 0pt}\nolimits}$ and $\dot x>0,$ $$ \wt{\mathcal K}(\dot x,0,(0,\eta))=\sum_{\ell_1\in\mathop{\mathbb N\kern 0pt}\nolimits}\sum_{|k|\leq\ell_1} \frac{i^{\ell_1}}{\ell_1!} F_{\ell_1,0}(k) \eta^{\ell_1} \dot x^{\frac{\ell_1}2}. $$ Note that the imaginary part of the term corresponding to $\ell_1=1$ is positive (indeed $F_{1,0}(k)$ is positive), which implies that $\ep=-1$. This completes the proof of Identity\refeq {FourierL1basiceq2b} in the case where $\dot x$ is nonnegative. The negative case just follows from\refeq{eq:Ksym}. Thus the whole Theorem\refer {FourierL1basic} is proved. \section {Some properties of the operator~${\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits}$} \label {StutyofcGH} We end this paper with a short presentation of basic properties of the transformation~${\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits},$ that highlight some analogies (but also some differences) with the classical Fourier transform on~$T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d$. The main result of this section reads as follows. \begin{theorem} \label {FourierhorizontalMore} {\sl The operator ${\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits}$ maps continuously~$L^1(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)$ to the space~${\mathcal C}_0(\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0)$ of continuous functions on $\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0$ going to $0$ at infinity and, for any couple $(f,g)$ of functions in~$L^1(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d),$ we have the convolution identity: \begin{equation}\label{eq:convGH} {\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits}(f\star g)(\dot x,k)=\sum_{k'\in{\mathop{\mathbb Z\kern 0pt}\nolimits}^d} {\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} f(\dot x,k-k')\,{\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g(\dot x,k') \quad\hbox{for all }\ (\dot x,k)\in\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0. \end{equation} Moreover, for any $g$ in~${\mathcal S}(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d),$ we have the following inversion formula: $$ g(Y) = \Bigl(\frac 2 \pi \Bigr) ^d \int_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0} {\mathcal K}_d (\dot x,k,Y) {\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g(\dot x,k) \,d\mu_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0} (\dot x,k). $$ Finally, the following Fourier-Plancherel identity holds true: $$ \forall g\in {\mathcal S}(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)\,,\ \|g\|_{L^2(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)} ^2 = \Bigl(\frac 2 \pi \Bigr) ^d \|{\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g\|_{L^2(\wh{\mathop {\mathbb H\kern 0pt}\nolimits}_0^d)}^2. $$ } \end{theorem} \begin{proof} The first property stems from the fact that, because~$|{\mathcal K}_d|\leq1,$ we have $$ \|{\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g\|_{L^\infty(\wh{\mathop {\mathbb H\kern 0pt}\nolimits}_0^d)} \leq \|g\|_{L^1(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)}.
$$ Furthermore, as the kernel ${\mathcal K}_d$ is continuous with respect to~$(\dot x,k),$ we get from the explicit expression of ${\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits}$ that the range of $L^1(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)$ by ${\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits}$ is included in the set of continuous functions on~$\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0.$ Proving that $({\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g)(\dot x,k )$ tends to $0$ when $(\dot x,k )$ goes to infinity is based on the regularity and decay properties of the kernel~${\mathcal K}_d.$ More specifically, Identity\refeq {eq:KLap} implies that $$ \forall p \in \mathop{\mathbb N\kern 0pt}\nolimits\,,\ 4^p|\dot x| ^p{\mathcal K}_d(\dot x,k,Y) = \bigl((-\D_{Y} )^p {\mathcal K}_d\bigr)(\dot x,k,Y), $$ while Relation\refeq {eq:Kk} gives for any multi-index $\alpha$ in $\mathop{\mathbb N\kern 0pt}\nolimits^d,$ $$ (ik \mathop{\rm sgn}\nolimits \dot x)^\al {\mathcal K}_d (\dot x,k,Y) = ( {\mathcal T}^\al {\mathcal K}_d) (\dot x,k,Y) \quad\hbox{with}\quad {\mathcal T}^\al \buildrel\hbox{\footnotesize def}\over = \prod_{j=1}^d ( \eta_j\partial_{y_j} -y_j\partial_{\eta_j} )^{\al_j} . $$ Hence, if $g\in{\mathcal S}(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)$ then performing suitable integrations by parts in the integral defining~${\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g$ yields $$ 4^p|\dot x|^p ({\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g)(\dot x,k )= {\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} ((-\D_Y)^pg)(\dot x,k) \quad\hbox{and}\quad (-i k \mathop{\rm sgn}\nolimits \dot x )^\al ({\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g)(\dot x,k )= ({\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} {\mathcal T}^\al g)(\dot x,k). $$ This implies that, for any positive integer~$p$, a constant~$C_p$ and an integer~$N_p$ exist such that \begin{eqnarray} \label {decaydotx1} ( 1+|\dot x|+|k|) ^p|{\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits}(g) (\dot x,k)| \leq C_p \, \|g\|_{N_p,{\mathcal S}(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)}. \end{eqnarray} This proves that $({\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g)(\dot x,k )$ tends to~$0$ when~$(\dot x,k )$ goes to infinity for any $g$ in~${\mathcal S}(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d).$ Now, because~${\mathcal S}(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)$ is dense in~$L^1(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)$ and~${\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits}$ is continuous from~$L^1(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)$ to the set~${\mathcal C}_b(\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0)$ of bounded continuous functions on $\wh{\mathop {\mathbb H\kern 0pt}\nolimits}^d_0$, one can conclude that the range of~$L^1(T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d)$ by~${\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits}$ is included in~${\mathcal C}_0(\wh {\mathop {\mathbb H\kern 0pt}\nolimits}^d_0)$.
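\smallbreak As a quick sanity check of the first of these integration-by-parts identities (an illustration only, not part of the proof), one can test the case $p=1$, $d=1$ numerically, using the explicit formula for the kernel obtained in Section\refer {proofFormulacK} (with the sign $\e=-1$ found there). In the short Python script below, the Gaussian test function, the grid sizes and the point $(\dot x,k)=(1.3,2)$ are our own arbitrary choices, and NumPy is assumed to be available. \begin{verbatim}
import numpy as np

# Check  4*|xdot| * (G g)(xdot,k) = (G((-Delta_Y) g))(xdot,k)  for d = 1,
# using the kernel
#   K(xdot,k,Y) = (1/2pi) int_{-pi}^{pi}
#                 exp(-2i sqrt(xdot)(y sin z - eta cos z) - i k z) dz
# and (G g)(xdot,k) = int conj(K)(xdot,k,Y) g(Y) dY.
xdot, k = 1.3, 2
z = np.linspace(-np.pi, np.pi, 512, endpoint=False)
y = np.linspace(-6.0, 6.0, 201)
eta = np.linspace(-6.0, 6.0, 201)
YY, EE = np.meshgrid(y, eta, indexing="ij")

# conj(K)(xdot, k, .) on the grid, by the rectangle rule in z
Kbar = np.zeros_like(YY, dtype=complex)
for zz in z:
    Kbar += np.exp(2j*np.sqrt(xdot)*(YY*np.sin(zz) - EE*np.cos(zz)) + 1j*k*zz)
Kbar /= z.size

dY = (y[1] - y[0]) * (eta[1] - eta[0])
G = lambda f: (Kbar * f).sum() * dY            # (G f)(xdot, k), Riemann sum

g = np.exp(-YY**2 - EE**2)                     # Gaussian test function
minus_lap_g = (4.0 - 4.0*YY**2 - 4.0*EE**2) * g  # (-Delta_Y) g, by hand
print(abs(4*abs(xdot)*G(g) - G(minus_lap_g)))  # should be tiny
\end{verbatim} The printed discrepancy should be at quadrature/round-off level, in accordance with $4|\dot x|\,({\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g)(\dot x,k)= {\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} ((-\D_Y)g)(\dot x,k)$.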
\medbreak In order to establish\refeq{eq:convGH}, it suffices to see that, by virtue of the definition of ${\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits},$ of Identity\refeq{eq:Kd} and of the Fubini theorem (here the decay inequality\refeq{eq:fastdecayK} comes into play), one may write that for any couple $(f,g)$ of integrable functions on $T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d,$ we have $$ \begin{aligned} {\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits}(f\star g)(\dot x,k)&=\int_{(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)^2}\ov{\mathcal K}_d(\dot x,k,Y)\,f(Y-Y') \, g(Y')\,dYdY'\\ &=\sum_{k'\in{\mathop{\mathbb Z\kern 0pt}\nolimits}^d}\int_{(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)^2}\ov{\mathcal K}_d(\dot x,k',Y') g(Y')\:\ov{\mathcal K}_d(\dot x,k-k',Y-Y') f(Y-Y')\,dYdY'. \end{aligned} $$ Then performing an obvious change of variable, and using the Fubini theorem again and the definition of ${\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits}$ gives\refeq{eq:convGH}. \medbreak In order to prove the \emph{Fourier inversion formula for ${\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits}$}, let us consider~$g$ in~${\mathcal S}(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)$ and~$\chi$ in~${\mathcal S}({\mathop{\mathbb R\kern 0pt}\nolimits})$ with value~$1$ near~$0$. For any sequence~$\suite \e p \mathop{\mathbb N\kern 0pt}\nolimits$ of positive real numbers which tends to~$0$, we have, according to the inverse Fourier formula~\eqref{MappingofPHdemoeq1}, \begin{eqnarray*} g(Y) \chi(\e_p s) & = & \frac {2^{d-1}} {\pi^{d+1} } \int_{\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d} e^{is\lam} {\mathcal W}(\wh w, Y){\mathcal F}_{\mathop {\mathbb H\kern 0pt}\nolimits} (g\otimes \chi(\e_p\cdot)) (\wh w) \, d\wh w\\ & = & \frac {2^{d-1}} {\pi^{d+1} } \int_{\wt {\mathop {\mathbb H\kern 0pt}\nolimits}^d} e^{is\lam} {\mathcal W}(\wh w, Y) \Bigl( \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d} \ov{\mathcal W} (\wh w,Y') g(Y') dY'\Bigr) \frac 1 {\e_p} \wh \chi \Bigl(\frac \lam {\e_p}\Bigr) \, d\wh w. \end{eqnarray*} {}From the definition of $\D_{\mathop {\mathbb H\kern 0pt}\nolimits}$ in\refeq{defLaplace}, we gather that for any integer $p$ and positive real number~$\e$, there exist a function $f_\e^p$ on ${\mathop {\mathbb H\kern 0pt}\nolimits}^d$ and a constant $C_p$ (depending only on $p$) such that \begin{equation} \label {use1} (-\D_{\mathop {\mathbb H\kern 0pt}\nolimits})^p \bigl(\chi(\e s) g(Y)\bigr)= \chi(\e s) (-\D_Y)^p g(Y) + \e f^p_\e(Y,s)\quad\hbox{with}\quad \|f^p_\e(\cdot,s)\|_{L^1(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d)}\leq C_p. \end{equation} Therefore, having $\ep$ tend to $0,$ we deduce that $$ |\lam|^p (2|m|+d) ^p \Big| \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d} \ov{\mathcal W} (\wh w,Y) g(Y) \,dY \Big| \leq C_p \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d} \Big|(-\D_Y)^pg(Y)\Big| dY \, .
$$ Along the same lines, taking advantage of the \emph{right-invariant} vector fields defined in\refeq{eq:rightinv}, we get for any integer $p$ $$ |\lam|^p (2|n|+d) ^p \Big| \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d} \ov{\mathcal W} (\wh w,Y) g(Y) dY \Big| \leq C_p \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d} \Big|(-\D_Y)^pg(Y)\Big| dY \, .$$ Identity\refeq {Fourierhorizontaldemoeq112} together with integrations by parts implies that for any multi-index $\al$ $$ \bigl(-i\,\mathop{\rm sgn}\nolimits\lam\bigr)^{|\al|} \prod_{j=1}^d (n_j-m_j)^{\al_j} \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d} \ov{\mathcal W} (\wh w,Y) g(Y) \,dY = \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d} \ov{\mathcal W} (\wh w,Y) {\mathcal T}^\al g(Y) \,dY \, . $$ We deduce that the function $$ \wh w\longmapsto {\mathcal W}(\wh w, Y) \Bigl( \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d} \ov{\mathcal W} (\wh w,Y') g(Y')\, dY'\Bigr) $$ satisfies the hypothesis of Lemma\refer {convergesimplecouchH_0}. Thus combining with Proposition\refer {ProofCFL1_Prop1} gives $$\begin{aligned} g(Y) & = \frac {2^{d-1}} {\pi^{d+1} } \:2\pi \int_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}_0^d} {\mathcal K}_d(\dot x,k,Y) \Bigl( \int_{T^\star {\mathop{\mathbb R\kern 0pt}\nolimits}^d} \ov{\mathcal K}_d (\dot x,k,Y') g(Y') dY'\Bigr) d\mu_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}_0^d}(\dot x,k) \\ & = \Bigl(\frac2\pi\Bigr)^{d} \int_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}_0^d} {\mathcal K}_d(\dot x,k, Y) \,{\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g (\dot x, k) \, d\mu_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}_0^d}(\dot x,k), \end{aligned} $$ which completes the proof of the inversion formula. \medbreak Of course, as in the classical Fourier theory, having an inversion formula implies a Fourier-Plancherel type relation. Indeed we have for any function~$g$ in~${\mathcal S}(T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d),$ using Fubini theorem, \begin{eqnarray*} \int_{T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d} g(Y)\overline g(Y) \,dY & = & \Bigl(\frac2\pi\Bigr)^{d} \int_{T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d} \biggl( \int_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}_0^d} {\mathcal K}_d(\dot x,k, Y) \,{\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g (\dot x, k) \, d\mu_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}_0^d}(\dot x,k)\biggr) \overline g(Y) \,dY\\ & = & \Bigl(\frac2\pi\Bigr)^{d} \int_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}_0^d} {\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g (\dot x, k)\overline {\biggl( \int_{T^\star{\mathop{\mathbb R\kern 0pt}\nolimits}^d} \ov{\mathcal K}_d(\dot x,k,Y) g(Y) \,dY\biggr) } \, d\mu_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}_0^d} (\dot x,k)\\ & = & \Bigl(\frac2\pi\Bigr)^{d} \int_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}_0^d} {\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g (\dot x, k)\,\overline {{\mathcal G}_{\mathop {\mathbb H\kern 0pt}\nolimits} g (\dot x, k) } \, d\mu_{\wh {\mathop {\mathbb H\kern 0pt}\nolimits}_0^d} (\dot x,k). \end{eqnarray*} The whole Theorem\refer {FourierhorizontalMore} is proved. \end{proof}
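\medbreak Although no numerical considerations enter the proofs above, the explicit formula for the kernel lends itself to a simple sanity check, in dimension $d=1$, of the convolution identity\refeq {eq:Kd} and of the symmetry relation\refeq {eq:ksym2}. The following Python script is our own illustration (the sample points, the number of quadrature nodes and the truncation $|k'|\leq 40$ are arbitrary choices, and NumPy is assumed available): \begin{verbatim}
import numpy as np

# Kernel for d = 1 (with the sign epsilon = -1 found above):
#   K(xdot,k,Y) = (1/2pi) int_{-pi}^{pi}
#                 exp(-2i sqrt(xdot)(y sin z - eta cos z) - i k z) dz
z = np.linspace(-np.pi, np.pi, 512, endpoint=False)

def K(xdot, k, Y):
    y, eta = Y
    vals = np.exp(-2j*np.sqrt(xdot)*(y*np.sin(z) - eta*np.cos(z)) - 1j*k*z)
    return vals.mean()          # rectangle rule (spectrally accurate here)

xdot, k = 2.0, 3
Y1, Y2 = (0.7, -0.4), (-1.1, 0.5)

# Convolution identity: K(xdot,k,Y1+Y2) = sum_{k'} K(xdot,k-k',Y1) K(xdot,k',Y2)
lhs = K(xdot, k, (Y1[0] + Y2[0], Y1[1] + Y2[1]))
rhs = sum(K(xdot, k - kp, Y1) * K(xdot, kp, Y2) for kp in range(-40, 41))
print(abs(lhs - rhs))

# Symmetry: K(xdot,-k,y,-eta) = (-1)^k K(xdot,k,y,eta)
print(abs(K(xdot, -k, (Y1[0], -Y1[1])) - (-1)**k * K(xdot, k, Y1)))
\end{verbatim} Both printed quantities should be of the order of machine precision: the rectangle rule is spectrally accurate for smooth periodic integrands, and the coefficients ${\mathcal K}(\dot x,k,Y)$ decay faster than any power of $|k|$ by\refeq {eq:fastdecayK}, so that the truncation error is negligible.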
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction and context of the paper} At the heart of many biological systems are chemical reaction networks (CRNs), and the question of when these admit oscillation is of both theoretical and practical interest. Oscillation is known to occur -- and play a key role -- in a great variety of biological contexts. Examples include the natural rhythms of body clocks and ovulation, biochemical oscillations in cellular signalling, cyclic behaviour of various diseases, and periodic fluctuations in Lotka-Volterra-type models of interacting populations. Several chapters of \cite{MurrayMathBio} and \cite{mathphys} detail mathematical models of oscillation in biological settings. Some general biological principles underlying biological oscillation are discussed in \cite{novaktyson}. Once a network admitting oscillation is identified, we might naturally wonder whether this network occurs as a ``motif'' in other larger networks and, if so, whether the larger networks must themselves admit oscillation. The desire to phrase this question precisely and provide some simple and partial answers motivates this work. Several papers have treated analogous questions about the inheritance of multistationarity in CRNs \cite{joshishiu,feliuwiufInterface2013,Joshi.2013aa,JoshiShiu2016}. In a recent contribution it was shown that a great deal can be done in this direction using the implicit function theorem \cite{banajipanteaMPNE}. An (incomplete) list of network modifications proven to preserve the property of admitting nondegenerate multistationarity was given; these collectively define a partial order $\preceq$ on the set of all CRNs such that if a CRN $\mathcal{R}$ admits nondegenerate multistationarity, then so do all CRNs $\succeq \mathcal{R}$ in this partial order. Although it is likely that most, if not all, of the results in \cite{banajipanteaMPNE} can be restated with ``nondegenerate oscillation'' replacing ``nondegenerate multistationarity'', only part of this task is undertaken here: we prove four results about general CRNs, Theorems~\ref{thmnewdepreac}~to~\ref{thmnewwithopen}, which are analogues of related results about multistationarity in \cite{banajipanteaMPNE}, also numbered Theorems~1~to~4. An example of what these tell us is the following corollary about fully open CRNs: \begin{prop} \label{propMAfo} If a fully open CRN $\mathcal{R}$ with mass action kinetics admits nondegenerate (resp., stable) oscillation, then so does any fully open CRN with mass action kinetics which includes $\mathcal{R}$ as an induced subnetwork. \end{prop} The definitions required to make this result precise will follow. Proposition~\ref{propMAfo} is the specialisation for mass action kinetics of a result with more general kinetic assumptions, Proposition~\ref{coropeninduced} (see Remark~\ref{remMAfo}), which is a natural starting point for some computational exploration on small fully open CRNs admitting oscillation. It is worth noting at the outset that Proposition~\ref{propMAfo} fails if the CRNs are not assumed to be fully open. An example is provided in the concluding section (Example~\ref{exinherit}). Much of the mathematical literature on oscillation in CRNs has focussed on conditions which forbid oscillation, or forbid stable oscillation of the kind which might be observed in numerical simulations, or forbid bifurcations leading to oscillation.
For CRNs with mass action kinetics, there are the original results of deficiency theory \cite{horn72,hornjackson,feinberg0,feinberg}; for CRNs with more general kinetics there are results based on the theory of monotone dynamical systems (\cite{banajidynsys,angelileenheersontag,donnellbanaji,banajimierczynski} for example), and algebraic approaches (\cite{abphopf} for example). Various papers which do not directly treat CRNs also have natural applications to forbidding oscillation or stable oscillation in CRNs, including the work of Angeli, Hirsch and Sontag on ``coherent'' systems \cite{angelihirschsontag}, and of Li and Muldowney on generalised Bendixson's criteria \cite{li_muldowney_1993, li_muldowney_1996, li_muldowney_2000}. On the other hand oscillation has been shown to occur in numerical studies of various CRNs of interest (for example, \cite{dicera, WolfOsci, Kholodenko.2000aa, Qiao.2007aa}). Aside from numerical work, there exists an important strand of theory drawing on approaches in convex and toric geometry which provides {\em sufficient} conditions for Hopf bifurcations in CRNs with mass action and generalised mass action kinetics \cite{eiswirth91, eiswirth96,gatermann, errami2015}. These approaches lead to algorithms for the determination of parameter regions where Hopf bifurcation occurs. Other papers treating the question of sufficient conditions for oscillation in chemical reaction networks include \cite{minchevaroussel} and \cite{domijan}. The work here is aimed at closing the gap between theory which forbids oscillation and examples of oscillatory networks or particular sufficient conditions for oscillation. It is likely that many examples of CRNs admitting oscillation in fact oscillate because they inherit this property from a smaller CRN which admits oscillation, and the goal is then to identify an appropriate notion of inheritance, and minimal oscillatory CRNs in some sense. The importance of inheritance approaches is increasingly recognised. In \cite{ConradiShiuPTM}, Conradi and Shiu pose a question closely related to the main question in this paper, namely whether Hopf bifurcation is preserved when CRNs are modified in natural ways. The problem of identifying a ``minimal'' oscillatory subnetwork was tackled for the biologically important MAPK cascade in \cite{hadac}. Computational work on fully open CRNs towards the end of the paper confirms the practical usefulness of inheritance approaches. As oscillation may occur in very small regions of parameter space, it may be hard to find by brute force in numerical simulations, even where it is straightforward to predict its occurrence by inheritance results. Finding a single small oscillatory CRN on the other hand immediately gives us knowledge of a large number of CRNs which inherit this oscillation. Ultimately, the hope is that examining CRNs which can neither be proven to forbid oscillation nor be shown to oscillate (using numerics, known sufficient conditions for oscillation, or inheritance results such as here) may lead to new theorems about necessary conditions for oscillation. \subsection{Notational preliminaries} \begin{notation}[Nonnegative and positive vectors] \label{notpos} A real vector $x = (x_1, \ldots, x_n)^{\mathrm{t}}$ is nonnegative (resp., positive) if $x_i\geq 0$ (resp., $x_i > 0$) for each $i$, and we refer to the nonnegative (resp., positive) orthant in $\mathbb{R}^n$ as $\mathbb{R}^n_{\geq 0}$ (resp., $\mathbb{R}^n_{\gg 0}$). Subsets of $\mathbb{R}^n_{\gg 0}$ are referred to as positive. 
\end{notation} \begin{notation}[Vector of ones] $\mathbf{1}$ denotes a vector of ones whose length is inferred from the context. \end{notation} \begin{notation}[Identity matrix] $I_n$ is the $n \times n$ identity matrix. \end{notation} \begin{notation}[Set theoretic inverse] Given sets $X,Y$ and a function $f\colon X \to Y$, not necessarily invertible, $f^{-1}$ will generally refer to the set theoretic inverse, namely, given $Y_0 \subseteq Y$, $f^{-1}(Y_0) = \{x \in X\colon f(x) \in Y_0\}$. \end{notation} \begin{notation}[Monomials, vector of monomials] \label{notmon} Given $x=(x_1,\ldots, x_n)^{\mathrm{t}}$ and $a = (a_1,\ldots, a_n)$, $x^a$ is an abbreviation for the (generalised) monomial $\prod_ix_i^{a_i}$. If $A$ is an $m \times n$ matrix with rows $A_1, \ldots, A_m$, then $x^A$ means the vector of (generalised) monomials $(x^{A_1}, x^{A_2}, \ldots, x^{A_m})^{\mathrm{t}}$. \end{notation} \begin{notation}[Entrywise product] \label{nothad} Given two matrices $A$ and $B$ with the same dimensions, $A \circ B$ will refer to the entrywise (or Hadamard) product of $A$ and $B$, namely $(A\circ B)_{ij} = A_{ij}B_{ij}$. \end{notation} \section{Periodic orbits} We remind the reader of some standard results from Floquet theory (Chapters 3 and 4 of \cite{HaleOsci} for example) as needed here. Let $X \subseteq \mathbb{R}^r$ be open, $F\colon X \to \mathbb{R}^r$ be $C^1$, and consider the ODE \begin{equation} \label{Floq0} \dot x = F(x) \end{equation} on $X$. Assume that (\ref{Floq0}) has a nontrivial periodic solution $\theta\colon \mathbb{R} \to X$ with smallest positive period $T$, and with corresponding periodic orbit $\mathcal{O}:=\mathrm{im}\,\theta$. The variational equation about $\theta$ is \begin{equation} \label{Floq1} \dot z= DF(\theta(t))z. \end{equation} $DF(\theta(t))$ is an $r \times r$ $T$-periodic matrix and Floquet theory tells us that any fundamental matrix solution $Z(t)$ of (\ref{Floq1}) can be written in the form \[ Z(t) = A(t)e^{tB} \] where $A$ is a nonsingular $T$-periodic matrix, and $B$ is a constant matrix. The eigenvalues of $e^{TB}$ are termed the {\em characteristic multipliers} (or {\em Floquet multipliers}) of $\mathcal{O}$. If $Z(0) = I$, then $A(T) = A(0) = I$, in which case the characteristic multipliers are the eigenvalues of $Z(T)$. $\mathcal{O}$ is termed {\em hyperbolic} (resp., {\em linearly stable}) if $r-1$ of its characteristic multipliers are disjoint from (resp., inside) the unit circle in $\mathbb{C}$. Hyperbolicity (resp., linear stability) of a periodic orbit is precisely hyperbolicity (resp., linear stability) of the associated fixed point of any Poincar\'e map constructed on a section transverse to the periodic orbit: see Chapter~10 onwards of \cite{wiggins}, for example. Hyperbolic periodic orbits survive under sufficiently small perturbations of vector fields in a sense made precise in Lemma~\ref{lemreg} below. Linear stability of a periodic orbit implies asymptotic orbital stability, namely that forward trajectories of all sufficiently nearby initial conditions converge to the periodic orbit (Theorem~4.2 in \cite{HaleOsci}). The following is a well-known result of regular perturbation theory. $d_{\mathrm{H}}(\cdot, \cdot)$ denotes the Hausdorff distance between nonempty compact subsets of Euclidean space. \begin{lemma1} \label{lemreg} Let $X \subseteq \mathbb{R}^r$ be open, $\epsilon' > 0$ and $F\colon X \times (-\epsilon', \epsilon')\to \mathbb{R}^r$ be $C^1$. 
Consider the $\epsilon$-dependent family of ODEs on $X$ \begin{equation} \specialnumber{${}_\epsilon$}\label{eqnFloqe} \dot x = F(x,\epsilon)\,. \end{equation} Suppose that \specialeqref{eqnFloqe}{${}_0$} has a nontrivial hyperbolic (resp., linearly stable) $T$-periodic orbit $\mathcal{O} \subseteq X$. Then there exists $\epsilon_0>0$ s.t. for $\epsilon \in (-\epsilon_0, \epsilon_0)$ \specialeqref{eqnFloqe}{${}_\epsilon$} has a hyperbolic (resp., linearly stable) periodic orbit $\mathcal{O}_\epsilon$ satisfying $\lim_{\epsilon \to 0} d_{\mathrm{H}}(\mathcal{O}_\epsilon, \mathcal{O}) = 0$ and with period $T_\epsilon$ satisfying $\lim_{\epsilon \to 0}T_\epsilon = T$. \end{lemma1} \begin{pf} These claims are proved, for example, by constructing a family of Poincar\'e maps $\Pi_\epsilon$ for \specialeqref{eqnFloqe}{${}_\epsilon$} and applying the implicit function theorem at the fixed point of $\Pi_0$ corresponding to $\mathcal{O}$ as described in Section IV of \cite{Fenichel79}. \hfill$\square$ \end{pf} We next consider some specialisations of Floquet theory to systems with linear first integrals relevant to the study of CRNs. Let $x \in \mathbb{R}^n_{\gg 0}$, $v\colon\mathbb{R}^n_{\gg 0} \to \mathbb{R}^m$ be $C^1$, $\Gamma$ be an $n \times m$ real matrix of rank $r$, and consider the ODE \begin{equation} \label{Floq2} \dot x = \Gamma v(x). \end{equation} Assume that (\ref{Floq2}) has a nontrivial positive periodic orbit $\mathcal{O}$ (see Notation~\ref{notpos}), namely there exists some periodic solution $\theta\colon \mathbb{R} \to \mathbb{R}^n_{\gg 0}$ of (\ref{Floq2}) with smallest period $T>0$ and with $\mathcal{O}:=\mathrm{im}\,\theta$. Clearly, $S_{\mathcal{O}} := (\mathcal{O} + \mathrm{im}\,\Gamma) \cap \mathbb{R}^n_{\gg 0}$ is locally invariant under (\ref{Floq2}). If $r \neq n$, then $\mathcal{O}$ cannot be hyperbolic or linearly stable in the senses defined above. However, our interest is in whether it is hyperbolic (resp., linearly stable) {\em relative} to $S_{\mathcal{O}}$. Associated with $S_{\mathcal{O}}$ are $r$ characteristic multipliers and we would like to know whether $r-1$ of these are disjoint from (resp., inside) the unit circle. The single remaining multiplier associated with $S_{\mathcal{O}}$, corresponding to travel along the periodic orbit, is $1$, while the additional $n-r$ multipliers associated with directions transverse to $S_{\mathcal{O}}$ are also easily shown all to be $1$. An explicit calculation of the multipliers of $\mathcal{O}$ relative to $S_{\mathcal{O}}$ is needed in certain proofs to follow. This proceeds as follows. Choose $x_0 \in S_{\mathcal{O}}$ and choose $\Gamma_0$ to be any matrix whose columns form a basis for $\mathrm{im}\,\Gamma$. Define $Q$ by $\Gamma = \Gamma_0Q$, and define the bijection $h\colon \mathbb{R}^r \to x_0 + \mathrm{im}\,\Gamma$ by $h(y) = x_0 + \Gamma_0 y$. Note that $W:= h^{-1}(S_{\mathcal{O}})$ is an open subset of $\mathbb{R}^r$, and $\left.h\right|_{W}$ is an affine bijection between $W$ and $S_{\mathcal{O}}$. Setting $x=h(y)$ we get, for the evolution of $y$: \begin{equation} \label{Floq3} \dot y = Qv(x_0 + \Gamma_0y)\,. \end{equation} (\ref{Floq3}) has a $T$-periodic solution $\psi\colon \mathbb{R} \to W$ defined by $\psi(t) = h^{-1}(\theta(t))$. Let $\mathcal{O}':=\mathrm{im}\,\psi = h^{-1}(\mathcal{O})$ be the corresponding periodic orbit. The multipliers of $\mathcal{O}'$ are precisely the multipliers of $\mathcal{O}$ relative to $S_{\mathcal{O}}$. 
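As an illustrative aside, the multipliers just described can be approximated numerically by co-integrating (\ref{Floq2}) and the variational equation (\ref{Floq3a}) over one period from $Z(0) = I_r$ and taking the eigenvalues of $Z(T)$. The following minimal Python sketch (assuming NumPy and SciPy; the inputs \texttt{Gamma}, \texttt{v}, \texttt{Dv}, a point \texttt{theta0} on the orbit and the period \texttt{T} are placeholders, and the helper name is our own) is not part of the formal development:
\begin{verbatim}
# Minimal sketch: characteristic multipliers of a periodic orbit of
# dx/dt = Gamma v(x) relative to its stoichiometry class, obtained by
# integrating the variational equation (Floq3a) with Z(0) = I_r and
# taking the eigenvalues of Z(T).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import orth, lstsq

def relative_multipliers(Gamma, v, Dv, theta0, T):
    n = Gamma.shape[0]
    Gamma0 = orth(Gamma)            # columns: a basis for im(Gamma)
    r = Gamma0.shape[1]
    Q = lstsq(Gamma0, Gamma)[0]     # Q solves Gamma = Gamma0 @ Q

    def rhs(t, w):
        x = w[:n]                   # current point theta(t) on the orbit
        Z = w[n:].reshape(r, r)     # fundamental matrix of (Floq3a)
        dx = Gamma @ v(x)
        dZ = Q @ Dv(x) @ Gamma0 @ Z
        return np.concatenate([dx, dZ.ravel()])

    w0 = np.concatenate([theta0, np.eye(r).ravel()])
    sol = solve_ivp(rhs, (0.0, T), w0, rtol=1e-10, atol=1e-12)
    ZT = sol.y[n:, -1].reshape(r, r)
    # One eigenvalue should be (numerically close to) 1; the remaining
    # r-1 eigenvalues decide hyperbolicity / linear stability.
    return np.linalg.eigvals(ZT)
\end{verbatim}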
By definition $\mathcal{O}'$ is hyperbolic (resp., linearly stable) if it has $r-1$ characteristic multipliers disjoint from (resp., inside) the unit circle in $\mathbb{C}$. This motivates the following definitions: \begin{def1}[NPPO, SPPO] \label{defNPPO} Let $\mathcal{O}$ be a positive periodic orbit of (\ref{Floq2}). With $h$ defined as above, $\mathcal{O}$ is a {\em nondegenerate positive periodic orbit (NPPO)} of (\ref{Floq2}) if $\mathcal{O}':= h^{-1}(\mathcal{O})$ is a hyperbolic periodic orbit of (\ref{Floq3}). $\mathcal{O}$ is a {\em linearly stable positive periodic orbit (SPPO)} of (\ref{Floq2}) if $\mathcal{O}':= h^{-1}(\mathcal{O})$ is a linearly stable periodic orbit of (\ref{Floq3}). An SPPO is clearly also an NPPO. \end{def1} The use of the transformation $h$ to define new coordinates on $S_\mathcal{O}$ is illustrated schematically in Figure~\ref{figschematic}. \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=0.9] \node at (2.3,2.3) {$\mathbb{R}^r$}; \draw [->, line width=0.04cm] (-3,0) -- (2.5,0); \draw [->, line width=0.04cm] (0,-2.5) -- (0,2.5); \fill[color=black!30, fill opacity=0.7] (-1,2) -- (2.5,-1) -- (-2.5,-2) -- cycle; \begin{scope}[scale=0.7,cm={1.2,0.1,0.3,1,(-7.3cm,-4.3cm)}] \draw[->, line width=0.04cm] (5.5, 2) .. controls (6, 2) and (6,3) .. (6,3.5); \draw[->, line width=0.04cm] (6, 3.5) .. controls (6, 4) and (4,5) .. (3.5,4.5); \draw[->, line width=0.04cm] (3.5, 4.5) .. controls (3, 4) and (5,2) .. (5.5,2); \end{scope} \node at (0.25,1.4) {$W=h^{-1}(S_\mathcal{O})$}; \node at (2.25,-0.9) {$\mathcal{O}' = \mathrm{im}\,\psi = h^{-1}(\mathcal{O})$}; \node at (4,1.6) {$h$}; \draw[->, line width=0.04cm] (2.5, 1) .. controls (3.5, 1.3) and (4.5,1.3) .. (5.5,1); \begin{scope}[scale=0.6,xshift=10cm, yshift=-4cm] \node at (9,8) {$\mathbb{R}^n$}; \draw[->, line width=0.04cm] (-0.5,0) -- (10,0); \draw[->, line width=0.04cm] (0,-0.5) -- (0,8); \draw[->, line width=0.04cm] (-0.5,-0.25) -- (8,4); \fill[color=black!30, fill opacity=0.7] (0,7) -- (7,3.5) -- (6.5,0) -- cycle; \draw[->, line width=0.04cm] (5.5, 2) .. controls (6, 2) and (6,3) .. (6,3.5); \draw[->, line width=0.04cm] (6, 3.5) .. controls (6, 4) and (4,5) .. (3.5,4.5); \draw[->, line width=0.04cm] (3.5, 4.5) .. controls (3, 4) and (5,2) .. (5.5,2); \fill (5.1,3.7) circle (3pt); \node at (4.85,3.35) {$x_0$}; \node at (7.3,2.1) {$\mathcal{O} = \mathrm{im}\,\theta$}; \node at (2.1,5.5) {$S_\mathcal{O}$}; \end{scope} \end{tikzpicture} \end{center} \caption{\label{figschematic} $h$ defines an affine embedding of $\mathbb{R}^r$ into $\mathbb{R}^n$, illustrated in the case $r=2$ and $n=3$. The image of $h$ is $x_0 + \mathrm{im}\,\Gamma$, assumed to include a positive periodic orbit $\mathcal{O}$, and $h$ thus defines local coordinates on $S_\mathcal{O}$, the positive stoichiometry class of $\mathcal{O}$. Of interest is the hyperbolicity or linear stability of $\mathcal{O}$ relative to $S_\mathcal{O}$, and by definition $\mathcal{O}$ is an NPPO (resp., SPPO) if $\mathcal{O}' = h^{-1}(\mathcal{O})$ is nondegenerate (resp., linearly stable).} \end{figure} \begin{remark} Note that the overloading of the term ``linearly stable'' in Definition~\ref{defNPPO} is an abuse of terminology which should cause no confusion: if $\mathrm{im}\,\Gamma = \mathbb{R}^n$, then linearly stable has its usual meaning; if $\mathrm{im}\,\Gamma \neq \mathbb{R}^n$, then no periodic orbit of (\ref{Floq2}) can truly be linearly stable, and linear stability is taken to mean linear stability relative to $\mathrm{im}\,\Gamma$.
\end{remark} We can easily verify that Definition~\ref{defNPPO} makes sense: different choices of $x_0$ or $\Gamma_0$ lead to the same characteristic multipliers. To see this, recall that according to Floquet theory the variational equation of (\ref{Floq3}) about $\psi(t) = h^{-1}(\theta(t))$, namely, \begin{equation} \label{Floq3a} \dot z= QDv(\theta(t))\Gamma_0 z \end{equation} has a fundamental matrix solution $Z(t)$ which can be written $Z(t) = A(t)e^{tB}$ with $A$ a nonsingular $T$-periodic matrix and $B$ a constant matrix. The characteristic multipliers associated with $\psi$ are the eigenvalues of $e^{TB}$. Now suppose we make some different choices $x_0' \in S_{\mathcal{O}}$ and $\Gamma_0'$ and let $h'\colon \mathbb{R}^r \to x_0'+\mathrm{im}\,\Gamma$ be defined by $h'(y) := x_0'+\Gamma_0' y$. As the columns of $\Gamma_0'$ are a basis for $\mathrm{im}\,\Gamma$, $\Gamma_0' = \Gamma_0R$ where $R$ is a nonsingular $r \times r$ matrix. Thus $\Gamma = \Gamma_0Q = \Gamma_0'R^{-1}Q$. With $x = h'(y)$, we get the evolution on $W':=h'^{-1}(S_{\mathcal{O}})$ \begin{equation} \label{Floq4} \dot y = R^{-1}Qv(x_0' + \Gamma_0'y) \end{equation} with $T$-periodic solution $\psi'\colon \mathbb{R} \to W'$ defined by $\psi'(t) = h'^{-1}(\theta(t))$. The variational equation of (\ref{Floq4}) about $\psi'(t)$ is \begin{equation} \label{Floq4a} \dot z= R^{-1}[QDv(\theta(t))\Gamma_0]R z. \end{equation} Then $Z'(t):=R^{-1}Z(t) = R^{-1}A(t)e^{tB}$ is a fundamental matrix solution of (\ref{Floq4a}) with $R^{-1}A(t)$ clearly a $T$-periodic matrix. Thus the characteristic multipliers associated with the solution $\psi'(t)$ of (\ref{Floq4}) are again the eigenvalues of $e^{TB}$, i.e., those associated with the solution $\psi(t)$ of (\ref{Floq3}). \section{Background on CRNs} \label{secbackground} As the framework and terminology closely follow those of \cite{banajipantea}, the reader is referred to this paper for some of the detail. The goal is to remain precise while minimising the extensive preamble on basic notation, terminology and definitions which accompanies many papers on CRNs. We consider a CRN involving $n$ chemical species $X_1, \ldots, X_n$. \begin{def1}[Complexes, the zero complex, stoichiometry]A {\em complex} is a formal linear combination of species. If $a = (a_1, \ldots, a_n)^{\mathrm{t}}$ is a nonnegative integer vector, then $a\cdot X := a_1X_1 + a_2X_2 + \cdots + a_nX_n$ is a complex. $a_i$ is the {\em stoichiometry} of $X_i$ in the complex $a \cdot X$. The {\em zero complex} $0X_1 + \cdots + 0X_n$ is denoted $0$. \end{def1} An irreversible reaction is an ordered pair of complexes, termed the {\em source complex} (or left hand side) and the {\em target complex} (or right hand side). We always assume that the source and target complexes are distinct. A reversible reaction may be considered either as two irreversible reactions or, equivalently, as an unordered pair of (distinct) complexes. A CRN is a set of species and a set of reactions. We adopt the common convention that the reactions of a CRN are distinct. However, for technical reasons, we do not forbid {\em a priori} the possibility that some chemical species occurs in a CRN but participates in none of its reactions.
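For example, with $n = 3$ and $a = (1,0,2)^{\mathrm{t}}$, the complex $a\cdot X = X_1 + 2X_3$ has stoichiometries $1$, $0$ and $2$ for $X_1$, $X_2$ and $X_3$ respectively; the ordered pair with source complex $X_1 + 2X_3$ and target the zero complex is the irreversible reaction $X_1 + 2X_3 \rightarrow 0$.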
\begin{def1}[Flow reaction, fully open CRN, fully open extension of a CRN] For the purposes of this paper, reactions of the form $0 \rightarrow A$ or $A \rightarrow 0$ are referred to as {\em flow reactions}, while all others are {\em non-flow reactions} (even where these clearly violate any conservation laws: for example $2A \rightarrow 0$ or $0 \rightarrow A+B$ are referred to as non-flow reactions). A CRN involving species $X_1, \ldots, X_n$ is {\em fully open} if it includes all flow reactions $0 \rightleftharpoons X_i$ ($i = 1, \ldots, n$). Note that if, for example, a CRN includes all reactions of the form $0 \rightleftharpoons 2X_i$, but not all the reactions $0 \rightleftharpoons X_i$, we do not refer to it as fully open. The {\em fully open extension} of a CRN $\mathcal{R}$ is the smallest fully open CRN containing all the reactions of $\mathcal{R}$, namely the CRN created by adjoining to $\mathcal{R}$ any flow reactions which are absent from $\mathcal{R}$. \end{def1} \subsection{Combinatorial representations of CRNs} CRNs are combinatorial objects which give rise to dynamical systems in different ways depending on various modelling choices. The most common combinatorial representation of a CRN is via its {\em complex graph} \cite{horn}, a digraph whose vertices are complexes and whose arcs correspond to (irreversible) reactions. For example, the reaction $X_1+2X_2 \rightarrow X_3$ is an ordered pair of complexes naturally represented as an arc from source complex $X_1+2X_2$ to target complex $X_3$. The set of species and the complex graph together make up a formal description of the CRN. An alternative representation, particularly useful when discussing isomorphism of CRNs, is a {\em Petri net (PN) graph} \cite{angelipetrinet}, an edge-weighted bipartite digraph, defined in the form used here in \cite{banajipanteaMPNE}. The PN graph of a CRN $\mathcal{R}$, denoted $PN(\mathcal{R})$, has two vertex sets $V_S$ (species vertices) and $V_R$ (reaction vertices) identified with the species and the reactions of $\mathcal{R}$. Given $X_i \in V_S$ and $R_j \in V_R$, there exists an arc $X_iR_j$ (resp., $R_jX_i$) with weight $w$ if and only if the species corresponding to $X_i$ occurs with stoichiometry $w>0$ in the source complex (resp., target complex) of the reaction corresponding to $R_j$. Arc weights of $1$ are omitted from drawings for neatness. An unlabelled PN graph is referred to as a {\em motif}. CRNs $\mathcal{R}_1$ and $\mathcal{R}_2$ are isomorphic if $PN(\mathcal{R}_1)$ and $PN(\mathcal{R}_2)$ are isomorphic in a natural sense, namely there exists a relabelling of the vertices of $PN(\mathcal{R}_1)$ which preserves the bipartition and gives $PN(\mathcal{R}_2)$. Given CRNs $\mathcal{R}_1$ and $\mathcal{R}_2$, we say that $\mathcal{R}_1$ is an {\em induced subnetwork} of $\mathcal{R}_2$, and write $\mathcal{R}_1 \leq \mathcal{R}_2$, if $PN(\mathcal{R}_1)$ is a vertex-induced subgraph of $PN(\mathcal{R}_2)$. Clearly, the induced subnetwork relationship induces a partial order on the set of CRNs as discussed in \cite{banajiCRNcount,banajipanteaMPNE}. 
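As an illustrative aside (again not part of the development), the PN-graph formalism is easy to compute with. The following minimal Python sketch, assuming the \texttt{networkx} library, encodes a CRN as a list of (source, target) stoichiometry maps, builds $PN(\mathcal{R})$ as an edge-weighted bipartite digraph, and tests CRN isomorphism as weighted digraph isomorphism preserving the bipartition; the encoding and helper names are our own:
\begin{verbatim}
# Minimal sketch: PN graphs and CRN isomorphism via networkx.
# A CRN is encoded as a list of (source, target) pairs, each a dict
# mapping species names to positive integer stoichiometries.
import networkx as nx

def pn_graph(crn):
    """Edge-weighted bipartite digraph PN(R)."""
    G = nx.DiGraph()
    for j, (source, target) in enumerate(crn):
        rj = ("R", j)
        G.add_node(rj, kind="reaction")
        for sp, w in source.items():
            G.add_node(("S", sp), kind="species")
            G.add_edge(("S", sp), rj, weight=w)  # species -> reaction
        for sp, w in target.items():
            G.add_node(("S", sp), kind="species")
            G.add_edge(rj, ("S", sp), weight=w)  # reaction -> species
    return G

def crn_isomorphic(crn1, crn2):
    """Isomorphism preserving the bipartition and arc weights."""
    nm = nx.algorithms.isomorphism.categorical_node_match("kind", None)
    em = nx.algorithms.isomorphism.numerical_edge_match("weight", 1)
    return nx.is_isomorphic(pn_graph(crn1), pn_graph(crn2),
                            node_match=nm, edge_match=em)

# Example encoding: R1 = {X+Y -> 2Y, Y+Z -> X} (see the example below)
R1 = [({"X": 1, "Y": 1}, {"Y": 2}), ({"Y": 1, "Z": 1}, {"X": 1})]
\end{verbatim}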
Note that if $\mathcal{R}_1 \leq \mathcal{R}_2$, the occurrence of a reaction $R$ in both $\mathcal{R}_1$ and $\mathcal{R}_2$ does {\em not} mean that $R$ is, physically speaking, the same reaction with the same source and target complexes in $\mathcal{R}_2$ as in $\mathcal{R}_1$: identifying reactions with (labelled) vertices in a PN graph means that they maintain their identity as graph-theoretic modifications equivalent to inserting or deleting species are carried out. If $\mathcal{R}_1 \leq \mathcal{R}_2$, and both have the same set of species, we say that $\mathcal{R}_1$ is a {\em reaction-induced subnetwork} of $\mathcal{R}_2$, and write $\mathcal{R}_1 \leq_{R} \mathcal{R}_2$. If $\mathcal{R}_1 \leq \mathcal{R}_2$, and both have the same set of reactions, we say that $\mathcal{R}_1$ is a {\em species-induced subnetwork} of $\mathcal{R}_2$, and write $\mathcal{R}_1 \leq_{S} \mathcal{R}_2$. Some of the definitions are illustrated in the following example. \begin{example} \label{examplePN} Consider the following CRN: \[ X+Y \rightarrow 2Y,\quad Y+Z \rightarrow X \rightarrow W+Z, \quad W \rightarrow X. \tag{$\mathcal{R}$} \] $\mathcal{R}$ involves four species $\{W,X,Y,Z\}$, six complexes $\{W, X, 2Y,X+Y, Y+Z, W+Z\}$ and four (irreversible) reactions. The complex graph of $\mathcal{R}$ is shown below to the left and the PN graph in the centre. Removing the highlighted vertices and their incident arcs leads to the induced subnetwork \[ X+Y \rightarrow 2Y, \quad Y+Z \rightarrow X, \tag{$\mathcal{R}_1$} \] represented in unlabelled form with species vertices as open circles and reaction vertices as filled circles to the right. Note that two reactions and a species were removed from $\mathcal{R}$ to obtain $\mathcal{R}_1$, and so the subnetwork is neither species-induced nor reaction-induced. \begin{center} \begin{tikzpicture}[scale=1,transition/.style={rectangle,draw=black!50,fill=black!5,thick,inner sep=0pt,minimum size=5mm}] \node[transition] at (0,1.5) {$\,X+Y\,$}; \draw [->, thick] (0,1.25) --(0,0.75); \node[transition] at (0,0.5) {$\,2Y\,$}; \node[transition] at (1.5,2) {$\,Y+Z\,$}; \node[transition] at (3,2) {$\,W\,$}; \draw [->, thick] (1.6,1.75) --(1.95,1.25); \draw [->, thick] (2.8,1.75) --(2.45,1.25); \node[transition] at (2.2,1) {$\,X\,$}; \draw [->, thick] (2.2,0.75) --(2.2,0.25); \node[transition] at (2.2,0) {$\,W+Z\,$}; \end{tikzpicture} \hspace{0.7cm} \begin{tikzpicture}[scale=1.1] \draw [-,color=black!25, line width=0.25cm] (0,0) .. controls (0.3,0.3) and (0.5,0.5) .. (0.9,0.5); \draw [-,color=black!25, line width=0.25cm] (1.85,0.15) .. controls (1.7,0.3) and (1.5,0.5) .. (1.1,0.5); \draw [-,color=black!25, line width=0.25cm] (0.8,0.5) -- (1.2,0.5); \draw [-,color=black!25, line width=0.25cm] (0,0) .. controls (0.3,-0.3) and (0.5,-0.5) .. (0.9,-0.5); \draw [-,color=black!25, line width=0.25cm] (1.85,-0.15) .. controls (1.7,-0.3) and (1.5,-0.5) .. (1.1,-0.5); \draw [-,color=black!25, line width=0.25cm] (0.8,-0.5) -- (1.2,-0.5); \draw [-,color=black!25, line width=0.25cm] (1.05,-0.6) .. controls (1.2,-0.75) and (1.4,-0.95) .. (1.8,-0.95); \fill[color=black!25] (-0.1,0) circle (6pt); \draw [->, thick] (1.05,-0.6) .. controls (1.2,-0.75) and (1.4,-0.95) .. (1.8,-0.95); \draw [<-, thick] (2.95,-0.6) .. controls (2.8,-0.75) and (2.6,-0.95) .. (2.2,-0.95); \node at (2,-0.95) {$Z$}; \draw [->, thick] (0.1,0.1) .. controls (0.3,0.3) and (0.5,0.5) .. (0.9,0.5); \draw [<-, thick] (1.85,0.15) .. controls (1.7,0.3) and (1.5,0.5) ..
(1.1,0.5); \node at (1,0.7) {$\scriptstyle{4}$}; \fill[color=black] (1,0.5) circle (1.5pt); \node at (1,-0.3) {$\scriptstyle{3}$}; \fill[color=black] (1,-0.5) circle (1.5pt); \node at (3,0.7) {$\scriptstyle{1}$}; \fill[color=black] (3,0.5) circle (1.5pt); \node at (3,-0.3) {$\scriptstyle{2}$}; \fill[color=black] (3,-0.5) circle (1.5pt); \node at (-0.1,0) {$W$}; \node at (2,0) {$X$}; \node at (4,0) {$Y$}; \draw [<-, thick] (1.85,0.15) .. controls (1.7,0.3) and (1.5,0.5) .. (1.1,0.5); \draw [->, thick] (1.85,-0.15) .. controls (1.7,-0.3) and (1.5,-0.5) .. (1.1,-0.5); \draw [<-, thick] (0.1,-0.1) .. controls (0.3,-0.3) and (0.5,-0.5) .. (0.9,-0.5); \draw [->, thick] (2.15,0.15) .. controls (2.3,0.3) and (2.5,0.5) .. (2.9,0.5); \draw [<-, thick] (3.85,0.15) .. controls (3.7,0.3) and (3.5,0.5) .. (3.1,0.5); \draw [<-, thick] (2.15,-0.15) .. controls (2.3,-0.3) and (2.5,-0.5) .. (2.9,-0.5); \draw [->, thick] (3.85,-0.15) .. controls (3.7,-0.3) and (3.5,-0.5) .. (3.1,-0.5); \draw[->, thick] (3.85,0.05) .. controls (3.5, 0.05) and (3.3, 0.2) .. (3.07,0.43); \node at (3.6,0.5) {$\scriptstyle{2}$}; \end{tikzpicture} \hspace{0.7cm} \begin{tikzpicture}[scale=1.1] \draw[color=black] (3,0.4) circle (1.5pt); \draw[color=black] (3,-0.4) circle (1.5pt); \draw[color=black] (1,0) circle (1.5pt); \fill[color=black] (2,0) circle (1.5pt); \fill[color=black] (4,0) circle (1.5pt); \draw [->, thick] (1.15,0) --(1.85,0); \draw [<-, thick] (2.15,0.1) .. controls (2.3,0.25) and (2.5,0.4) .. (2.9,0.4); \draw [->, thick] (3.85,0.1) .. controls (3.7,0.25) and (3.5,0.4) .. (3.1,0.4); \draw[<-, thick] (3.85,0.05) .. controls (3.5, 0.05) and (3.3, 0.2) .. (3.07,0.33); \draw [->, thick] (2.15,-0.1) .. controls (2.3,-0.25) and (2.5,-0.4) .. (2.9,-0.4); \draw [<-, thick] (3.85,-0.1) .. controls (3.7,-0.25) and (3.5,-0.4) .. (3.1,-0.4); \node at (3.6,0.5) {$\scriptstyle{2}$}; \node at (2.4,-0.5) {$\textcolor{white}{\scriptstyle{2}}$}; \end{tikzpicture} \end{center} To preview the nature of results to follow, the motif on the right leads to stable periodic behaviour in fully open CRNs with mass action kinetics, and so the fully open extension of $\mathcal{R}$ with mass action kinetics admits an SPPO as a consequence of the presence of this motif. \end{example} \subsection{ODE models of CRNs: basic definitions}We take the concentrations of chemical species to be nonnegative real numbers. Consider a CRN $\mathcal{R}$ involving $n$ chemical species $X_1, \ldots, X_n$ with corresponding concentration vector $x= (x_1, \ldots, x_n)^{\mathrm{t}}$, and $m$ irreversible reactions between the species. Orderings on the species and reactions are arbitrary but assumed fixed. Define nonnegative $n \times m$ matrices $\Gamma_l$ and $\Gamma_r$ as follows: $(\Gamma_l)_{ij}$ (resp., $(\Gamma_r)_{ij}$) is the stoichiometry of species $X_i$ on the left (resp., right) of reaction $j$. The {\em stoichiometric matrix} of $\mathcal{R}$ is $\Gamma=\Gamma_r-\Gamma_l$. The $j$th column of $\Gamma$ is termed the {\em reaction vector} for the $j$th reaction. If the reactions of $\mathcal{R}$ proceed with rates $v_1(x), v_2(x),\ldots, v_m(x)$, we define the {\em rate function} of $\mathcal{R}$ to be $v(x) =(v_1(x), v_2(x),\ldots, v_m(x))^{\mathrm{t}}$. The evolution of the species concentrations is then governed by the ODE: \begin{equation} \label{genCRN} \dot x = \Gamma v(x). 
\end{equation} If $v$ is defined and $C^1$ on $\mathbb{R}^n_{\gg 0}$ then (\ref{genCRN}) defines a local flow on $\mathbb{R}^n_{\gg 0}$, while if $v$ is defined and $C^1$ on $\mathbb{R}^n_{\geq 0}$ (namely, on an open subset of $\mathbb{R}^n$ containing $\mathbb{R}^n_{\geq 0}$) then, under physically reasonable assumptions on $v$ which ensure $\mathbb{R}^n_{\geq 0}$ is forward invariant, (\ref{genCRN}) defines a local semiflow on $\mathbb{R}^n_{\geq 0}$. See the introductory chapter of \cite{bhatiahajek} for definitions of local flows (there termed ``local dynamical systems'') and local semiflows (there termed ``local semi-dynamical systems''). $\mathrm{im}\,\Gamma$ is referred to as the {\em stoichiometric subspace} of the CRN. The nonempty intersection of a coset of $\mathrm{im}\,\Gamma$ with $\mathbb{R}^n_{\geq 0}$ (resp., $\mathbb{R}^n_{\gg 0}$) is a {\em stoichiometry class} (resp., {\em positive stoichiometry class}) of the CRN. If $\mathbb{R}^n_{\gg 0}$ (resp., $\mathbb{R}^n_{\geq 0}$) is forward invariant under the evolution defined by (\ref{genCRN}), then positive stoichiometry classes (resp., stoichiometry classes) are invariant under (\ref{genCRN}). \subsection{Kinetics} \label{seckin} In order to state the results to follow with maximum applicability, we need some discussion of the rate functions of CRNs, namely the allowed functions $v$ in (\ref{genCRN}). The reader familiar with and primarily interested in mass action kinetics can skip directly to Proposition~\ref{propMAgen} below. Given a CRN $\mathcal{R}$ with evolution governed by (\ref{genCRN}) we may assume that $v(x)$ belongs to some set of functions $\mathcal{K}$ with domain $\mathbb{R}^n_{\gg 0}$ and codomain $\mathbb{R}^m$. We refer to $\mathcal{K}$ as the {\em kinetics} of $\mathcal{R}$ and to the pair $(\mathcal{R},\mathcal{K})$ as a ``CRN with kinetics''. $\mathcal{K}$ may be finitely parameterised or a larger class of functions. Given a CRN with kinetics $(\mathcal{R}, \mathcal{K})$, and a given reaction $R$ in $\mathcal{R}$, the set of reaction rates for $R$ allowed by $\mathcal{K}$ is denoted $\mathcal{K}^{(R)}$. When discussing kinetics it is assumed that a CRN consists of irreversible reactions: the allowed rates of a reversible reaction are derived by considering it as a pair of irreversible reactions. In each case below we assume that the CRN $\mathcal{R}$ involves $n$ species and $m$ (irreversible) reactions, and the $n \times m$ matrices $\Gamma_l$, $\Gamma_r$ and $\Gamma$ are defined as above. The following is a very large class of kinetics. \begin{def1}[Positive general kinetics] A rate function $v$ for $\mathcal{R}$ belongs to the class of {\em positive general kinetics} if and only if $v(x)$ is defined, positive-valued, and $C^1$ on $\mathbb{R}^n_{\gg 0}$ and satisfies for each $x \in \mathbb{R}^n_{\gg 0}$: (i) $\frac{\partial v_j}{\partial x_i} > 0$ if species $X_i$ occurs on the left of reaction $j$, (ii) $\frac{\partial v_j}{\partial x_i} = 0$ if species $X_i$ does not occur on the left of reaction $j$. Conditions (i) and (ii) can together be rephrased as ``the matrix $Dv(x)$ of partial derivatives of $v$ has the same sign pattern as $\Gamma_l$''. (Note that in \cite{banajipantea} condition (ii) was not spelled out explicitly, although it is implicit throughout.) 
\end{def1} \begin{def1}[General kinetics] A rate function $v$ for $\mathcal{R}$ belongs to the class of {\em general kinetics} if and only if $v(x)$ is defined and $C^1$ on $\mathbb{R}^n_{\geq 0}$, satisfies all the restrictions of positive general kinetics on $\mathbb{R}^n_{\gg 0}$, and $v_j(x) = 0$ if and only if $x_i = 0$ for some species $X_i$ occurring on the left of reaction $j$. $\mathbb{R}^n_{\geq 0}$ can easily be shown to be forward invariant for (\ref{genCRN}) under the assumption of general kinetics. \end{def1} \begin{def1}[Power-law kinetics, physical power-law kinetics, mass action kinetics] A rate function $v$ for $\mathcal{R}$ belongs to the class of {\em power-law kinetics} if there exist $K \in \mathbb{R}^m_{\gg 0}$ and $M \in \mathbb{R}^{m \times n}$ such that $v(x) = K\circ x^M$ (recall Notation~\ref{notmon}~and~\ref{nothad} above). $K$ is the {\em vector of rate constants} and $M$ is the {\em matrix of exponents}. $v$ belongs to the class of {\em physical} power-law kinetics if, additionally, $M$ has the same sign pattern as $\Gamma_l^{\mathrm{t}}$, and of {\em mass action kinetics} if $M = \Gamma_l^{\mathrm{t}}$. \end{def1} \begin{remark}[Fixed power-law kinetics] Stating only that $\mathcal{R}$ has power-law kinetics, or physical power-law kinetics, implies that the entries of both $M$ and $K$ are parameters which may vary. Stating that $\mathcal{R}$ has {\em fixed} power-law kinetics means that $M$ is fixed, while only the entries of $K$ are parameters which may vary. \end{remark} \begin{remark}[Relationships between kinetic classes] It is easily seen that physical power-law kinetics is a subclass of positive general kinetics, and that mass action kinetics is a particular case of fixed, physical power-law kinetics, and also of general kinetics. Further inclusions amongst classes of kinetics are detailed in \cite{banajipantea}. \end{remark} When we refer to $(\mathcal{R}, \mathcal{K})$ as a ``CRN with mass action kinetics'', or more briefly a ``mass action CRN'', this means that the set of allowed rate functions $\mathcal{K}$ is precisely that given by the assumption of mass action kinetics. A similar comment applies to other classes of kinetics. \begin{def1}[Derived power-law kinetics] \label{derived} Let $(\mathcal{R}_1, \mathcal{K}_1)$ and $(\mathcal{R}_2, \mathcal{K}_2)$ be CRNs with fixed power-law kinetics and corresponding matrices of exponents $M_1$ and $M_2$. Let $\mathcal{R}_2$ have $n$ species and $m$ reactions and let $\mathcal{R}_1$ be an induced subnetwork of $\mathcal{R}_2$ with species indexed from $\alpha \subseteq \{1, \ldots, n\}$ and reactions indexed from $\beta \subseteq \{1, \ldots, m\}$. Then $\mathcal{K}_2$ is {\em derived} from $\mathcal{K}_1$ if $M_1 = M_2(\beta|\alpha)$, where $M_2(\beta|\alpha)$ is the submatrix of $M_2$ with rows from $\beta$ and columns from $\alpha$. \end{def1} \begin{def1}[Scaling invariant kinetics] Let $(\mathcal{R}, \mathcal{K})$ be a CRN with kinetics. Then $\mathcal{K}$ is {\em scaling invariant} if, for each reaction $R$ of $\mathcal{R}$ and each $\epsilon > 0$, $F \in \mathcal{K}^{(R)}$ implies that $\epsilon F \in \mathcal{K}^{(R)}$.
\end{def1} \begin{remark}[Scaling invariant kinetics] A CRN with any reasonable kinetics, including positive general kinetics, power-law kinetics, physical power-law kinetics, or any fixed power-law kinetics (including mass action) has kinetics which is scaling invariant: if $v_j(x)$ is an allowed reaction rate from one of these classes for reaction $j$, then so is $\epsilon v_j(x)$ for each $\epsilon > 0$. \end{remark} \subsection{Extending the kinetics of an induced subnetwork} Consider CRNs $\mathcal{R}_1 \leq \mathcal{R}_2$ with $\mathcal{R}_1$ given kinetics $\mathcal{K}_1$. Are there natural ways of ``extending'' $\mathcal{K}_1$ to a kinetics $\mathcal{K}_2$ for $\mathcal{R}_2$? For example, it is reasonable and often mathematically convenient to assume that: \begin{itemize}[align=left,leftmargin=*] \item Where a reaction of $\mathcal{R}_1$ occurs with the same source and target complexes in $\mathcal{R}_2$, the rates for this reaction allowed by $\mathcal{K}_2$ should include those allowed by $\mathcal{K}_1$. \item Where a reaction of $\mathcal{R}_1$ occurs in $\mathcal{R}_2$ with some new species involved, fixing the concentrations of the new species at some positive values should give back (at least) all the rate functions allowed by $\mathcal{K}_1$. \end{itemize} These notions are formalised in the following two definitions. \begin{def1}[Reaction-extensions] \label{defreacext} Consider CRNs with kinetics $(\mathcal{R}_1, \mathcal{K}_1)$ and $(\mathcal{R}_2, \mathcal{K}_2)$. Then $(\mathcal{R}_2, \mathcal{K}_2)$ is a {\em reaction-extension} of $(\mathcal{R}_1, \mathcal{K}_1)$, written $(\mathcal{R}_1, \mathcal{K}_1)\leq_{R} (\mathcal{R}_2, \mathcal{K}_2)$, if $\mathcal{R}_1$ is a reaction-induced subnetwork of $\mathcal{R}_2$ and $\mathcal{K}_1^{(R)} \subseteq\mathcal{K}_2^{(R)}$ for each reaction $R$ occurring in both $\mathcal{R}_1$ and $\mathcal{R}_2$. In other words, reactions which $\mathcal{R}_2$ inherits from $\mathcal{R}_1$ are allowed (at least) all the rate functions allowed by $\mathcal{K}_1$. \end{def1} \begin{def1}[Species-extensions] \label{defspecext} Consider CRNs with kinetics $(\mathcal{R}_1, \mathcal{K}_1)$ and $(\mathcal{R}_2, \mathcal{K}_2)$ and suppose that $\mathcal{R}_1$ is a species-induced subnetwork of $\mathcal{R}_2$. Let $\mathcal{R}_2$ have $n_2$ species $X_1, \ldots, X_{n_2}$ and assume, without loss of generality, that the species of $\mathcal{R}_1$ are $X_1, \ldots, X_{n_1}$ where $n_1 \leq n_2$. Let $\hat{x} = (x_1, \ldots, x_{n_1})$ and $\doublehat{x} = (x_{n_1 + 1}, \ldots, x_{n_2})$. Then $(\mathcal{R}_2, \mathcal{K}_2)$ is a {\em species-extension} of $(\mathcal{R}_1, \mathcal{K}_1)$, written $(\mathcal{R}_1, \mathcal{K}_1)\leq_{S} (\mathcal{R}_2, \mathcal{K}_2)$, if for each $v(\hat{x}) \in \mathcal{K}_1$ there exists $w(\hat{x},\doublehat{x}) \in \mathcal{K}_2$ such that $w(\hat{x},\mathbf{1}) = v(\hat{x})$, and such that if $v$ is $C^k$ on $\mathbb{R}^{n_1}_{\gg 0}$ (resp., $\mathbb{R}^{n_1}_{\geq 0}$), then $w$ is $C^k$ on $\mathbb{R}^{n_2}_{\gg 0}$ (resp., $\mathbb{R}^{n_2}_{\geq 0}$). (In the case $n_1=n_2$ and $\mathcal{K}_1 \subseteq \mathcal{K}_2$ we take $(\mathcal{R}_1, \mathcal{K}_1)\leq_{S} (\mathcal{R}_2, \mathcal{K}_2)$ to be trivially true.) \end{def1} \begin{lemma1}[Species-extensions] \label{lemspecext} Let $\mathcal{R}_1\leq_{S} \mathcal{R}_2$. 
Then the CRNs with kinetics $(\mathcal{R}_1, \mathcal{K}_1)$ and $(\mathcal{R}_2, \mathcal{K}_2)$ satisfy $(\mathcal{R}_1, \mathcal{K}_1) \leq_S (\mathcal{R}_2, \mathcal{K}_2)$ if any of the following hold: \begin{enumerate}[align=left,leftmargin=*] \item $\mathcal{K}_1$ and $\mathcal{K}_2$ are both given by positive general kinetics. \item $\mathcal{K}_1$ and $\mathcal{K}_2$ are both given by power-law kinetics. \item $\mathcal{K}_1$ and $\mathcal{K}_2$ are both given by physical power-law kinetics. \item $\mathcal{K}_1$ and $\mathcal{K}_2$ are both given by mass action kinetics. \item $\mathcal{K}_1$ and $\mathcal{K}_2$ are both given by fixed power-law kinetics with $\mathcal{K}_2$ derived from $\mathcal{K}_1$ (see Definition~\ref{derived}). \end{enumerate} \end{lemma1} \begin{pf} Using the notation in Definition~\ref{defspecext}, for each rate function $v \in \mathcal{K}_1$, set $w(\hat{x}, \doublehat{x}) = v(\hat{x})\circ\doublehat{x}^{M}$ where, in cases (1)~to~(4), $M$ consists of the final $n_2-n_1$ columns of $\Gamma_l^{\mathrm{t}}$, while in case (5) $M$ consists of the final $n_2-n_1$ columns of $M_2$, the matrix of exponents of $\mathcal{R}_2$. In each case it is easily seen that $w \in \mathcal{K}_2$, that $w(\hat{x},\mathbf{1}) = v(\hat{x})$, and that if $v$ is $C^k$ on $\mathbb{R}^{n_1}_{\gg 0}$ (resp., $\mathbb{R}^{n_1}_{\geq 0}$ in case 4), then $w$ is $C^k$ on $\mathbb{R}^{n_2}_{\gg 0}$ (resp., $\mathbb{R}^{n_2}_{\geq 0}$ in case 4). \end{pf} \begin{def1}[Species-reaction-extensions] \label{defspecreacext} Let $\mathcal{R}_1 \leq \mathcal{R}_2$ and consider CRNs with kinetics $(\mathcal{R}_1, \mathcal{K}_1)$ and $(\mathcal{R}_2, \mathcal{K}_2)$. Observe that there is a uniquely defined CRN $\mathcal{R}'$ satisfying $\mathcal{R}_1 \leq_S \mathcal{R}' \leq_{R} \mathcal{R}_2$. ($\mathcal{R}'$ is obtained by inserting all missing species into the reactions of $\mathcal{R}_1$, but without adding any new reactions.) Then $(\mathcal{R}_2, \mathcal{K}_2)$ is a {\em species-reaction-extension} of $(\mathcal{R}_1, \mathcal{K}_1)$ if there exists $\mathcal{K}'$ such that \[ (\mathcal{R}_1, \mathcal{K}_1) \leq_S (\mathcal{R}', \mathcal{K}') \leq_R (\mathcal{R}_2, \mathcal{K}_2). \] Intuitively, we first add in missing species and extend the kinetics of any modified reactions consistent with the species-extension condition, and then add in any remaining missing reactions. \end{def1} \section{Results on the inheritance of NPPOs and SPPOs} \label{secthms} A CRN with kinetics $(\mathcal{R}, \mathcal{K})$ {\em admits} an NPPO (resp., SPPO) if there exists {\em some} rate function $v \in \mathcal{K}$ s.t. the associated ODE system (\ref{genCRN}) has an NPPO (resp., SPPO). A broad question is when, given CRNs with kinetics $(\mathcal{R}_1, \mathcal{K}_1)$ and $(\mathcal{R}_2, \mathcal{K}_2)$ related in some natural way, knowledge that one admits an NPPO (resp., SPPO) allows us to predict the same for the other. Four ``inheritance'' theorems in this direction will be proved below under varying kinetic assumptions. For the reader primarily interested in mass action kinetics, these can be summarised in a single corollary: \begin{prop} \label{propMAgen} Let $\mathcal{R}$ and $\mathcal{R}'$ be CRNs, and suppose that $\mathcal{R}$ admits an NPPO (resp., SPPO) with mass action kinetics. 
Suppose that we create $\mathcal{R}'$ from $\mathcal{R}$ by \begin{enumerate}[align=left,leftmargin=*] \item adding to $\mathcal{R}$ a new reaction with reaction vector in the span of reaction vectors of $\mathcal{R}$; or \item taking the fully open extension of $\mathcal{R}$; or \item adding into some reactions of $\mathcal{R}$ a new species $Y$ which occurs with the same stoichiometry on both sides of each reaction in which it participates; or \item adding into reactions of $\mathcal{R}$ a new species $Y$ in any way, while also adding the new reaction $0 \rightleftharpoons Y$. \end{enumerate} Then, with mass action kinetics, $\mathcal{R}'$ admits an NPPO (resp., SPPO). \end{prop} \begin{pf} Claims 1 to 4 are immediate corollaries of Theorems~\ref{thmnewdepreac}~to~\ref{thmnewwithopen} below and the surrounding remarks. In order to apply the results we need only note that mass action kinetics is polynomial and hence certainly $C^2$, is scaling invariant, and that the assumptions imply that $\mathcal{R}'$ with mass action kinetics is a species-reaction extension of $\mathcal{R}$ with mass action kinetics. \hfill$\square$ \end{pf} Theorems~\ref{thmnewdepreac}~and~\ref{thmopenextension} require only basic regular perturbation theory to prove: in Theorem~\ref{thmnewdepreac} the application is almost trivial while in Theorem~\ref{thmopenextension} it takes a little more work to set up the problem. Theorem~\ref{thmtrivial} requires essentially no machinery to prove: the proof is almost immediate from the definitions. Theorem~\ref{thmnewwithopen} requires some results from singular perturbation theory. Theorems~\ref{thmnewdepreac}~and~\ref{thmnewwithopen} together imply an important corollary about fully open networks spelled out as Proposition~\ref{coropeninduced}. In each of the following theorems, $\mathcal{R}$ is a CRN with $m$ reactions involving $n$ species $X_1, \ldots, X_n$ with concentrations $x_1, \ldots, x_n$. $\Gamma$, the stoichiometric matrix of $\mathcal{R}$, has rank $r$, $\Gamma_0$ is a matrix whose columns are a basis for $S := \mathrm{im}\,\Gamma$, and $Q$ is defined by $\Gamma = \Gamma_0Q$. Given a periodic orbit $\mathcal{O}$, $S_\mathcal{O}:=(\mathcal{O} + S) \cap \mathbb{R}^n_{\gg 0}$ is the positive stoichiometry class of $\mathcal{O}$, and $x_0$ is some point on $S_\mathcal{O}$ (recall Figure~\ref{figschematic}). \begin{thm}[Adding a dependent reaction] \label{thmnewdepreac} Let $(\mathcal{R}, \mathcal{K})$ be a CRN with $C^1$ kinetics admitting an NPPO (resp., SPPO). Let $(\mathcal{R}', \mathcal{K}')$ be a reaction-extension of $(\mathcal{R}, \mathcal{K})$ created by adding to $\mathcal{R}$ a new irreversible reaction with $C^1$, scaling invariant, kinetics, and with reaction vector in the span of reaction vectors of $\mathcal{R}$. Then $(\mathcal{R}', \mathcal{K}')$ admits an NPPO (resp., SPPO). \end{thm} \begin{pf} Fix the rate function $v \in \mathcal{K}$ such that $\mathcal{R}$ has an NPPO (resp., SPPO) $\mathcal{O}$. Let the new reaction of $\mathcal{R}'$ be $a\cdot X \rightarrow a' \cdot X$. Define $\alpha = a'-a$ and define $c$ by $\alpha = \Gamma_0 c$. Consistent with the kinetic assumptions, set the rate of the new reaction to be $\epsilon f(x)$ where $f\colon \mathbb{R}^n_{\gg 0} \to \mathbb{R}$ is $C^1$ and $\epsilon$ is a parameter to be controlled (for example, with mass action kinetics the rate would be $\epsilon x^a$). 
The evolution of $\mathcal{R}'$ is then governed by: \begin{equation} \specialnumber{${}_\epsilon$}\label{eqCRN1} \dot x = \Gamma v(x) + \epsilon \alpha f(x) = \Gamma_0(Qv(x) +\epsilon c f(x)). \end{equation} Define $h \colon \mathbb{R}^r \to x_0+S$ by $h(z) := x_0+\Gamma_0 z$. $h$ is an affine bijection between $h^{-1}(S_\mathcal{O})$ and $S_\mathcal{O}$ and defines local coordinates on $S_\mathcal{O}$ via $x = h(z)$. $z$ evolves according to \begin{equation} \specialnumber{${}_\epsilon$}\label{eqnThm1} \dot z = Qv(x_0+\Gamma_0 z) + \epsilon c f(x_0+\Gamma_0 z). \end{equation} By definition, \specialeqref{eqnThm1}{${}_0$} has the hyperbolic (resp., linearly stable) periodic orbit $\mathcal{O}':=h^{-1}(\mathcal{O})$. By Lemma~\ref{lemreg} there exists $\epsilon_0>0$ s.t. for $\epsilon \in (-\epsilon_0, \epsilon_0)$ \specialeqref{eqnThm1}{${}_\epsilon$} has a hyperbolic (resp., linearly stable) periodic orbit $\mathcal{O}'_\epsilon$. Thus, for $\epsilon \in (-\epsilon_0, \epsilon_0)$, \specialeqref{eqCRN1}{${}_\epsilon$} has the NPPO (resp., SPPO) $\mathcal{O}_\epsilon := h(\mathcal{O}'_\epsilon)$. \hfill$\square$ \end{pf} \begin{remark}[Adding the reverse of a reaction] \label{newdeprev} Clearly, by Theorem~\ref{thmnewdepreac}, given a CRN $\mathcal{R}$ with kinetics from any $C^1$ class admitting an NPPO (resp., SPPO), adding the reverse of any existing reaction to $\mathcal{R}$ with $C^1$, scaling invariant, kinetics preserves this property. Thus if a CRN with, say, mass action kinetics admits an NPPO (resp., SPPO), then so does the corresponding reversible CRN with mass action kinetics. \end{remark} \begin{remark}[Preservation of bifurcations when dependent reactions are added] In \cite{ConradiShiuPTM} Conradi and Shiu posed the question of whether Hopf bifurcations in CRNs are preserved when some irreversible reactions are made reversible. In fact, any generic bifurcation (\cite{wiggins} or \cite{kuznetsov} for example) survives the addition of dependent reactions with sufficiently smooth, scaling-invariant, kinetics. Although Theorem~\ref{thmnewdepreac} is not about bifurcation {\em per se}, the key idea in its proof is the construction of local coordinates on a stoichiometry class $S$ so that the vector field of $\mathcal{R}'$ in these local coordinates is a perturbation of the original vector field of $\mathcal{R}$. Suppose that some $C^r$ $k$-parameter family of vector fields $\mathcal{F}_\lambda$ on $S$ associated with $\mathcal{R}$ admits a nondegenerate codimension-$k$ bifurcation at $(x_0, \lambda_0)$. Then, as we see from \specialeqref{eqnThm1}{${}_\epsilon$}, addition of a new dependent reaction with $C^r$, scaling-invariant, kinetics gives rise, for each fixed $\epsilon$, to a new $C^r$, $k$-parameter, family $\mathcal{F}^\epsilon_\lambda$ of vector fields for $\mathcal{R}'$, $C^r$-close to $\mathcal{F}_\lambda$; for $r$ sufficiently large, and $\epsilon$ sufficiently small, the family $\mathcal{F}^\epsilon_\lambda$ will admit the same nondegenerate bifurcation. Analogous remarks apply to the other network modifications detailed in the theorems to follow. As a practical note, confirming that a given CRN does indeed admit a generic Hopf bifurcation at some parameter values is not always entirely straightforward, as it may involve approximation of a parameter-dependent center manifold in order to confirm the nondegeneracy conditions.
\end{remark} \begin{thm}[Adding inflows and outflows of all species] \label{thmopenextension} Let $(\mathcal{R}, \mathcal{K})$ be a CRN with $C^1$ kinetics admitting an NPPO (resp., SPPO). Suppose that $\mathcal{R}$ includes no flow reactions (i.e., no reactions of the form $0 \rightarrow X_i$ or $X_i \rightarrow 0$). Let $(\mathcal{R}', \mathcal{K}')$ be a reaction-extension of $(\mathcal{R}, \mathcal{K})$ created by adding to $\mathcal{R}$ all the reactions $0 \rightleftharpoons X_i$ ($i= 1, \ldots, n$) with kinetics from a class including mass action kinetics. Then $(\mathcal{R}', \mathcal{K}')$ admits an NPPO (resp., SPPO). \end{thm} \begin{pf} Fix the rate function $v \in \mathcal{K}$ such that $\mathcal{R}$ has an NPPO (resp., SPPO) $\mathcal{O}$. Treat the $i$th inflow-outflow reaction as a single reversible reaction with mass action kinetics and forward and backward rate constants $\epsilon (x_0)_i$ and $\epsilon$ respectively. The evolution of $\mathcal{R}'$ is then governed by: \[ \dot x = \Gamma v(x) + \epsilon I_n (x_0 - x). \] Let $\Gamma_0' = [\Gamma_0|\Gamma_1]$ where $\Gamma_1$ is any $n \times (n-r)$ matrix chosen so that $\Gamma_0'$ has rank $n$. Observe that $\Gamma = \Gamma_0'\left(\begin{array}{c}Q\\0\end{array}\right)$. Define new coordinates $z = (\hat{z}, \doublehat{z}) \in \mathbb{R}^{r} \times \mathbb{R}^{n-r}$ by $x = h(z) := x_0 + \Gamma_0'z$. $h$ is an affine bijection between $W:=h^{-1}(\mathbb{R}^n_{\gg 0}) \subseteq \mathbb{R}^{r} \times \mathbb{R}^{n-r}$ and $\mathbb{R}^n_{\gg 0}$, and $z$ evolves according to \begin{equation} \specialnumber{${}_\epsilon$}\label{eqnFO} \frac{\mathrm{d}}{\mathrm{d}t}\left(\begin{array}{c}\hat{z}\\ \doublehat{z}\end{array}\right) = \left(\begin{array}{c}Q\\0\end{array}\right)v(x_0+\Gamma_0' z) - \epsilon \left(\begin{array}{c}\hat{z}\\ \doublehat{z}\end{array}\right)\,. \end{equation} Define $W_1:=W \cap (\mathbb{R}^r \times \{0\})$, so that $h(W_1) = S_{\mathcal{O}}$. Define $\mathcal{O}':=h^{-1}(\mathcal{O}) \subseteq W_1$ and define $\overline{\mathcal{O}} \subseteq \mathbb{R}^r$ by $\overline{\mathcal{O}} \times \{0\} = \mathcal{O}'$. $W_1$ is locally invariant for \specialeqref{eqnFO}{${}_\epsilon$}, and restricting \specialeqref{eqnFO}{${}_\epsilon$} to $W_1$ gives the differential equation: \begin{equation} \specialnumber{${}_\epsilon$}\label{eqnFOa} \frac{\mathrm{d}\hat{z}}{\mathrm{d}t} = Qv(x_0+\Gamma_0 \hat{z}) - \epsilon \hat{z}\,. \end{equation} By definition, $\mathcal{O}$ is an NPPO (resp., SPPO) of $\mathcal{R}$ if and only if $\overline{\mathcal{O}}$ is a hyperbolic (resp., linearly stable) periodic orbit of \specialeqref{eqnFOa}{${}_0$}. In this case, by Lemma~\ref{lemreg}, there exists $\epsilon_0 > 0$ s.t. for $\epsilon \in (-\epsilon_0, \epsilon_0)$, \specialeqref{eqnFOa}{${}_\epsilon$} has a hyperbolic (resp., linearly stable) periodic orbit $\overline{\mathcal{O}}_\epsilon$ close to $\overline{\mathcal{O}}$ with period $T_\epsilon$ close to $T$. It remains to show that $\mathcal{O}'_\epsilon:=\overline{\mathcal{O}}_\epsilon \times \{0\}$ is hyperbolic (resp., linearly stable) for \specialeqref{eqnFO}{${}_\epsilon$} for all sufficiently small $\epsilon > 0$. This will imply immediately that $\mathcal{O}_\epsilon := h(\mathcal{O}'_\epsilon)$ is an NPPO (resp., SPPO) of $\mathcal{R}'$. For each fixed $\epsilon \in (0, \epsilon_0)$, choose $\psi_\epsilon$ to be some solution of \specialeqref{eqnFOa}{${}_\epsilon$} with initial condition on $\overline{\mathcal{O}}_\epsilon$.
The variational equation of \specialeqref{eqnFOa}{${}_\epsilon$} about $\psi_\epsilon$ is: \begin{equation} \specialnumber{${}_\epsilon$}\label{eqFOred} \frac{\mathrm{d}\hat{\zeta}}{\mathrm{d}t}= [QDv(x_0+\Gamma_0 \psi_\epsilon(t))\Gamma_0 - \epsilon I_{r}]\hat{\zeta}\,. \end{equation} The fundamental matrix solution $\hat{Z}_\epsilon(t)$ of \specialeqref{eqFOred}{${}_\epsilon$} with $\hat{Z}_\epsilon(0) = I_r$ can be written $\hat{Z}_\epsilon(t) = A_\epsilon(t)e^{tB_\epsilon}$ where $A_\epsilon(t)$ is a nonsingular periodic matrix of period $T_\epsilon>0$ and $B_\epsilon$ is a constant matrix. Hyperbolicity (resp., linear stability) of $\overline{\mathcal{O}}_\epsilon$ for \specialeqref{eqnFOa}{${}_\epsilon$} means that $\hat{Z}_\epsilon(T_\epsilon) = e^{T_\epsilon B_\epsilon}$ has exactly one eigenvalue equal to $1$ with the remaining $r-1$ eigenvalues disjoint from (resp., inside) the unit circle. For each $\psi_\epsilon$ chosen as above, $(\psi_\epsilon, 0)$ is clearly a periodic solution of \specialeqref{eqnFO}{${}_\epsilon$}, with image $\mathcal{O}'_\epsilon$. The full variational equation of \specialeqref{eqnFO}{${}_\epsilon$} about $(\psi_\epsilon, 0)$ is: \begin{equation} \label{eqnFOredb} \frac{\mathrm{d}}{\mathrm{d}t}\left(\begin{array}{c}\hat{\zeta}\\ \doublehat{\zeta}\end{array}\right) = \left(\begin{array}{cc}QDv(x_0+\Gamma_0 \psi_\epsilon(t))\Gamma_0 - \epsilon I_{r} & QDv(x_0+\Gamma_0 \psi_\epsilon(t))\Gamma_1\\0 & -\epsilon I_{n-r}\end{array}\right)\left(\begin{array}{c}\hat{\zeta}\\ \doublehat{\zeta}\end{array}\right)\,. \end{equation} Our goal is to compute $Z_\epsilon(t)$, the fundamental matrix solution of (\ref{eqnFOredb}) satisfying $Z_\epsilon(0) = I$. Solving the second equation of (\ref{eqnFOredb}) gives $\doublehat{\zeta}(t) = e^{-\epsilon t}\doublehat{\zeta}(0)$. Substituting into the first equation of (\ref{eqnFOredb}) gives \[ \frac{\mathrm{d}\hat{\zeta}}{\mathrm{d}t} = [QDv(x_0+\Gamma_0 \psi_\epsilon(t))\Gamma_0 - \epsilon I_{r}]\hat{\zeta} + e^{-\epsilon t}QDv(x_0+\Gamma_0 \psi_\epsilon(t))\Gamma_1\doublehat{\zeta}(0). \] Setting $\doublehat{\zeta}(0) = 0$ gives back \specialeqref{eqFOred}{${}_\epsilon$}. The above calculations give: \[ Z_\epsilon(t) = \left(\begin{array}{cc}\hat{Z}_\epsilon(t) & A(t)\\0 & e^{-\epsilon t} I_{n-r}\end{array}\right),\quad \mbox{and hence,} \quad Z_\epsilon(T_{\epsilon}) = \left(\begin{array}{cc}\hat{Z}_\epsilon(T_\epsilon) & A(T_\epsilon)\\0 & e^{-\epsilon T_\epsilon} I_{n-r}\end{array}\right). \] Here $A(t)$ is some matrix which can be determined by integration but which does not affect the subsequent argument. The characteristic multipliers of $\mathcal{O}'_\epsilon$ are precisely the eigenvalues of $Z_\epsilon(T_\epsilon)$, namely the eigenvalues of $\hat{Z}_\epsilon(T_\epsilon)$ (the characteristic multipliers of $\overline{\mathcal{O}}_\epsilon$ for \specialeqref{eqnFOa}{${}_\epsilon$}) together with the single value $e^{-\epsilon T_\epsilon}$ occurring with multiplicity $n-r$. As $\overline{\mathcal{O}}_\epsilon$ is a hyperbolic (resp., linearly stable) periodic orbit of \specialeqref{eqnFOa}{${}_\epsilon$}, and $e^{-\epsilon T_\epsilon}$ lies inside the unit circle for any $\epsilon > 0$ (recall $T_\epsilon>0$), $\mathcal{O}'_\epsilon$ is a hyperbolic (resp., linearly stable) periodic orbit of \specialeqref{eqnFO}{${}_\epsilon$}, and consequently $\mathcal{O}_\epsilon= h(\mathcal{O}'_\epsilon)$ is an NPPO (resp., SPPO) of $\mathcal{R}'$.
\hfill$\square$ \end{pf} \begin{remark}[Geometric interpretation of Theorem~\ref{thmopenextension}, the role of mass action kinetics] Inflows and outflows were chosen to guarantee that $S_{\mathcal{O}}$ remained invariant for $\mathcal{R}'$: this necessitated mass action kinetics for the flow reactions. The construction ensured that for sufficiently small $\epsilon > 0$, $S_{\mathcal{O}}$ is exponentially attracting and the vector field of $\mathcal{R}'$ restricted to $S_{\mathcal{O}}$ is $\epsilon$-close to that of $\mathcal{R}$ restricted to $S_{\mathcal{O}}$, ensuring the existence on $S_{\mathcal{O}}$ of a hyperbolic (resp., linearly stable) periodic orbit $\mathcal{O}_\epsilon$ close to $\mathcal{O}$. \end{remark} \begin{remark}[Theorem~\ref{thmopenextension} and fully open extensions] Suppose that $(\mathcal{R},\mathcal{K})$ is any CRN with $C^1$ kinetics such that if $v$ is an allowed rate for some reaction $X_i \rightarrow 0$ of $\mathcal{R}$, then so is $v+\epsilon x_i$ for all sufficiently small $\epsilon > 0$, and if $v$ is an allowed rate for some reaction $0 \rightarrow X_i$ of $\mathcal{R}$, then so is $v+\epsilon$ for all sufficiently small $\epsilon > 0$. Then the condition that $\mathcal{R}$ excludes flow reactions can clearly be dropped in Theorem~\ref{thmopenextension}. In particular, if $(\mathcal{R},\mathcal{K})$ is any mass action CRN admitting an NPPO (resp., SPPO) then Theorem~\ref{thmopenextension} tells us that its fully open extension with mass action kinetics admits an NPPO (resp., SPPO). The same holds for CRNs with positive general kinetics. However, we cannot arrive at this conclusion for CRNs with arbitrary fixed physical power-law kinetics. \end{remark} \begin{thm}[Adding a trivial species] \label{thmtrivial} Let $(\mathcal{R}, \mathcal{K})$ be a CRN with $C^1$ kinetics admitting an NPPO (resp., SPPO). Let $(\mathcal{R}', \mathcal{K}')$ be a species-extension of $(\mathcal{R}, \mathcal{K})$ created by adding into some reactions of $\mathcal{R}$ a new species $Y$ with concentration $y$, which occurs with the same stoichiometry on both sides of each reaction in which it participates. Then $(\mathcal{R}', \mathcal{K}')$ admits an NPPO (resp., SPPO). \end{thm} \begin{pf} Fix the rate function $v \in \mathcal{K}$ such that $\mathcal{R}$ has an NPPO (resp., SPPO) $\mathcal{O}$. Fix $w \in \mathcal{K}'$ such that $w(x,1) = v(x)$, possible by assumption. With this rate function, the evolution of $\mathcal{R}'$ is governed by \begin{equation} \label{eqtriv} \left(\begin{array}{c}\dot x\\ \dot y\end{array}\right) = \left(\begin{array}{c}\Gamma\\0\end{array}\right)w(x,y). \end{equation} (\ref{eqtriv}) leaves $\mathbb{R}^n_{\gg 0} \times \{y\}$ locally invariant for each $y >0$. Since $w(x,1) = v(x)$, $\mathcal{O}':= \mathcal{O} \times \{1\}$ is a periodic orbit of $\mathcal{R}'$. Let $S' = S \times \{0\}$ so that $S'_{\mathcal{O}'}:=(\mathcal{O}' + S') \cap (\mathbb{R}^{n}_{\gg 0} \times \mathbb{R}_{> 0}) = S_\mathcal{O} \times \{1\}$ is the positive stoichiometry class of $\mathcal{O}'$ for $\mathcal{R}'$. Define $h \colon \mathbb{R}^r \to x_0+S$ by $h(z) = x_0+\Gamma_0 z$. $h$ is an affine bijection between $W:=h^{-1}(S_\mathcal{O}) \subseteq \mathbb{R}^r$ and $S_\mathcal{O}$, and defines local coordinates on $S_\mathcal{O}$ which evolve according to \[ \dot z = Qv(x_0+\Gamma_0 z)\,. \] By definition, as $\mathcal{O}$ is an NPPO (resp., SPPO), $h^{-1}(\mathcal{O})$ is nondegenerate (resp., linearly stable).
Now define $h' \colon \mathbb{R}^r \to (x_0+S) \times \{1\}$ by $h'(z) = (x_0+\Gamma_0 z, 1)$ and note that $h'$ is an affine bijection between $W$ and $S'_{\mathcal{O}'}$. Moreover $h'$ gives rise to precisely the same evolution in local coordinates (since $w(x_0+\Gamma_0 z, 1) = v(x_0 + \Gamma_0 z)$) and $h'^{-1}(\mathcal{O}') = h^{-1}(\mathcal{O})$. Thus, by definition, $\mathcal{O}'$ is an NPPO (resp., SPPO) of $\mathcal{R}'$. \hfill$\square$ \end{pf} \begin{thm}[Adding a new species with inflow and outflow] \label{thmnewwithopen} Let $(\mathcal{R}, \mathcal{K})$ be a CRN with $C^2$ kinetics admitting an NPPO (resp., SPPO). Let $(\mathcal{R}', \mathcal{K}')$ be a species-reaction-extension of $(\mathcal{R}, \mathcal{K})$ created by \begin{enumerate}[align=left, leftmargin=*] \item[(i)] adding into the reactions of $\mathcal{R}$ a new species $Y$ with arbitrary stoichiometries; and \item[(ii)] adding the new reaction $0 \rightleftharpoons Y$ with $C^2$ kinetics belonging to a scaling invariant subset of positive general kinetics. \end{enumerate} Then $(\mathcal{R}', \mathcal{K}')$ admits an NPPO (resp., SPPO). \end{thm} \begin{pf} Fix the rate function $v \in \mathcal{K}$ such that $\mathcal{R}$ has an NPPO (resp., SPPO) $\mathcal{O}$. As in the proof of Theorem~\ref{thmnewdepreac}, define $h \colon \mathbb{R}^r \to x_0+S$ by $h(z) = x_0+\Gamma_0 z$ and note that $h$ is an affine bijection between the open set $h^{-1}(S_\mathcal{O}) \subseteq \mathbb{R}^r$ and $S_\mathcal{O}$. $h$ defines local coordinates on $S_\mathcal{O}$ via $x = h(z)$, and $z$ evolves according to \begin{equation} \label{eqbasic} \dot z = Qv(x_0+\Gamma_0 z)\,. \end{equation} (\ref{eqbasic}) has a hyperbolic (resp., linearly stable) periodic orbit $\mathcal{O}' = h^{-1}(\mathcal{O})$. The assumptions on the kinetics mean that: \begin{enumerate}[align=left,leftmargin=*] \item The new rate function $w(x,y)$ of the existing reactions can be chosen to satisfy $w(x,1) = v(x)$. \item There exists a $C^2$ function $f\colon \mathbb{R}_{>0} \to \mathbb{R}_{>0}$ satisfying $f(1)=1$ and $f'(y)>0$ for all $y > 0$ and such that we may choose the rate of $0 \rightleftharpoons Y$ to be $\frac{1}{\epsilon}(1-f(y))$ where $\epsilon > 0$ is a parameter to be controlled. \end{enumerate} With these choices, $\mathcal{R}'$ gives rise to the following singularly perturbed system: \begin{equation} \specialnumber{${}_\epsilon$}\label{eqSP} \begin{array}{rcl}\dot x & = & \Gamma w(x,y)\\\epsilon\dot y & = & \epsilon s w(x,y) + (1-f(y)).\end{array} \end{equation} Here $s_i$ is the net change in the stoichiometry of $Y$ in the $i$th reaction of $\mathcal{R}'$, and $s:=(s_1, \ldots, s_m)$ is the corresponding row vector (so that $s w(x,y)$ is a scalar). For any fixed $\epsilon > 0$, rescaling time in the ``slow time system'' \specialeqref{eqSP}{${}_\epsilon$} gives the ``fast time system'': \begin{equation} \specialnumber{${}_\epsilon$}\label{eqSPa} \begin{array}{rcl}\dot x & = & \epsilon\Gamma w(x,y)\\\dot y & = & \epsilon s w(x,y) + (1-f(y)).\end{array} \end{equation} Define $h' \colon \mathbb{R}^r\times \mathbb{R} \to (x_0+S) \times \mathbb{R}$ by $h'(z,y) := (h(z), y)= (x_0+\Gamma_0 z, y)$. Note that $h'$ is an affine bijection between $h^{-1}(S_\mathcal{O}) \times \mathbb{R}_{> 0}$ and $S_\mathcal{O} \times \mathbb{R}_{> 0}$ and defines local coordinates on $S_\mathcal{O} \times \mathbb{R}_{> 0}$ via $(x,y) = (h(z),y)$.
In $(z,y)$ coordinates the slow time system \specialeqref{eqSP}{${}_\epsilon$} becomes: \begin{equation} \specialnumber{${}_\epsilon$}\label{eqSPred} \begin{array}{rcl}\dot z&=&Qw(x_0+\Gamma_0 z,y)\\\epsilon\dot y&=&\epsilon sw(x_0+\Gamma_0 z,y) + (1-f(y)),\end{array} \end{equation} while the fast time system \specialeqref{eqSPa}{${}_\epsilon$} becomes: \begin{equation} \specialnumber{${}_\epsilon$}\label{eqSPreda} \begin{array}{rcl}\dot z&=&\epsilon Q w(x_0+\Gamma_0 z,y)\\\dot y&=&\epsilon s w(x_0+\Gamma_0 z,y) + (1-f(y)).\end{array} \end{equation} \specialeqref{eqSPred}{${}_0$} is the decoupled differential-algebraic system \[ \dot z = Q v(x_0+\Gamma_0 z) =: H(z), \quad y=1\,, \] (as $f(y) = 1$ if and only if $y=1$, and $w(x_0+\Gamma_0 z, 1) = v(x_0+\Gamma_0 z)$). We observe that \begin{enumerate}[align=left,leftmargin=*] \item The vector field $H(z)$ has a hyperbolic (resp., linearly stable) periodic orbit $\mathcal{O}'$ by assumption, and hence \specialeqref{eqSPred}{${}_0$} has a periodic orbit $\overline{\mathcal{O}}:=\mathcal{O}'\times\{1\}$. \item $y=1$ is a linearly stable equilibrium of $\dot y = 1-f(y)$ or, equivalently, the Jacobian matrix of \specialeqref{eqSPreda}{${}_\epsilon$} evaluated at $y=1, \epsilon = 0$, namely, \[ \left(\begin{array}{cc}0&0\\0&-f'(1)\end{array}\right)\,, \] has a single nontrivial eigenvalue $-f'(1) < 0$. \end{enumerate} By Theorems~13.1~and~13.2 in \cite{Fenichel79}, observations~(1)~and~(2) together tell us that there exists $\epsilon_0 > 0$ s.t. for $\epsilon \in (0, \epsilon_0)$, \specialeqref{eqSPred}{${}_\epsilon$} has a hyperbolic (resp., linearly stable) periodic orbit $\overline{\mathcal{O}}_\epsilon$ close to $\overline{\mathcal{O}}$. Thus, for $\epsilon \in (0, \epsilon_0)$, $\mathcal{R}'$ has an NPPO (resp., SPPO) $\mathcal{O}_\epsilon := h'(\overline{\mathcal{O}}_\epsilon)$. \hfill$\square$ \end{pf} \begin{remark}[Geometrical interpretation of the proof of Theorem~\ref{thmnewwithopen}] The differential-algebraic system \specialeqref{eqSPred}{${}_0$} defines a local flow on the $r$-dimensional (smooth) manifold $\mathcal{Y} := h^{-1}(S_{\mathcal{O}}) \times \{1\}$ which includes the periodic orbit $\overline{\mathcal{O}}$. Let $\mathcal{Y}_0$ be some compact subset of $\mathcal{Y}$ containing $\overline{\mathcal{O}}$. The theory developed by Fenichel \cite{Fenichel79} shows (roughly, and omitting a myriad of technical details) that for sufficiently small $\epsilon > 0$, \specialeqref{eqSPred}{${}_\epsilon$} has an $r$-dimensional locally invariant manifold $\mathcal{Y}_\epsilon$ close to $\mathcal{Y}_0$. The vector field of \specialeqref{eqSPred}{${}_\epsilon$} restricted to $\mathcal{Y}_\epsilon$ is $\epsilon$-close to that of \specialeqref{eqSPred}{${}_0$} on $\mathcal{Y}_0$ and consequently, by regular perturbation theory, for sufficiently small $\epsilon > 0$, \specialeqref{eqSPred}{${}_\epsilon$} has a periodic orbit $\overline{\mathcal{O}}_\epsilon$ on $\mathcal{Y}_\epsilon$ close to $\overline{\mathcal{O}}$. The technical assumption that all vector fields involved are $C^2$ is to ensure that the family of vector fields on $\mathcal{Y}_\epsilon$ is $C^1$, allowing use of regular perturbation theory. The $r-1$ nontrivial Floquet multipliers of $\overline{\mathcal{O}}_\epsilon$ relative to $\mathcal{Y}_\epsilon$ are close to those of $\overline{\mathcal{O}}$ relative to $\mathcal{Y}$ which, by assumption, are disjoint from (resp., inside) the unit circle.
Meanwhile, the single Floquet multiplier of $\overline{\mathcal{O}}_\epsilon$ transverse to $\mathcal{Y}_\epsilon$ lies inside the unit circle as a consequence of the fact that $-f'(1) < 0$. \end{remark} \begin{remark}[Kinetic assumptions in Theorem~\ref{thmnewwithopen}] \label{remnewwithopen} The added flow reaction $0 \rightleftharpoons Y$ in Theorem~\ref{thmnewwithopen} may have, for example, mass action kinetics, positive general kinetics, physical power-law kinetics, or any fixed physical power-law kinetics (these all define scaling invariant subsets of positive general kinetics). \end{remark} Theorem~\ref{thmnewwithopen}, combined with Theorem~\ref{thmnewdepreac}, allows us to deduce an important corollary: \begin{prop}[Inheritance in fully open species-reaction extensions] \label{coropeninduced} Let $(\mathcal{R}, \mathcal{K})$ be a fully open CRN with $C^2$ kinetics admitting an NPPO (resp., SPPO). Let $(\mathcal{R}', \mathcal{K}')$ be a fully open CRN with kinetics, which is a species-reaction extension of $(\mathcal{R}, \mathcal{K})$ (Definition~\ref{defspecreacext}), and such that for each new reaction $R$ in $\mathcal{R}'$, $\mathcal{K}'^{(R)}$ is $C^2$ and belongs to a scaling invariant subset of positive general kinetics. Then $(\mathcal{R}', \mathcal{K}')$ admits an NPPO (resp., SPPO). \end{prop} \begin{pf} Let $\mathcal{R}$ have $n_1$ species and $m_1$ non-flow reactions (i.e., reactions not of the form $0 \rightarrow X_i$ or $X_i \rightarrow 0$), and $\mathcal{R}'$ have $n_2$ species and $m_2$ non-flow reactions. We can construct $(\mathcal{R}', \mathcal{K}')$ from $(\mathcal{R}, \mathcal{K})$ via a sequence of steps as follows: \begin{enumerate} \item[(i)] Beginning with $\mathcal{R}$, for each absent species $X_j$ (if any) we add the species to all existing reactions and add $0 \rightleftharpoons X_j$. The kinetic assumptions ensure that this corresponds to $n_2-n_1$ applications of Theorem~\ref{thmnewwithopen}, one for each absent species. Note that, as $\mathcal{R}$ is fully open, the new CRN created at each stage is fully open. \item[(ii)] We add each remaining absent reaction (if any). The kinetic assumptions ensure that this corresponds to $m_2-m_1$ applications of Theorem~\ref{thmnewdepreac}, one for each reaction added. Theorem~\ref{thmnewdepreac} applies because a fully open CRN has stoichiometric subspace which is the whole state space, and hence any added reaction is a dependent reaction. \end{enumerate} We can see the above procedure as constructing a sequence of intermediate (fully-open) CRNs with kinetics, beginning with $(\mathcal{R}, \mathcal{K})$ and terminating with $(\mathcal{R}', \mathcal{K}')$: \[ (\mathcal{R}, \mathcal{K}) = (\mathcal{R}_0, \mathcal{K}_0)\underbrace{\strut \quad \rightarrow \quad \cdots \quad \rightarrow \quad }_{\mathclap{\mbox{add in species and flows (Thm.~\ref{thmnewwithopen})}}}(\mathcal{R}_{n_2-n_1}, \mathcal{K}_{n_2-n_1})\underbrace{\strut \quad \rightarrow \quad \cdots \quad \rightarrow \quad }_{\mathclap{\mbox{add in reactions (Thm.~\ref{thmnewdepreac})}}}(\mathcal{R}_p, \mathcal{K}_p) = (\mathcal{R}', \mathcal{K}')\,. \] (Here $p = n_2+m_2-n_1-m_1$.) If $(\mathcal{R}, \mathcal{K})$ admits an NPPO (resp., SPPO), then each step of the above procedure preserves this property, and consequently $(\mathcal{R}', \mathcal{K}')$ admits an NPPO (resp., SPPO).
\hfill$\square$ \end{pf} \begin{remark}[Kinetic assumptions in Proposition~\ref{coropeninduced}, proof of Proposition~\ref{propMAfo}] \label{remMAfo} The somewhat unwieldy kinetic assumptions in Proposition~\ref{coropeninduced} are in order to maximise generality. They are satisfied if $\mathcal{R} \leq \mathcal{R}'$ and, for example, \begin{enumerate}[align=left,leftmargin=*] \item Both $\mathcal{R}$ and $\mathcal{R}'$ have mass action kinetics. \item Both $\mathcal{R}$ and $\mathcal{R}'$ have physical power-law kinetics. \item $\mathcal{R}$ has any fixed power-law kinetics and $\mathcal{R}'$ has any power-law kinetics derived from that of $\mathcal{R}$ (see Definition~\ref{derived}). \item Both $\mathcal{R}$ and $\mathcal{R}'$ have $C^2$ positive general kinetics. \end{enumerate} Thus, in particular, Proposition~\ref{propMAfo} follows immediately from Proposition~\ref{coropeninduced}. Explorations in Section~\ref{secnum} are carried out using Proposition~\ref{coropeninduced} with $\mathcal{R}$ and $\mathcal{R}'$ both given mass action kinetics or both given physical power-law kinetics. \end{remark} In the light of Proposition~\ref{coropeninduced}, and adapting the terminology of \cite{joshishiu}, the following definitions make sense. \begin{def1}[Atoms of oscillation, atoms of stable oscillation] \label{atomosci} A fully open mass action CRN which admits an NPPO (resp., SPPO), and which is minimal with respect to the induced subnetwork ordering amongst fully open mass action CRNs admitting NPPOs (resp., SPPOs), is referred to as a {\em fully open mass action atom of oscillation} (resp., {\em stable oscillation}). Atoms with respect to other classes of kinetics, such as physical power-law kinetics, are similarly defined. \end{def1} Observe that Definition~\ref{atomosci} is restricted to {\em fully open} CRNs as the presence of an oscillatory induced subnetwork in a general CRN does not necessarily imply oscillation; an example is provided in the concluding section (Example~\ref{exinherit}). Note also that, as in the case of multistationarity \cite{banajipanteaMPNE}, a fully open mass action atom of oscillation with respect to the induced subnetwork ordering may not be minimal with respect to other, better partial orders. Note finally that a fully open mass action atom of oscillation may include an induced subnetwork admitting an NPPO but which is not fully open; thus if we do not restrict attention to fully open CRNs, fully open atoms of oscillation may not be minimal oscillatory CRNs even with respect to the induced subnetwork ordering. \section{The occurrence of stable oscillation in small, fully open, CRNs} \label{secnum} A fully open CRN is taken to be ``small'' if it has few species, few non-flow reactions, and is at most bimolecular, meaning that the total stoichiometry of all species on each side of every reaction is no more than two. The goal of this section is to provide some lower bounds on the frequency with which small fully open CRNs admit SPPOs under the assumptions of (i) mass action kinetics and (ii) physical power-law kinetics. This is done via a mixture of basic analysis, numerical simulation, and application of the inheritance result in Proposition~\ref{coropeninduced}. Define a $(k,l)$ CRN to be a fully open, at most bimolecular, CRN with $k\geq 1$ species and $l\geq 0$ irreversible non-flow reactions.
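For illustration (the rate-constant labels $\kappa_1,\ldots,\kappa_5$ here are purely illustrative), the fully open extension of the single reaction $X+Y \rightarrow 2Y$ is a $(2,1)$ CRN, namely $X+Y \rightarrow 2Y$ together with the flow reactions $X \rightleftharpoons 0 \rightleftharpoons Y$; this is $\mathcal{R}_{(\mathrm{xiv})}$ of Proposition~\ref{prop21} below. With mass action kinetics it gives rise to the ODEs \[ \dot x = \kappa_1 - \kappa_2 x - \kappa_3 xy, \qquad \dot y = \kappa_4 - \kappa_5 y + \kappa_3 xy\,. \]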
It is easy to see that $(1,l)$ and $(k,0)$ CRNs can admit no nontrivial periodic orbits for any reasonable kinetics: if $k=1$ then regardless of the kinetics (\ref{genCRN}) is a one-dimensional autonomous system which forbids nontrivial oscillation; if $l=0$ then, with positive general kinetics, (\ref{genCRN}) is a decoupled system of $k$ autonomous univariate ODEs which again forbids nontrivial oscillation. We proceed as follows. We first treat the smallest nontrivial case, namely $(k,l)=(2,1)$, which is simple enough to be fully analysed using fairly basic ideas from dynamical systems. The results of this analysis are summarised in Proposition~\ref{prop21}. We then work upwards, ensuring that $(k,l)$ CRNs are treated after $(k-1,l)$ and $(k,l-1)$ CRNs, and treating the cases of mass action and of physical power-law kinetics separately. \begin{enumerate}[align=left,leftmargin=*] \item Whenever an SPPO is found in a $(k,l)$ CRN $\mathcal{R}$, we use the powerful and widely available graph-isomorphism software NAUTY \cite{nauty} to identify all $(k+1,l)$ CRNs and $(k,l+1)$ CRNs $\geq \mathcal{R}$ (i.e., which include $\mathcal{R}$ as an induced subnetwork). Proposition~\ref{coropeninduced} then tells us that these must admit SPPOs. \item We search numerically for SPPOs in $(k,l)$ CRNs, limiting the search to those CRNs not already found to inherit SPPOs via step (1) or believed to forbid oscillation by Conjecture~\ref{conjJac} below. \end{enumerate} Via this process we obtain a lower bound on the occurrence of stable oscillation in small fully open CRNs. The methodological details are in \ref{appmethod}. \begin{prop} \label{prop21} There are 14 non-isomorphic $(2,1)$ CRNs. These are, up to isomorphism, the fully open extensions of: \[ \begin{array}{lllll} \mbox{(i)}\,\,0\rightarrow 2X & \mbox{(ii)}\,\,0\rightarrow X+Y & \mbox{(iii)}\,\,X\rightarrow Y & \mbox{(iv)}\,\,X\rightarrow 2Y & \mbox{(v)}\,\,X\rightarrow X+Y\\ \mbox{(vi)}\,\,2X\rightarrow 0 & \mbox{(vii)}\,\,2X\rightarrow X & \mbox{(viii)}\,\,2X\rightarrow Y & \mbox{(ix)}\,\,2X\rightarrow 2Y & \mbox{(x)}\,\,2X\rightarrow X+Y\\ \mbox{(xi)}\,\,X+Y\rightarrow X & \mbox{(xii)}\,\,X+Y\rightarrow 0 & \mbox{(xiii)}\,\,X\rightarrow 2X & \mbox{(xiv)}\,\,X+Y\rightarrow 2Y.& \end{array} \] Let $\mathcal{R}_{(k)}$ refer to the fully open extension of reaction (k), namely the CRN consisting of reaction (k) along with $X\rightleftharpoons 0 \rightleftharpoons Y$. \begin{enumerate}[align=left,leftmargin=*] \item With mass action kinetics, $\mathcal{R}_{(i)}$ to $\mathcal{R}_{(xiv)}$ forbid oscillation. All but $\mathcal{R}_{(xiii)}$ have a unique equilibrium which is locally asymptotically stable and attracts all of $\mathbb{R}^2_{\geq 0}$. $\mathcal{R}_{(xiii)}$ either has a unique locally asymptotically stable equilibrium which attracts all of $\mathbb{R}^2_{\geq 0}$, or all orbits are unbounded. \item With positive general kinetics, $\mathcal{R}_{(i)}$ to $\mathcal{R}_{(xiii)}$ forbid oscillation. \item With physical power-law kinetics or general kinetics, $\mathcal{R}_{(xiv)}$ admits an SPPO. \end{enumerate} \end{prop} The proof of Proposition~\ref{prop21} is fairly straightforward, but somewhat lengthy, and is in \ref{pf21}. In order to proceed more efficiently, we make the following conjecture.
\begin{conj} \label{conjJac} Let $(\mathcal{R}, \mathcal{K})$ be a fully open CRN with kinetics, where $\mathcal{K}$ is any scaling invariant subset of positive general kinetics (for example, $\mathcal{K}$ may be given by mass action kinetics or physical power-law kinetics). Suppose that $\Gamma$ is the stoichiometric matrix of $\mathcal{R}$ so that $\mathcal{R}$ gives rise to the family of ODEs on $\mathbb{R}^n_{\gg 0}$ \[ \dot x = \Gamma v(x), \quad v \in \mathcal{K}\,. \] If, for all $x \in \mathbb{R}^n_{\gg 0}$ and all $v \in \mathcal{K}$, the Jacobian matrix $\Gamma Dv(x)$ has no purely imaginary eigenvalues, then $\mathcal{R}$ does not admit a positive periodic orbit. \end{conj} A theoretical justification for Conjecture~\ref{conjJac} is not attempted here, but it is not hard to believe the rather stronger claim that such families of CRNs admit oscillation if and only if they admit Hopf bifurcation (a similar conjecture is made in Section~2.2 of \cite{eiswirth91}). If Conjecture~\ref{conjJac} holds, it is possible to rule out oscillation by examining, with the help of computer algebra, certain polynomials associated with $\Gamma Dv(x)$ whose positivity is sufficient to forbid purely imaginary eigenvalues. This process, which will be described in forthcoming work, is computationally much less expensive than simulating the differential equations with tens of thousands of parameter choices. No counterexamples to Conjecture~\ref{conjJac} were found during a large number of numerical simulations. Note also that as the claims such as those drawn from the data in Table~\ref{tabdat} concern {\em lower bounds} on the frequency of oscillation in CRNs, they are not invalidated if Conjecture~\ref{conjJac} is false. \begin{table}[h] \begin{center} \begin{tikzpicture}[scale=0.5] \fill[color=black!20] (9,3) -- (15,3) -- (15,6) -- (9,6) -- cycle; \draw [-, line width=0.04cm] (2,11) -- (23,11); \draw [-, line width=0.04cm] (2,10) -- (23,10); \draw [-, line width=0.04cm] (0,9) -- (23,9); \draw [-, line width=0.04cm] (1,6) -- (23,6); \draw [-, line width=0.04cm] (1,3) -- (23,3); \draw [-, line width=0.04cm] (0,0) -- (23,0); \draw [-, line width=0.04cm] (0,0) -- (0,9); \draw [-, line width=0.04cm] (1,0) -- (1,9); \draw [-, line width=0.04cm] (2,0) -- (2,11); \draw [-, line width=0.04cm] (5,0) -- (5,10); \draw [-, line width=0.04cm] (9,0) -- (9,10); \draw [-, line width=0.04cm] (15,0) -- (15,10); \draw [-, line width=0.04cm] (23,0) -- (23,11); \draw [-, line width=0.01cm] (2,1) -- (23,1); \draw [-, line width=0.01cm] (2,2) -- (23,2); \draw [-, line width=0.01cm] (2,4) -- (23,4); \draw [-, line width=0.01cm] (2,5) -- (23,5); \draw [-, line width=0.01cm] (2,7) -- (23,7); \draw [-, line width=0.01cm] (2,8) -- (23,8); \draw [-, line width=0.01cm] (3.5,0) -- (3.5,2); \draw [-, line width=0.01cm] (3.5,3) -- (3.5,5); \draw [-, line width=0.01cm] (3.5,6) -- (3.5,8); \draw [-, line width=0.01cm] (7,0) -- (7,2); \draw [-, line width=0.01cm] (7,3) -- (7,5); \draw [-, line width=0.01cm] (7,6) -- (7,8); \draw [-, line width=0.01cm] (12,0) -- (12,2); \draw [-, line width=0.01cm] (12,3) -- (12,5); \draw [-, line width=0.01cm] (12,6) -- (12,8); \draw [-, line width=0.01cm] (19,0) -- (19,2); \draw [-, line width=0.01cm] (19,3) -- (19,5); \draw [-, line width=0.01cm] (19,6) -- (19,8); \node at (12.5,10.5) {number of non-flow reactions $l$}; \node[rotate=90] at (0.5,4.5) {number of species $k$}; \node at (3.5,9.5) {$1$}; \node at (7,9.5) {$2$}; \node at (12,9.5) {$3$}; \node at (19,9.5) {$4$}; \node at (1.5,7.5) {$2$}; 
\node at (1.5,4.5) {$3$}; \node at (1.5,1.5) {$4$}; \node at (3.5,8.5) {14}; \node at (7,8.5) {169}; \node at (12,8.5) {1,312}; \node at (19,8.5) {7,514}; \node at (3.5,5.5) {19}; \node at (7,5.5) {622}; \node at (12,5.5) {16,135}; \node at (19,5.5) {322,854}; \node at (3.5,2.5) {20}; \node at (7,2.5) {1,059}; \node at (12,2.5) {59,379}; \node at (19,2.5) {2,840,062}; \node at (2.75,7.5) {0}; \node at (4.25,7.5) {0}; \node at (6,7.5) {0}; \node at (8,7.5) {0}; \node at (10.5,7.5) {0}; \node at (13.5,7.5) {0}; \node at (17,7.5) {0}; \node at (21,7.5) {0}; \node at (2.75,6.5) {1}; \node at (4.25,6.5) {0}; \node at (6,6.5) {25}; \node at (8,6.5) {25}; \node at (10.5,6.5) {293}; \node at (13.5,6.5) {289}; \node at (17,6.5) {2,257}; \node at (21,6.5) {2,246}; \node at (2.75,4.5) {0}; \node at (4.25,4.5) {0}; \node at (6,4.5) {5}; \node at (8,4.5) {0}; \node at (10.5,4.5) {444}; \node at (13.5,4.5) {401}; \node at (17,4.5) {$\geq$ 18,859}; \node at (21,4.5) {18,859}; \node at (2.75,3.5) {1}; \node at (4.25,3.5) {1}; \node at (6,3.5) {94}; \node at (8,3.5) {82}; \node at (10.5,3.5) {4,268}; \node at (13.5,3.5) {4,080}; \node at (17,3.5) {$\geq$ 123,990}; \node at (21,3.5) {123,990}; \node at (2.75,1.5) {0}; \node at (4.25,1.5) {0}; \node at (6,1.5) {8}; \node at (8,1.5) {8}; \node at (10.5,1.5) {$\geq$ 1,657}; \node at (13.5,1.5) {1,657}; \node at (17,1.5) {$\geq$ 166,676}; \node at (21,1.5) {166,676}; \node at (2.75,0.5) {1}; \node at (4.25,0.5) {1}; \node at (6,0.5) {140}; \node at (8,0.5) {139}; \node at (10.5,0.5) {$\geq$ 14,373}; \node at (13.5,0.5) {14,373}; \node at (17,0.5) {$\geq$ 1,038,785}; \node at (21,0.5) {1,038,785}; \end{tikzpicture} \end{center} \caption{\label{tabdat} The table shows (i) the total number of nonisomorphic $(k,l)$ CRNs for $k = 2,\ldots, 4$ and $l=1, \ldots, 4$, (ii) lower bounds on the number of $(k,l)$ CRNs admitting SPPOs under the assumptions of mass action kinetics and physical power-law kinetics, and (iii) lower bounds on how many of these admit SPPOs as a consequence of the inheritance results in this paper. Each block of five cells, corresponding to a particular pair $(k,l)$, contains the total number of nonisomorphic $(k,l)$ CRNs (top row); the number shown to admit SPPOs with mass action kinetics followed by the number of these which follow as a consequence of inheritance results (middle row); and the number shown to admit SPPOs with physical power-law kinetics followed by the number of these which follow as a consequence of inheritance results (bottom row). For example, the data in the highlighted block tells us that there are 16,135 nonisomorphic $(3,3)$ CRNs. Of these, at least 444 (about $3\%$) admit SPPOs with mass action kinetics: 401 (about $90\%$) by inheritance, namely because they include as an induced subnetwork either a $(3,2)$ CRN or a $(2,3)$ CRN which admits an SPPO, with the remainder found in numerical simulations. Similarly, at least 4,268 (about $26\%$) of the $(3,3)$ CRNs admit SPPOs with physical power-law kinetics: 4,080 (about $95\%$) by inheritance, with the remainder found in numerical simulations. For $k+l \geq 7$, only the inheritance data is presented; that is, no numerical search was carried out to find CRNs admitting SPPOs not predicted by the inheritance results. A ``$\geq$'' is inserted in order to highlight this. The lists of CRNs from which the data is drawn are at \protect\url{https://reaction-networks.net/networks/osci.html}.
} \end{table} The results of simulations and analysis for $k= 2,\ldots, 4$ and $l=1, \ldots, 4$ are summarised in Table~\ref{tabdat}. The table suggests, assuming that Conjecture~\ref{conjJac} is true, and that large numbers of oscillatory CRNs were not missed by the numerical simulations, that the great majority of CRNs admitting stable oscillation do so as a consequence of inheritance (this becomes even more evident as we increase the number of reactions in the CRNs). As a particular example, the motif \begin{center} \begin{tikzpicture}[scale=1.2] \fill[color=black] (3,0) circle (1.5pt); \draw (2,0) circle (1.5pt); \draw (4,0) circle (1.5pt); \draw [->, thick] (2.15,0) -- (2.85,0); \draw [->, thick] (3.1,0.05) .. controls (3.4,0.15) and (3.6,0.15) .. (3.9,0.05); \draw [->, thick] (3.9,-0.05) .. controls (3.6,-0.15) and (3.4,-0.15) .. (3.1,-0.05); \node at (3.5,0.25) {$\scriptstyle{2}$}; \end{tikzpicture} \end{center} corresponding to the single reaction $X+Y \rightarrow 2Y$ occurs in $22\%$ of all the CRNs in Table~\ref{tabdat}, which consequently admit SPPOs with physical power-law kinetics by Propositions~\ref{prop21}~and~\ref{coropeninduced}. A total of about $33\%$ of the CRNs in Table~\ref{tabdat} were found to admit SPPOs with physical power-law kinetics, and thus this single motif is responsible for about two thirds of the oscillation found under the assumption of physical power-law kinetics. Additional investigation revealed that this motif occurs in a total of about $2.52\times 10^7$ ($75\%$) of all $3.36 \times 10^7$ $(2,l)$ CRNs ($l$ ranges from $1$ to $26$ by the counting arguments in \ref{appmethod}). Thus identifying small atoms of oscillation is worthwhile from a practical viewpoint, as these appear to be the source of most oscillation in CRNs. Table~\ref{tabdat} also highlights the importance of kinetics, and in particular how much more frequently stable oscillation occurs in small CRNs with physical power-law kinetics as compared to those with mass action kinetics. Presumably the linear or quadratic nature of at most bimolecular mass action systems significantly restricts the allowed dynamics in many cases. As in the case of physical power-law kinetics, small oscillatory motifs account for most of the oscillation found in the mass action CRNs in the table. For example, at least one of the five $(3,2)$ (presumed) mass action atoms of stable oscillation found in simulations occurs in about $5\%$ of all the CRNs in Table~\ref{tabdat}, which consequently admit SPPOs with mass action kinetics; this accounts for almost $90\%$ of the oscillation in mass action CRNs detailed in Table~\ref{tabdat}. While numerical investigations in \ref{appmethod} indicate that the lower bounds in Table~\ref{tabdat} can be improved with additional simulation, it remains true that inheritance results applied to a few small oscillatory motifs automatically give us large numbers of oscillatory CRNs. Not visible in the table are relationships amongst the atoms of stable oscillation. For example, the five $(3,2)$ mass action atoms of stable oscillation are the fully open extensions of: (i) $X+Z\rightarrow 2Y \rightarrow Y+Z$; (ii) $X+Z \rightarrow 2Y$, $Y+Z \rightarrow 2Z$; (iii) $X+Z \rightarrow Y$, $Y+Z \rightarrow 2Z$; (iv) $X+Z \rightarrow 2Y \rightarrow 2Z$; (v) $X+Z \rightarrow 0$, $Y+Z \rightarrow 2Z$.
These correspond to the following motifs: \begin{center} \begin{tikzpicture}[scale=1] \node at (1,0.5) {(i)}; \draw[color=black] (3,0.4) circle (1.5pt); \draw[color=black] (3,-0.4) circle (1.5pt); \draw[color=black] (1,0) circle (1.5pt); \fill[color=black] (2,0) circle (2pt); \fill[color=black] (4,0) circle (2pt); \draw [->, thick] (1.15,0) --(1.85,0); \draw [->, thick] (2.15,0.1) .. controls (2.3,0.25) and (2.5,0.4) .. (2.9,0.4); \draw [<-, thick] (3.85,0.1) .. controls (3.7,0.25) and (3.5,0.4) .. (3.1,0.4); \draw [<-, thick] (2.15,-0.1) .. controls (2.3,-0.25) and (2.5,-0.4) .. (2.9,-0.4); \draw [->, thick] (3.85,-0.1) .. controls (3.7,-0.25) and (3.5,-0.4) .. (3.1,-0.4); \draw[->, thick] (3.85,0.05) .. controls (3.5, 0.05) and (3.3, 0.2) .. (3.07,0.33); \node at (3.6,0.5) {$\scriptstyle{2}$}; \node at (2.4,0.5) {$\scriptstyle{2}$}; \node at (2.4,-0.5) {$\textcolor{white}{\scriptstyle{1}}$}; \end{tikzpicture} \hspace{1cm} \begin{tikzpicture}[scale=1] \node at (1,0.5) {(ii)}; \draw[color=black] (3,0.4) circle (1.5pt); \draw[color=black] (3,-0.4) circle (1.5pt); \draw[color=black] (1,0) circle (1.5pt); \fill[color=black] (2,0) circle (2pt); \fill[color=black] (4,0) circle (2pt); \draw [->, thick] (1.15,0) --(1.85,0); \draw [<-, thick] (2.15,0.1) .. controls (2.3,0.25) and (2.5,0.4) .. (2.9,0.4); \draw [->, thick] (3.85,0.1) .. controls (3.7,0.25) and (3.5,0.4) .. (3.1,0.4); \draw[<-, thick] (3.85,0.05) .. controls (3.5, 0.05) and (3.3, 0.2) .. (3.07,0.33); \draw [->, thick] (2.15,-0.1) .. controls (2.3,-0.25) and (2.5,-0.4) .. (2.9,-0.4); \draw [<-, thick] (3.85,-0.1) .. controls (3.7,-0.25) and (3.5,-0.4) .. (3.1,-0.4); \node at (3.6,0.5) {$\scriptstyle{2}$}; \node at (2.4,-0.5) {$\scriptstyle{2}$}; \end{tikzpicture} \hspace{1cm} \begin{tikzpicture}[scale=1] \node at (1,0.5) {(iii)}; \draw[color=black] (3,0.4) circle (1.5pt); \draw[color=black] (3,-0.4) circle (1.5pt); \draw[color=black] (1,0) circle (1.5pt); \fill[color=black] (2,0) circle (2pt); \fill[color=black] (4,0) circle (2pt); \draw [->, thick] (1.15,0) --(1.85,0); \draw [<-, thick] (2.15,0.1) .. controls (2.3,0.25) and (2.5,0.4) .. (2.9,0.4); \draw [->, thick] (3.85,0.1) .. controls (3.7,0.25) and (3.5,0.4) .. (3.1,0.4); \draw[<-, thick] (3.85,0.05) .. controls (3.5, 0.05) and (3.3, 0.2) .. (3.07,0.33); \draw [->, thick] (2.15,-0.1) .. controls (2.3,-0.25) and (2.5,-0.4) .. (2.9,-0.4); \draw [<-, thick] (3.85,-0.1) .. controls (3.7,-0.25) and (3.5,-0.4) .. (3.1,-0.4); \node at (3.6,0.5) {$\scriptstyle{2}$}; \node at (2.4,-0.5) {$\textcolor{white}{\scriptstyle{2}}$}; \end{tikzpicture} \begin{tikzpicture}[scale=1] \node at (1,0.5) {(iv)}; \draw[color=black] (3,0.4) circle (1.5pt); \draw[color=black] (3,-0.4) circle (1.5pt); \draw[color=black] (1,0) circle (1.5pt); \fill[color=black] (2,0) circle (2pt); \fill[color=black] (4,0) circle (2pt); \draw [->, thick] (1.15,0) --(1.85,0); \draw [<-, thick] (2.15,0.1) .. controls (2.3,0.25) and (2.5,0.4) .. (2.9,0.4); \draw [->, thick] (3.85,0.1) .. controls (3.7,0.25) and (3.5,0.4) .. (3.1,0.4); \draw [->, thick] (2.15,-0.1) .. controls (2.3,-0.25) and (2.5,-0.4) .. (2.9,-0.4); \draw [<-, thick] (3.85,-0.1) .. controls (3.7,-0.25) and (3.5,-0.4) .. 
(3.1,-0.4); \node at (3.6,0.5) {$\scriptstyle{2}$}; \node at (2.4,-0.5) {$\scriptstyle{2}$}; \node at (3.6,-0.5) {$\scriptstyle{2}$}; \end{tikzpicture} \hspace{1cm} \begin{tikzpicture}[scale=1] \node at (1,0.5) {(v)}; \draw[color=black] (3,0.4) circle (1.5pt); \draw[color=black] (3,-0.4) circle (1.5pt); \draw[color=black] (1,0) circle (1.5pt); \fill[color=black] (2,0) circle (2pt); \fill[color=black] (4,0) circle (2pt); \draw [->, thick] (1.15,0) --(1.85,0); \draw [<-, thick] (2.15,0.1) .. controls (2.3,0.25) and (2.5,0.4) .. (2.9,0.4); \draw [->, thick] (3.85,0.1) .. controls (3.7,0.25) and (3.5,0.4) .. (3.1,0.4); \draw[<-, thick] (3.85,0.05) .. controls (3.5, 0.05) and (3.3, 0.2) .. (3.07,0.33); \draw [<-, thick] (3.85,-0.1) .. controls (3.7,-0.25) and (3.5,-0.4) .. (3.1,-0.4); \node at (3.6,0.5) {$\scriptstyle{2}$}; \node at (2.4,-0.5) {$\textcolor{white}{\scriptstyle{2}}$}; \end{tikzpicture} \end{center} Representing these motifs pictorially highlights the close relationships between them. Observe that there are various subnetwork relationships between the motifs. For example, (v) is a subnetwork of (iii), but not an {\em induced} subnetwork of (iii), and hence oscillation in the fully open extension of (iii) cannot be predicted from that in the fully open extension of (v) using the theorems in this paper. There remains the possibility that there exists an inheritance result rather different from those in this paper which predicts oscillation in the fully open extension of (iii) from that in the fully open extension of (v). More generally, it seems likely that there are interesting theorems to be discovered on sufficient conditions for stable oscillation in mass action CRNs which might explain something about the structures of oscillatory motifs. \section{Conclusions} Armed with the results in this paper one can predict the occurrence of oscillation in CRNs from its occurrence in smaller CRNs. Our main conclusion is: \begin{quote} Any CRN built from an oscillatory CRN via a sequence of modifications of the kind described in Theorems~\ref{thmnewdepreac}~to~\ref{thmnewwithopen} is again oscillatory. \end{quote} Here ``oscillatory'' may be taken to mean either ``which admits an NPPO'' or ``which admits an SPPO'', and the conclusion is valid under mild assumptions on the kinetics and for general CRNs (not necessarily fully open). We emphasised the consequence that a fully open, mass action, CRN which includes a fully open oscillatory subnetwork is itself oscillatory, illustrating how certain motifs are associated with oscillation in fully open CRNs. It was mentioned, however, that this particular conclusion does not extend to CRNs which are not fully open: such a CRN may include an oscillatory subnetwork but fail to be oscillatory. The following is a typical example: \begin{example} \label{exinherit} Consider the following CRNs $\mathcal{R}$, $\mathcal{R}'$ and $\mathcal{R}''$ which satisfy $\mathcal{R} \leq_{S} \mathcal{R}' \leq_{R} \mathcal{R}''$: \[ \begin{array}{lcl} X+Z\rightleftharpoons 2Y \rightleftharpoons X+Y, \quad 0 \rightleftharpoons X, \quad 0 \rightleftharpoons Y, \quad 0 \rightleftharpoons Z && (\mathcal{R})\\ X+Z\rightleftharpoons 2Y \rightleftharpoons X+Y, \quad 0 \rightleftharpoons X, \quad 0 \rightleftharpoons Y+V, \quad 0 \rightleftharpoons Z+W && (\mathcal{R}')\\ X+Z\rightleftharpoons 2Y \rightleftharpoons X+Y, \quad 0 \rightleftharpoons X, \quad 0 \rightleftharpoons Y+V, \quad 0 \rightleftharpoons Z+W, \quad 0 \rightleftharpoons V, \quad 0 \rightleftharpoons W. 
&& (\mathcal{R}'')\\ \end{array} \] $\mathcal{R}$ admits an SPPO with mass action kinetics as it is just the fully open extension of motif (i) above, with the reverse of some reactions added (see Remark~\ref{newdeprev}). On the other hand $\mathcal{R}'$ is a weakly reversible, deficiency zero network and, consequently, with mass action kinetics, forbids oscillation by the deficiency zero theorem \cite{feinberg}. Finally, by Theorem~\ref{thmnewwithopen} applied twice to $\mathcal{R}$, $\mathcal{R}''$ admits an SPPO with mass action kinetics. \end{example} Example~\ref{exinherit} illustrates that predicting oscillation in CRNs is rather subtle: enlarging a CRN in natural ways can both destroy and create oscillation. Moreover, $\mathcal{R}'$ and $\mathcal{R}''$ involve the same set of species and have the same stoichiometric subspace (namely, all of $\mathbb{R}^5$); but adding the flow reactions $0 \rightleftharpoons V, \,0 \rightleftharpoons W$ to $\mathcal{R}'$ gives rise to oscillation. This corresponds to adding constant and linear terms to the differential equations describing the evolution of $\mathcal{R}'$ with mass action kinetics. It is highly likely that further results of the kind presented in this paper hold: following Theorems~5~and~6 in \cite{banajipanteaMPNE} we expect that modifications such as including new reactions with new species, or inserting intermediate complexes involving new species into reactions should, with mild additional hypotheses, preserve the capacity for NPPOs or SPPOs. Some oscillatory CRNs, minimal with respect to the modifications described in Theorems~\ref{thmnewdepreac}~to~\ref{thmnewwithopen} of this paper, may cease to be minimal under the improved partial order such results would bring. There are also interesting questions on the connections between inheritance approaches as described here, and known sufficient conditions for oscillation such as those in \cite{eiswirth91, eiswirth96,gatermann, errami2015}. The families of chemical oscillators described in these papers can provide a starting point for application of the inheritance results here. It is also possible that some of the theory on families of chemical oscillators or algorithmic conditions for oscillation might suggest further inheritance results not described here. These possibilities remain to be explored. The investigation of small, fully open, CRNs in Section~\ref{secnum} highlights two important points: \begin{itemize} \item identifying small oscillatory motifs is a worthwhile pursuit as it automatically implies oscillation in the large number of CRNs which ``inherit'' these motifs; and \item stable oscillation is much more common given larger classes of kinetics such as physical power-law kinetics as compared to mass action kinetics. \end{itemize} Similar studies could also be carried out for general CRNs (not necessarily fully open), using Theorems~\ref{thmnewdepreac}~to~\ref{thmnewwithopen}. The difficulty of finding oscillation in mass action CRNs by numerical experiment is evidenced by additional data in \ref{appmethod}. This data suggests that often oscillation is confined to small parameter regions, and encourages the use of more systematic algorithmic approaches to the detection of oscillation such as those in \cite{errami2015}. Finally, the ``enumerate and simulate'' methodology which provided the data in Section~\ref{secnum} and is described in more detail in \ref{appmethod} may also prove useful for studying the frequency of other behaviours such as chaos in CRNs \cite{pojman}.
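To make the ``simulate'' step concrete, the following minimal sketch (in Python; the network encoding, the single log-uniform parameter draw, and the crude late-time test are all illustrative simplifications of the actual methodology in \ref{appmethod}) integrates the mass action equations of the fully open extension of motif~(i), $X+Z \rightarrow 2Y \rightarrow Y+Z$, and flags sustained oscillation:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Species (X, Y, Z); columns: X+Z -> 2Y, 2Y -> Y+Z, then the six flow
# reactions 0 -> X, X -> 0, 0 -> Y, Y -> 0, 0 -> Z, Z -> 0.
A = np.array([[1, 0, 0, 1, 0, 0, 0, 0],    # reactant orders of X
              [0, 2, 0, 0, 0, 1, 0, 0],    # reactant orders of Y
              [1, 0, 0, 0, 0, 0, 0, 1]])   # reactant orders of Z
G = np.array([[-1,  0, 1, -1, 0,  0, 0,  0],   # stoichiometric matrix
              [ 2, -1, 0,  0, 1, -1, 0,  0],
              [-1,  1, 0,  0, 0,  0, 1, -1]])

def rhs(t, x, k):
    # Mass action rates: v_j = k_j * prod_i x_i^{A_ij}.
    return G @ (k * np.prod(x[:, None] ** A, axis=0))

def oscillates(k, x0, t_end=500.0):
    # Crude test: finite peak-to-trough variation of X over the last
    # fifth of the run (a stand-in for proper SPPO detection).
    t_eval = np.linspace(0.8 * t_end, t_end, 2000)
    sol = solve_ivp(rhs, (0.0, t_end), x0, args=(k,),
                    rtol=1e-8, atol=1e-10, t_eval=t_eval)
    x1 = sol.y[0]
    return (x1.max() - x1.min()) > 0.05 * abs(x1.mean())

rng = np.random.default_rng(0)
k = 10.0 ** rng.uniform(-1.0, 1.0, 8)   # one random rate-constant draw
print(oscillates(k, x0=np.ones(3)))
\end{verbatim}
In practice many thousands of such parameter draws are needed per CRN, and a detected oscillation must still be checked to correspond to a stable periodic orbit rather than, say, a slowly decaying transient.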
Some modification to the approach may be needed to explore sets of CRNs too large to be studied exhaustively. For example, there are more than $10^8$ nonisomorphic $(4,5)$ CRNs, and exploring the dynamics of such large numbers numerically becomes challenging; however, it should be possible either to restrict attention to certain interesting subsets of these CRNs, such as those which are weakly reversible, for example, or to explore randomly chosen CRNs from such sets in order to draw some conclusions about how often various behaviours might occur. \section*{Acknowledgements} I would like to thank Anne Shiu and the anonymous referees for a number of helpful comments on the manuscript.
\section{Introduction} Dozens of circumstellar disks have been successfully resolved in scattered light using high-contrast imaging techniques on large, ground-based telescopes or the Hubble Space Telescope (HST)\footnote{Examples are collected at http://www.circumstellardisks.org/}. A particularly powerful technique is polarimetric differential imaging (PDI), which allows for a very accurate subtraction of the central star's point spread function (PSF), revealing the significantly fainter signal of the surrounding disk even without the use of a coronagraph. This gives access to inner working angles as small as $\approx$0.1\arcsec~with 8-m class, ground-based telescopes, which is of great relevance for planet formation studies: at the distance of the observed stars these separations correspond to the innermost few tens of AU of circumstellar disks where most of the planet formation is expected to occur. Recently, using PDI, numerous circumstellar disks around young, nearby stars were directly imaged at near-infrared (NIR) wavelengths. Interestingly, many of these images showed a variety of sub-structures and distinct morphological features in these disks that could be related to planet formation processes, such as gaps, cavities and spiral arms \citep[e.g.][]{avenhaus2014, garufi2013, quanz2013b, grady2013, hashimoto2012, mayama2012, muto2012}. A very interesting target is the young Herbig Ae/Be star HD100546, for which first PDI results in the $H$ and $K_{\rm s}$ filter were presented in \citet{quanz2011}. Basic parameters for this star are given in Table \ref{table:HD100546}. The star is surrounded by a transition disk consisting of a small inner disk between $\sim$0.2--0.7 AU \citep{panic2012}\footnote{Note that studies based on NIR interferometry prefer an outer radius of the inner disk that is larger \citep[$\sim$4 AU,][]{benisty2010, tatulli2011}}, followed by a disk gap out to $\sim$13--15 AU and then a large outer disk extending out to a few hundred AU \citep[e.g.,][]{pantin2000, augereau2001, grady2001, grady2005, ardila2007}. The disk gap was initially proposed based on SED models \citep{bouwman2003} and observationally confirmed with far-UV spectra using HST/STIS \citep{grady2005}. The images from the first PDI study of the disk also showed evidence for a rim of the outer disk around $\sim$15 AU \citep{quanz2011}. From ro-vibrational CO emission lines, \citet{brittain2009} found evidence for an inner cavity existing not only in the dust but also in the gaseous CO component of the disk \citep[c.f.][]{vanderplas2009}. Prominent, large scale spiral arms were clearly detected in HST images at optical and NIR wavelengths \citep[e.g.,][]{grady2001, ardila2007}. Using ground-based NIR images, \citet{boccaletti2013} found evidence for multiple spiral arms in the southern side of the disk. \begin{deluxetable}{lcc} \centering \tablewidth{0pt} \tablecaption{Basic parameters of HD100546. \label{table:HD100546}} \tablehead{ \colhead{Parameter} & \colhead{Value for HD100546} & \colhead{Reference\tablenotemark{a}} } \startdata RA (J2000) & 11$^h$33$^m$25$^s$.44 & (1) \\ DEC (J2000) & -70$^\circ$11$'$41$''$.24 & (1)\\ $J$ [mag] & 6.43$\pm 0.02$ & (2)\\ $H$ [mag]& 5.96$\pm 0.03$ &(2)\\ $K_{\rm s}$ [mag]& 5.42$\pm 0.02$ & (2)\\ $L$ [mag]& 4.02$\pm 0.06$ & (3)\\ Mass [M$_\sun$] & 2.4$\pm$0.1 & (4)\\ Age [Myr]& $5\ldots>10$ & (4),(5) \\ Distance [pc] & $97^{+4}_{-4}$ & (6)\\ Sp.
Type & B9Vne & (7) \\ \enddata \tablenotetext{a}{References --- (1) \citet{perryman1997}, (2) 2MASS point source catalog \citep{cutri2003}, (3) \citet{dewinter2001}, (4) \citet{vandenancker1997}, (5) \citet{guimaraes2006}, (6) \citet{vanleeuwen2007}, (7) \citet{houk1975}. } \end{deluxetable} In particular, the disk gap was often seen as a possible indication of young planets orbiting in the disk \citep[e.g.,][]{bouwman2003, tatulli2011}. Observational support for a companion in the gap was provided by \citet{acke2006} based on temporal changes in the [OI] line profile, possibly a signpost of a yet unseen 20 Jupiter-mass planet orbiting within the gap. More recently, \citet{liskowsky2012} observed asymmetric line profiles in the OH spectrum of HD100546 which are consistent with emission coming from an eccentric annulus near the disk rim, possibly driven by an orbiting companion. A more direct indication of a close-in companion comes from non-axisymmetric structures in the gaseous CO emission \citep{brittain2013}. The spectro-astrometric signal in the $\nu =1-0$ CO emission varies significantly over a baseline of several years, and can be fit with emission from a non-varying circumstellar disk plus a compact source of emission that varies in velocity as it orbits the star \citep{brittain2013}. The required emitting area ($\sim$0.1 AU$^2$) of the orbiting component can be explained by a circumplanetary disk, in agreement with model predictions \citep[e.g.,][]{ayliffe2009}. A first direct upper limit on possible companions inside the gap was provided by \citet{grady2005}, who could exclude a stellar companion. Recently, \citet{mulders2013b} used hydrodynamical simulations to model the rounded-off shape of the outer disk rim, which is constrained by mid-infrared (MIR) interferometric data \citep{panic2012}. The apparent gradient in the rim's surface density depends on the disk viscosity and also on the mass of the body orbiting in the gap. These simulations suggested that the mass of the orbiting body is $60^{+20}_{-40}$ Jupiter masses. In addition to the suspected object orbiting in the disk gap, a second planet candidate was discovered by means of high-contrast, direct imaging \citep{quanz2013a}. Using the APP coronagraph installed at VLT/NACO \citep{kenworthy2010}, an $L'$ emission source located at $\sim$0.5\arcsec~(de-projected 70 AU) from the central star was detected, i.e., right in the middle of the optically thick, large outer disk. This emission source was best explained with a combination of a point source component and some extended emission, and given its brightness and small separation from HD100546, it is unlikely to be a background object. \citet{quanz2013a} argued that the object is possibly a young, forming gas giant planet that still undergoes gas accretion. This could explain both the observed luminosity (part of the luminosity comes from the accretion process via a circumplanetary disk) and the apparently smooth circumstellar disk at these separations (the object is young, not yet very massive and hence did not alter the circumstellar disk structure significantly). The previous paragraphs strongly emphasize that HD100546 is not only an extremely well-studied object, but also features a wealth of structures possibly related to (ongoing) planet formation. In this paper we present new images of the HD100546 transition disk taken in PDI mode in the $H$, $K_{\rm s}$ and $L'$ filters.
These data have a higher signal-to-noise ratio than previous data sets, allowing a more robust analysis of the disk morphology. In addition, in combination with a re-reduction of earlier data taken in 2006 \citep{quanz2011}, they allow us to investigate possible changes in disk morphology and brightness over a baseline of $\sim$7 years. \begin{deluxetable*}{cc@{\hspace{8pt}$\times$\hspace{8pt}}c@{\hspace{8pt}$\times$\hspace{8pt}}c@{\hspace{8pt}=\hspace{8pt}}cccccccc} \centering \tablecaption{Summary of observations. \label{table:observations}} \tablewidth{450pt} \tablehead{ \colhead{} & \multicolumn{4}{c}{Integration Time} & \colhead{} & \multicolumn{4}{c}{Observing Conditions} \\ \cline{2-5} \cline{7-10} \\ \colhead{Filter} & \colhead{DIT\tablenotemark{a}}\hspace{12pt} & \colhead{NDIT\tablenotemark{a}}\hspace{16pt} & \colhead{NINT\tablenotemark{a}}\hspace{16pt} & \colhead{Total\tablenotemark{a}} & \colhead{} & \colhead{Airmass} & \colhead{Seeing\tablenotemark{b}} & \colhead{$\tau_0$\tablenotemark{c}} & \colhead{Coh. Energy\tablenotemark{d}} } \startdata H & 0.3454s&80&27 (27) & 746s (746s)&&1.43&0.81$\arcsec$&1.9ms&41.7$\%$&\\K$_{\rm s}$ & 0.3454s&80&30 (28) & 829s (774s)&&1.50&0.86$\arcsec$&1.8ms&35.9$\%$&\\L & 0.175s&180&18 (16) & 567s (504s)&&1.63&1.03$\arcsec$&1.6ms&20.9$\%$&\vspace{3pt}\\H (cube mode) & 0.039s&1300 (975)&4 & 203s (152s)&&1.45&0.59$\arcsec$&2.6ms&51.2$\%$&\\K$_{\rm s}$ (cube mode) & 0.039s&2000 (1500)&3 & 234s (176s)&&1.44&0.70$\arcsec$&2.2ms&33.0$\%$&\vspace{3pt}\\H (2006) & 0.3454s&85&15 (15) & 440s (440s)&&1.58&1.09$\arcsec$&2.4ms&34.2$\%$&\\K$_{\rm s}$ (2006) & 0.3454s&85&13 (9) & 382s (264s)&&1.46&1.01$\arcsec$&2.7ms&40.6$\%$&\\ \enddata \tablenotetext{a}{The detector integration time (DIT) multiplied by the number of integrations per frame (NDIT) multiplied by the number of integrations summed over all dither positions (NINT) gives the total integration time per retarder plate position. Numbers in brackets are the number of frames used and integration times achieved after frame selection was applied.} \tablenotetext{b}{Average DIMM seeing in the optical during the observations, monitored by the seeing monitor at VLT.} \tablenotetext{c}{Average coherence time of the atmosphere as calculated by the real time computer of the AO system.} \tablenotetext{d}{Average coherent energy according to the ESO real time computer.} \end{deluxetable*} \section{Observations and data reduction}\label{observations_section} The new observations were performed on the night of March 31, 2013 using the NAOS/CONICA (NACO) instrument mounted on UT4 (Yepun) of the Very Large Telescope (VLT) at Cerro Paranal, Chile, in the $H$, $K_{\rm s}$ and $L'$ filters. We used the SL27 camera (27 mas pixel$^{-1}$) in $HighDynamic$ mode ($HighWellDepth$ for the $L'$ filter) and read out in $Double RdRstRd$ mode ($Uncorr$ for the $L'$ filter). HD100546 is bright enough to saturate the detector in both the $H$ and $K_{\rm s}$ filters at the shortest detector integration time available in full frame mode (0.3454s). We used windowing in cube mode (only 256$\times$256 of the 1024$\times$1024 pixels of the NACO detector are read out, reducing the shortest possible integration time to 0.039s) in order to obtain unsaturated images to study the innermost parts of the disk in the $H$ and $K_{\rm s}$ filters. These were also used to perform the photometric calibration as described in \citet{avenhaus2014}; this technique carries a general uncertainty of $\sim$30$\%$.
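For illustration, the counts-to-magnitudes logic of this calibration can be sketched as follows (a minimal sketch in Python assuming a simple zero-point calibration; the count rates are placeholders, while the 2MASS $K_{\rm s}$ magnitude and the SL27 pixel scale are taken from above):
\begin{verbatim}
import numpy as np

# Zero point from the unsaturated stellar PSF in the cube-mode data.
star_counts = 2.1e6   # integrated stellar counts per second (placeholder)
m_star = 5.42         # 2MASS Ks magnitude of HD100546
zp = m_star + 2.5 * np.log10(star_counts)

# Convert one pixel of the disk image to mag / arcsec^2.
pix = 0.027           # SL27 pixel scale [arcsec / pixel]
disk_counts = 15.0    # counts per second in one P_perp pixel (placeholder)
sb = zp - 2.5 * np.log10(disk_counts / pix**2)
print(f"surface brightness: {sb:.2f} mag arcsec^-2")
\end{verbatim}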
In the $L'$ filter, the star was unsaturated at the shortest possible integration time of 0.175s, and these data could be used for the photometric calibration directly. In PDI mode, a Wollaston prism splits the incoming beam into an ordinary and extraordinary beam separated by 3.5$\arcsec$~on the detector. A polarimetric mask prevents the two beams from interfering, but limits the field of view to stripes of $\sim$27$\arcsec\times$3$\arcsec$. The rotatable half-wave retarder plate (HWP), controlling the orientation of the polarization, was set to 0$^\circ$ / $-$45$^\circ$ to measure Stokes $Q$ and $-$22.5$^\circ$ / $-$67.5$^\circ$ to measure Stokes $U$. This means that we cycled through four retarder plate positions for each dither position and each integration. The total on-source integration times were 2984s, 3316s and 2268s in the $H$, $K_{\rm s}$ and $L'$ filter, respectively, and 811s / 936s in the $H$ and $K_{\rm s}$ filter using cube mode. Complementing these new data are the data taken on April 7, 2006, in the $H$ (1762s) and $K_{\rm s}$ (1527s) filter \citep[discussed in][]{quanz2011}, which we include in our analysis. A summary of the observations is given in Table \ref{table:observations}. The data reduction procedure is described in detail in the appendix of \citet{avenhaus2014}. Two improvements to the pipeline are worth noting. First, we implemented a frame selection technique to exclude frames that were taken when the adaptive optics (AO) performed poorly or that are degraded in image quality for other reasons. We note the number of frames selected and the resulting on-source integration time in Table \ref{table:observations}. Second, for the $L'$ filter reduction, it was necessary to carefully subtract the high thermal background. To do this, from a given frame we subtracted another frame that was taken close in time, but at a different dither position. For the cube mode images, each frame from the image stack was handled individually. We then compute the images showing the tangential ($P_\perp$) and radial ($P_\parallel$) polarization directions, meaning polarization perpendicular to the line between the star and a given point in the image plane ($P_\perp$) and polarization parallel to this line ($P_\parallel$). We do this because single scattering off dust grains in a protoplanetary disk is expected to produce polarization only in the tangential direction, not in the radial one. This technique has the advantage that $P_\perp$ gives an unbiased estimate of the polarized intensity $P$ \citep[c.f.][]{avenhaus2014}. However, strictly speaking this is only true for disks that are either optically thin, so that the signal is dominated by single scattering, or optically thick but seen face-on. In the case of inclined, optically thick disks, it only holds approximately. However, any deviation of the scattered light from being polarized in the tangential direction would show up in the $P_\parallel$ image and is thus included in our error estimates. The error from this effect (the polarization not being perfectly tangential) is significantly smaller than the other error sources in our images and can therefore be neglected. For comparison, we also use the conventional way of calculating $P$ ($P=\sqrt{Q^2+U^2}$). \section{Results and Analysis}\label{results} \label{sec:results} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{HD100546_F1_part1.pdf} \vspace{8pt} \caption{NACO PDI results in the $H$ and $K_{\rm s}$ filters from epochs 2006 and 2013.
From left to right: $P_\perp$, capturing the structure of the disk; $P_\parallel$, which is expected to be zero and dominated by noise; $P_\parallel$ scaled by a factor of five to better show the noise signature; and $P$, which is identical to $P_\perp$ in the absence of any noise and when there is no rotation of the polarization due to multiple-scattering effects (see also text). Positive values are in orange, negative values in blue. The grey area in the center represents positions where no data is available due to saturation effects. The red cross marks the position of the star. North is up and east is to the left in all images. The images are 1.62\arcsec~($\sim$ 160 AU) on each side; they all show the same section of the disk. For reference, there is a scale in each of the $P$ images. All images are scaled with $r^2$. \label{images}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{HD100546_F1_part2.pdf} \vspace{8pt} \caption{Same as Figure \ref{images}, but for the $H$ and $K_{\rm s}$ cube-mode and the $L'$ filter observations. \label{images2}} \end{figure*} The scattered light images of HD100546 in all filters and at both epochs are summarized in Figures \ref{images} and \ref{images2}. Besides $P_\perp$ (left) and $P$ (right), which are very similar to each other as expected, we also show $P_\parallel$ (as an indication of the noise level in the images) in two representations: once scaled like the $P_\perp$ image (middle left) and once multiplied by a factor of 5 (middle right). We emphasize here that while these images only show the disk out to $\sim$0.8\arcsec, we can trace the disk to more than 1.5\arcsec~in every direction (see Figure \ref{fig:HD100546SB}). The overall structure in the $P_\parallel$ images is similar in all $H$ and $K_{\rm s}$ observations. Static structure in $P_\parallel$ could hint towards a rotation in the polarization, but this is misleading: the structure seen here depends on the choice of reduction parameters, specifically on the inner and outer radius used for correcting the instrumental polarization \citep[see][]{avenhaus2014}. Because of this, we do not interpret the structure seen in these images, but note that because it is consistent in all datasets, the different final images can be compared very well relative to each other. The residuals in $P_\parallel$ are small compared to $P_\perp$. It can also be seen that the differences between $P$ and $P_\perp$ are small, but the $P$ images show slightly more noise very close to the star. The general structure of the disk is very well seen in all $H$ and $K_{\rm s}$ filter observations in both 2006 and 2013. The $L'$ filter observations suffer from more noise, but show similar structure in the regions where the SNR is high enough. The reason for the higher noise is the strong background emission in $L'$, which is orders of magnitude higher than at the shorter wavelengths. While the cube-mode and non-cube-mode observations in 2013 are comparable for the $H$ filter, the $K_{\rm s}$ filter cube-mode observations appear darker (the observations were scaled to the same detector counts per unit time); the structure is similar. A possible explanation for this is that the signal is dampened by an effect similar to the one suppressing a polarization detection at very small separations (see discussion in Section \ref{sec:innerHole}), i.e., a smearing out of the butterfly pattern in the Stokes $Q$ and $U$ vectors due to the PSF of the observation.
While the observing conditions were slightly better, the coherent energy was slightly worse (c.f. Table \ref{table:observations}). \subsection{Global Scattering Signature} \label{sec:globalScatteringSignature} With our new data, we confirm the basic disk structure already described in \citet{quanz2011}: The major axis of the disk runs in southeast-northwest direction, and the brightest parts of the disk are roughly along this axis. The northeastern part of the disk appears brighter than the southwestern part. For the first time, we identify a dark lane between $\sim$0.2\arcsec~and $\sim$0.6\arcsec~on this forward-scattering side in all $H$ and $K_{\rm s}$ filter observations, including the cube mode observations. The scattered light picks up (in this representation scaled with $r^2$) outside of $\sim$0.6\arcsec. This dark lane, together with the significantly brighter northeastern side of the disk, suggests that the grains in the disk are preferentially backscattering in polarization (scattering albedo multiplied by polarization fraction, which is what our data measure). Furthermore, the polarization efficiency in scattering usually peaks around 90$^\circ$ \citep[e.g.,][]{perrin2009}, which explains the two bright lobes in the southeast and northwest: the semi-major axis of the disk runs along this direction, and the scattering angle at these positions is close to 90$^\circ$, depending on the exact flaring angle. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{HD100546_explained.pdf} \caption{\small Disk features seen in HD100546. The 2013 $K_{\rm s}$ filter data is overlaid with the main features detected in the disk. The dark lane in the southwest seems to fold around the stellar position. The spiral arm is marked in cyan; the positions of the two planet candidates from \citet{quanz2013a} and \citet{brittain2013} are marked in green. \label{fig:diskStructure}} \end{figure} The structures seen in the disk, along with the positions of the two planet candidates \citep{quanz2013a, brittain2013}, are marked in Figure \ref{fig:diskStructure} on the left. With respect to the semi-minor axis, the dark lane in the southwest is relatively symmetric and seems to fold around the star. In the $H$ filter images, there seems to be a bridge of stronger scattering exactly in the direction of the semi-minor axis. This effect is weaker in the $K_{\rm s}$ filter images. The dark lane is not seen in the surface brightness plots (Figure \ref{fig:HD100546SB}) partly because of this and partly because these are not scaled with $r^2$. \begin{figure*} \centering \includegraphics[width=1\textwidth]{HD100546_SB_v3.eps} \caption{\small Surface brightness plots of HD100546 for the $H$ (blue), $K_{\rm s}$ (black) and $L'$ (red) filter data of the 2013 saturated observations. The measurements were taken in 10$^\circ$ wedges along the semi-major (top) and semi-minor (bottom) axis of the disk. As the position angle for the semi-major axis, we use 138$^\circ$ \citep[][Section \ref{sec:diskPA} of this paper]{quanz2011}. The area not accessible with our data in the $H$ and $K_{\rm s}$ band is shaded in grey. The error bars represent 1$\sigma$ errors calculated in the same way as has been done for HD142527 \citep{avenhaus2014}. Downward-facing triangles represent 1$\sigma$ upper limits. The errors do not include a general calibration uncertainty of $\sim$30$\%$.
They also do not account for the dampening effect of the PSF smearing described in Section \ref{sec:HD100546_PSF_effects}, which can be around one magnitude at the position of the inner rim. The $L'$ filter data is strongly dominated by noise outside $\sim$0.5\arcsec, which is why we restrict our plot to this distance. Along the semi-major axis, the inner hole is detected in all three filters and in both directions, while it is not seen in this representation along the semi-minor axis. \label{fig:HD100546SB}} \end{figure*} While it is in principle possible that such a dark lane is produced by shadowing effects within the disk, i.e., a shadow cast from the inner rim, we deem this unlikely for two reasons. First, it is difficult to imagine that such a shadow would appear on only one side of the disk, almost perfectly aligned with the semi-minor axis. Furthermore, the disk rotated by $\sim$75$^\circ$ at the position of the inner rim (c.f. Section \ref{sec:innerHole}) during our seven-year baseline between the 2006 and 2013 observations, yet the dark lane stays at the same position. Similar dark lanes have been seen for instance in non-polarimetric HST observations of IM Lupi \citep{pinte2008} and GM Aurigae \citep{schneider2003}. These authors explain the dark lane with a strongly inclined and flared disk, which causes a shadow on the forward-scattering side. Further out, where the disk becomes optically thin enough, the brightness increases again because scattering from the lower surface of the disk can be seen. It is questionable whether this explanation works in the case of HD100546. First, the inclination and flaring angle derived by other authors for this disk \citep[$\sim$45-50$^\circ$ inclination, see discussion in Section \ref{sec:HD100546discussion}, and $\sim$7$^\circ$ flaring angle, see][]{benisty2010} are too small. Second, the disk would have to be optically thin in the near-IR at a radius of $\sim$100 AU. This, however, is in agreement with \citet{augereau2001}, who estimate the disk to be optically thin in the near-IR outside $\sim$80 AU. A third possibility is that the dark lane results from the scattering function of the dust grains. This requires that the scattering angle varies across the disk in the direction of the semi-minor axis. It also requires that the polarized scattering function has a minimum somewhere below 90$^\circ$ and increases again towards smaller scattering angles. This seems to be the case, as we discuss in more detail in Section \ref{sec:ScatteringFunction}. While we cannot explain the exact details of the polarized scattering curve (multiple scattering and dust grain properties both play a role here), we deem this explanation the most likely. \subsection{Surface Brightness Profiles}\label{sec:HD100546SB} The surface brightness profiles in the $H$, $K_{\rm s}$ and $L'$ bands along the semi-major and semi-minor axes are shown in Figure \ref{fig:HD100546SB}. As can be seen, we are able to trace the disk significantly further than shown in the images in Figures \ref{images} and \ref{images2}, where we concentrate on the most interesting inner part of the disk. The inner region ($\lesssim$10 AU) contains no data for the $H$ and $K_{\rm s}$ band due to saturation and is marked in grey. The numeric values of these surface brightnesses have to be treated with caution. As we discuss in Section \ref{sec:HD100546_PSF_effects}, the measured surface brightnesses are significantly dampened by the PDI technique. Because of this, we do not fit power laws to our data.
However, we still observe that the slope of the surface brightness profiles is not constant. There seems to be a break between an inner region, where the slope is steeper, and an outer one, where the slope is shallower. The break can be observed at $\sim$40-50 AU along the semi-major axis, and possibly somewhat further in along the semi-minor direction. This could be suggestive of changes in the dust grain properties \citep[see][]{pineda2014}. Along the semi-major axis, the inner hole (see next section) is clearly detected: the depletion at small radii is seen in all three filters, though it appears to be smaller in the $L'$ filter (this is likely due to stronger PSF smearing at longer wavelengths; see the discussion in Section \ref{sec:HD100546_PSF_effects}). In the direction of the semi-minor axis, the gap is not seen as clearly in the surface brightness profiles. We note that these surface brightness profiles are generated from the saturated (not cube-mode) data. In the cube-mode images, the hole is clearly visible in both the semi-major and semi-minor direction. \subsection{Disk Gap}\label{sec:innerHole} Besides being visible in the surface brightness profiles, the disk gap can also be seen in all images. In the cube mode observations, the gap is detected very clearly. Taking into account the PSF smearing effect, we visually overlay the data with a ring for the inner rim in the various filters. The data in the $H$ and $K_{\rm s}$ filter (both normal and cube mode observations) are consistent with a circular inner rim at 14$\pm$2 AU with a rather sharp edge which is only smeared out by the PSF. To analyze the degree of eccentricity of the inner cavity, we use the surface brightness from the $H$ and $K_{\rm s}$ cube mode observations, estimated along the semi-major axis in wedges with a 20$^\circ$ opening angle. We find the distance from the star in both directions along the semi-major axis where the surface brightness first reaches half the maximum brightness along this axis. We use the errors on the surface brightness, as estimated from the $P_\parallel$ images, to get an error estimate on this distance in both directions. Using these as Gaussian errors, we simulate 1,000,000 realizations of actual distances, taking into account the derived errors as well as the uncertainty in the position of the star. The star is unsaturated in the cube mode observations, and thus its position can be accurately determined; we estimate the uncertainty to be $\sim$5 mas ($\sim$0.5 AU / 0.2 pixels). From each pair of simulated values, we calculate the resulting eccentricity, allowing us to estimate probabilities for different values of the true eccentricity. We estimate from the $H$ filter results that the eccentricity is smaller than 0.113 with 95\% confidence and smaller than 0.178 with 99.8\% confidence. The $K_{\rm s}$ filter results lead to upper limits of 0.127 and 0.201 for these confidence levels, respectively. Combining the results from the two filters, we arrive at an upper limit of 0.085 at 95\% confidence and 0.133 at 99.8\% confidence. We conclude that the eccentricity along the semi-major axis is small, and our results are consistent with no eccentricity. However, we cannot make such statements for an eccentricity aligned with the semi-minor axis of the disk because of inclination effects.
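The Monte Carlo estimate can be sketched as follows (a minimal sketch with placeholder rim distances and errors rather than the measured half-maximum radii; we additionally assume the star lies at a focus of an elliptical rim, so that the rim distances $r_1 = a(1-e)$ and $r_2 = a(1+e)$ on the two sides of the semi-major axis give $e = |r_2 - r_1|/(r_1 + r_2)$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000            # number of Monte Carlo realizations

r1, sig1 = 14.0, 1.0     # rim distance [AU] and error, side 1 (placeholder)
r2, sig2 = 14.0, 1.0     # rim distance [AU] and error, side 2 (placeholder)
sig_star = 0.5           # stellar position uncertainty [AU] along the axis

# A shift dx of the assumed stellar position lengthens one measured
# distance and shortens the other by the same amount.
dx = rng.normal(0.0, sig_star, n)
s1 = rng.normal(r1, sig1, n) + dx
s2 = rng.normal(r2, sig2, n) - dx

ecc = np.abs(s2 - s1) / (s1 + s2)   # eccentricity for a star at the focus

for conf in (95.0, 99.8):
    print(f"e < {np.percentile(ecc, conf):.3f} at {conf}% confidence")
\end{verbatim}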
The distance to the rim in the northeast seems to be larger than the one to the southwest, but this is consistent with an inclined, flared inner rim which is intrinsically circular around the star. Our data suggest that the inclination of this rim is below $\sim$50$^\circ$. The exact limit our data put on the inclination and the flaring is hard to determine, because the forward- and backward-scattering regions are intrinsically fainter. A high inclination would generate a more elliptic inner hole in the data; on the other hand, the faintness in the direction of the semi-major axis reduces the visibility of such an ellipticity. We do not detect any significant structure inside the disk cavity. The faint, ring-like structure seen in our cube-mode observations around the position of the star in the $P$ images is a noise artifact and not seen in the $P_\perp$ images. As discussed in Section \ref{sec:HD100546_PSF_effects}, the inner disk is not detectable with our observations due to PSF smearing effects if it resides at a radius of $\sim$3 AU or smaller. \subsection{Inner Rim and Position Angle of the Disk} \label{sec:diskPA} The two bright points in the rim are at 127$^\circ$$\pm$5$^\circ$ / 126$^\circ$$\pm$6$^\circ$ ($H$ / $K_{\rm s}$ filter measurement for the bright peak in the southeast) and 333$^\circ$$\pm$7$^\circ$ / 327$^\circ$$\pm$6$^\circ$ ($H$ / $K_{\rm s}$ filter measurement for the fainter peak in the northwest) east of north. Combining these measurements, they are 203$^\circ$$\pm$9$^\circ$ apart, i.e., not exactly opposite each other, but slightly displaced w.r.t. the semi-major axis. The reason for this is most likely that the disk is not flat, but flared. This shifts the points of 90$^\circ$-scattering slightly towards the back side of the disk. While the measurements are not accurate enough to put constraints on either the inclination or the flaring angle, we can compare the two peaks individually to the adopted position angle of 138.0$^\circ$$\pm$3.9$^\circ$ from \citet{quanz2011}. The bright peak is displaced from this by 11$^\circ$$\pm$6$^\circ$ towards the back side of the disk, while the fainter peak is displaced by 12$^\circ$$\pm$6$^\circ$. Turning this around, we can calculate the position angle (PA) of the disk by assuming that the two bright peaks are displaced from the semi-major axis by the same amount. This calculation yields a value of 138.2$^\circ$$\pm$3.0$^\circ$ when combining the data from the $H$ and $K_{\rm s}$ filters, consistent with the adopted value. We stress at this point that our error estimate does not include systematic effects. The reflection points are intrinsically asymmetric in their brightness, which implies a physical difference between the two sides (southeast vs. northwest) of the disk. The southeast side of the inner rim is significantly brighter than the northwestern one in our 2013 observations in all three filters. This can be seen both in the images and in the surface brightness plots (Figure \ref{fig:HD100546SB}). However, we emphasize that these plots are along the semi-major axis, which does not pass exactly through the brightest areas. We find the peak in the southeast to be brighter than the peak in the northwest by a factor of $1.67\pm0.33$, $1.92\pm0.33$ and $1.51\pm0.70$ in the $H$, $K_{\rm s}$ and $L'$ filter, respectively. The errors on these values have been estimated from the residuals in the $P_\parallel$ images.
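As an aside, the quoted uncertainties follow from standard first-order error propagation of the aperture fluxes; the short sketch below illustrates this, with hypothetical flux values chosen only to reproduce a ratio of roughly $1.67\pm0.33$.
\begin{verbatim}
import numpy as np

def peak_ratio(f_se, f_nw, sig_se, sig_nw):
    # Ratio of the SE to the NW rim peak with first-order error
    # propagation; the flux errors are assumed to be estimated from the
    # scatter of the P_parallel residuals in equivalent apertures.
    r = f_se / f_nw
    sig_r = r * np.hypot(sig_se / f_se, sig_nw / f_nw)
    return r, sig_r

# Hypothetical aperture fluxes (arbitrary units):
print(peak_ratio(100.0, 60.0, 10.0, 10.0))   # -> (1.67, 0.32)
\end{verbatim}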
We exclude the possibility that this difference in brightness is caused by one side of the disk being closer to the star, because the asymmetry is too strong to be explained by such an effect and the distance of both bright spots to the star is similar. In our 2006 observations, the asymmetry is only seen in the $K_{\rm s}$ filter. Also, the $H$ filter shows a significantly weaker overall scattering signal. \subsection{Spiral Arm} A new feature detected with our data is a faint spiral arm extending from the bright southeastern region in a clockwise manner towards the north and then the west. This feature is most clearly seen in the 2013 $K_{\rm s}$ filter data, but can also be spotted in the 2006 $K_{\rm s}$ and 2013 $H$ filter data. We are confident that this feature is not an artifact from the data reduction because it can be seen in several datasets. \subsection{Scattering Function}\label{sec:ScatteringFunction} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{HD100546_scatterFunction_v1.eps} \caption{\small Scattering function determined from our data between 30 and 50 AU (left) and 80 and 110 AU (right). We use an inclination of 48$^\circ$ and a constant flaring angle of 7$^\circ$ for these calculations, the same values used by \citet{quanz2011}. The values have been normalized to the maximum value for each graph. For discussion, see text.\label{fig:scatterFunction} } \end{figure*} HD100546 is one of the few disks that are suitably inclined to determine the scattering function over a large range of scattering angles, and we derive it between $\sim$35$^\circ$ and $\sim$130$^\circ$. In the case of PDI data, one measures the product of the scattering albedo of the dust grains and their polarization efficiency. To calculate the scattering function, we assume the flaring angle of the disk to be constant at 7$^\circ$ \citep[c.f.][]{benisty2010} and an inclination of 48$^\circ$, the same values used by \citet{quanz2011}. We calculate the scattering angles at two different annuli (30 to 50 and 80 to 110 AU) and show our results in Figure \ref{fig:scatterFunction}. As can be seen, the grains are preferentially backscattering in polarized light. The scattering function reaches a minimum at $\sim$60$^\circ$ and rises again towards smaller values. A forward-scattering peak could explain the brighter bridge of light towards the southwest described in Section \ref{sec:globalScatteringSignature}. The values from the $H$ and $K_{\rm s}$ filter are consistent at each annulus, but the curves seem to differ between the two annuli. For the outer annulus, the curve seems to be overall flatter, rising more strongly towards small scattering angles ($\lesssim$50$^\circ$) and rising later towards larger scattering angles ($\gtrsim$80$^\circ$). An explanation for this behavior could be that the grain properties vary with radius. Another possibility is that the flaring angle is not constant with radius, but increases towards the outer regions of the disk. In this case, the analysis at 80 to 110 AU would probe smaller scattering angles, moving the entire graph to the left by a few degrees and making the two curves more consistent with each other. This, as well as the strong scattering at small scattering angles, is in agreement with the interpretation of the dark lane in Section \ref{sec:globalScatteringSignature}. Because of that, we prefer this second interpretation, without being able to exclude the possibility of grain properties varying with disk radius.
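The mapping from position in the disk to scattering angle used in these calculations follows from simple geometry. The sketch below, under our stated assumptions (single scattering, an inclination of 48$^\circ$, a constant flaring angle of 7$^\circ$, and an azimuth convention measured from the near side of the semi-minor axis), reproduces the quoted range of accessible scattering angles of $\sim$35$^\circ$ to $\sim$130$^\circ$.
\begin{verbatim}
import numpy as np

def scattering_angle(phi_deg, incl_deg=48.0, flare_deg=7.0):
    phi = np.radians(np.asarray(phi_deg, dtype=float))
    i, beta = np.radians(incl_deg), np.radians(flare_deg)
    # Unit vector from the star to a point on the disk surface, with the
    # surface tilted out of the midplane by the flaring angle beta.
    p = np.array([np.cos(beta) * np.sin(phi),
                  np.cos(beta) * np.cos(phi),
                  np.sin(beta) * np.ones_like(phi)])
    # Unit vector from the disk towards the observer (inclined by i).
    n_obs = np.array([0.0, np.sin(i), np.cos(i)])
    # theta = 0 corresponds to forward scattering.
    return np.degrees(np.arccos(n_obs @ p))

print(scattering_angle([0.0, 90.0, 180.0]))   # ~ [35.0, 85.3, 131.0]
\end{verbatim}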
\subsection{Disk Color} The 2013 $H$ and $K_{\rm s}$ filter images of the disk and surface brightness plots show no color variations which we would deem significant. In the $L'$ filter, the difference between the semi-major and semi-minor axis seems stronger compared to the $H$ and $K_{\rm s}$ filter, but the SNR is very low in the semi-minor direction. The three filters allow us to determine the overall scattering color of the disk. To do this, and to be able to also determine the scattered-light flux in the low-SNR $L'$ filter data, we calculate the total scattered light in an annulus between 0.12\arcsec~and 0.3\arcsec. By comparison to the stellar flux, we can then determine the scattering color of the disk in this annulus. Because we do not need to convert to 2MASS magnitudes in between, this direct comparison yields color estimates with smaller errors. The resulting colors are 0.19$\pm$0.11 mag in [$H$]-[$K_{\rm s}$], -1.08$\pm$0.35 mag in [$H$]-[$L'$] and -1.27$\pm$0.35 mag in [$K_{\rm s}$]-[$L'$], meaning that the disk scattering is weaker in the $L'$ filter. Between the $H$ and $K_{\rm s}$ filter, the color is almost grey, consistent with being zero. We emphasize at this point that the PSF smearing effect discussed in the next section can have an influence on the color. Specifically, it could dampen the longer wavelengths more strongly, which would particularly affect the $L'$ filter measurements. We estimate that this effect could explain only part of the deficit in the $L'$ filter, though, and that the scattering in the $L'$ filter is truly significantly weaker than the scattering in the $H$ and $K_{\rm s}$ filters, by at least half a magnitude. \subsection{PSF Smearing Effects} \label{sec:HD100546_PSF_effects} \begin{figure*} \centering \plotone{HD100546_damping.eps} \caption{\small Two theoretical disk models at infinite resolution and surface brightness falling off as $r^{-2.5}$ compared with the same disk models (Stokes $Q$ and $U$ vectors separately) convolved with the PSF taken from unsaturated observations ($K_{\rm s}$ filter cube mode). Left: Model. Middle: Expected observations. Right: Surface brightness of model and expected observations compared. As can be seen, a hole appears at the stellar position as an artifact, together with structures in the disk which stem from the diffraction rings of the PSF. The polarimetric signal is also significantly dampened. All images are scaled in the same way.\label{fig:PolDampingCloseToStar}} \end{figure*} PSF smearing affects all observations of protoplanetary disks, but in the case of polarimetric differential imaging, these effects are more complicated because the polarized flux is not measured directly, but derived from the Stokes vectors $Q$ and $U$. In contrast to direct flux measurements, both the $Q$ and the $U$ vector can be negative and usually show a butterfly pattern for protoplanetary disks, as shown for instance in \citet{quanz2011}. As a consequence, the polarization signal ($P$ and $P_\perp$, respectively) calculated from these vectors is not only smeared, but also dampened close to the star. The butterfly pattern in the Stokes $Q$ and $U$ vector at the center of the image (close to the star) is washed out by the PSF, and as a result, even disks without an inner hole (or with an unresolvably small hole of 0.001\arcsec) would show a hole in the polarized light as an artifact. This is shown in Figure \ref{fig:PolDampingCloseToStar}.
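The origin of this artifact is straightforward to demonstrate numerically. The following minimal sketch assumes a purely tangentially polarized model disk with polarized surface brightness $\propto r^{-2.5}$ and uses a Gaussian PSF as a stand-in for the real (diffraction-ring) PSF; convolving $Q$ and $U$ separately washes out the central butterfly pattern and creates an artificial hole in $P$.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

# Model disk: azimuthally symmetric polarized surface brightness ~ r^-2.5
# with purely tangential polarization (the Q/U "butterfly" pattern).
n = 201
c = np.arange(n) - n // 2
y, x = np.meshgrid(c, c, indexing="ij")
r = np.hypot(x, y)
phi = np.arctan2(y, x)
sb = 1.0 / np.maximum(r, 1.0) ** 2.5
Q = -sb * np.cos(2 * phi)
U = -sb * np.sin(2 * phi)

# "Observe" by convolving Q and U separately with the PSF, then combine.
Q_obs = gaussian_filter(Q, sigma=3.0)
U_obs = gaussian_filter(U, sigma=3.0)
P_obs = np.hypot(Q_obs, U_obs)

# Near the star the butterfly pattern averages out under the PSF, so an
# artificial central hole appears in P even though the model has none:
print(np.hypot(Q, U)[n // 2, n // 2], P_obs[n // 2, n // 2])
\end{verbatim}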
Furthermore, the typical dilution of local features is seen, smearing out the scattered-light signal from the inner rim of the disk (lower part of Figure \ref{fig:PolDampingCloseToStar}). The effects can be severe, in this example dampening the flux from the inner rim by more than one magnitude. The effect is stronger at longer wavelengths, where the PSF is larger. We emphasize at this point that our calculations show that this effect is clearly not the reason for the inner hole in our observations. Its size would be smaller and would also depend on the observing conditions and wavelength, with the hole being larger at longer wavelengths, which is not what we see. However, this effect erases the signal very close to the star (inside of $\sim$0.03$\arcsec$). Thus, an inner disk at $\sim$3 AU or less would not be detected in our observations. \subsection{Comparison of 2006 and 2013 Data}\label{sec:HD100546comp} Comparing the results from the 2006 data discussed in \citet{quanz2011} to our new results, we have to refine some of the findings. In that paper, a hole was seen towards the north, in the direction where a protoplanet candidate was subsequently detected in \citet{quanz2013a}. The damping of the Stokes $U$ vector described in \citet{avenhaus2014} was not taken into account in that earlier work, because it was only realized later when more observations were available. Applying the corrections calculated from the data as described in the appendix of \citet{avenhaus2014}, the hole mostly disappears. There still is a slight depletion in polarized scattered light at the inner rim of the disk, but the interpretation of a large-scale hole in polarized scattered light in the northern direction is not supported by our results seen in Figures \ref{images} and \ref{images2}. The detection of the inner hole at $\sim$14 AU, though, is clearly supported. Also, a clump seen in the north-northwestern direction of the disk still seems to be present; at least, the disk is not smooth in this direction. The position angle determined in \citet{quanz2011} is consistent with the position angle we calculate from the 2013 data using a different technique. The dataset which differs most from the rest is the 2006 $H$ filter data. We do not have a clear explanation for this and cannot completely exclude the possibility that instrumental and data reduction effects cause this. We would deem the 2013 data more reliable, because they have longer integration times and thus better SNR. In addition, the data were taken with a better understanding of the instrument (improved setup). We also checked our calibration w.r.t. Stokes $Q$ and $U$ by rotating the field by 45$^\circ$ in the middle of the $H$ filter observations. The results from this test clearly support our interpretations and the calculations found in the Appendix of \citet{avenhaus2014}. \section{Discussion}\label{sec:HD100546discussion} Measurements of the PA of the disk, which we estimate at 138.2$^\circ$$\pm$3.0$^\circ$, vary strongly and are often not consistent with each other. Measurements found in the literature range from 127$^\circ$$\pm$5$^\circ$ \citep{grady2001, pantin2000} through 145$^\circ$$\pm$5$^\circ$ \citep{ardila2007, panic2012} up to 161$^\circ$$\pm$5$^\circ$ \citep{augereau2001}. Our data only allow us to constrain the inclination of the disk to smaller than $\sim$50$^\circ$.
This is consistent with literature values of 42$^\circ$$\pm$5$^\circ$ \citep{ardila2007}, 50$^\circ$$\pm$5$^\circ$ \citep{pantin2000}, 53$^\circ$$\pm$8$^\circ$ \citep{panic2012}, 49$^\circ$$\pm$4$^\circ$ \citep{grady2001}, 45$^\circ$$\pm$15$^\circ$ \citep{liu2003}, and 51$^\circ$$\pm$3$^\circ$ \citep{augereau2001}. \citet{benisty2010} found a scale height for the surface layer of 12 AU at a distance of 100 AU, which is equivalent to a flaring angle of 7$^\circ$. Our results suggest that the flaring angle varies with radius, both because of the dark lane found in the disk and because of the scattering function which we derive at two different radii. The radius of the inner rim has been estimated from observations in the MIR to be $\sim$13 AU \citep{panic2012, tatulli2011}. \citet{vanderplas2009} and \citet{brittain2009} see a peak in the ro-vibrational CO line emission also at $\sim$13 AU. This is consistent with our measurement from the $H$ and $K_{\rm s}$ filter data of 14$\pm$2 AU for the radius of the inner rim. However, we do not see a gradual increase in scattered light between $\sim$10 AU and $\sim$25 AU as suggested by \citet{mulders2013b} for the MIR emission. Interestingly, \citet{liskowsky2012} suggest an eccentric inner rim of the disk along the semi-major axis with an eccentricity of $0.18^{+0.12}_{-0.11}$. We cannot confirm any eccentricity from our scattered-light NIR data. As discussed, we exclude eccentricities greater than 0.133 at 99.8$\%$ confidence. The time-variable properties of the CO ro-vibrational emission from HD100546 led \citet{brittain2013} to infer the presence of a source of excess CO ro-vibrational emission that orbits the star at a distance of $\sim$13 AU. The position and velocity information obtained through CO spectroastrometry located the excess source at a PA of $-2\pm10$ degrees in 2006 and $102\pm10$ degrees in 2013. That is, in 2013, the inner companion would be located close to the position of the SE peak, and in 2006, the companion would have been located far away. This is suggestive of a connection between the brightness asymmetry seen in our images and the companion, which orbits very close to the inner rim of the disk. If the companion were able to stir up the inner rim, increasing its scale height, this could naturally explain the brighter scattering. While this is a tempting explanation and works well in the $H$ filter, where the asymmetry is much weaker in the 2006 observations, it does not work for the $K_{\rm s}$ filter data, where the asymmetry is also present in 2006, when the companion was far away. This emphasizes the need for multi-color observations, but also makes an interpretation of this asymmetry challenging. Another possible interpretation for the differences between the 2006 and 2013 measurements would be that either the illumination or the structure of the disk changed. The orbital timescale at the position of the inner rim ($\sim$14 AU) is $\sim$34 yr, while it is only $\sim$0.23 yr at 0.5 AU. Changes in the inner disk casting a shadow onto the inner rim of the outer disk could thus be responsible for the detected variations, because they happen on timescales shorter than our $\sim$7 yr baseline. However, we would expect them to influence both the $H$ and $K_{\rm s}$ filter.
A change in the grain properties would also be a possibility, but we do not expect grain properties to change significantly on such short timescales, unless there is an inherent asymmetry in the azimuthal direction which would rotate by $\sim$75$^\circ$ between our observations. We are not able to detect the inner disk. This is not surprising given the fact that we are unable to detect even strong scattering at radii smaller than $\sim$3 AU due to the PSF smearing effects discussed in Section \ref{sec:HD100546_PSF_effects}. The inner disk is less than 0.7 AU and likely even less than 0.3 AU in size \citep{panic2012, mulders2013b}. Even if the disk extends out to $\sim$4 AU as suggested by \citet{tatulli2011}, it is unclear whether we would detect it. The newly detected spiral arm has a direction consistent with the spiral arms seen further out in the disk. It has, however, no obvious connection to any of the spiral arms detected by either \citet{ardila2007} or \citet{boccaletti2013}. It is important to remember that the scattered light traces the surface rather than the mid-plane of the disk, so we do not know whether this spiral arm represents a surface density enhancement or just a feature on the disk surface. ALMA observations tracing the mid-plane of the disk with comparable spatial resolution might be able to help answer this question. The two companions suggested to orbit in this disk \citep{quanz2013a, brittain2013} should have an impact on the disk. While the inner companion seems to be responsible for the gap in the disk, and could be related to the brightness asymmetry of the inner rim, we see no obvious effect of the outer companion. The disk at this position seems to be relatively smooth. We do not see any evidence for a disk gap formed by the planet, although a sufficiently small gap would not be detected with our observations. The gap would have to be significantly smaller than our spatial resolution, though. A causal connection to the spiral arm is possible, but unclear. A connection to the break in the surface brightness profile around 0.5$\arcsec$~is also conceivable, but speculative. \section{Conclusion}\label{conclusion} The data presented in this paper clearly resolve the circumstellar environment of HD100546 close to the star at high SNR. The inner hole is detected with a radius of 14$\pm$2 AU and an inclination of less than $\sim$50$^\circ$. Some of the other disk features are puzzling. The general structure of the disk is well explained by preferentially backscattering grains, making the northeastern side of the disk the far side and the southwestern side the near side. This also gives a natural explanation for the bright spots at the inner rim along the semi-major axis due to the scattering angle of $\sim$90$^\circ$. As a side effect, these scattering peaks allow us to constrain the position angle of the semi-major axis to 138.2$^\circ$$\pm$3.0$^\circ$. We emphasize that the error given here does not include possible systematic errors which could arise from intrinsic differences between the northwestern and southeastern side of the inner rim, but we expect such an error to be small (on the order of the statistical error or smaller). The disk hole towards the north detected by \citet{quanz2011} is not confirmed with our new data.
It seems to be an artifact of the diminished flux in the Stokes $U$ vector, which we could correct for in this paper. The dark lane in the near side of the disk is likely an effect of the polarized scattering function of the grains. The scattering function has a broad minimum at $\sim$60$^\circ$. This, together with the differences in the scattering function derived at two different radii, supports the interpretation of a flaring angle increasing with radius in the disk. To understand the dust scattering in detail, we would need a complete, self-consistent radiative transfer model of the disk, from which artificial PDI images could be produced. Such an analysis is beyond the scope of this paper and is left for future investigations. Another unexplained phenomenon is the brightness asymmetry of the disk rim, with the southeast side being significantly brighter in the $H$, $K_{\rm s}$ and $L'$ filter. The asymmetry cannot be caused by an ellipticity of the inner rim, but could be related to the companion orbiting within this rim. This connection, however, is speculative. It would be consistent with the fact that the inner companion should be close to the bright spot in early 2013 \citep{brittain2013}, but the asymmetry has been detected in the $K_{\rm s}$ band in 2006 as well. The newly detected spiral arm could also have its origin in the companion(s), but again, this is unclear. We do not find a connection with any of the spiral arms detected by other authors. ALMA observations at similar resolution would help answer the questions about the nature of this feature, and would also allow us to determine whether the spiral arm is a surface feature of the disk or whether it is also present in the surface density. \acknowledgments This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. We thank the staff at VLT for their excellent support during the observations. This work is supported by the Swiss National Science Foundation. Basic research in infrared astronomy at the Naval Research Laboratory is supported by 6.1 base funding. S.D.B. acknowledges support for this work from the National Science Foundation under grant number AST-0954811. {\it Facilities:} \facility{VLT:Yepun (NACO)} \bibliographystyle{apj.bst}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Describing condensed matter phases using entanglement quantifiers from quantum information theory is a rapidly growing interdisciplinary topic \cite{AmicoFazioOsterlohVedral_RMP08, EisertRev}. Of special interest are the relatively rare cases where entanglement can provide information not readily captured by conventional quantities such as correlation functions. This is the situation for systems possessing \emph{topological order} \cite{top-order_various}, where entanglement considerations have proven useful \cite{KitaevPreskill_PRL06, LevinWen_PRL06, LiHaldane_PRL08}. In particular this has led to insight into the structure of fractional quantum Hall (FQH) states \cite{HaqueZozulyaSchoutens, HaqueZozulyaSchoutens2, LiHaldane_PRL08, Fradkin_top-entropy-inChernSimons_JHEP08, ZozulyaHaqueRegnault, RegnaultBernevigHaldane_PRL09,MorrisFeder,Lauchli,ronny}. In this Article we focus on entanglement in this most realistic class of topologically ordered states. A prominent measure of entanglement is the von Neumann entropy of entanglement, $S_A$, measuring the entanglement between a block ($A$) and the rest ($B$) of a many-particle system in a pure state. The entanglement entropy $S_A = -\tr\left[\rho_A\ln\rho_A\right]$ is defined in terms of the reduced density matrix, $\rho_A = \tr_B\rho$, obtained by tracing out $B$ degrees of freedom from the system density matrix $\rho=|\psi\rangle\langle\psi|$, with $|\psi\rangle$ denoting a ground state wave function. In one dimension the scaling behavior of the block entanglement entropy is well understood, see, {\it e.g.}, Ref.~\cite{EisertRev}. In two dimensions (2D), no such generic classification exists. However, for topologically ordered states in two dimensions, the entanglement entropy contains topological information about the state: $S_A$ scales as \begin{equation}\label{eq_sa} S_A = \alpha{L} - n \gamma +\mathcal{O}(1/L), \end{equation} where $L$ is the block boundary length, $\gamma$ characterizes the topological field theory describing the state \cite{KitaevPreskill_PRL06,LevinWen_PRL06}, while $n$ counts the number of disconnected components of the boundary. The value of $\gamma$ is related to the ``quantum dimensions'' of the quasiparticle types of the theory, $\gamma = \ln\mathcal{D}$, where $\mathcal{D}$ is the total quantum dimension. For Laughlin states at filling $\nu=1/m$, $\gamma = \frac{1}{2}\ln{m}$. For more intricate FQH states, some examples of $\gamma$ values are provided in Refs.\ \cite{KitaevPreskill_PRL06, FendleyFisherNayak_JStatPhys07, HaqueZozulyaSchoutens2}. If $\gamma$ can be determined accurately, its value can in principle be used to determine whether a topological phase belongs to the universality class of a given topological field theory. A numerical determination of $\gamma$ and $\alpha$ requires information about $S_A$ for a number of different boundary lengths, $L$. In Ref.~\cite{HaqueZozulyaSchoutens} such information was obtained from finite-size FQH wavefunctions by approximating the spatial partitioning by partitioning of the discrete set of Landau level orbitals. Refs.~\cite{HaqueZozulyaSchoutens,HaqueZozulyaSchoutens2,ZozulyaHaqueRegnault} used spherical geometries, and explored ways of extrapolating entanglement information from such geometries to the thermodynamic limit. Ref.~\cite{MorrisFeder} used disk geometries to calculate $\gamma$ for bosonic FQH wavefunctions.
The accuracy in the determination of $\gamma$ from finite-size wavefunctions on these geometries remains disappointing (10\%--30\% for the simplest Laughlin states). Improved methods for calculating $\gamma$ are thus sorely needed. In this work, we report a significant advance in this direction, through the use of the torus geometry and the fact that the aspect ratio (circumference) of the torus can be varied continuously without drastically altering the torus setup or symmetry. Varying the circumference changes the length of the boundary between $A$ and $B$. No natural analogous continuous parameter exists in the other geometries, so that in those cases each system size and bipartition provides only one point in parameter space. Exploiting the continuous parameter, we present a procedure that leads to an accuracy in $\gamma$ down to a few percent. Our analysis also provides a visual and physical indication of the reliability of the extracted $\gamma$ value. Previously, Ref.~\cite{friedman} reported a na\"ive modification of the sphere algorithm of Ref.~\cite{HaqueZozulyaSchoutens} to the torus geometry with fixed aspect ratio. We will show why this analysis was inappropriate and based on an extrapolation procedure that is not meaningful for the torus geometry. In addition to the topological content of the subleading term in (\ref{eq_sa}), the dominant linear term itself is also of some importance. The rate of entanglement growth, $\alpha$, indicates how challenging the state is to simulate on a classical computer, through a one-dimensional algorithm like DMRG \cite{White,dmrgrev}, or through recently-proposed true two-dimensional algorithms like PEPS \cite{PEPS} or MERA \cite{MERA}. DMRG has been used to simulate FQH states \cite{Shibata,bk03,feiguin,kovrizhin}, and these states pose a future challenge for two-dimensional algorithms currently under development. The calculation of the topological entanglement entropy $\gamma$ is of significant current interest, not only for FQH states but also for various other topologically ordered states. For the zero-temperature Kitaev model, it is relatively easy to calculate $\gamma$; so the concept has been used in exploring issues such as temperature effects and quantum phase transitions \cite{CastelnovoChamon_PRB07, CastelnovoChamon_topol-QPT_PRB08, Hamma-etal_KitaevQPT_PRB08, IblisdirPerezGarciaAguadoPachos_PRB09}. For quantum dimer models and related states, considerations of entanglement scaling are more intricate \cite{FurukawaMisguich_PRB07,PapanikolaouRamanFradkin_PRB07}, of difficulty comparable to FQH states. More generally, entanglement scaling in 2D states of all kinds has become the focus of intense study at present \cite{entanglement_2Dscaling, EisertRev,integer}. In Section \ref{setup} we show how the torus geometry allows us to map the interacting Landau level (LL) problem onto a one-dimensional lattice model, appropriate for studying bipartite entanglement. In Section \ref{sec_lA-dependence_degeneracy} we outline the general behavior of entanglement entropy on the torus geometry and deal with the issue of torus degeneracies. In Section \ref{results} we present our main results, including an analysis leading to the determination of $\gamma$. The concluding Section \ref{discussion} connects to the existing literature and discusses implications of our results. 
\section{Torus setup -- geometry, partitioning}\label{setup} We study an $N$-electron system on a torus (see Fig.~\ref{fig:torus_geometry}) with periods $L_1, L_2$ in the $x$- and $y$-directions, satisfying $L_1L_2=2\pi N_s$ in units of the magnetic length. The integer $N_s=N/\nu$ is the number of magnetic flux quanta. In the Landau gauge, $\mathbf{A}=By\mathbf{\hat{x}}$, a basis of single particle states in the lowest Landau level can be taken as \begin{equation}\psi_{j}=\pi^{-1/4}L_1^{-1/2}\sum_{m} e^{i(\frac{2\pi}{L_1}j+mL_2)x} e^{-(y+\frac{2\pi}{L_1}j+mL_2)^2/2}\label{psi}\end{equation} with $j=0,1,...,N_s-1$. The states $\psi_j$ are centered along the lines $y=-2\pi j/L_1$. Thus the $y$-position is given by the $x$-momentum $j$. \begin{figure}[ht] \centerline{ \includegraphics*[width=0.9\textwidth]{gammasetupAsym2.pdf}} \caption{ \label{fig:torus_geometry} {\bf Geometry of the torus and bipartitioning}. The lowest Landau level is spanned by orbitals which in the Landau gauge are centered along the circles shown. On the right, we represent the torus as a rectangular region with periodic boundary conditions in both directions. The dimensions of this rectangle ($L_1$, $L_2$) are the two circumferences of the torus. The example shown here has $N_s=12$ orbitals with $l_A=4$ orbitals in the $A$ block. } \end{figure} A generic translation-invariant two-body interaction Hamiltonian, acting within a Landau level, can be written as \begin{equation} \label{ham} H =\sum_n \sum_{k > |m|} V_{km}c^\dagger_{n+m}c^\dagger_{n+k}c_{n+m+k}c_n \ \ , \end{equation} where $c^\dagger_m$ creates an electron in the state $\psi_m$ and $V_{km}$ is the amplitude for two particles to hop symmetrically from separation $k+m$ to $k-m$ \cite{bk}. Hence, the problem of interacting electrons in a Landau level maps onto a one-dimensional, center-of-mass-conserving lattice model with lattice constant $2\pi/ L_1$. This provides a natural setting for defining entanglement, by bipartitioning the system into blocks $A$ and $B$, which consist respectively of $l_A$ consecutive orbitals and the remaining $l_B=N_s-l_A$ orbitals (Fig.~\ref{fig:torus_geometry}). Since the orbitals are localized in the direction of the lattice, this is a reasonable approximation to spatial partitioning, as on the sphere \cite{HaqueZozulyaSchoutens,HaqueZozulyaSchoutens2,LiHaldane_PRL08,ZozulyaHaqueRegnault,RegnaultBernevigHaldane_PRL09}. Because this partitioning implies two disjoint edges between the blocks, each of length $L_1$, the entanglement entropy should satisfy the following specific form of (\ref{eq_sa}): \begin{equation}\label{satorus} S_A(L_1) =2\alpha{L_1} -2\gamma +\mathcal{O}(1/L_1). \end{equation} Thus our setup should yield a linear scaling form of the entropy with the $L_1=0$ intercept at $-2\gamma$. In this work, we obtain ground states of (\ref{ham}) in the orbital basis (\ref{psi}), using the Lanczos algorithm for numerical diagonalization. We study bipartite entanglement in these ground states. Apart from diagonalizing the Coulomb problem, we also consider pseudopotential interactions \cite{haldane83,Trugman-K} which have the Laughlin states \cite{laughlin83} as exact ground states. The largest Hilbert space sizes considered are 208,267,320 for 39 orbitals at $\nu=1/3$, 19,692,535 for 45 orbitals at $\nu=1/5$ and 66,284,555 for 35 orbitals at $\nu=2/5$. The simulations are, however, currently limited by the size of the reduced density matrices that have to be calculated and fully diagonalized.
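To make the orbital bipartition explicit, a minimal sketch of the entropy computation is given below: the amplitudes of a state in the occupation basis are grouped into a matrix indexed by the occupation patterns of blocks $A$ and $B$, and the squared singular values of this matrix are the eigenvalues of $\rho_A$. The toy input (an equal-weight superposition of the three $\nu=1/3$ thin-torus patterns on six sites, introduced in the next section) is for illustration only and is not one of the physical ground states analyzed in this paper.
\begin{verbatim}
import numpy as np

def entanglement_entropy(amplitudes, l_a):
    # amplitudes: {occupation tuple: amplitude} in the orbital basis.
    # Build the matrix M[a, b], with a (b) labeling the occupation
    # pattern of the first l_a orbitals (of the remaining orbitals).
    idx_a, idx_b, entries = {}, {}, []
    for occ, amp in amplitudes.items():
        a, b = occ[:l_a], occ[l_a:]
        entries.append((idx_a.setdefault(a, len(idx_a)),
                        idx_b.setdefault(b, len(idx_b)), amp))
    m = np.zeros((len(idx_a), len(idx_b)))
    for ia, ib, amp in entries:
        m[ia, ib] = amp
    # Schmidt decomposition: the eigenvalues of rho_A are the squared
    # singular values of M.
    p = np.linalg.svd(m, compute_uv=False) ** 2
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

# Toy check: equal superposition of the three nu=1/3 thin-torus patterns
# on N_s = 6 sites, cut into l_A = 3 plus l_B = 3 orbitals.
patterns = [(1,0,0,1,0,0), (0,1,0,0,1,0), (0,0,1,0,0,1)]
state = {pat: 1/np.sqrt(3) for pat in patterns}
print(entanglement_entropy(state, l_a=3))   # ln(3) ~ 1.0986
\end{verbatim}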
Fractional quantum Hall states have degenerate ground states on the torus geometry. It is convenient to label the ground states by their corresponding thin torus (or Tao-Thouless, TT) patterns \cite{bk,seidel,anderson,tt,RH94}. For example, for $\nu=1/3$ there are three degenerate states, which correspond to the TT configurations \begin{eqnarray} 100100100\Big{|}100100100100100100\Big{|}100100100\nonumber\\ 010010010\Big{|}010010010010010010\Big{|}010010010\nonumber\\ 001001001\Big{|}\!\underbrace{001001001001001001}_{A}\!\Big{|}001001001 \ ,\label{ttonethird} \end{eqnarray} for $N_s=36$. Here the positions of $1$'s indicate the positions (or, equivalently, the transverse momenta) of filled single particle states. An equal partitioning ($l_A=l_B=N_s/2$) is illustrated. In general, abelian FQH states at $\nu=p/q$ have $q$ degenerate ground states, related to each other through translation and corresponding to $q$ thin-torus patterns, each composed of unit cells with $p$ electrons on $q$ sites. These states are ground states for generic (two-body) interactions as $L_1\rightarrow 0$ \cite{bk}. For non-abelian states there is an enhanced degeneracy and the corresponding thin torus patterns are not simply translations of each other. The thin torus states are unentangled product states in the orbital basis. As $L_1$ is increased from zero, fluctuations on top of the TT states will make the states entangled. A crucial property of the FQH states is that their bulk versions are, for appropriate interactions, adiabatically connected to their respective TT states, with the gap remaining finite for all $L_1$ \cite{bk,seidel}. This allows us to probe the response of these states as the geometry is deformed. Such deformations have also been considered to extract properties such as the Hall viscosity \cite{read09,haldane09}, to place consistency conditions on FQH states \cite{seidel10}, to find instabilities to competing states (see, {\it e.g.}, \cite{cdw,Yang}) and to deform the torus to the solvable thin torus limit as discussed above. All three fractions studied in this paper ($\nu=1/3$, $1/5$, and $2/5$) are, for the pseudopotential as well as the Coulomb interactions, continuously connected to their TT states. \begin{figure}[ht] \centerline{ \includegraphics*[width=0.9\linewidth]{SectorAveragingEtAl.pdf} } \caption{ {\bf Degeneracy averaging and $l_A$ dependence}. (a) Entanglement entropy in different degenerate sectors for the $\nu=1/3$ Laughlin state, and their arithmetic average. (b) Difference between degeneracy-averaged $S_A$ and largest individual-sector $S_A$, as a function of $L_1$. They differ significantly only for intermediate $L_1$. (c) Degeneracy-averaged $S_A$ versus $l_A$, for different $L_1$ values. \label{fig:averaged}} \end{figure} \section{Degeneracy averaging, Area law at constant $L_1$} \label{sec_lA-dependence_degeneracy} For any finite $L_1$ the charge density modulations of the TT pattern will prevail to some extent, leading to different entanglement in the $q$ degenerate ground states. We illustrate this in Fig.~\ref{fig:averaged}(a) where we plot the entanglement entropy $S_A(l_A)$ as a function of $l_A$ in the three degenerate $\nu=1/3$ Laughlin wave functions. For each $l_A$ two out of the three entanglement entropies are equal, while the third one is different, as can be inferred from examining the partitionings shown in \Eref{ttonethird}. Two of the TT patterns have 1-0 and 0-1 cuts at the block boundaries, while the third has only 0-0 cuts.
Since the microscopic environment at the two boundaries determines the entanglement spectrum \cite{Lauchli} and hence the entanglement entropy, this implies that two of the entanglement entropies are equal at each $l_A$. The individual $S_A(l_A)$ curves each show prominent oscillatory behavior. However, we find that the \emph{arithmetic mean} of the three individual entanglement entropies is remarkably free of oscillations. We will thus base all our following discussions on the degeneracy-averaged entropy \footnote{ In principle the averaging could be done in other ways; for example, one could average over the density matrices $\rho\rightarrow \sum_{i=1}^{N_{GS}}\rho^{(i)}/N_{GS}$ or the reduced density matrices, and then compute the entanglement entropy from this averaged matrix. We do not however pursue such alternate averaging procedures in the present work.}. Ultimately this averaging will become unimportant for very large $L_1$, as Fig.~\ref{fig:averaged}(b) shows that the difference $\Delta S$ between the maximum and the mean entanglement entropies in the $q$ sectors vanishes rapidly at large $L_1$ (starting to decrease after $L_1 \sim 10$ in the Laughlin $\nu=1/3$ case shown). In Fig.~\ref{fig:averaged}(c) we study the $l_A$ and $N_s$ dependence for a given $L_1$. At constant $L_1$ we expect the entropy to saturate once $l_A$ is large enough, since the block boundary length ($2 L_1$) is held constant. This is indeed what is found numerically in Fig.~\ref{fig:averaged}(c). The length scale controlling the saturation is the (real space) correlation length $\xi_r$ in the $y$ direction of the incompressible FQH liquid. The correlation length $\xi_o$ measured in number of orbitals is expected to scale as $\xi_o \sim \xi_r \times L_1/2\pi$. The saturation of the entanglement entropy for large $l_A$ is in complete analogy to the area law for one-dimensional gapped systems~\cite{VidalLatorreRicoKitaev_PRL03, hastings07}. It is this saturation value $S_A(L_1)$ of the entanglement entropy, obtained for $l_A\gg \xi_o$, that will be analyzed in the following. To avoid finite-size effects as far as possible, we consider $l_A=N_s/2$ for $N_s$ even, and $l_A=(N_s-1)/2$ for $N_s$ odd, in the rest of this article. From Fig.~\ref{fig:averaged}(c) we can again infer that the averaged $S_A$ indeed has a much smoother dependence on the block size compared to the $S_A$ for the individual degenerate states. \section{Accessing the scaling regime} \label{results} \begin{figure}[t] \centerline{ \includegraphics*[width=0.8\linewidth]{EntropyDerivative_p_1_q_3_Laughlin.pdf} } \caption{ {\bf $\nu=1/3$ Laughlin state: Entanglement entropy.} (a) $S_A$ and (b) its derivative $\mathrm{d}S_A/\mathrm{d}L_1$ for the Laughlin state at $\nu=1/3$ as a function of $L_1$. From the plateau behavior in $\mathrm{d}S_A/\mathrm{d}L_1$ for $L_1\gtrsim 12$ we infer $\alpha \approx 0.153(2)$. \label{fig:ee_onethird_laughlin} } \end{figure} In this Section we provide our main numerical results and discuss how $\alpha$ and $\gamma$ can be extracted by continuously varying $L_1$. As mentioned above, from now on $S_A$ refers to the equal-partitioning entanglement entropy ($l_A=N_s/2$ for $N_s$ even, and $l_A=(N_s-1)/2$ for $N_s$ odd). \paragraph{$\nu=1/3$ Laughlin state ---} Fig.~\ref{fig:ee_onethird_laughlin} shows the behavior of $S_A(L_1)$ (a) and its derivative $dS_A/dL_1$ (b) for the Laughlin state at fraction $\nu=1/3$, arguably the most prominent and also the simplest FQH state.
In this figure and subsequently, we use a five-point formula to numerically obtain derivatives. The entanglement entropy $S_A(L_1)$ remains minuscule until $L_1\sim3$, and then gradually crosses over to the expected linear increase, which is reached around $L_1\sim 7$. There are oscillations on top of the linear behavior, which are more prominent in the derivative plot (b). The oscillations can be interpreted as an interplay between the finite circumference along the $x$ direction (finite $L_1$) and the interparticle distance. The oscillations die off as a function of $L_1$, so that if $N_s$ is large enough one can get the scaling form at large $L_1$. For small $L_1$ the finite-size convergence is essentially perfect. At larger $L_1$, the $S_A(L_1)$ and $dS_A/dL_1$ curves show stronger dependence on $N_s$. The $N_s$-dependence shows up first for the smallest system sizes and at increasing $L_1$ for progressively larger system sizes. This reflects the fact that, for any finite-size system, at very large $L_1$ the edges of $A$ get too close (small $L_2$) and cannot be thought of as independent \cite{Lauchli}. In particular, once $L_1$ exceeds some value we enter the ``dual thin torus'' or ``thick torus'' limit \cite{RH94,fat,seidel10}, and the entanglement entropy levels off to some saturation value. Corresponding to the saturation of $S_A(L_1)$, the derivative $dS_A/dL_1$ drops off to zero after some $L_1$. Thus, the scaling form of (\ref{satorus}) is valid only in a window of $L_1$, after the oscillations have subsided but before $S_A(L_1)$ saturates, or shows other precursor finite-size effects. This plateau region can be seen clearly in the $dS_A/dL_1$ curve for the $\nu=1/3$ Laughlin data for $L_1 \gtrsim 12$. The finite-size convergence of the data also provides a clear signal showing whether the bulk scaling regime is reached or whether (geometrical) finite-size effects are still significant in the numerically accessible $L_1$ regime. \begin{figure}[t] \centerline{ \includegraphics*[width=0.8\linewidth]{EntropyDerivative_p_1_q_3_Coulomb.pdf} } \caption{ {\bf $\nu=1/3$ Coulomb ground state: Entanglement entropy.} (a) $S_A$ and (b) its derivative $\mathrm{d}S_A/\mathrm{d}L_1$ for the Coulomb ground state at $\nu=1/3$ as a function of $L_1$. One curve for the Laughlin state is also shown for comparison (dashed line). In the inset (c) the difference in $S_A(L_1)$ between the Coulomb and the Laughlin is displayed (for $N_s=30, 36$). \label{fig:ee_onethird_coulomb} } \end{figure} \paragraph{$\nu=1/3$ Coulomb ground state ---} Fig.\ \ref{fig:ee_onethird_coulomb} plots (a) $S_A(L_1)$ and (b) $dS_A/dL_1$ for the Coulomb ground states at $\nu=1/3$. While this state has somewhat more severe finite-size effects and oscillatory behavior compared to the model Laughlin state of Fig.~\ref{fig:ee_onethird_laughlin}, we note that the scaling forms of their entanglement entropies are very similar. To further highlight this fact we plot the difference between the entanglement entropies as a function of $L_1$ in the inset of Fig.~\ref{fig:ee_onethird_coulomb}(c). The similarity of entanglement entropies of the two states is not unexpected from the perspective that the states have a large overlap for all $L_1$ \cite{Yang}, but it is nevertheless interesting considering that a more ``generic'' state such as the Coulomb state is expected to have larger entanglement.
One could thus have expected the Coulomb state to have a larger $\alpha$, as defined in \Eref{eq_sa}, but Fig.~\ref{fig:ee_onethird_coulomb}(b) suggests a very similar $\alpha \approx 0.15(1)$. \begin{figure}[t] \centerline{ \includegraphics*[width=1\linewidth]{EntropyDerivative_p_1_q_5.pdf} } \caption{ {\bf $\nu=1/5$ Laughlin vs Coulomb:} Entanglement entropy (a) and its derivative (b) for the $\nu=1/5$ Laughlin wave functions. Entanglement entropy (c) and its derivative (d) for the $\nu=1/5$ Coulomb ground state. \label{fig:S_dSdL_one_fifth} } \end{figure} \paragraph{$\nu=1/5$ ---} Fig.~\ref{fig:S_dSdL_one_fifth} shows the $S_A(L_1)$ and $dS_A/dL_1$ behaviors at $\nu=1/5$, for both the Laughlin (a) \& (b) and the Coulomb ground state (c) \& (d). The finite-size oscillations are much more severe in these states, as expected since the interparticle distance is larger; thus larger systems are required to reach the scaling regime. Moreover, the proximity to the Wigner crystal phase makes the Coulomb ground state deviate more substantially from the Laughlin state than is the case at $\nu=1/3$ \cite{lam,Yang}. While we are able to get an almost $N_s$-converged $S_A(L_1)$ curve up to $L_1\sim 18$ for the Laughlin state [leading to a rough estimate of $\alpha\approx 0.17(2)$], the finite-size effects in the Coulomb ground state are so severe that no meaningful extraction of $\alpha$ is possible with current system sizes. \paragraph{Extraction of the topological entanglement entropy $\gamma$ --- } In Fig.~\ref{fig:gamma_panel} we show calculations of $\gamma$ for the Laughlin state at $\nu=1/3$ (a) and $\nu=1/5$ (c) as well as for the Coulomb ground state at the same fractions (b) \& (d). Evaluating the $L_1$ derivative using a centered 5-point formula, we plot $S_A(L_1)-L_1\times dS/dL_1$ as a function of $L_1$. This quantity is the intercept of a linear approximation made to the $S_A(L_1)$ curve locally at each $L_1$. It should take the value $-2\gamma$ in the scaling region, see \Eref{satorus}. Not surprisingly, the intercept oscillates at intermediate $L_1$, has a plateau in the ``scaling window'' described above, and then moves off to a large positive value when $L_1$ is yet larger, entering the thick-torus regime. The plateau region value gives us the best estimate for the topological entanglement entropy. A significant advantage of our analysis is that, by examining the $dS_A/dL_1$ curve (and its $N_s$ dependence), we can identify the correct window of $L_1$ values to use. \begin{figure}[t] \centerline{ \includegraphics*[width=0.9\linewidth]{Gamma_5pt_p_1_q_3_5.pdf} } \caption{ {\bf $L_1$-local extraction of $\gamma$}. The intercept of local linear approximations to the $S_A(L_1)$ curves, \emph{i.e.}, $S_A(L_1)-L_1\times dS/dL_1$, plotted as a function of $L_1$. In the scaling regime, this quantity should give $-2\gamma$. The symbols for $\nu=1/3$ ($\nu=1/5$) are the same as in Fig.~\protect{\ref{fig:ee_onethird_coulomb}}~(Fig.~\protect{\ref{fig:S_dSdL_one_fifth}}). Theoretically expected $-2\gamma$ values are shown as dashed horizontal lines. In panel (c) the solid line through the largest size data is the fit obtained using \Eref{eqn:damped_fit}. \label{fig:gamma_panel} } \end{figure} In Fig.~\ref{fig:gamma_panel}(a) the $\nu=1/3$ Laughlin shows such a clear plateau region.
The plateau region value around $L_1 \sim 18$ gives us the best estimate for the topological entanglement entropy, $\gamma\approx 0.565(5)$, to be compared to the theoretical expectation $\gamma=\ln(3)/2\approx 0.5493$. The difference amounts to only 3 percent in this ideal case. For the $\nu=1/5$ Laughlin [Fig.~\ref{fig:gamma_panel}(c)], the finite-size issues are significantly larger, and the oscillations have not yet damped out at accessible sizes. However, one can take the average of the oscillating values to get a reasonable estimate of $\gamma$. We use a simple damped-oscillation fitting ansatz of the form: \begin{equation} f(L_1) = -2\gamma + a \times \exp[-b L_1] \times \sin (c L_1- d), \label{eqn:damped_fit} \end{equation} and fit the $N_s=45$ curve for $L_1 > 9$, yielding an estimate of $\gamma\approx 0.81$. This value again compares very favorably to the theoretical expectation $\gamma=\ln(5)/2\approx 0.8047$. At each of these fractions, the finite-size convergence is worse for the Coulomb ground state compared to the Laughlin ground state. While for the $\nu=1/3$ Coulomb state [Fig.~\ref{fig:gamma_panel}(b)] a fitting analysis along the lines of the $\nu=1/5$ Laughlin case still provides a reasonable estimate of $\gamma \approx 0.60$, the $\nu=1/5$ Coulomb state [Fig.~\ref{fig:gamma_panel}(d)] clearly does not allow a meaningful $\gamma$ extraction from system sizes presently reachable through numerical exact diagonalization. \begin{figure}[t] \centerline{ \includegraphics*[width=0.8\linewidth]{EntropyDerivativeGamma_p_2_q_5_Coulomb.pdf} } \caption{ {\bf $\nu=2/5$ Coulomb ground state.} (a) $S_A$ and (b) its derivative $\mathrm{d}S_A/\mathrm{d}L_1$ for the Coulomb ground state at $\nu=2/5$ as a function of $L_1$. (c) Estimation of $-2\gamma$ through a plot of the $L_1$-local intercept against $L_1$. \label{fig:twofifth} } \end{figure} \paragraph{$\nu=2/5$ Coulomb ground state ---} Finally, in Figure \ref{fig:twofifth} we consider the Coulomb ground state at filling $\nu=2/5$, whose $\gamma$ value has not been studied numerically so far. This state is, for all $L_1$, well described by the torus version \cite{torus} of the Jain \cite{jain89} (or, equivalently, the hierarchy \cite{haldane83}) state. The finite-size effects are somewhat less severe than in the $\nu=1/5$ Coulomb case [Figs.~\ref{fig:S_dSdL_one_fifth}(c,d) and \ref{fig:gamma_panel}(d)]. One obtains an entanglement growth rate of $\alpha\approx 0.188(16)$. While the $N_s$-convergence is not good enough for a precise determination of $\gamma$ (expected to be $\frac{1}{2}\ln5$), examination of the largest two available sizes suggests that two or three additional sizes may be enough to provide an estimate at the $\sim$10\% accuracy level. \section{Discussion}\label{discussion} In this article we have shown how continuous geometric deformations of the torus can be employed to explore the scaling form of the entanglement entropy. This has allowed us to propose a method for determining the topological part, $\gamma$, from finite-size wavefunctions, to greater precision compared to earlier analyses which did not utilize any continuous parameter. Our analysis indicates that current state-of-the-art system sizes are enough to obtain reliable $\gamma$ calculations for the simplest fractional quantum Hall states (Laughlin states), but that more intricate states would require larger sizes than currently accessible, in order to reach the scaling limit.
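As a concrete illustration of this extraction chain (five-point numerical derivative, the local intercept $S_A - L_1\, dS_A/dL_1$, and the damped-oscillation fit of \Eref{eqn:damped_fit}), a minimal sketch on synthetic data is shown below; the input curve is constructed by hand around the $\nu=1/3$ values quoted above and is not our numerical data.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def five_point_derivative(y, h):
    # Centered five-point stencil; valid for grid points 2..len(y)-3.
    return (y[:-4] - 8*y[1:-3] + 8*y[3:-1] - y[4:]) / (12*h)

def ansatz(L1, gamma, a, b, c, d):
    # Damped-oscillation fitting ansatz for the local intercept.
    return -2*gamma + a * np.exp(-b*L1) * np.sin(c*L1 - d)

# Synthetic S_A(L_1), built around alpha ~ 0.153 and gamma = ln(3)/2:
h = 0.1
L1 = np.arange(8.0, 20.0, h)
S = 2*0.153*L1 - np.log(3) + 0.05*np.exp(-0.3*L1)*np.sin(2.0*L1 - 1.0)

dS = five_point_derivative(S, h)
Lc = L1[2:-2]
intercept = S[2:-2] - Lc * dS          # local estimate of -2*gamma

popt, _ = curve_fit(ansatz, Lc, intercept, p0=[0.55, 0.1, 0.3, 2.0, 1.0])
print("gamma =", popt[0])              # ~ ln(3)/2 = 0.5493
\end{verbatim}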
Our procedure provides a clear method for identifying whether the scaling window has been reached or not. There has been an earlier report of entanglement entropy and $\gamma$ calculations on the torus \cite{friedman}, using fixed aspect ratio, $L_1/L_2=1$. Ref.~\cite{friedman} performed $N_s\rightarrow\infty$ extrapolations at fixed $l_A$, and expected the extrapolated values to scale as $c_1\sqrt{l_A}-2\gamma$. We illustrate such a fixed $l_A$ extrapolation in Fig.~\ref{fig:FriedmanLevineScaling}. The extrapolation does not lead to a physically meaningful limit because the boundary lengths diverge and the two boundaries get infinitesimally close to each other in the $N_s\rightarrow\infty$ limit. \begin{figure}[ht] \centerline{ \includegraphics*[width=0.9\linewidth]{blockscaling.pdf} } \caption{ {\bf Extrapolation at fixed aspect ratio and fixed $l_A$}. Fixed $l_A$ implies that the area covered by the region $A$ is constant. Such extrapolation does not lead to a well-defined limit as the limiting case is one with infinitesimal thickness and infinite boundary length. \label{fig:FriedmanLevineScaling} } \end{figure} The extrapolation of Ref.\ \cite{friedman} arose from an incorrect adaptation of the procedure of Ref.\ \cite{HaqueZozulyaSchoutens}, which was designed for the sphere geometry. For the spherical case, the fixed-$l_A$ extrapolation takes one to the well-defined limit of an infinite disk with circular $A$ region, where the boundary length indeed scales as $\sim\sqrt{l_A}$. Also for the disk \cite{MorrisFeder}, the fixed-$l_A$ extrapolation to $N\rightarrow\infty$ is well-defined as the limiting situation is again an infinite disk with constant density. (The density is not constant in finite-size disk simulations.) However, for the torus with unit aspect ratio, the fixed-$l_A$ extrapolation to $N\rightarrow\infty$ has a pathological limit for the shape of the $A$ block, and the limiting entanglement entropy has no reason to scale as $\sim\sqrt{l_A}$. A plot of the extrapolated $S_A$ versus $\sqrt{l_A}$ thus has no obvious connection to the entropy scaling as a function of the boundary length, or to the definition of $\gamma$ as formulated in Refs.\ \cite{KitaevPreskill_PRL06,LevinWen_PRL06}. The idea of obtaining entanglement entropy scaling through varying discrete torus circumferences has been employed in Ref.\ \cite{FurukawaMisguich_PRB07} for the dimer model on the triangular lattice. The details are quite different from the FQH case. It is possible that some of the ideas developed here for the FQH context might be transferred fruitfully to numerical work on dimer models or other lattice models. Ref.\ \cite{integer} has studied entanglement of \emph{integer} quantum Hall states on a torus, between true spatial partitions rather than orbital partitions. The orbital partitioning entanglement is zero for integer quantum Hall states because they are product states in the orbital basis. Since Refs.\ \cite{HaqueZozulyaSchoutens, HaqueZozulyaSchoutens2} reported entanglements of the same Laughlin states on a different geometry, it is interesting to compare the magnitudes of the entanglement entropy. The data tabulated in Ref.\ \cite{HaqueZozulyaSchoutens2} for the $\nu=1/3$ state yield a value of $\alpha\sim0.15$, which is close to that obtained from the torus entanglement data reported in this work. While this is not unexpected, it has several instructive implications.
First, it can be regarded as additional evidence that orbital partitioning entanglement is a good approximation to spatial partitioning entanglement. Second, it shows that the entanglement entropy contributions from the two edges simply add for a block with two edges compared to one edge. In addition, the fact that the sphere and the torus have the same ``entanglement entropy density'' per unit boundary length allows us to compare the difficulty of DMRG simulations on spherical and toroidal geometries based on considerations of linear sizes alone. Conventional wisdom might be that the torus is significantly more difficult to treat using DMRG, because it is a system with periodic boundary conditions while the sphere is more analogous to a lattice system with open boundary conditions. However, the torus has the same block boundary length everywhere ($2L_1 = 2\sqrt{2{\pi}N_s}$ for the hardest case of unit aspect ratio), while the block boundary on a sphere varies. For DMRG on a spherical geometry, the dominant contribution to the entanglement entropy comes from the equator region, where the block boundary is $L=\pi \sqrt{2 N_s}$. The torus boundary is only a factor $2/\sqrt{\pi}\approx1.13$ larger than the sphere case, rather than being twice as large. Moreover, with two edges one benefits twice from the (negative) topological contribution to the entanglement entropy (in contrast to a single contribution on the sphere). We therefore infer that DMRG on the torus geometry is not as drastically more difficult than the sphere case as would be suggested by the argument of two block boundaries versus one. The entanglement entropy $S$ in a bipartition of a quantum state puts a lower bound on the number of states $m$ to be kept in an accurate DMRG simulation through the relation $m \sim \exp(S)$. The computational complexity of DMRG being polynomial in $m$, the scaling of $S$ with system geometry is thus of primary importance. In the simplest cases, {\it e.g.}, one-dimensional gapped quantum systems with local interactions, the entropy $S$ does not depend on the block length, enabling DMRG simulations for basically infinite systems at constant $m$. Torus FQH simulations at {\em constant} $L_1$~\cite{bk03} also belong to this tractable class. If one is, however, interested in describing true bulk FQH systems at fixed aspect ratio ($L_1/L_2=\mathrm{const}$), then the entropy $S$ will scale linearly with $L_1 \propto \sqrt{N_s}$. This translates into $m \propto \exp[ \mathrm{const} \times \sqrt{N_s}]$, i.e., accurate DMRG simulations for bulk FQH states scale exponentially in the physical width, similar to 2D lattice models~\cite{dmrgrev}. The present work opens up a number of directions deserving exploration. Our analysis provides a way to decide whether or not available wavefunction sizes provide access to the entanglement scaling regime. Thus, as bigger wavefunctions become available, our analysis can be applied directly to obtain better calculations of the topological entanglement entropy $\gamma$. Eventually, this type of calculation could become a standard tool for diagnosing unknown or poorly-understood FQH states. Another obvious direction is the study of entanglement scaling through geometric deformations at other fractions and more complicated states.
The present work opens up a number of directions deserving exploration. Our analysis provides a way to decide whether or not available wavefunction sizes provide access to the entanglement scaling regime. Thus, as bigger wavefunctions become available, our analysis can be applied directly to obtain better calculations of the topological entanglement entropy $\gamma$. Eventually, this type of calculation could become a standard tool for diagnosing unknown or poorly-understood FQH states. Another obvious direction is the study of entanglement scaling through geometric deformations at other fractions and for more complicated states. It may also be interesting to devise continuous geometric tuning parameters for other geometries; \emph{e.g.}, one may consider ellipsoidal geometries as deformations of the sphere, although setting up the Landau-level problem in such geometries is neither straightforward nor convenient. It is possible that a combination of various deformation considerations might lead to further refined procedures for estimating entanglement scaling and $\gamma$. \section{Acknowledgments} We thank Kareljan Schoutens for suggesting entanglement calculations on the torus with varying geometry, and Juha Suorsa for collaboration on related topics. MH thanks Nicolas Regnault for related discussions. We acknowledge MPG RZ Garching and ZIH TU Dresden for allocation of computing time. \section*{References}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{s:intro} Before attaining virialization, galaxy clusters pass through phases of continuous accretion and serial mergers of progressively larger galaxy groups. Mergers in particular are extremely energetic processes, and the energy released during a merger is dissipated in the intracluster medium (ICM), thermalizing it through strong collisionless shocks. Through efficient Fermi acceleration of energetic particles, these strong shocks can generate strong MHD waves in the upstream and downstream regions and strongly amplify the upstream magnetic field present in the ICM \citep{bykov2008}. This also converts kinetic energy to turbulent energy by injecting volume-filling turbulence in the ICM \citep{Subra2006}. The turbulent-to-thermal energy fraction in such an event can reach as high as 30$\%$ \citep{Vazza2006,Paul2011}. Turbulent dissipation in such systems acts on a significantly longer time-scale than shocks (shock time-scale $\sim$ 2 Gyr \citep{Paul2011}) and can in principle stochastically re-accelerate the ambient electrons \citep{Brunetti2007}. As a result, such objects produce a significant amount of synchrotron radio emission, making them promising candidates for radio observations. Radio observations of galaxy clusters are thus important, as they are instrumental in tracing back the formation history of clusters. Large-scale ($l \gtrsim500$ kpc) diffuse radio emission from galaxy clusters is broadly of two types: `radio halos' and `radio relics', both with sizes of $\sim$ Mpc. Radio halos have smooth morphologies, are extended with sizes $\gtrsim$1 Mpc, are unpolarized, and are found at the centres of clusters, co-spatial with the thermal X-ray emitting gas of the ICM. Giant radio relics are observed in the cluster periphery, sometimes showing symmetric or ring-like structures, and are highly polarized (p$\sim$10-50$\%$ at 1.5 GHz). They are probably signatures of electrons accelerated at large-scale shocks. The vast majority of clusters with diffuse extended radio sources are massive, X-ray luminous, and show signs of undergoing mergers. \vspace{-0.3cm} \section{GMRT observations and data analysis} The principal goal of this work is to obtain deep GMRT radio maps of diffuse emission from massive clusters. We want to understand the effect of mergers on the production of non-thermal emission and on the energy budget of the central and peripheral ICM. It was then an obvious choice to search for the biggest clusters from the MAssive Cluster Survey (MACS). This survey was designed to find the population of strongly evolving clusters among the most X-ray luminous systems, using a specific X-ray selection function described in \citet{Ebeling2001}. From the MACS list (up to 2010; \citet{Ebeling2001,Horesh2010}) we chose the clusters that show clear merging activity in the X-ray/temperature maps as well as in the mass distributions calculated from weak lensing of these objects \citep{Zitrin2011}. Significantly, hitherto unexplored faint diffuse radio emission (of radio-halo and/or relic type) was also identified in their central or peripheral regions from the 1.4 GHz NVSS survey. In this project, we observed four MACS clusters with the GMRT in dual-band 235/610 MHz mode with 32 MHz bandwidth. The observations were carried out during June-August 2011 (Project Code: 20$\_$062, PI S. Paul), each with an on-source time of 5 hours. For data analysis and imaging, the code `SPAM' was used (for details see \citet{Intema2009}).
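Section~3 presents spectral index maps made by combining the two bands. For reference, a minimal sketch of the per-pixel two-point spectral index computation on matched-resolution maps is given below (this is our illustration, not part of the SPAM pipeline; the array names are placeholders, and pixels below the noise floor should be masked beforehand):
\begin{verbatim}
import numpy as np

def spectral_index(s235, s610, nu1=235e6, nu2=610e6):
    """Two-point spectral index alpha, with S ~ nu**alpha,
    computed per pixel from flux-density maps (Jy/beam) that
    have been convolved to a common beam and aligned."""
    return np.log(s235 / s610) / np.log(nu1 / nu2)
\end{verbatim}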
\vspace{-0.3cm} \section{Results} Interestingly, in MACSJ0014.3-3022, one of our four observed clusters, we have detected both a magnificent relic and an extremely large halo (Panels 1 and 4 in Fig.~\ref{radio}). The relic and the halo have almost the same extent of $\sim$ 1.5 Mpc. The relic is located more than a Mpc away from the cluster centre, i.e., at the virial radius of the system. This object is also known as Abell 2744, with a previously studied halo \& relic \citep[e.g.,][]{Govoni2001,Orr2007}. Interestingly, it is also one of the HST Frontier Fields. \begin{figure}[h!] \includegraphics[width=4.3cm]{MACS0014_rad610.eps} \includegraphics[width=4.5cm]{MACS0152_rad610.eps} \includegraphics[width=4.5cm]{MACS1931_rad610.eps}\\ \includegraphics[width=4.3cm]{MACS0014_rad235.eps} \includegraphics[width=4.5cm]{MACS0152_rad235.eps} \includegraphics[width=4.5cm]{MACS1931_rad235.eps} \caption{ {\bf Top Panels:} 610 MHz GMRT radio continuum maps in grey scale with black contours for the three clusters, namely MACSJ0014.3-3022, MACSJ0152.5-2852 and MACS 1931-2635, respectively. Image sizes and flux contour levels are indicated on the images themselves. {\bf Bottom Panels:} GMRT 235 MHz radio images, with all other details as above.\label{radio}} \end{figure} We made a spectral index map by combining the GMRT 235 MHz and 610 MHz data (Fig.~\ref{f:spectral}). The spectral index of the relic steepens gradually from $\sim -0.7$ at the outer edge to $\sim -1.3$ towards the inner side of the relic, as expected where the shock compression and the efficiency of first-order Fermi particle acceleration are highest. In contrast, the halo has a flatter spectral index ($\sim -0.8$) in the central part, steepening towards the outer part of the halo ($\sim -1.1$). The other interesting object is MACSJ0152.5-2852 (Panels 2 and 5 in Fig.~\ref{radio}). This is a high-redshift (z=0.413) cluster with a possible relic more than 0.5 Mpc long, possibly one of the earliest and youngest merging systems detected in radio waves. The cluster MACS 1931-2635 (Panels 3 and 6 in Fig.~\ref{radio}) does not show any clear diffuse emission, but an interesting bent jet and a bright central radio galaxy are found. We did not detect any significant radio emission from the cluster MACSJ0025.4-1222. \begin{figure} \begin{center} \begin{tabular}{p{6cm}cp{6cm}} \raisebox{-\height}{\includegraphics[width=5.5cm]{Spectral2.eps}} & \quad & \caption{Spectral index image of MACSJ0014.3-3022. GMRT 235 and 610 MHz images with a synthesized beam width of $35^{''} \times 35^{''}$ are combined to construct spectral index maps of the east-side relic and the central radio halo.\label{f:spectral}} \end{tabular} \end{center} \end{figure} \section{Conclusions} Our dual-frequency (235/610 MHz) GMRT observations have produced interesting results. We have discovered a unique object, MACSJ0014.3-3022, with both a spectacular bow-shock-like, flat-spectrum radio relic and a huge central radio halo. The system is also notable in that its central radio halo has an unusually flat spectrum. Such a flat spectrum is only possible if there is continuous injection of shock-accelerated high-energy particles or if ambient electrons are stochastically re-accelerated, indicating that the cluster is still going through a massive merging phase. \vspace{-0.4cm} \section*{Acknowledgements} {\scriptsize We would like to thank the staff of the GMRT that made these observations possible.
GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. {\bf SP} acknowledges DST-SERB Young Scientist funding under the Fast Track Scheme (Govt. of India) for this project. {\bf HTI} acknowledges financial support from the National Radio Astronomy Observatory, a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. {\bf AD} has been supported by the NASA Postdoctoral Fellowship Program through the NASA Lunar Science Institute.} \vspace{-0.3cm}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:introduction} Since their introduction to the area of computer vision, Convolutional Neural Networks (CNNs) have continued to improve upon the state-of-the-art. Recently, a growing body of research has applied CNNs to tasks which are auditory in nature, including speech recognition \cite{abdel2014,deng2013}, musical information retrieval \cite{choi2017,humphrey2012}, and acoustic scene classification \cite{piczak2015,salamon2016}. Inspired by the compelling results obtained in the previously mentioned domains, researchers in oceanography and marine biology have started to investigate similar solutions to problems in their field. One such problem is the analysis of underwater acoustic data, which is one of the primary methods used to measure the presence, abundance, and migratory patterns of marine mammals \cite{zimmer2011}. The necessary acoustic data for modelling marine mammal life is often collected using Passive Acoustic Monitoring (PAM) techniques. PAM is non-invasive and, unlike GPS tagging, reduces the risk of altering the behaviour of a species of interest. PAM is also less susceptible to harsh weather conditions than visual surveys. Acoustic data collection for PAM is often carried out using moored recorders equipped with hydrophones. Stakeholders make use of PAM to inform environmental and governmental policy decisions, for example implementing reduced speed limits on vessels travelling through shipping channels in order to reduce their risk of collision with endangered species of whales \cite{shipping2019}. Due to their high cost of deployment, PAM recording devices may be left unattended for months or years at a time before resurfacing, producing very large amounts of data, typically several terabytes per deployment. It is becoming increasingly common for collections of acoustic data to be described at the petabyte scale, making complete human analysis infeasible. As a result, research into automated Detection and Classification Systems (DCS) is widespread and continuing to grow. From a machine learning perspective, a DCS can be interpreted as a hierarchical model containing a binary classifier recognizing whether a signal of interest is present within an acoustic recording, combined with a multi-class classifier for determining the source of the signal. Importantly, marine biologists and oceanographers are typically concerned with the presence or absence of specific species in an acoustic recording. While there have been great advances in the research and development of these systems, many DCS are based on the acoustic properties of a signal of interest and may be specific to a particular data set, depending on the equipment that was used or the geographic location of the recordings. Such systems are therefore often not generalizable and may need to be formulated from scratch for each new data set. Moreover, attempts at producing generalizable systems yield high rates of false detections \cite{baumgartner2011}. In this work, we present a deep learning implementation of a DCS composed of a CNN trained on spectrogram representations of acoustic recordings. The main contributions of this work are: \begin{itemize} \item A CNN capable of classifying three species of marine mammals as well as non-biological sources and ambient noise. \item The classifier makes up an automated DCS that is generalizable and can be adapted to include additional species that produce vocalizations below 1000Hz.
\item A novel visual representation of acoustic data based on interpolating and stacking multiple spectrograms produced using distinct Short-time Fourier Transform parameters. \end{itemize} This work describes a complete application using original data collected for scientific research that could have substantial implications for environmental policy and conservation efforts. The data was manually selected based on the target species of interest; however, it has not been cleaned and manipulated, unlike many research projects in machine learning that use common sets of image data or preprocessed acoustic recordings. Additionally, while the results in this paper are centred on the detection and classification of marine mammals, the framework outlined here can be adapted to other tasks such as acoustic scene classification. The remainder of this paper is organized as follows. In Section \ref{sec:background} we review related work on the topic of marine mammal species classification and provide further details on the complexities of the problem. An overview of common representations of acoustic data, as well as a novel representation formulated especially for the task of marine mammal species classification, is provided in Section \ref{sec:representations}. The data set used in training the CNN and additional information regarding the experimental setup are provided in Section \ref{sec:dataset}. The corresponding experimental results are analyzed in Section \ref{sec:results}. Finally, concluding remarks and future work are presented in Section \ref{sec:conclusion}. \section{Background and Related Work}\label{sec:background} CNNs have traditionally been applied to visual recognition tasks on large collections of labelled images. Most notably, CNNs have led to state-of-the-art performance for classifying commonly used benchmark image data sets and have surpassed human levels of performance \cite{he2016}. Beyond image classification, CNNs have also been used for object detection \cite{girshick2015,he2017} and, in conjunction with Recurrent Neural Networks, for natural language processing \cite{karpathy2015}. Recently, several factors have led researchers to apply CNNs outside of the visual paradigm, for example to classify events or patterns found in acoustic recordings. An obvious reason for adapting CNNs to acoustic tasks is the performance levels of the classifiers cited above. A less obvious reason to those not working in the field of acoustics or digital signal processing is that human analysis of acoustic data is often carried out visually using spectrograms, as it is faster to visually identify signals of interest than to listen to the entire recording. Another reason for using visual representations of acoustic data is that they allow for the analysis and interpretation of sounds outside of the human hearing range. One area, alluded to in Section \ref{sec:introduction}, that makes frequent use of visual representations of acoustic data is the detection and classification of marine mammal vocalizations within underwater acoustic recordings (i.e., DCS research). Research into automated DCS has been a growing topic of interest, in part as a by-product of the reduced cost of recording equipment, which has produced vast amounts of data. Another reason for the growth in DCS research is conservation, particularly as it relates to endangered species of whales.
In developing an automated DCS for marine mammal vocalizations, one hopes to accurately detect and assign a label to an instance of an acoustic recording containing one or more vocalizations produced by a species of interest. However, developing a generalizable DCS presents several distinct challenges. For one, underwater recordings often have a low signal-to-noise ratio, making feature extraction difficult. Another challenge is that ground-truth labelled data is difficult to obtain due to the required expertise and training of the labeller. As a result, only a very small fraction of the large collections of acoustic data is suitable for supervised learning. Furthermore, the small numbers of some species, coupled with the low rate of occurrence of their vocalizations, make for highly unbalanced data. Traditionally, many of the algorithms used to detect and classify marine mammal vocalizations are derived from the properties of a signal of interest. In general, these approaches can be divided into two categories. The first category of algorithms involves comparing unlabelled data to templates of certain vocalizations. Examples of this approach include \textit{matched filtering}, where a template corresponding to the vocalization of interest is convolved with a signal to produce a detection function that is evaluated using a pre-determined threshold parameter \cite{clark1987}. Another example is \textit{spectrogram correlation}, which first computes a correlation kernel using segments of template spectrograms; the correlation kernel is then convolved over a spectrogram of the unlabelled data, producing a vector representing the similarity between the spectrogram and the kernel over time. Large similarity values correspond to possible detections.
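To make the template-based approach concrete, a minimal matched-filter detector might look as follows (this is our illustrative sketch, not code from the cited works; the template, threshold, and normalization choices are placeholders):
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

def matched_filter_detect(signal, template, threshold):
    """Cross-correlate a vocalization template with a recording
    (both 1-D waveforms at the same sample rate) and return the
    sample indices where the detection function exceeds the
    threshold."""
    template = template - template.mean()
    template = template / (np.linalg.norm(template) + 1e-12)
    detection = fftconvolve(signal, template[::-1], mode="same")
    return np.flatnonzero(detection > threshold)
\end{verbatim}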
The second category of algorithms involves detecting regions of interest in a spectrogram and extracting features (e.g., the duration of the detection or the absolute change in frequency) to be used as input vectors for classification. Various detection algorithms are used in the first step of this approach, including neighbourhood search algorithms (e.g., pixel connectivity) in spectrograms that have been filtered, smoothed, and cast to binary representations \cite{baumgartner2011}, and contour detectors that operate by continually searching for local maxima within pre-specified frequency bands of normalized spectra over time \cite{mellinger2011}. These detection algorithms are heavily dependent on the filtering, normalization, and smoothing operations that are performed on each spectrogram. Once the regions of interest are determined, feature vectors are handed to commonly used classification algorithms such as linear and quadratic discriminant analysis \cite{baumgartner2011,gillespie2013}, support vector machines \cite{dugan2010}, and artificial neural networks \cite{dugan2010}. Researchers have also likened the task to automatic speech recognition and used Gaussian mixture models and hidden Markov models for classification \cite{roch2011,skowronski2006}. The algorithms described above involve a large amount of human input--often from experts--which limits the development of future classifiers for several reasons. In the former category, the templates used for detection and classification are largely specific not only to certain species, but also to the different types of vocalizations produced by the same species. Furthermore, the detection threshold may require fine-tuning depending on the noise characteristics of the data set. For the latter category of algorithms, many of the hyper-parameters provided to the smoothing and noise-removal routines are dependent on the data set. Consequently, the hand-engineered features are contaminated by these specifications as well as by human bias. These limitations yield systems which are not easily generalizable to a broad range of species or to data collected at different sampling rates, in different geographic locations, or with different recording devices. More recently, researchers have attempted to use deep learning to learn generalizable representations of spectrograms for the purpose of DCS development. In one study, Halkias et al. \cite{halkias2013} contrast the performance of a restricted Boltzmann machine and a sparse auto-encoder for classifying five species of baleen whales (\textit{mysticetes}); however, the regions of interest containing the whale calls were assumed to be known. Wang et al. \cite{wang2018} use CNNs to classify spectrograms containing vocalizations of killer whales (\textit{Orcinus orca}) and pilot whales (\textit{Globicephala melas/macrorhynchus}), but similarly do not include non-biological or ambient noise sources. Liu et al. \cite{liu2018} also use CNNs but focus on the classification of call types as opposed to the species that produced them. Finally, Luo et al. \cite{luo2019} train a CNN to detect the high-frequency echolocation clicks of toothed whales (\textit{odontocetes}) using a combination of real audio recordings and synthetic data; in contrast, we are interested in classifying baleen whale vocalizations, which occur at much lower frequencies and can be masked by low tonal sounds created by shipping activity. \section{Visual Representations of Acoustic Data}\label{sec:representations} Human analysis of acoustic recordings is performed aurally, by listening to an acoustic recording, as well as visually, using spectrograms. A popular approach for generating spectrograms is the Short-time Fourier Transform (STFT). The STFT calculates the sinusoidal frequency and phase content of an acoustic signal over time and is most commonly visualized in two dimensions with time on the $x$-axis, frequency on the $y$-axis, and intensity expressed by varying colour. The discrete-time STFT of a signal $x[n]$ can be expressed as: \begin{equation}\label{eqn:stft} X(n,\omega) = \sum_{m=-\infty}^\infty x[m]w[m-n]e^{-j\omega m}~, \end{equation} \noindent where $w$ is a windowing function with a pre-specified length centred at time $n$. In the equation expressed above, time is discrete and frequency ($\omega$) is continuous; in practice, both units are discretized and each successive STFT is computed using an implementation of the Fast Fourier Transform (FFT) algorithm (e.g., the Cooley-Tukey algorithm \cite{cooley1965}). Equation \ref{eqn:stft} describes a complex function; we therefore take the square of the absolute value of $X(n,\omega)$, yielding a spectrogram of the power spectral density. Finally, we convert the intensity from power to a logarithmic scale (i.e., decibels (dB)), as is common in underwater acoustics. \subsection{Mel-scaled Spectrograms}\label{subsec:mel-spectrograms} A spectrogram computed using the approach formulated above is linear in frequency. Unfortunately, because CNNs are spatially invariant, they are incapable of understanding human perception of pitch when frequency is expressed on a linear scale.
For example, while the difference between two signals occurring at 1000Hz and 1500Hz and that between two other signals occurring at 10kHz and 10.5kHz are numerically equivalent (i.e., equal to 500Hz), the difference between the lower-frequency signals is perceptually much larger to a human listener. The bandwidth of the data we are attempting to classify is relatively low (i.e., $\leq1000$Hz); therefore, the CNN's insensitivity to pitch is not a major concern. However, in order to test this hypothesis, we additionally generate mel-scaled spectrograms, whereby frequency is transformed from hertz to mels (from the word melody) using the formula outlined in Equation \ref{eqn:mels}. \begin{equation}\label{eqn:mels} \omega_{mel} = 2595\log_{10}\left(1 + \frac{\omega_{Hz}}{700}\right)~. \end{equation} Following this transformation, the resulting frequency scale more closely aligns with the $\log$-like human perception of pitch.
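As a concrete reference for the two representations above, the following sketch computes a dB power spectrogram (Eqn.~\ref{eqn:stft}, squared modulus) and the hertz-to-mel mapping (Eqn.~\ref{eqn:mels}); the scipy-based implementation and window settings (Hann window, $1/4$-window overlap, as in Section~\ref{sec:dataset}) are our choices for illustration:
\begin{verbatim}
import numpy as np
from scipy.signal import stft

def db_spectrogram(x, fs, nperseg=2048):
    """Power spectral density spectrogram in dB, using a Hann
    window with an overlap of 1/4 the window length."""
    f, t, X = stft(x, fs=fs, window="hann", nperseg=nperseg,
                   noverlap=nperseg // 4)
    return f, t, 10 * np.log10(np.abs(X) ** 2 + 1e-12)

def hz_to_mel(f_hz):
    """Frequency warping of Eqn. 2."""
    return 2595 * np.log10(1 + f_hz / 700)
\end{verbatim}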
\subsection{Novel Representation: Stacked \& Interpolated Spectrograms}\label{subsec:interpolated} The majority of the DCS detailed in Section \ref{sec:background} were trained on large collections of single-channel inputs in the form of spectrograms. During the creation of such data sets, a decision must be made on the appropriate combination of parameters to pass to the STFT. In practice, when marine biologists analyze acoustic recordings, they will often generate multiple spectrograms using different STFT parameters, for example changing the length of the FFT window and/or the window overlap. Changing the parameters of the STFT alters the time and frequency resolutions of the spectrogram. Using multiple spectrograms with varying resolutions is particularly helpful when annotating underwater acoustic recordings containing marine mammal vocalizations, because some species tend to make prolonged low-frequency vocalizations with a small bandwidth (e.g., blue whale moans), while other species make shorter vocalizations with a larger bandwidth (e.g., humpback songs). Depending on the set of parameters used to generate the spectrogram, one can easily misclassify a vocalization as a different species or miss the vocalization entirely. \begin{figure}[!htb] \centering \begin{tikzpicture}[ arrow/.style={->,myblue,line width=.2cm}, box/.style={myblue,rounded corners=.2cm,line width=.2cm} ] \node[inner sep=0pt] at (-0.41,-0.05) {\includegraphics[width=2.5cm]{waveform}}; \draw[arrow, line width=1pt] (1,0) -- (1.75,0) node{}; \pgfmathsetmacro{\cubex}{2}; \pgfmathsetmacro{\cubey}{1}; \draw[box, line width=1pt] (3.9,0.5,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle; \node[text width=3cm] (fft) at (3.6,0) {\textbf{(1) STFT}}; \draw[arrow, line width=1pt] (4.05,0) -- (4.8,0) node{}; \pgfmathsetmacro{\cubex}{3.1}; \pgfmathsetmacro{\cubey}{1}; \draw[box, line width=1pt] (8,0.5,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle; \node[text width=4cm] (fft) at (7.1,0) {\textbf{(3) Interpolation}}; \draw[arrow, line width=1pt] (8.15,0) -- (8.9,0.45) node{}; \draw[arrow, line width=1pt] (8.15,0) -- (8.9,0) node{}; \draw[arrow, line width=1pt] (8.15,0) -- (8.9,-0.45) node{}; \node[inner sep=0pt] at (9.5,0.15) {\includegraphics[width=.8cm]{F_nfft_256}}; \node[inner sep=0pt] at (9.7,0.00) {\includegraphics[width=.8cm]{F_nfft_16384}}; \node[inner sep=0pt] at (9.9,-0.15) {\includegraphics[width=.8cm]{F_nfft_2048}}; \end{tikzpicture} \caption{Simple illustration demonstrating the process of transforming a waveform of an acoustic signal into a multi-channel input via interpolation and stacking.} \label{fig:data-pipeline} \end{figure} We propose a novel representation of an acoustic signal that attempts to exploit the strategy used by human experts during the annotation process. First, following Equation \ref{eqn:stft}, several spectrograms are generated using multiple sets of STFT parameters. Because each of the spectrograms varies in resolution across time and frequency, they are interpolated using a simple linear interpolation spline over a grid proportionate to the smallest time and frequency resolutions. The equation of a linear interpolation spline for some point $(n, \omega)$ between $(n_i, \omega_i)$ and $(n_{i+1}, \omega_{i+1})$, where $n$ is known, can be expressed as: \begin{equation}\label{eqn:interpolation} \omega = \omega_i + \frac{\omega_{i+1} - \omega_i}{n_{i+1} - n_i}(n - n_i)~. \end{equation} After interpolation, the dimensions of the matrices corresponding to each spectrogram are the same. The interpolated spectrograms are then stacked to form a multi-channel tensor, imitating the concept of RGB channels in a digital colour image, as depicted in Figure \ref{fig:data-pipeline}.
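A minimal Python sketch of this pipeline is given below (our illustration using scipy; the window lengths and the $256\times128$ output grid follow Section~\ref{sec:dataset}, while the 10--1000Hz truncation step is omitted for brevity):
\begin{verbatim}
import numpy as np
from scipy.signal import stft
from scipy.interpolate import RegularGridInterpolator

def stacked_spectrograms(x, fs, window_lengths=(256, 2048, 16384),
                         out_shape=(256, 128)):
    """Compute one dB spectrogram per window length, resample each
    onto a common (frequency, time) grid via linear interpolation,
    and stack the results as channels."""
    channels = []
    for nperseg in window_lengths:
        f, t, X = stft(x, fs=fs, window="hann", nperseg=nperseg,
                       noverlap=nperseg // 4)
        S = 10 * np.log10(np.abs(X) ** 2 + 1e-12)
        interp = RegularGridInterpolator((f, t), S)
        fi = np.linspace(f.min(), f.max(), out_shape[0])
        ti = np.linspace(t.min(), t.max(), out_shape[1])
        Fi, Ti = np.meshgrid(fi, ti, indexing="ij")
        channels.append(interp(np.stack([Fi, Ti], axis=-1)))
    return np.stack(channels)  # shape: (n_windows, 256, 128)
\end{verbatim}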
The details of the algorithm used to produce a single instance of the novel representation described above are outlined in Algorithm \ref{algo:novel-representation}. \begin{algorithm} \KwIn{The waveform $x$, function $w$, and parameters $\mathbf{\Theta}=[\theta_1,\theta_2,\dots,\theta_k]$} \KwOut{A tensor $\bm{\mathsf{Z}}$ with $k$ channels} Initialize the interpolation resolutions $\omega_0$ and $n_0$ to $\infty$ \\ \For{$i = 1$ \text{to} $k$}{ Generate a spectrogram $\mathbf{D}_i = \text{STFT}(x; w, \theta_i)$ (Eqn \ref{eqn:stft})\\ Maintain a running minimum of $\omega_0$ and $n_0$ \\ \If{$\Delta\omega_i < \omega_0$}{ $\omega_0 = \Delta\omega_i$ } \If{$\Delta n_i < n_0$}{ $n_0 = \Delta n_i$ } } \For{$i = 1$ \text{to} $k$}{ Interpolate each spectrogram $\mathbf{S}_i = \text{INTERPOLATE}(\mathbf{D}_i; \omega_0, n_0)$ (Eqn \ref{eqn:interpolation}) } Stack the interpolated spectrograms $\bm{\mathsf{Z}} = [\mathbf{S}_1, \mathbf{S}_2, \dots, \mathbf{S}_k]$ \\ Return $\bm{\mathsf{Z}}$ \caption{Generating an instance of the novel representation} \label{algo:novel-representation} \end{algorithm} \section{Data Processing and Experiment Setup}\label{sec:dataset} \subsection{Recordings of Marine Mammal Vocalizations}\label{subsec:vocalizations} The acoustic recordings used to train the classifier were collected by JASCO Applied Sciences using Autonomous Multichannel Acoustic Recorders (AMARs) during the summer and fall months of 2015 and 2016 in the areas surrounding the Scotian Shelf, along the coast of the Atlantic Canadian provinces (Figure \ref{fig:map}). \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{deployment_map} \caption{Map depicting the locations of the recording devices deployed by JASCO Applied Sciences along the Scotian Shelf, off the coast of Atlantic Canada.} \label{fig:map} \end{figure} The recordings were sampled at both 8kHz and 250kHz in order to capture the low-frequency vocalizations of baleen whales and the high-frequency vocalizations of toothed whales, respectively. In this work we focus on the detection and classification of baleen whales. In particular, we are interested in three species: blue whales (\textit{Balaenoptera musculus}), fin whales (\textit{Balaenoptera physalus}), and sei whales (\textit{Balaenoptera borealis}). These species can be particularly challenging to classify, as each is capable of making a similar vocalization known as a down sweep during the summer months. A large proportion of baleen whale vocalizations falls below 1000Hz; we therefore restrict our set of acoustic recordings to those collected using the 8kHz sampling rate. The acoustic recordings were analyzed by marine biology experts, producing over \num{30000} annotations in the form of bounding boxes around signals pertaining to the three species of whales and other acoustic sources labelled as ``non-biological''. Other species of whales present in the recording area were also annotated; however, they were not included in this paper. The distribution of annotations is heavily unbalanced in favour of the more vocal fin whales, at a 6:1 ratio. The data sets used for training, validating, and testing each classifier were created in the following fashion. First, the human annotations were centred within an excerpt 30 seconds long. Four spectrograms depicting typical examples of the 30 second excerpts are provided in Figure \ref{fig:example-spectrograms}, one for each of the possible acoustic sources. Example annotations are drawn using dashed vertical lines. As we can see, not every vocalization that appeared in a spectrogram was labelled.
In Figure \ref{fig:example-spectrograms}a, for example, there appear to be three blue whale vocalizations occurring consecutively; however, only the second has been annotated. \begin{figure}[!htb] \centering \includegraphics[width=0.8\linewidth]{species_grid} \caption{Example spectrograms displaying frequency in hertz on a log-scale. Examples are provided for the three whale species: a) blue whales, b) fin whales, and c) sei whales, as well as d) non-biological noise. Dashed vertical lines depict the upper and lower bounds of the expert annotations.} \label{fig:example-spectrograms} \end{figure} For each 30 second excerpt, a smaller ten-second-long sample (from here on referred to simply as a ``sample'') containing the annotation was randomly selected from the larger excerpt. Due to the partial labelling of the recordings, it is possible that a sample may include more than one vocalization. For example, a sample from time 10 to 20 seconds in the file used to produce Figure \ref{fig:example-spectrograms}c would in fact contain three sei whale vocalizations. The set of data containing only ambient noise was produced in a similar fashion; however, it was drawn from a large set of files known not to contain baleen whale vocalizations. As such, the sampling routine simply selected a ten second sample randomly from the entire file. A spectrogram of each sample was produced corresponding to the CNN being trained, and the matrices of spectrogram values were used as training instances. In total there were five categories of classifiers: three trained on single-channel spectrograms using increasing FFT window lengths (i.e., \num{256}, \num{2048}, and \num{16384} samples); one trained on single-channel mel-scaled spectrograms using a window length of \num{2048} samples and 128 mels; and one trained on a three-channel version of the novel representation described in Section \ref{subsec:interpolated}. The three spectrograms used in creating the novel representation used window lengths of \num{256}, \num{2048}, and \num{16384} samples, respectively, and were interpolated to fit within a grid of height 256 and width 128 units. All of the above spectrograms were produced using the Hann window function with an FFT window overlap of $1/4$ the window length. The window lengths were chosen in order to capture short sweeping vocalizations such as whistles (i.e., $256\approx1/32$ the sampling rate), a more inclusive group of vocalizations (i.e., $2048\approx1/4$ the sampling rate), and long vocalizations that are fairly persistent in frequency (i.e., $16384\approx2\times$ the sampling rate). The computed spectrograms were truncated using an upper frequency bound of 1000Hz and a lower bound of 10Hz. Apart from the linear interpolant applied in the case of the novel representation, no additional filtering, smoothing, or noise removal was applied to the spectrograms. In practice, the ten second sampling routine and all subsequent steps, including spectrogram generation, were executed in parallel on the CPU while the CNN was trained on the GPU. In this way, the sampling routine acted as a quasi-data-augmentation strategy for each training batch. Further details with respect to the CPU, GPU, batch sizes, and other parameters used during training are provided in Section \ref{subsec:training-details}.
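A minimal sketch of the per-annotation sampling step described above is given below (our illustration; it assumes the annotated interval is shorter than the sample length, and the function and variable names are placeholders):
\begin{verbatim}
import numpy as np

def random_sample(excerpt, fs, ann_start, ann_end, sample_len=10.0):
    """Draw a random window of sample_len seconds from an excerpt
    (1-D waveform) such that the annotated interval
    [ann_start, ann_end] (seconds) stays inside the window."""
    lo = max(0.0, ann_end - sample_len)
    hi = min(len(excerpt) / fs - sample_len, ann_start)
    start = np.random.uniform(lo, hi)  # requires lo <= hi
    i = int(start * fs)
    return excerpt[i:i + int(sample_len * fs)]
\end{verbatim}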
Separate training, validation, and test data sets were produced using a random split ratio of 70/15/15, respectively. Table \ref{tbl:datasets} contains the number of files and the corresponding species distributions of each data set. \begin{table}[!htb] \caption{Number of files and the distribution of each acoustic source for the training, validation, and test sets.} \label{tbl:datasets} \centering \renewcommand{\arraystretch}{1.25} \begin{tabular}{lcccrlcrlcrl} \specialrule{.1em}{.25em}{.05em} \textbf{Source} & ~ & \textbf{Label} & ~~~ & \multicolumn{2}{c}{\textbf{Training}} & ~~~ & \multicolumn{2}{c}{\textbf{Validation}} & ~~~ & \multicolumn{2}{c}{\textbf{Test}} \\ \hline Blue Whale & ~ & BW & ~~~ & \num{2692} & (6.23\%) & ~~~ & \num{601} & (6.49\%) & ~~~ & \num{574} & (6.20\%) \\ Sei Whale & ~ & SW & ~~~ & \num{1701} & (3.94\%) & ~~~ & \num{332} & (3.59\%) & ~~~ & \num{383} & (4.14\%) \\ Fin Whale & ~ & FW & ~~~ & \num{15118} & (35.01\%) & ~~~ & \num{3244} & (35.06\%) & ~~~ & \num{3272} & (35.36\%) \\ Non-biological & ~ & NN & ~~~ & \num{2078} & (4.81\%) & ~~~ & \num{449} & (4.85\%) & ~~~ & \num{398} & (4.30\%) \\ Ambient & ~ & AB & ~~~ & \num{21589} & (50.00\%) & ~~~ & \num{4626} & (50.00\%) & ~~~ & \num{4627} & (50.00\%) \\ \specialrule{.1em}{.25em}{.05em} \end{tabular} \end{table} \subsection{Neural Architectures and Training Parameters}\label{subsec:training-details} We evaluate the performance of two commonly used CNN architectures, namely ResNet-50 \cite{he2016} and VGG-19 with batch normalization \cite{simonyan2014}. The CNNs were implemented in Python using the PyTorch open source deep learning platform \cite{paszke2017}. Training was distributed over four NVIDIA P100 Pascal GPUs, each equipped with 16GB of memory. The sampling routine and subsequent data processing were performed in parallel on two 12-core Intel E5-2650 CPUs. Each CNN--regardless of the FFT window length or number of channels--was trained using the same hyper-parameters apart from the initial learning rate, which was set to 0.001 for the ResNet architecture and 0.01 for the VGG architecture. In both cases, the learning rate decayed exponentially by a factor of 10 using a step schedule of 30 epochs. The batch size of each training step was set to 128 instances. Stochastic Gradient Descent (SGD) with momentum equal to 0.9 and weight decay equal to $10^{-4}$ was used to optimize a cross-entropy loss function. The CNNs were each trained for a total of 100 epochs. After each epoch, the validation set was evaluated and the model with the best performance in terms of F-1 Score was saved. An early stopping criterion was not used; however, if the model began to overfit to the training data and the F-1 Score of the validation set did not improve, the best model with respect to the validation set was still retained. Finally, the training process of each classifier was repeated ten times using different random number generator seeds.
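For reference, the optimization setup described above corresponds to the following PyTorch configuration (a sketch reconstructed from the stated hyper-parameters; the dummy batch stands in for a 128-instance batch of three-channel novel-representation inputs):
\begin{verbatim}
import torch
from torch import nn, optim
from torchvision.models import resnet50

model = resnet50(num_classes=5)  # five acoustic sources (Table 1)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001,  # 0.01 for VGG-19
                      momentum=0.9, weight_decay=1e-4)
# decay the learning rate by a factor of 10 every 30 epochs
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

x = torch.randn(128, 3, 256, 128)  # batch of stacked spectrograms
y = torch.randint(0, 5, (128,))    # class labels
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
scheduler.step()  # once per epoch in the full training loop
\end{verbatim}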
\section{Experimental Results}\label{sec:results} Table \ref{tbl:results} contains the mean evaluation metrics and 95\% confidence intervals over ten training runs for the ResNet and VGG CNNs. \begin{table}[H] \centering \caption{Mean performance and 95\% confidence intervals of ten training/testing runs using random number generator seeds for each combination of CNN architecture and STFT parameter set.} \label{tbl:results} \renewcommand{\arraystretch}{1.5} \setlength{\tabcolsep}{4pt} \scriptsize \begin{tabular}{lccccc} \specialrule{.1em}{.25em}{.05em} \multicolumn{6}{c}{\textbf{ResNet-50 Performance}} \textbf{}\rule{0pt}{1ex} \\ \textbf{}\rule{0pt}{2ex} & \multicolumn{1}{l}{\textbf{NFFT}} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F-1 Score} \\ \hline \textbf{3-channels (Hz)} & - & 0.953 ($\pm$0.016) & 0.887 ($\pm$0.045) & 0.871 ($\pm$0.036) & 0.878 ($\pm$0.031) \\ \multirow{3}{*}{\textbf{1-channel (Hz)}} & 256 & 0.883 ($\pm$0.022) & 0.714 ($\pm$0.060) & 0.641 ($\pm$0.037) & 0.675 ($\pm$0.046) \\ & 2048 & 0.944 ($\pm$0.009) & 0.863 ($\pm$0.036) & 0.838 ($\pm$0.039) & 0.850 ($\pm$0.023) \\ & 16384 & 0.943 ($\pm$0.013) & 0.860 ($\pm$0.032) & 0.847 ($\pm$0.058) & 0.853 ($\pm$0.031) \\ \multicolumn{1}{r}{\textbf{1-channel (mels)}} & 2048 & 0.895 ($\pm$0.031) & 0.762 ($\pm$0.067) & 0.723 ($\pm$0.048) & 0.742 ($\pm$0.044) \\ \multicolumn{6}{c}{\textbf{VGG-19 Performance}} \rule{0pt}{4ex} \\ \textbf{}\rule{0pt}{2ex} & \multicolumn{1}{l}{\textbf{NFFT}} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F-1 Score} \\ \hline \textbf{3-channels (Hz)} & - & 0.961 ($\pm$0.017) & 0.906 ($\pm$0.044) & 0.892 ($\pm$0.049) & 0.899 ($\pm$0.041) \\ \multirow{3}{*}{\textbf{1-channel (Hz)}} & 256 & 0.914 ($\pm$0.024) & 0.790 ($\pm$0.048) & 0.771 ($\pm$0.070) & 0.780 ($\pm$0.053) \\ & 2048 & 0.959 ($\pm$0.019) & 0.899 ($\pm$0.041) & 0.889 ($\pm$0.048) & 0.894 ($\pm$0.039) \\ & 16384 & 0.951 ($\pm$0.017) & 0.871 ($\pm$0.037) & 0.878 ($\pm$0.038) & 0.875 ($\pm$0.028) \\ \multicolumn{1}{r}{\textbf{1-channel (mels)}} & 2048 & 0.918 ($\pm$0.022) & 0.818 ($\pm$0.043) & 0.784 ($\pm$0.036) & 0.801 ($\pm$0.034) \\ \specialrule{.1em}{.25em}{.05em} \end{tabular} \end{table} The classifier trained on the novel representation outperforms the remaining classifiers trained on single-channel inputs. Paired two-sample $t$-tests indicate that the improvement in performance of the classifier trained on the novel representation is statistically significant in all cases with one exception: the VGG-19 CNN trained on single-channel inputs using a window length of 2048 samples. Figure \ref{fig:confusion-matrices} contains four confusion matrices: two corresponding to the VGG-19 architecture and two corresponding to the ResNet-50 architecture. In both cases, the two best performing classifiers were those trained on the novel representation and the single-channel linearly scaled spectrogram produced using a window length of 2048 samples. \begin{figure}[H] \centering \includegraphics[width=0.675\linewidth]{resnet_confusion_matrices} ~\vskip .5em \includegraphics[width=0.675\linewidth]{vgg_confusion_matrices} \caption{Normalized confusion matrices of the two best performing classifiers in terms of F-$1$ Score for the ResNet-50 and VGG-19 CNNs.} \label{fig:confusion-matrices} \end{figure} \subsection{Generalization to Other Acoustic Sources}\label{subsec:generalizable} In order to demonstrate the ability of the DCS that we have developed to generalize to other acoustic sources below 1000Hz, we train a new classifier using a transfer learning approach to include humpback whale (\textit{Megaptera novaeangliae}) vocalizations.
Specifically, all sixteen convolutional layers in the VGG-19 network trained on the novel representation are frozen. The last three layers of the network are then re-learned on the data set described in Table \ref{tbl:datasets} with an additional \num{2100} humpback vocalizations. The hyper-parameters and optimization routine used for training the last layers of the network are identical to those detailed in Section \ref{subsec:training-details}. The trained classifier achieves performance levels in terms of accuracy, precision, and recall of \num{0.948}, \num{0.884}, and \num{0.871}, respectively, without the need to re-train the convolutional feature extraction layers of the CNN. \begin{figure}[!htb] \centering \includegraphics[width=0.4\linewidth]{humpback_confusion_matrix} \caption{Normalized confusion matrix of the transfer learning experiment evaluated on the test set described in Table \ref{tbl:datasets} with an additional 450 humpback annotations identified using the label ``HB''.} \label{fig:humpback-confusion-matrix} \end{figure} \subsubsection{t-SNE Embeddings}\label{subsec:t-sne} The transfer learning results demonstrate that the CNN is capable of learning complex features contained within a spectrogram. Further evidence of this can be found in Figure \ref{fig:t-sne}, which contains two-dimensional t-SNE embeddings \cite{maaten2008} generated using the output of the last frozen layer of the VGG-19 CNN trained on the novel representation. \begin{figure}[!htb] \centering \includegraphics[width=0.65\linewidth]{tsne_embeddings.png} \caption{t-SNE embeddings computed from the output of the last frozen layer of the VGG-19 CNN architecture.} \label{fig:t-sne} \end{figure} There is a distinct separation between the original five classes of acoustic sources. More importantly, even before learning the last three classifying layers of the VGG-19 CNN, a relatively distinct representation has already been learned for the humpback whale class. This result is significant, as it implies that additional species with less annotated data can be included in our implementation of a DCS through transfer learning. \section{Conclusion}\label{sec:conclusion} This paper presents a scientific application focused on detecting and classifying marine mammal vocalizations in acoustic recordings. In particular, we have developed a DCS based on Convolutional Neural Networks that is capable of classifying three species of baleen whales as well as non-biological sources against a background of ambient noise. A novel representation of acoustic signals was introduced, and this representation increased the performance of the aforementioned classifier. The DCS was shown to be capable of learning generalizable representations for the purpose of including additional species. This last point is substantial, as it implies that species with very little annotated data--especially those that are endangered--can be included in the training process of future classifiers through transfer learning. A well performing and generalizable DCS such as the one we have developed is of great interest to researchers in the fields of marine biology, bioacoustics, and oceanography, as it allows for fast analysis of large acoustic data sets. Such analysis may be used to inform governmental and environmental policy pertaining to conservation efforts for marine mammal life.
\subsection{Future Work}\label{subsec:future-work} The work presented above is part of an ongoing research project focused on developing a DCS to be used in real time on specially developed autonomous hardware (e.g., moored recording devices and/or ocean gliders). With this goal in mind, we must consider time and space complexity, and additional research into model compression is necessary. Further research and development is ongoing using data collected from recording devices deployed in various locations around the world. The supplementary data makes it possible to include a variety of additional species of baleen whales as well as other marine mammals (e.g., pinnipeds), and will also allow for the interpretation of different sources of ambient noise (i.e., soundscapes). Collectively, this additional data will lead to a more robust DCS for marine mammal vocalizations. Another option for including additional species of marine mammals for which we have little available data is through data augmentation strategies. In particular, research into using unsupervised or semi-supervised approaches (e.g., Variational Auto-encoders, Generative Adversarial Networks) to increase the size of the training data could be highly beneficial. Recent work on neural network architectures that operate directly on the waveform of an acoustic signal has shown great promise \cite{van2016}. While the majority of these results are specific to generative tasks, these architectures--or a suitable alternative--may be used in training a classifier for acoustic recordings such as those described in this paper. In particular, by learning from the waveform directly, we avoid any information loss that takes place during a Fourier transform. Finally, given the promising results of our early experiments reported in Section \ref{subsec:generalizable}, we also plan on investigating the use of various transfer learning and meta-learning techniques for the task at hand. \section*{Acknowledgements} Collaboration between researchers at JASCO Applied Sciences and Dalhousie University was made possible through a Natural Sciences and Engineering Research Council Engage Grant. The acoustic recordings described in this paper were collected by JASCO Applied Sciences under a contribution agreement with the Environmental Studies Research Fund. \bibliographystyle{splncs04}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{INTRODUCTION} \label{sec:intro} The Fred Young Submillimeter Telescope (FYST) will be a 6-meter aperture crossed-Dragone telescope for the CCAT-prime Observatory at 5600~m elevation on Cerro Chajnantor in the Atacama Desert, Chile \cite{CCATscience,Parshley2022}. Prime-Cam, a 1.8-meter diameter cryogenic receiver, will be a first-generation science instrument for FYST, taking advantage of the high-efficiency telescope and high-elevation site to enable wide-field and deep mapping between 100 and 900 GHz (Fig.~\ref{fig:fystpcam}) \cite{EMVSPIE2018,ChoiPrimeCam}. FYST is currently under construction by CPI Vertex Antennentechnik GmbH in Germany, and will begin assembly at the Cerro Chajnantor site for first light in 2024. The Simons Observatory (SO)\cite{SO:2019} Large Aperture Telescope\cite{Xu2021} shares most aspects of the mechanical design and the same optical design as FYST, making it natural for the Prime-Cam and instrument module designs to evolve from the design of the SO Large Aperture Telescope Receiver\cite{Zhu2021} and modules. \begin{figure}[h!] \centering \includegraphics[width=1.0\linewidth]{fystpcam.png} \caption{Left: A cross section of the FYST model, revealing the 6 meter primary and secondary mirrors which focus light into the instrument space where Prime-Cam or Mod-Cam will be installed (Prime-Cam model shown in render). Right: A model of Prime-Cam with a possible instrument module configuration (Figure from Ref.~\citenum{ChoiPrimeCam}).} \label{fig:fystpcam} \end{figure} Prime-Cam's millimeter and sub-millimeter measurements will enable new science goals as well as overlap with surveys at other frequencies for synergistic analyses. Prime-Cam will house up to seven independently developed $\sim$41 cm diameter instrument modules, each with up to a 1.3$^{\circ}$ field of view, filling a total of 4.9$^{\circ}$ of FYST's 8$^{\circ}$ diameter field of view at 3 mm. When fully populated, Prime-Cam will field up to five broadband polarization-sensitive modules for observations between 220 and 850 GHz, and at least two imaging spectrometer modules for line intensity mapping from 210 to 420 GHz \cite{ChoiPrimeCam}. When populated with seven instrument modules, each deploying three microwave kinetic inductance detector (MKID) arrays, Prime-Cam will field a total of $>$100,000 detectors, more than in any deployment of broadband KIDs to date. Together, these modules will target science goals ranging from Big Bang cosmology through reionization and the formation of the first galaxies to energetic transients, galaxy cluster evolution via the Sunyaev-Zel'dovich (SZ) effects, and galactic polarization and star formation within the Milky Way \cite{CCATscience}. Prime-Cam is currently under construction by Redline Chambers for delivery to Cornell University for initial testing in 2022. Mod-Cam is a single instrument module cryogenic receiver for Prime-Cam and a first light instrument for FYST, currently in testing at Cornell University \cite{DuellSPIE}. The first 280~GHz MKID array for the CCAT-prime project is currently in testing at Cornell University \cite{Choi2021}, and Mod-Cam will deploy the 280~GHz first light kinetic inductance detector arrays within the first instrument module on FYST for early science observations in 2024. Once Prime-Cam is ready for deployment, Mod-Cam will serve as a module testbed for Prime-Cam at Cornell University. In Section \ref{sec:modcam}, we present the design and status of the Mod-Cam cryostat.
In Section \ref{sec:280module}, we discuss the design of the 280~GHz instrument module. We detail the 280~GHz detector array design and development status in Section \ref{sec:detectors}, and the readout system design in Section \ref{sec:readout}. We present results from initial tests in Section \ref{sec:initial}, and future plans in Section \ref{sec:future}. \section{Mod-Cam}\label{sec:modcam} Mod-Cam is a 0.9 m diameter, 1.8 m long single instrument module cryogenic receiver with 40~K and 4~K stages, currently in testing at Cornell University (Fig.~\ref{fig:modcaminlab}). Mod-Cam will be a first light and commissioning instrument for FYST, and then serve as a testbed for Prime-Cam \cite{EMVSPIE2018}, allowing for optical testing of instruments before they are deployed to the CCAT-prime site for installation in Prime-Cam. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{modcaminlab.png} \caption{Mod-Cam in testing at Cornell University. In this configuration, a Cryomech PT-420 is installed in addition to the Bluefors LD-400 DR. Mod-Cam sits on a custom Minitec frame. A wheeled frame supporting the main cylinder can be adjustably positioned relative to the DR frame, which can be leveled, raised or lowered using four leveling feet. Left: Thermometry is read out through the readout harness. Right: The side-car DR design is visible. The hexagonal window is blanked off with a piece of 6061-T6 Al for initial cryogenic testing.} \label{fig:modcaminlab} \end{figure} \subsection{Mechanical Design}\label{sec:modcammechanical} Mod-Cam enables easier swapping of instrument modules as compared to Prime-Cam due to its off-axis dilution refrigerator design and significantly faster turnaround times than the larger cryostat. Each instrument module tested or deployed in Mod-Cam will be optimized for a specific subset of the overall Prime-Cam science goals and be able to hold up to three detector arrays cooled to 100\,mK by a Bluefors LD-400 dilution refrigerator (DR) along with silicon lenses and filter stacks at 1\,K and 4\,K. The instrument modules are mounted to Mod-Cam's 4~K plate. The Mod-Cam cryostat and G10 tabs were fabricated by Precision Cryogenics. The design of the Mod-Cam cryostat portion which houses the instrument module (viewed in cross-section in Fig.~\ref{fig:modcamhalf}) was scaled down from the Prime-Cam cryostat design \cite{EMVSPIE2018}. The off-axis DR shell portion was designed to allow flexible access to both the rear of the cryostat for instrument module installation and removal as well as to the DR. The instrument modules, which can be 41 cm in diameter or smaller, are installed from the back of the cryostat and are cantilevered off of the 4\,K stage. Mod-Cam's 6061-T6 Al 300 K vacuum shell consists of 89 cm diameter front and rear shells, front and back plates, a two-piece DR shell, and a DR shell bottom plate (Fig.~\ref{fig:DRxsec}). The size of the optical axis cryostat and the two-piece DR shell were chosen to accommodate one 41 cm diameter optics tube and enough clearance for the Prime-Cam G10 tab design and thermal connections to the cryogenics. The thicknesses of the plates (2.54 cm) and shells (0.64 cm) were motivated by finite-element analysis (FEA) results for the Prime-Cam cryostat \cite{EMVSPIE2018}. The front vacuum plate holds a 44 cm hexagonal ultra-high-molecular-weight polyethylene (UHMWPE) vacuum window, identical to the Simons Observatory Large Aperture Telescope Receiver \cite{Zhu2021} and Prime-Cam designs. 
The vacuum windows are designed to be thick enough to withstand the atmospheric pressure at the site while being as thin as possible to achieve our desired sensitivity. For site operations, a 0.32 cm (1/8'') thick window, anti-reflection (AR) coated at Cardiff University, will be used. A 0.64 cm (1/4'') thick UHMWPE window will be used for laboratory testing to reduce the risk of vacuum window failure \cite{Zhu2021}. Behind the vacuum window lies a double-sided IR-blocking filter fabricated by Cardiff University to reduce loading \cite{Ade2006}. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{ModCamXsec.png} \caption{A cross-section view of the Mod-Cam model, showing the interior of the main cylinder of the cryostat which houses the instrument module. Section \ref{sec:modcammechanical} details the design of the 300 K vacuum shell and 40~K and 4~K stages of the cryostat, supported by a series of G-10 tabs and cooled by a Bluefors LD-400 dilution refrigerator and Cryomech PT-420. Light from the telescope passes through an AR-coated UHMWPE window and IR-blocking filters at 300 K and 40~K before entering the instrument module. The filters labeled ``80 K'' will be mounted on Prime-Cam's 80 K stage, but are mounted on the 40~K stage in Mod-Cam. The 4~K DR adapter thermal connection to the 4~K shell (purple) and DR cold finger connections to the instrument module cold fingers are visible, and shown in detail in Fig.~\ref{fig:DRxsec}. The design of the instrument module is presented in Sec.~\ref{sec:280module} and Fig.~\ref{fig:modulexsec}.} \label{fig:modcamhalf} \end{figure} Mod-Cam's 40~K stage consists of a 52 cm diameter, 0.64 cm thick front shell, a 65 cm diameter, 0.64 cm thick rear shell, a 1.27 cm thick front filter plate, a 0.32 cm thick back plate, a 1.27 cm thick G-10 mounting ring, a DR shield mounted to the 40~K DR plate, a DR shell to main cylinder adapter, and a DR shell bottom plate \cite{EMVthesis}. All stage components are fabricated of 6063-T5 Al (with the exception of the 6061-T6 Al DR shell bottom plate) for enhanced thermal conductivity and reduction of thermal gradients \cite{Scherer:2018}. The front filter plate holds a filter stack that will be held at 80 K in Prime-Cam, consisting of two double-sided IR-blocking filters fabricated by Cardiff University and one alumina filter. The alumina wedge filters act as IR absorbers and, for off-central tubes in Prime-Cam, as prisms to bend off-axis beams parallel to the long axis of the cryostat such that the instrument modules can all be coaxial with the cryostat shells \cite{dicker:2018}. While Mod-Cam will not see loads necessitating an 80 K plate, the 80 K filters are included at 40~K to test the full optical chain. An additional double-sided IR-blocking filter is mounted to the 40~K plate; it will likewise be mounted to the 40~K plate in Prime-Cam. The optical elements in Mod-Cam are module-specific and will be swapped out when testing other modules. They are also all compatible with Prime-Cam and will be installed in Prime-Cam along with the tested instrument module. Nine 0.24 cm thick, 16 cm by 16 cm G-10 tabs, epoxied into 6061-T6 Al feet with Armstrong A-12B PT epoxy by Precision Cryogenics, mechanically support the 40~K shells off the rear 300 K vacuum shell while thermally isolating the 40~K stage (Fig.~\ref{fig:modcamhalf}). Precision Cryogenics sourced the G-10 material from McMaster-Carr\footnote{\url{https://www.mcmaster.com/}}.
The design of the G-10 tabs follows from the FEA-motivated designs for the SO LATR \cite{Scherer:2018}. The rear 40~K shell is thermally connected to the 40~K stage of a Cryomech PT-420 via an oxygen-free high thermal conductivity (OFHC) tube and a set of custom compressed OFHC foil straps, as well as to the 40~K DR shell via the 40~K adapter and set of braided OFHC straps from TAI\footnote{\url{https://www.techapps.com/}} (Fig.~\ref{fig:DRxsec}). The rear shell also supports the 40~K stage of the readout harness (Figure \ref{fig:modulereadout}). The 40~K shells and plates are wrapped in 30 custom-cut layers of multi-layer insulation (MLI)\footnote{Beyond Gravity Austria GmbH, Stachegasse 13, 1120 Vienna}, and the G-10 tabs are wrapped in 10 layers of MLI to reduce radiative loading from the room temperature vacuum shell. Mod-Cam's 4~K stage consists of a 3.8 cm thick plate, a 57 cm diameter, 0.40 cm thick shell, a DR shell mounted to the 4~K plate of the DR, a DR shell to main cylinder adapter, and a DR shell bottom plate, all fabricated from 6061-T6 Al. The 4~K plate is mechanically supported and thermally isolated from the 40~K ring by a series of nine 21.4 cm x 15 cm, 0.24 cm thick G-10 tabs epoxied into 6061-T6 Al feet with Armstrong A-12B PT epoxy by Precision Cryogenics (Fig.~\ref{fig:modcamhalf}). The 4~K plate is the only mechanical mounting point for the instrument modules. The 4~K shell supports the 4~K stage of the readout harness (Figure \ref{fig:modulereadout}). The 4~K plate is thermally connected to the 4~K stage of a Cryomech PT-420 via a set of braided OFHC straps from TAI, and the 4~K DR adapter is also connected to the 4~K shell by two braided OFHC straps from TAI (Fig.~\ref{fig:DRxsec}). The 4~K shells, plates, and G-10 tabs are wrapped in 10 custom-cut layers of MLI to reduce radiative loading from the 40~K stage. \begin{figure}[t!] \centering \includegraphics[width=0.85\linewidth]{ModCamRearSec.PNG} \caption{A cutaway view of the Mod-Cam model to reveal the DR to main shell interfaces. 1~K and 100~mK cold fingers attach to the DR plates via OFHC copper blocks and extend into the main cylinder of Mod-Cam to attach to the OFHC 1~K and 100~mK instrument module cold fingers via braided OFHC copper straps from TAI. The 40~K DR adapter cylinder that runs between the 40~K DR shell and the 40~K rear shell is shown in teal. The adapter shell is connected to the 40~K rear shell via two braided OFHC copper straps from TAI (not visible). Similarly, the 4~K DR adapter cylinder that runs between the 4~K DR shell and the 4~K shell is shown in purple, and is connected to the 4~K shell via two braided OFHC copper straps from TAI (one shown).} \label{fig:DRxsec} \end{figure} A mock-up model of the Bluefors LD-400 300, 40, and 4\,K plates and offsets was designed and fabricated for construction of the Mod-Cam cryostat at Precision Cryogenics to ensure the alignment of the DR shells relative to the optical axis Mod-Cam shells. A two-part custom Minitec\footnote{\url{https://www.minitecframing.com}} frame allows for adjustable positioning of the main cylinder and DR shells (Fig.~\ref{fig:modcaminlab}). The side-mounted DR provides cooling to the 40 and 4\,K stages, and an optional Cryomech PT-420 (currently installed) or PT-410 pulse tube can provide cooling power at 40\,K and 4\,K, through custom OFHC braided straps from TAI or straps made in house (Sec.~\ref{sec:cryo}). 
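As a rough cross-check of the conducted heat loads implied by this support geometry, the following minimal sketch applies the one-dimensional Fourier conduction integral to the G-10 tab dimensions quoted above. The integrated thermal conductivities are assumed ballpark values typical of normal-direction G-10 fits, not numbers taken from the design, and the epoxied feet, MLI wrapping, and radiative contributions are ignored.
\begin{verbatim}
# A rough cross-check of the conductive load through the G-10 support
# tabs (Fourier conduction, Q = n * (A / L) * integral of k(T) dT).
# The integrated conductivities are assumed ballpark values for
# normal-direction G-10, not numbers from the design.

def tab_load(n_tabs, width_m, thick_m, length_m, k_int_W_per_m):
    area = width_m * thick_m              # conducting cross-section
    return n_tabs * (area / length_m) * k_int_W_per_m

# 300 K -> 40 K: nine 16 cm x 16 cm, 0.24 cm thick tabs.
q_40k = tab_load(9, 0.16, 0.0024, 0.16, 120.0)   # ~2.6 W

# 40 K -> 4 K: nine 21.4 cm x 15 cm, 0.24 cm thick tabs.
q_4k = tab_load(9, 0.15, 0.0024, 0.214, 6.0)     # ~0.09 W

print(f"G-10 load on 40 K stage: {q_40k:.2f} W")
print(f"G-10 load on 4 K stage:  {q_4k:.3f} W")
\end{verbatim}
Both numbers land somewhat below the ``support structure'' entries of Table~\ref{tab:modcamloadingestimates}, which also include the readout harness supports.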
Thermometry and RF signals are read out through a custom modular harness based on the SO Universal Readout Harness \cite{Moore2022,Rao2020} that is installed on the opposing side to the DR (Sec.~\ref{sec:readout}). The modularity of the harness allows for flexible and upgradable readout options, and its design leaves the rear of Mod-Cam relatively clear for instrument module installation and removal. \subsection{Cryogenic Design}\label{sec:cryo} \begin{table}[h!] \begin{center} \vspace{2mm} \begin{tabular}{ |c|c|c|c|c| } \hline \textbf{Stage} & \textbf{40~K [W]} &\textbf{4~K [W]} & \textbf{1~K [mW]} & \textbf{100~mK [$\mu$W]}\\ \hline Shell radiation & 9.4 & 0.07 & 0.002 & 0.2\\ Support structure & 3.6 & 0.12 & 0.755 & 16.1\\ Wiring & 5.4 & 0.29 & 0.121 & 3.1\\ Beam radiation & 11.8 & 0.03 & 0.029 & 31.2 \\ \hline Total heat load & 28.4 & 0.51 & 0.907 & 50.6 \\ \hline Available cooling power & 110 & 4 & 24 & 400 \\ \hline \end{tabular} \vspace{2mm} \caption{Loading estimates for each stage of Mod-Cam. The cooling power at 40~K and 4~K is supplied by the PT-420 in the Bluefors LD-400 and the backup Cryomech PT-420, and the cooling power at 1 and 0.1~K is supplied by the DR still and mixing chamber stages respectively. Our estimated cooling power is more than sufficient to meet our estimated needs at all stages for an SO-style instrument module.} \label{tab:modcamloadingestimates} \end{center} \end{table} The cooling requirements for Mod-Cam are driven by the 100~mK operation of our MKID detectors (Sec.~\ref{sec:detectors}), which are mounted on a stage thermally connected to the 100~mK plate of the LD-400 DR via an instrument module cold finger, flexible TAI straps, DR cold finger and mounting block (Fig.~\ref{fig:DRxsec}). The instrument module 1~K stage is connected to the 1~K DR plate through an analogous chain (Fig.~\ref{fig:DRxsec}). The Mod-Cam shells at 40 and 4~K and the 4~K instrument module stage are cooled by the PT-420 backing up the LD-400 DR as well as by an additional Cryomech PT-420. In considering the thermal loads for Prime-Cam and Mod-Cam, we scaled from the SO LATR thermal model \cite{Zhu2021}. This thermal model combines material properties and radiation estimates with custom Python estimates of the optical filter elements. The thermal loading estimates for Mod-Cam are presented in Table \ref{tab:modcamloadingestimates}. The thicknesses and materials of the Mod-Cam shells were determined based on estimated loading and temperature gradients, and the room-temperature mechanical offsets were designed based on anticipated contractions during cooling \cite{EMVthesis}. \section{280~GHz Instrument Module}\label{sec:280module} The first instrument module for Mod-Cam and Prime-Cam will be the 280~GHz module, which will contain the first light MKID arrays for the CCAT-prime Project, as described in Section \ref{sec:detectors} and Ref.~\citenum{DuellSPIE}. The instrument module design is based on the optics tube designs for the Simons Observatory LATR \cite{Xu2020,Zhu2021,gallardo:2018,Gudmundsson_2021}, and like all the modules planned for Prime-Cam, is a self-contained assembly of filters, lenses, and detector arrays. The module will be mounted on the 4\,K plate of Mod-Cam (and compatible with Prime-Cam) (Sec.~\ref{sec:modcam}). Each module is approximately 41 cm in diameter and 130 cm long, and mounts through the rear of Mod-Cam (Fig.~\ref{fig:modcamhalf}). \subsection{Cold Optics}\label{sec:moduleoptics} The design of the 280~GHz module is shown in Fig.~\ref{fig:modulexsec}. 
Light entering Mod-Cam is filtered through the AR-coated UHMWPE window, 300 K IR-blocking filter, 80 K IR-blocking filters and alumina wedge (located on the 40~K stage in Mod-Cam), and 40~K IR-blocking filter to reduce the loading on the colder stages (as described in Sec.~\ref{sec:modcam}) before entering the instrument module. At 4~K, two low-pass edge (LPE) filters (capacitive mesh filters on polypropylene substrates\cite{Ade2006}) manufactured at Cardiff University are mounted before the first lens of the module. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{ModuleCrossSection.png} \caption{A section view of the 280~GHz instrument module model, showing the 4~K, 1~K and 100~mK stages holding three silicon lenses and LPE filters. The details of the design are presented in Sec.~\ref{sec:280module}. Metamaterial tiles on the upper 4~K tube (Fig.~\ref{fig:baffles}) and 1~K Lyot stop serve as antireflection coatings to absorb stray light, as do the 1~K baffles and shell coated in carbon-loaded epoxy (Fig.~\ref{fig:baffles}). The detector array packages detailed in Sec.~\ref{sec:detectors} are shown, offset from the 1~K stage by a carbon fiber truss. The 100~mK to 4~K readout detailed in Sec.~\ref{sec:readout} is shown in the back along with the 1~K and 100~mK cold fingers which thermally connect to the cold stages of the DR inside Mod-Cam (Fig.~\ref{fig:DRxsec}).} \label{fig:modulexsec} \end{figure} The cold optics chain continues at 4~K with the first of three metamaterial AR-coated silicon lenses which re-image the telescope focal plane onto the detector arrays. The optical design for the 280~GHz module is adopted from the SO LATR designs \cite{dicker:2018,Zhu2021}. Silicon is the preferred lens material at Prime-Cam's desired wavelengths due to its high resistivity, extremely low loss, high thermal conductivity (ensuring lens temperature uniformity and limiting detector background loading), and high index of refraction \cite{dicker:2018,Zhu2021,EMVSPIE2018}. Mechanically robust metamaterial antireflection lens coatings applied with a custom CNC machine produce less than 1$\%$ reflection across an octave of bandwidth \cite{Golec2020,Coughlin2018,datta2013}. The second and third lenses are cooled to 1~K. One additional LPE is mounted before the second lens at 1~K and one before the feedhorn arrays at 100~mK (Fig.~\ref{fig:modulexsec}). To mitigate stray light inside the module, the injection molded, carbon-loaded plastic metamaterial tile coating design developed for the SO LATR modules was adopted \cite{Xu2021,Gudmundsson_2021,Zhu2021}. Approximately 240 wedge-shaped metamaterial tiles are installed in the upper 4~K section of the instrument module to absorb stray light before the 1~K optics (Fig.~\ref{fig:baffles}, left). The 1~K Lyot stop at the module's pupil is also coated in flat versions of these metamaterial absorbing tiles. After the stop, a series of 1~K ring baffles and the surrounding 1~K shield are coated with Stycast 2850 FT, loaded with coarse and fine carbon powder (Fig.~\ref{fig:baffles}, center). This stage of blackening is less critical for the optical design because of the larger radial beam clearance and position in the module \cite{Zhu2021}. The final lens then focuses the light onto the detector feedhorn arrays (Sec.~\ref{sec:detectors}). 
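To illustrate why the antireflection treatment is essential, the following minimal sketch evaluates the normal-incidence Fresnel reflectance of a bare silicon surface; the index $n \approx 3.4$ is a typical value for high-resistivity silicon at millimeter wavelengths and is an assumption here, not a number quoted above.
\begin{verbatim}
# Normal-incidence Fresnel reflectance of bare silicon, to motivate
# the <1% reflection achieved by the metamaterial AR coating.
# n ~ 3.4 is an assumed typical value for high-resistivity Si at
# millimeter wavelengths (not quoted in the text).

n_si = 3.4
r_surf = ((n_si - 1) / (n_si + 1)) ** 2     # per-surface reflectance
t_lens = (1 - r_surf) ** 2                  # two surfaces, loss ignored

print(f"Bare-surface reflectance: {r_surf:.1%}")      # ~29.8%
print(f"Uncoated-lens transmission: {t_lens:.1%}")    # ~49%
\end{verbatim}
An uncoated lens would thus reflect roughly 30$\%$ per surface, compared to the sub-1$\%$ reflection of the coated optics.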
\begin{figure} \centering \includegraphics[width=1.0\linewidth]{blackening.png} \vspace{2mm} \caption{Left: The $\sim$240 metamaterial injection-molded plastic tiles\cite{Xu2021,Zhu2021} mounted to the upper 4~K shell of the 280~GHz module to absorb stray light. Center: 1~K baffles installed in lower 1~K tube and coated in carbon-loaded epoxy. Right: 1~K to 100~mK carbon fiber truss.} \label{fig:baffles} \end{figure} \subsection{Mechanical Design}\label{sec:modulemech} The 280~GHz instrument module, which contains 4~K, 1~K, and 100~mK stages, is mechanically supported by the 4~K mounting flange on the 4~K plate of Mod-Cam. The Prime-Cam 4~K plate is designed to mount up to seven of the same instrument modules with 4~K flanges. The LPE filters are mounted in 6061-T6 Al clamps axially spring-loaded with Spira\footnote{\url{www.spira-emi.com}} gaskets to reduce the likelihood of delamination via thermal contraction\cite{Zhu2021}. Each lens clamp also supports the lens with axial and radial Spira gaskets. The lens clamps were modified from the SO LATR tube design to include a finger hole and block for ease of assembly when installing the silicon lenses. The 4~K components of the module include the welded 6061-T6 Al upper tube which is covered with 4~K metamaterial wedge-shaped tiles (Fig.~\ref{fig:baffles}, left) and the welded 6061-T6 Al lower tube which includes the 4~K mounting flange and mounting point for the 4~K magnetic shielding. The A4K\footnote{Amuneal 4K material, \url{www.amuneal.com/}} magnetic shielding for the module was developed from the SO LATR design to accommodate the 280~GHz MKID readout design (Sec.~\ref{sec:readout}), and extends along the interior of the lower 4~K tube past the second lens, as well as around the outside of the rear of the module. Laboratory testing of superconducting detectors and readout components like the TESes and SQUIDs used for SO motivated the design of this shield\cite{VavagiakisASC2021}. MKIDs are anticipated to be less sensitive to magnetic fields than TESes or SQUIDs, but initial magnetic sensitivity testing has illustrated the importance of shielding MKID arrays\cite{Choi2021}. The 1~K stage of the instrument module is mechanically supported and thermally isolated from the 4~K shells by a 38.54 cm diameter, 0.3 cm thick carbon fiber tube from DragonPlate/Allred and Associates\footnote{www.DragonPlate.com}, epoxied into 6061-T6 Al rings in an alignment jig with Scotchweld 2216 cured at room temperature. The welded 6061-T6 Al upper 1~K tube supports the first LPE filter, second lens, and 1~K Lyot stop, which is coated with flat plastic metamaterial antireflection coating tiles (Sec.~\ref{sec:moduleoptics}). The welded 6061-T6 Al lower tube supports a series of blackened baffles (Fig.~\ref{fig:baffles}, center) and the third lens. A carbon fiber truss supports the 100~mK stage off of the rear of the 1~K stage (Fig.~\ref{fig:modulexsec},~\ref{fig:baffles} right). This truss will be composed of 4 mm outer diameter, 3 mm inner diameter carbon fiber rods made with a pultrusion process from vDijk Pultrusion Products\footnote{vDijk Pultrusion Products, Aphroditestraat 24, NL-5047 TW TILBURG, The Netherlands}. The design of the truss is based on the SO LATR tube truss, but is modified to minimize off-axis loading of the carbon fiber tubes by incorporating updated strut end cap designs from the SO Small Aperture Telescopes\cite{Crowley_2022}. 
The carbon fiber rods are epoxied into 6061-T6 Al feet with Scotch-Weld DP2216 following the procedure in Crowley et al.\ 2022\cite{Crowley_2022}. FEA performed on this new design predicts that this truss will support at least five times the expected operating load, and load testing of individual epoxied pultruded carbon fiber struts sourced from Aopin confirms the FEA predictions, with results showing that each strut can support at least $10\times$ its expected load. SolidWorks modal analysis predicts that the lowest vibrational mode for this truss design will be above 200 Hz. The 100~mK stage supports the feedhorn and detector array packages described in Sec.~\ref{sec:detectors}. The 100~mK stage is surrounded by the 1~K radiation shield. The 1~K radiation shield and 4~K magnetic shielding support the module readout components described in Sec.~\ref{sec:readout}. The thermometry plan for Mod-Cam involves 18 temperature sensors at important thermal interfaces, at locations on plates and shells to probe potential gradients, and within the instrument modules at each temperature stage \cite{EMVthesis}. The number of sensors planned at each temperature stage is presented in Table \ref{tab:thermotable}. Cernox\footnote{\url{shop.lakeshore.com/temperature-products/temperature-sensors/cernox.html}} 1080 thin film resistance cryogenic temperature sensors are selected for the 40\,K stage, Cernox 1050 for the 4\,K stage, Cernox 1030 for the 1\,K stage, and ruthenium oxide sensors (ROXs) for the 100\,mK stage. LEMO connectors are used for four-lead sensor measurements, which are read out using Lakeshore resistance bridges. Custom cables from Universal Cryo, Inc.\footnote{\url{www.ucryo.com}} have been ordered for optical testing and deployment. \begin{table} \begin{center} \begin{tabular}{ |c|c|c|c|c|c| } \hline Stages & 40\,K & 4\,K & 1\,K & 100\,mK & Total\\ \hline Thermometers & 6 & 7 & 2 & 3 & 18\\ \hline \end{tabular} \vspace{2mm} \caption{Planned number of thermometers for each stage in Mod-Cam. Six 40\,K thermometers will measure temperature at the 40\,K PT stage, 40\,K DR adapter, and across the 40\,K shells and plates. Seven 4\,K thermometers will measure temperature at the 4\,K PT stage, 4\,K DR adapter, 4\,K instrument module components, and across the 4\,K shells and plates. Two 1\,K and three 100\,mK thermometers will measure temperatures in the instrument module. Cernox sensors will be used at 1\,K and above, while ROXs will be used at 100\,mK.} \label{tab:thermotable} \end{center} \end{table} \section{280~GHz Detector Arrays}\label{sec:detectors} In order to meet the desired instrument sensitivity and required detector densities, all of the currently planned instrument modules for Prime-Cam will use microwave kinetic inductance detectors (MKIDs). Signals are measured by coupling incident photons to a superconducting inductive element of an LC resonator and measuring the shift in kinetic inductance caused by Cooper-pair breaking. By combining the absorbing and inductive elements, MKIDs are naturally frequency multiplexed, and have greatly reduced readout complexity in comparison to that required for similarly sensitive transition edge sensors. \begin{figure}[ht!] \centering \includegraphics[width=0.8\linewidth]{modulereadoutv2.PNG} \caption{Partial cutaway view of the cryogenic readout for Mod-Cam, including the readout harness, 280~GHz instrument module, and cold straps by TAI. For simplicity, coaxial cables and some transitional elements are not shown.
Flexible RF stripline carries both input and output signals for all 18 networks between 300 K and 4~K, where they are transitioned to a combination of flexible and semi-rigid coaxial cables down to the three arrays at 100~mK. 18 low noise amplifiers are mounted and heatsunk on the rear face of the 4~K magnetic shield.} \label{fig:modulereadout} \end{figure} Within the first-light 280~GHz module, there will be more than 10,000 feedhorn-coupled, polarization-sensitive MKIDs divided between three tiled array packages with two unique designs. These include a first array of TiN detectors coupled to aluminum feedhorns and a further two arrays of Al detectors coupled to silicon feedhorns. The first array has been described previously in Ref.~\citenum{DuellSPIE}. It contains 3,456 total detectors (1,728 pixels), of which 3,450 are optically coupled. The array was fabricated by the Quantum Sensors Group at the National Institute for Standards and Technology, drawing on previous work done for BLAST-TNG\cite{dober_optical_2016, Galitzki2016} and TolTEC\cite{austermann_millimeter-wave_2018, austermann_large_2018}, while the aluminum feedhorns were machined at ASU. Two additional arrays using updated Al MKID designs are currently being fabricated and tested, along with Si-platelet feedhorn arrays, also by the Quantum Sensors Group at NIST\cite{AustermannSPIE22}. Each of these arrays will have 3,448 total detectors (1,724 pixels), of which 3,418 are optically coupled. This change from TiN to Al detectors was driven by dark testing results demonstrating reduced low frequency spectral noise\cite{AustermannSPIE22}. The array packaging differs slightly between both designs to accommodate the different alignment and heat-sinking requirements of the two feedhorn types, though overall pixel spacing and placement is the same. One advantage of this mixed design is the opportunity to sample the same sky using two sets of detectors with unique noise properties, though this does add complexity to data analysis at these frequencies. \section{Readout System}\label{sec:readout} The 280~GHz module cryogenic readout design is the first for the Prime-Cam module development effort, and enables the readout of more than 10,000 KIDs ($\sim$3,500 per array) across 18 networks. Each individual 280~GHz array is split into 6 networks with either 576 or 572 resonators placed at frequencies roughly between 500 MHz and 1 GHz to be measured over a single RF feedline. The readout of a fully populated 280~GHz module with three arrays will require 18 pairs of RF feedlines and accompanying low noise amplifiers (LNAs). The room temperature microwave frequency multiplexed readout system for Mod-Cam and Prime-Cam is currently in development, and is designed to run on the Xilinx ZCU111 Radio Frequency System on a Chip (RFSoC)\cite{Sinclair2022}. In both Mod-Cam and Prime-Cam, the cryogenic readout is broken up between a shared readout harness spanning 300 K to 4~K, individual instrument modules with stages from 4~K to 100~mK, and an isothermal 4~K transition to connect the two. While the instrument modules are fully shared between both receivers, the readout harness and isothermal transitions are modified in Mod-Cam to accommodate its unique layout and purpose as a flexible testbed. The readout design for the module is shown in Fig.~\ref{fig:modulereadout}. 
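The detector and network counts above imply the following bookkeeping for the readout, sketched below under the simplifying assumption that the $\sim$500~MHz of usable bandwidth per feedline is shared roughly uniformly (actual resonator placement is engineered to avoid collisions and is not uniform).
\begin{verbatim}
# Bookkeeping for the 280 GHz readout: detector totals and the mean
# tone spacing per RF network, assuming (as an idealization) uniform
# placement across the ~500 MHz of usable bandwidth per feedline.

tin_array = 3456                 # first-light TiN array, 6 networks
al_array = 3448                  # each Al array, 6 networks
total_kids = tin_array + 2 * al_array
networks = 3 * 6

band_hz = 1.0e9 - 0.5e9          # tones roughly between 0.5 and 1 GHz
spacing_khz = band_hz / 576 / 1e3  # densest network has 576 resonators

print(f"Total KIDs: {total_kids} across {networks} networks")  # 10352, 18
print(f"Mean tone spacing: {spacing_khz:.0f} kHz")             # ~868 kHz
\end{verbatim}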
The readout from the 100~mK arrays to 4~K relies on a combination of semi-rigid and hand-formable coaxial cables, with the hand-formable cables being used at isothermal stretches to reduce complexity during installation. Attenuation is included at each stage on the input side to reach the desired tone power and noise temperature, and low-loss superconducting cables carry the output signal across all temperature stages between the array and low noise amplification at 4~K. Coaxial cables running from the focal plane arrays are heat sunk at 1~K on the 1~K radiation shield, as well as at 4~K on the magnetic shield where all LNAs are located. After being routed through the magnetic shield, coaxial cables are surrounded by slotted A4K covers to complete the magnetic shielding. PCBs for breakout of LNA bias lines are also located at 4~K. From 4~K to 300 K, an RF stripline design based on those used for ALPACA\cite{VishwasSPIE22} runs roughly 18 inches of flexible RF stripline through a readout harness with mechanical designs based on the Universal Readout Harness for the Simons Observatory\cite{Moore2022,Rao2020} (Fig.~\ref{fig:modulereadout}). Each flexible board holds 6 RF feedlines with custom SMP connectors on both ends. These SMP connectors then mate to a transition board that switches all lines to SMA connectors. The readout harness design shown, which is specific to Mod-Cam, sacrifices some efficiency in stripline density to allow for greater modularity when testing modules with alternative readout requirements or additional DC line requirements. Not shown in detail is coaxial cable routing required for transitioning between the readout harness and instrument module. This will require the most substantial modification between Mod-Cam and Prime-Cam, and, as such, is still being finalized. \section{Initial Tests}\label{sec:initial} \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{ModCamCooldown.png} \caption{A cooldown plot from Grafana for a dark test of the Mod-Cam receiver in which no instrument module is installed and the window plate, 40~K and 4~K plates are blanked off. A subset of the thermometers installed are plotted for clarity, including thermometers on the 40~K, 4~K, 1~K and 100~mK DR stages and the 40~K and 4~K thermometers on the Mod-Cam shells farthest from cryogenic thermal connections. Base temperature for the DR cold stages at 1~K and 100~mK is reached in roughly 8 hours after the DR is turned on, and the warmer stages reach base in approximately 2.5 days after the two PT-420s are turned on.} \label{fig:cooldown} \end{figure} Dark testing of the Mod-Cam cryogenic receiver without an instrument module installed has been performed. An initial DR-only cooldown led to a slower cooldown rate than anticipated, so we added a PT-410, and later a PT-420 for comparison. The performance of the DR was tested in a configuration in which a PT-410 was coupled to the 40 and 4\,K main shells of Mod-Cam, and the 40 and 4\,K DR adapters were uncoupled from the main shells such that the DR was not cooling the main shells. In this configuration, the DR mixing chamber achieved over 400 $\mu$W of cooling power at 100\,mK, which is more than sufficient for one LATR-style instrument module \cite{Xu2020}. 
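A simple tabulation of the margins implied by the loading estimates of Table~\ref{tab:modcamloadingestimates} against the available cooling power (including the measured $>$400~$\mu$W at 100~mK) is sketched below; the numbers are copied directly from the table.
\begin{verbatim}
# Margin factors implied by Table 1: estimated total load vs.
# available cooling power at each stage (units differ per stage).

stages = {                       # stage: (load, available, unit)
    "40 K":   (28.4,  110.0, "W"),
    "4 K":    (0.51,  4.0,   "W"),
    "1 K":    (0.907, 24.0,  "mW"),
    "100 mK": (50.6,  400.0, "uW"),
}

for name, (load, avail, unit) in stages.items():
    print(f"{name:>6}: {load} {unit} load vs {avail} {unit} "
          f"available (margin {avail / load:.1f}x)")
\end{verbatim}
All stages carry a margin factor of roughly 4 or better.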
Live monitoring of Mod-Cam's pressures and temperatures during cooldowns is achieved using Grafana\footnote{\url{https://grafana.com}}, an open source web application for visualization of time series data, via the Observatory Control System (OCS) used by the Simons Observatory\cite{Koopman2020}. A plot of a test cooldown from Grafana is shown in Fig.~\ref{fig:cooldown} for a dark test configuration in which no instrument is installed, the window is blanked off (Fig.~\ref{fig:modcaminlab}), and the 40 and 4~K Mod-Cam plates are covered. In this configuration, base temperature is reached approximately 3.5 days after the two PT-420s are turned on. The DR cold stages at 1~K and 100~mK reach base temperature in roughly 8 hours after the DR is turned on. Lab characterization of the aluminum feedhorn-coupled 280~GHz TiN polarimetric MKID array is ongoing at Cornell\cite{Choi2021}. An LED mapper array PCB has been developed to map MKID resonator frequencies to the physical array positions, in order to lithographically trim the interdigitated capacitors and remove frequency collisions \cite{Liu2017}. \section{Status and Future Plans}\label{sec:future} The 280~GHz module is currently under construction at Cornell University and will shortly be dark tested in Mod-Cam for the first time to test base temperatures and thermal gradients present in the module without optical loading, as well as robustness of the module's epoxied components and coatings after thermal cycling. Modal analysis of the Mod-Cam cryostat is ongoing, and vibration tests with a Buttkicker Mini Concert haptic transducer\footnote{\url{https://thebuttkicker.com/buttkicker-mini-concert}} are planned following the method outlined in Ref.~\citenum{Zhu2021} to test the heating of the cold stages from pickup of anticipated low-frequency modes that will be present in the vibrational environment on FYST. Characterization of the first Al feedhorn-coupled 280~GHz TiN MKID array is ongoing, and two more 280~GHz MKID arrays using aluminum inductors and silicon feedhorns are being fabricated and tested at NIST. Full characterization of the 280~GHz arrays will be presented in a future publication. Optical testing of the first MKID arrays in the 280~GHz module will follow dark testing, with the goal of early science observations on FYST in 2024. \section{Conclusion} The Mod-Cam cryogenic receiver will be a first light and commissioning instrument for the Fred Young Submillimeter Telescope (FYST), deploying the first light MKID arrays at 280~GHz for the CCAT-prime Project. Mod-Cam will also serve as the instrument module testbed for Prime-Cam, a first-generation science instrument for FYST that will perform unprecedented 280--850 GHz broadband and spectroscopic measurements with KIDs. The 0.9 m diameter, 1.8 m long Mod-Cam receiver with 40~K and 4~K stages is currently in testing at Cornell University, and the 41 cm diameter 280~GHz instrument module with cold stages at 40~K, 4~K, 1~K, and 100~mK, is currently under construction for initial cryogenic testing. The first 280~GHz MKID array is currently in testing in the lab, and the readout of more than 10,000 KIDs across 18 networks has been designed. Mod-Cam will be installed on FYST for early science observations in 2024. \section{Acknowledgements} The CCAT-prime Project, FYST and Prime-Cam instrument have been supported by generous contributions from the Fred M. Young, Jr. 
Charitable Trust, Cornell University, and the Canada Foundation for Innovation and the Provinces of Ontario, Alberta, and British Columbia. The construction of the FYST telescope was supported by the Gro{\ss}ger{\"a}te-Programm of the German Science Foundation (Deutsche Forschungsgemeinschaft, DFG) under grant INST 216/733-1 FUGG, as well as funding from Universit{\"a}t zu K{\"o}ln, Universit{\"a}t Bonn and the Max Planck Institut f{\"u}r Astrophysik, Garching. The construction of EoR-Spec is supported by NSF grant AST-2009767. The construction of the 350 GHz instrument module for Prime-Cam is supported by NSF grant AST-2117631. MDN acknowledges support from NSF grant AST-2117631. SKC acknowledges support from NSF award AST2001866. ZBH acknowledges support from a NASA Space Technology Graduate Research Opportunities Award. ZX is supported by the Gordon and Betty Moore Foundation through grant GBMF5215 to the Massachusetts Institute of Technology.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Intermittent-type behaviour has been observed in a wide range of experimental and numerical studies of dynamical systems. Theoretical attempts at understanding such modes of behaviour fall into two groups: (i) stochastic, involving models in which intermittency is brought about through the presence of some form of external noise and (ii) deterministic, where the mechanism of production of intermittency is purely internal. Here we concentrate on the latter and in particular on an important subset of such mechanisms referred to as ``crisis intermittency'' \cite{grebogietal82,grebogietal87}, whereby attractors underlying the dynamics change suddenly as a system parameter is varied. There is both experimental and numerical evidence for such modes of behaviour (see for example \cite{dittoetal89,grebogietal87,karakotsu96,ottbook93} and references therein). As far as their detailed underlying mechanism and temporal signature are concerned, crises come in three varieties \cite{grebogietal87}. Of particular interest for our discussion here is the type of intermittency (which can occur in systems with symmetry) referred to as ``attractor merging crisis'', whereby as a system parameter is varied, two or more chaotic attractors merge to form a single attractor. \\ An important potential domain of relevance of dynamical intermittency is in understanding the mechanism of production of the so called ``grand or Maunder type minima'' in solar and stellar activity, during which the amplitude of the stellar cycle is greatly diminished \cite{weiss}. Many attempts have recently been made to account for such a behaviour by employing various classes of models, including truncated models involving ordinary differential equations (ODE) (cf.\ Weiss et al.\ \cite{cwj84}, Zeldovich et al.\ \cite{zeldovich83}, Spiegel \cite{spiegel}) as well as axisymmetric mean field dynamo models based on partial differential equations (PDE), in both spherical shell \cite{offdynamo,tobias,tt} and torus \cite{brookemoss} topologies. In order to transcend phenomenological explanations and establish the underlying mechanism for such behaviour\footnote{or behaviours, since after all more than one intermittency mechanism may occur even in a single model but at different system parameters.}, it is of vital importance to be able to distinguish between the various intermittency mechanisms and this in turn is greatly assisted by determining the forms of intermittency that can occur for stellar dynamo models. Here we consider a truncation of an axisymmetric mean field dynamo model and demonstrate that it can possess crisis-induced intermittency. To begin with, we find that the system possesses multiple attractors (including two chaotic ones) with fractal basin boundaries, over a wide range of control parameters. We also find parameter intervals over which the system has fractal parameter dependence for fixed initial conditions. Such fractal structures can give rise to a form of fragility (final state sensitivity), whereby small changes in the initial state or the control parameters of the system can result in a different final outcome. We find parameter regions where, as the control parameter is varied, the chaotic attractors merge into one attractor, thus resulting in crisis-induced intermittency. We verify this by investigating the phase space of the system and calculating the scaling exponent put forward by Grebogi et al. \cite{grebogietal87}.
As far as we are aware, this is the first example of such behaviour in a dynamo model as well as in a 6--dimensional flow. \\ The structure of the paper is as follows. In section 2 we briefly introduce the model. Section 3 summarizes our results demonstrating the presence of crisis in this model and finally section 4 contains our conclusions. \section{The model} The dynamo model considered here is the so called $\alpha \omega$ mean field dynamo model with a dynamic $\alpha$--effect given by Schmalz \& Stix \cite{schmalzetal91} (see also Covas et al.\ \cite{covasetal96} for details). We assume a spherical axisymmetric configuration with one spatial dimension $x$ (measured in terms of the stellar radius $R$) for which the magnetic field takes the form \begin{equation}\label{Bspherical} \vec{B}=\left(0,B_{\phi},\frac{1}{R}\frac{\partial A_{\phi}}{\partial x}\right), \end{equation} where $A_\phi$ is the $\phi$--component (latitudinal) of the magnetic vector potential and $B_\phi$ is the $\phi$--component of $\vec{B}$. The model is made up of two ingredients: \begin{description} \item[(I)] the mean field induction equation \begin{equation}\label{induction} \frac{\partial\vec{B}}{\partial t}=\nabla\,\times\,(\vec{v}\,\times\,\vec{B}+ \alpha\vec{B}-\eta_t\nabla\,\times\,\vec{B}), \end{equation} where $\vec{B}$ is the mean magnetic field, $\vec{v}$ is the mean velocity, $\eta_t$ is the turbulent magnetic diffusivity and $\alpha$ represents the $\alpha$--effect. \\ \item[(II)] the $\alpha$--effect, which arises from the correlation of small scale turbulent velocity and magnetic fields \cite{krause80} and is important in maintaining the dynamo action by relating the mean electrical current arising in helical turbulence to the mean magnetic field. Here $\alpha$ is assumed to be dynamic and expressible in the form $\alpha=\alpha_0\cos x-\alpha_M(t)$, where $\alpha_0$ is a constant and $\alpha_M$ is its dynamic part satisfying the equation \begin{equation}\label{dynamicalpha} \frac{\partial \alpha_M}{\partial t}= \nu_t \frac{\partial^2 \alpha_M}{\partial x^2} + Q\,\vec{J}\cdot\vec{B}, \end{equation} where $Q$ is a physical constant, $\vec{J}$ is the electrical current and $\nu_t$ is the turbulent diffusivity. \end{description} These assumptions allow Eq.\ (\ref{induction}) to be split into the following two equations: \begin{eqnarray} \label{p1} \frac{\partial A_{\phi}}{\partial t}&=&\frac{\eta_t}{R^2} \frac{\partial^2A_{\phi}}{\partial x^2}+\alpha B_{\phi},\\ \label{p2} \frac{\partial B_{\phi}}{\partial t}&=& \frac{\eta_t}{R^2}\frac{\partial^2 B_{\phi}} {\partial x^2}+\frac{\omega_0}{R}\frac{\partial A_{\phi}}{\partial x}. \end{eqnarray} Expressing these equations in a non-dimensional form, relabelling the new variables thus \begin{equation} (A_\phi,~B_\phi,~ \alpha_M) \Longrightarrow (A,~B,~C), \end{equation} and using a spectral expansion of the form \begin{eqnarray} A=\sum_{n=1}^{N}A_n(t)\sin nx,\\ B=\sum_{n=1}^{N}B_n(t)\sin nx,\\ C=\sum_{n=1}^{N}C_n(t)\sin nx, \end{eqnarray} where $N$ determines the truncation order, reduces the equations (\ref{dynamicalpha}), (\ref{p1}) and (\ref{p2}) to a set of ODEs, the dimension of which depends on the truncation order $N$. In Covas et al.\ \cite{covasetal96}, the models were taken to be antisymmetric with respect to the equator and it was found that the minimum truncation order $N$ for which a similar asymptotic behaviour existed was $N=4$.
Here in view of computational costs, we take this value of $N$ for which the set of truncated equations becomes: \begin{eqnarray} \label{tr1} \frac{\partial A_1}{\partial t}&=& -A_{{1}}+{\frac {D B_{{2}}}{2}}-{\frac {32\,B_{{2}}C_{{2}}}{15\,\pi }}+{\frac {64\,B_{{2}}C_{{4}}}{105\,\pi }}+{\frac {64\,B_{{4}}C_{{2 }}}{105\,\pi }}-{\frac {128\,B_{{4}}C_{{4}}}{63\,\pi }}\\ \label{tr2} \frac{\partial B_2}{\partial t}&=& -4\,B_{{2}}+{\frac {8\,A_{{1}}}{3\,\pi }}-{\frac {24\,A_{{3}}}{5\, \pi }}\\ \label{tr3} \frac{\partial C_2}{\partial t}&=& -4\,\nu\,C_{{2}}+{\frac {16\,A_{{1}}B_{{2}}}{5\,\pi }}-{\frac {32\, A_{{1}}B_{{4}}}{7\,\pi }}+{\frac {144\,A_{{3}}B_{{2}}}{7\,\pi }}+{ \frac {416\,A_{{3}}B_{{4}}}{15\,\pi }}\\ \label{tr4} \frac{\partial A_3}{\partial t}&=& -9\,A_{{3}}+{\frac {D B_{{2}}}{2}}+{\frac {D B_{{4}}}{2}}-{\frac {32 \,B_{{2}}C_{{2}}}{21\,\pi }}-{\frac {64\,B_{{2}}C_{{4}}}{45\,\pi }} -{\frac {64\,B_{{4}}C_{{2}}}{45\,\pi }}-{\frac {128\,B_{{4}}C_{{4}} }{165\,\pi }}\\ \label{tr5} \frac{\partial B_4}{\partial t}&=& -16\,B_{{4}}+{\frac {16\,A_{{1}}}{15\,\pi }}+{\frac {48\,A_{{3}}}{7 \,\pi }}\\ \label{tr6} \frac{\partial C_4}{\partial t}&=& -16\,\nu\,C_{{4}}+{\frac {96\,A_{{1}}B_{{2}}}{35\,\pi }}+{\frac {64 \,A_{{1}}B_{{4}}}{21\,\pi }}+{\frac {32\,A_{{3}}B_{{2}}}{3\,\pi }}+ {\frac {576\,A_{{3}}B_{{4}}}{55\,\pi }}, \end{eqnarray} where $D$ is the control parameter, the so called dynamo number, and $\nu=\frac{\nu_t}{\eta_t}$ which for compatibility with \cite{covasetal96,schmalzetal91} we take to be $\nu=0.5$. Clearly the details of the resulting dynamics will depend on the truncation order chosen. For example, the $N=2$ case is expressible as the 3--dimensional Lorenz system and the higher truncations can have different quantitative types of behaviour. The important point, as far as our discussion here is concerned, is that the multi-attractor regime discussed here seems to be present as the order of truncation is increased. In this way such a behaviour might be of potential relevance in understanding some of the intermittent behaviour in the output of the Sun and other stars. \section{Crisis-induced intermittency} A coarse study of the system (\ref{tr1}) -- (\ref{tr6}) and higher truncations was reported in \cite{covasetal96} from a different point of view. Here we demonstrate the occurrence of crisis-induced intermittency in this system by considering the detailed nature of its attractors, their basins and especially their metamorphoses (merging), while treating $D$ as the control parameter. To begin with we recall that symmetries are usually associated with this type of attractor merging. The six dimensional dynamical system considered here possesses the symmetries: \begin{equation} A_n \to -A_n, \quad B_n \to -B_n, \quad C_n \to C_n. \end{equation} Now assuming the existence of a crisis for this system at $D=D_c$, then for crisis-induced intermittency to exist one requires that for $D<D_c$ there exist two (or more) chaotic attractors and that as $D$ is increased, the attractors enlarge and at $D=D_c$ they simultaneously touch the boundary separating their basins. In that case, for $D$ slightly greater than $D_c$, a typical orbit will spend long periods of time in each of the regions where the attractors existed for $D<D_c$ and intermittently switch between them. An important signature for this mechanism is the way the average time $\tau$ between these switches scales with the system parameter $D$. 
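Before turning to the scaling law, we note that the truncated system (\ref{tr1})~--~(\ref{tr6}) is straightforward to integrate numerically. The following minimal sketch (using a standard adaptive Runge--Kutta integrator) estimates $\tau$ at a single value of $D$ just above the crisis; the initial condition is an arbitrary small perturbation, the window length follows the $\Delta t \approx 1.5$ pseudo-period used in our analysis, and the crude transient handling is a simplification of the much longer runs used for the figures.
\begin{verbatim}
# A minimal sketch: integrate the N = 4 truncated system above at a
# dynamo number D slightly above the crisis and estimate the mean
# switching time tau from sign flips of windowed averages of A_1.
# Initial conditions and run length are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

PI, NU = np.pi, 0.5

def rhs(t, y, D):
    A1, B2, C2, A3, B4, C4 = y
    dA1 = (-A1 + D*B2/2 - 32*B2*C2/(15*PI) + 64*B2*C4/(105*PI)
           + 64*B4*C2/(105*PI) - 128*B4*C4/(63*PI))
    dB2 = -4*B2 + 8*A1/(3*PI) - 24*A3/(5*PI)
    dC2 = (-4*NU*C2 + 16*A1*B2/(5*PI) - 32*A1*B4/(7*PI)
           + 144*A3*B2/(7*PI) + 416*A3*B4/(15*PI))
    dA3 = (-9*A3 + D*B2/2 + D*B4/2 - 32*B2*C2/(21*PI)
           - 64*B2*C4/(45*PI) - 64*B4*C2/(45*PI)
           - 128*B4*C4/(165*PI))
    dB4 = -16*B4 + 16*A1/(15*PI) + 48*A3/(7*PI)
    dC4 = (-16*NU*C4 + 96*A1*B2/(35*PI) + 64*A1*B4/(21*PI)
           + 32*A3*B2/(3*PI) + 576*A3*B4/(55*PI))
    return [dA1, dB2, dC2, dA3, dB4, dC4]

D, dt = 204.35, 0.05                     # D slightly above D_c
t = np.arange(0.0, 5000.0, dt)
sol = solve_ivp(rhs, (0, t[-1]), [0.1, 0.1, 0.0, 0.1, 0.1, 0.0],
                t_eval=t, args=(D,), rtol=1e-8, atol=1e-10)

# Average A_1 over windows of ~1.5 time units and time the sign flips.
w = int(1.5 / dt)
m = sol.y[0][: (len(t) // w) * w].reshape(-1, w).mean(axis=1)
m = m[40:]                               # crudely discard the transient
flips = np.flatnonzero(np.sign(m[:-1]) != np.sign(m[1:]))
if len(flips) > 1:
    print("estimated tau ~", np.diff(flips).mean() * 1.5)
\end{verbatim}
Repeating such runs over a range of $D$ and fitting $\log\tau$ against $\log|D-D_c|$ yields the critical exponent defined next.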
According to Grebogi et al.\ \cite{grebogietal87}, for a large class of dynamical systems, this relation takes the form \begin{equation}\label{index} \tau \sim \left|D-D_c \right |^{-\gamma}, \end{equation} where the real constant $\gamma$ is the critical exponent characteristic of the system under consideration. To show that crisis-induced intermittency occurs for the system (\ref{tr1}) -- (\ref{tr6}), we begin by noting that our numerical results indicate that, for a wide range of parameter values, the system possesses multiple attractors consisting of fixed points, periodic orbits and chaotic attractors. Starting around $D=195$, two cycles coexist and both bifurcate through a period-doubling sequence into two chaotic attractors that coexist for $D>203$. At $D \approx 200.4$ two other periodic orbits appear which persist for the parameter values considered here. Figures \ref{attractors1} and \ref{attractors2} show these attractors for $D=204$, where all 6 coexist and their positions in the 6--dimensional phase space are well separated (note that the apparent overlaps in Figs \ref{attractors1} and \ref{attractors2} are due to projections). We also found the corresponding basins of attraction for each attractor, which indicate fractal boundaries. This can be seen in Figure \ref{basins}, which shows a two dimensional cut $(C_2=A_3=B_4=C_4=0)$ of the basin boundary for this system at the parameter value $D=204$, and Figure \ref{magnify}, which shows the magnification of a region of Figure \ref{basins} where both chaotic attractors possess fractal basins \cite{html}. We also calculated the box counting dimension of the boundary between attractors on a horizontal 1--D cut of Figure \ref{magnify}, which turned out to be non-integer, further substantiating the fractal nature of the boundaries. Now as $D$ is increased, the two chaotic attractors merge and give rise to a single connected attractor. Figure \ref{series} shows the time series for the variable $A_1$ after the merging and Figure \ref{merged} shows the projection of the merged attractors on the variables $A_1$, $B_2$ and $C_2$. Prior to $D_c\approx 204.2796$, there is no switching between the two attractors and the time series does not show the bimodal behaviour seen in Figure \ref{series}. These results give a clear indication of the occurrence of crisis-induced intermittency in this model. To substantiate this further, we checked that for this system the scaling relation (\ref{index}) is satisfied in the neighbourhood of $D_c\approx 204.2796$. Figure \ref{scaling} shows the plot of $\log_{10}|\tau|$ versus $\log_{10}\left|D-D_c\right|$. To produce the plot, 28 points were taken at regular spacings, with the initial conditions chosen in the chaotic basin of the merged attractor after $D\approx 204.2796$, and 200 million iterations were taken for each point. The transitions between the ghosts of the previous attractors were detected using the averages of the variable $A_1$ over a pseudo-period of approximately $\Delta t\approx 1.5$ non-dimensional time units. As can be seen, the points are well approximated by a straight line, obtained using a least squares fit, giving $\gamma \approx 0.79 \pm 0.03$. The $\gamma$ coefficient can also be calculated on theoretical grounds, as shown in Grebogi et al.\ \cite{grebogietal87}. The method involves calculating the stable and unstable manifolds of the unstable orbit (hereafter $B$) mediating the crisis.
By examining the trajectories around the transitions between the ghosts of the previous attractors at $D=204.35>D_c$, we found the point where the orbit went inside the portion of the unstable manifold of $B$ that has poked over to the other side of the stable manifold of $B$. The orbit then follows closely the orientation of the stable and unstable manifolds. We then calculated an estimate of the directions of the unstable and stable manifolds. Since this estimate was very sensitive, the value of $\gamma$ had a large error bar; that is, the calculated value could be anywhere in the range $[0.4,1.2]$, depending on minor changes in the choice of the vectors that determine the unstable and stable manifolds. Because the system is high dimensional, the projections onto two dimensional planes that we used were not sufficient to determine the directions of the two manifolds with good precision. Therefore we were unable to calculate the critical exponent with sufficient precision to compare it with the one calculated from the time between flips of the orbit. Finally, we looked at the parameter dependence of the system for fixed initial conditions. We found that there are intervals of $D$ for which this dependence is fractal. This can be seen from Figure \ref{parameter}, which depicts the final state (attractor) of the system (\ref{tr1}) -- (\ref{tr6}) as a function of changes in the parameter $D$ and the initial condition $B_2$. \section{Conclusions} We have found the presence of multiple attractors with fractal basin boundaries, as well as crisis-induced intermittency, in a truncated axisymmetric $\alpha \omega$ dynamo model which is antisymmetric with respect to the equator. We have seen that this type of intermittency is due to the collision of the two chaotic attractors and have confirmed this by calculating the scaling coefficient suggested by Grebogi et al.\ \cite{grebogietal87}. The presence of crisis-induced intermittency, coupled with the fact that this type of multiple-attractor regime seems to persist in higher order truncations, and with the presence of symmetry in dynamo models, may indicate the relevance of this type of intermittency in more realistic dynamo settings. We have also found that this system possesses fractal parameter dependence for fixed initial conditions. The presence of such fractal structures results in a form of fragility (final state sensitivity), whereby small changes in the initial conditions or the control parameter of the system can result in qualitative changes in its final dynamics. This type of sensitivity could be of significance in astrophysics in that, for example, it could potentially lead to stars of the same spectral type, rotational period, age and composition showing different modes of dynamical behaviour \cite{tt95}. Finally, as far as we are aware, this is the first instance of such behaviour in a dynamo model as well as in a $6D$ flow. \acknowledgments We would like to thank John Brooke and Andrew Tworkowski for helpful discussions. We also thank an anonymous referee for his useful comments and criticisms. EC is supported by grant BD / 5708 / 95 -- Program PRAXIS XXI, from JNICT -- Portugal. RT benefited from PPARC UK Grant No. H09454. This research also benefited from the EC Human Capital and Mobility (Networks) grant ``Late type stars: activity, magnetism, turbulence'' No. ERBCHRXCT940483.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} \paragraph{Background} Emek {\em et al.} \cite{Emek} recently introduced the following {\em probabilistic single-item auction} model: An auctioneer sells a single item to $n$ bidders. The item comes from one of $m$ different types, and the valuations of the bidders for the item vary between the $m$ different types, with the valuation $v_{ij}$ of bidder $i$ for an item of type $j$ being common knowledge (or at least known to the auctioneer). The actual type of the item is determined by nature, with the probability $p_j$ of each type $j$ occurring also being common knowledge. There is asymmetry of information in the setting in one respect only: The auctioneer knows the \emph{realization} of the type of the item, whereas the bidders do not. The auction proceeds by the auctioneer broadcasting to the bidders a single {\em signal} about the type of the item. In the work of Emek {\em et al.}, the signaling schemes considered are {\em pure}. That is, the signal is simply some function of the type of the item, and in particular there is a one-to-one correspondence between signaling schemes and partitions of the set of types. After receiving the signal, the bidders bid for the item in a standard 2nd price sealed-bid auction. It is assumed that bidders are risk neutral and play the dominant strategy of bidding their expected valuation given their signal in this auction. Emek {\em et al.} investigated the following question: {\em To what extent can the auctioneer exploit her informational advantage to increase revenue by choosing the signaling scheme appropriately?} Emek {\em et al.} show examples where non-trivial schemes significantly outperform the two trivial ones (which are: fully revealing the type of the item and revealing nothing at all). They show that it is strongly NP-hard to compute the pure signaling scheme that maximizes revenue among all such schemes. Their main result is a polynomial time algorithm that finds a pure signaling scheme that approximates the revenue of the optimal one within a constant factor. \paragraph{Our Results} In this work, we consider the extension of the model of Emek {\em et al.} that allows the auctioneer to use a {\em mixed} signaling scheme. In such a scheme, the auctioneer, after witnessing the realization of the item, picks a signal at random according to some probability distribution depending on this realization. We show that by making this very natural extension of the model, we kill two birds with one stone: \begin{itemize} \item{}{\em We earn more:} We show that there are problem instances (with arbitrarily many bidders) where the optimal mixed signaling scheme generates twice the revenue generated by the optimal pure signaling scheme. Also, we show that the revenue generated is never less than $\ensuremath{\mathcal{B}}/2$, where $\ensuremath{\mathcal{B}} = \min_{i'} \left(\sum_j \max_{i\neq i'} p_j v_{i,j}\right)$. We postpone to Section~\ref{sec:approximating-benchmark} a detailed discussion as to why this particular benchmark is meaningful. \item{}{\em We work less:} We show that the optimal mixed signaling scheme can be found in polynomial time, by devising a concise linear program describing this optimal scheme. While it is certainly intuitive that linear programming should be used to find an optimal mixed strategy, we need to prove several structural results concerning the optimal solution before being able to devise a polynomial sized linear program in the present setting.
\end{itemize} \paragraph{Discussion of the model} We are aware that in the setting of Emek {\em et al.} (which is our setting as well), having the valuations known to the auctioneer makes it less than obvious why the model requires the item to be sold in a $2$nd price auction. Indeed, simply posting an appropriately chosen price would generate more revenue. Also, the assumption about valuations being known to the auctioneer is itself questionable (note in particular that there is no obvious way to truthfully elicit these valuations from the bidders). To address this critique, we note that Emek {\em et al.} use the complete information setup and the associated results outlined above as a component in an analysis of a {\em Bayesian} variant of the setup, where the auctioneer is unaware of the actual valuations and has to base her signaling scheme solely on a probabilistic model thereof. Our mixed signaling variant can replace the original pure one in this Bayesian variant as well, increasing its revenue and decreasing its computational complexity. (We believe it would be interesting to understand how well such a scheme approximates the revenue of the {\em optimal} Bayesian auction in the sense of Myerson \cite{Myer} in this setting, and suggest this as a possible topic for future work.) Another, more down-to-earth answer to the critique is that a 2nd price auction is simply a very natural, well-known and widespread scheme for selling an item, and it therefore makes sense to fix this part of the setup when the main agenda is to investigate how signaling can improve revenue. In essence, our setting allows us to give an \emph{exact quantification} of the gain the auctioneer can obtain by optimally leveraging her informational advantage. And, as discussed in Section~\ref{sec:approximating-benchmark}, if the valuations are not dominated by a single bidder, then our benchmark-approximation analysis shows that the revenue from $2$nd price auctions is comparable with the revenue of the posted price scheme. \paragraph{Related Research} Since our setup is a variant of the setup of Emek {\em et al.}, we refer to their paper for an extensive discussion of works dealing with sellers exploiting their informational advantage (dating back to the Nobel Prize winning work of Akerlof~\cite{akerlof1970}). However, unlike their pure-signal scheme, our mixed-signal scheme has an alternative interpretation as a model in which the auctioneer sells $m$ \emph{divisible} goods to $n$ bidders who have simple linear valuations per item, by \emph{bundling} subsets of these goods together (see Section~\ref{subsec:divisible_goods_model}). The problem of bundling goods, including divisible goods, has received considerable attention in the economics literature (e.g., \cite{Adam}), as has the problem of auctioning divisible goods (see~\cite{Back_Zender_2001, Ausubel04auctioningmany, Iyengar_Kumar_2008} and the books~\cite{cramton2010combinatorial, klemperer2004auctions}). However, our particular model does not seem to have been considered. \paragraph{Organization of Paper} First, in Section~\ref{sec:preliminaries}, we provide the details of our mixed-signals model and demonstrate that it is equivalent to an auction model concerning bundling of divisible goods. In Section \ref{sec:earnmore}, we present examples where sending mixed signals significantly increases revenue.
Then, in Section~\ref{sec:opt-mixed-singals-LP} we show that it is feasible to devise the optimal mixed-signals scheme in poly-time, using a polynomial size LP. Finally, we show in Section~\ref{sec:approximating-benchmark} that the revenue of the mixed signal auction is at least half the benchmark $\ensuremath{\mathcal{B}}$. We conclude with discussion and open problems in Section~\ref{sec:conclusion}. \section{Preliminaries -- The Model} \label{sec:preliminaries} \subsection{The Problem Formalization} \label{subsec:problem_formally} In a {\em probabilistic single-item auction with mixed signals}, an auctioneer wants to sell an item drawn from a known distribution $(p_1,p_2,\ldots,p_m)$ over $m$ types. There are $n$ bidders that wish to purchase the item, each with valuation $v_{i,j}$ for an item of type $j$, with these valuations being common knowledge. The auctioneer observes the type of the item and broadcasts a signal to the bidders. The signaling scheme is strategically chosen by the auctioneer in advance and is given by a map $\varphi:[m] \times \mathbb{N} \rightarrow [0,1]$, such that for every $j$, the auctioneer declares signal $S$ with probability $\varphi(j,S)$. As we later show, the overall number of signals sent can be assumed to be finite in the scheme generating the largest revenue, so we can assume that from some signal $B$ and onwards, the function $\varphi$ is identically $0$ (formally, $\forall j$ and $\forall S\geq B, \ \varphi(j,S)=0$). We abuse notation and identify $S$ with its support (i.e., the set $\{j:\ \varphi(j,S)>0\}$). We also alternate between the notations $\varphi(j,S)$ and $\varphi_{j,S}$. We denote $\mathcal{S}_\varphi$ as the set of all possible signals, i.e. $\mathcal{S}_\varphi = \{S:\ \varphi(j,S) > 0 \textrm{ for some }j\}$. After receiving the signal, the bidders participate in a 2nd price auction for the item. A {\em pure} signaling scheme is one where $\varphi(j,S) \in \{0,1\}$ for all $j,S$. The variant of the above setup where the auctioneer is restricted to use a pure signaling scheme $\varphi$ is the probabilistic single-item auction with pure signals originally suggested by Emek {\em et al.} Let us repeat a derivation from Emek et al. for the more general mixed case. For a fixed signal $S$, the probability of the auctioneer broadcasting this signal is $\sum_j p_j \varphi(j,S)$, and so, given that the auctioneer broadcasted the signal $S$, the probability that the item is of type $j$ is $\Pr[j|S] = {p_j \varphi(j,S)}/ \left({\sum_{j'} p_{j'} \varphi(j',S)}\right)$. As a result, given signal $S$, the adjusted valuation of bidder $i$ for the item is $\E[v_i|S] = \sum_j \Pr[j|S]v_{i,j} = {\sum_j v_{i,j} p_j \varphi(j,S)}/ \left({\sum_{j'} p_{j'} \varphi(j',S)}\right)$. We assume risk neutral bidders, who follow the dominant strategy of bidding this adjusted valuation in the 2nd price auction.
Therefore, for signal $S$, the auctioneer's revenue is \[\ensuremath{\mathop{\rm max2}}_i\left\{\E[v_i|S] \right\} = \ensuremath{\mathop{\rm max2}}_i \left\{ \frac {\sum_j v_{i,j} p_j \varphi(j,S)} {\sum_{j} p_{j} \varphi(j,S)} \right\}.\] We are interested in the $\varphi$ that maximizes the expected revenue: \begin{eqnarray*} & \textrm{maximize } & \sum_{S\in\mathcal{S}_\varphi} \Pr[S]\ensuremath{\mathop{\rm max2}}_i\left\{\E[v_i|S] \right\} \cr && = \sum_{S\in\mathcal{S}_\varphi} \ensuremath{\mathop{\rm max2}}_i \left\{\sum_j \left(v_{i,j}p_j\right)\varphi(j,S)\right\} = \sum_{S\in\mathcal{S}_\varphi} \ensuremath{\mathop{\rm max2}}_i \left\{ \sum_j \psi_{i,j}\ \varphi(j,S)\right\} \end{eqnarray*} where the last equality merely comes from introducing the definition $\psi_{i,j} = v_{i,j}p_j$. \subsection{Equivalent Model of Divisible Goods} \label{subsec:divisible_goods_model} We observe that a probabilistic single-item auction with mixed signals can alternatively be seen as an auction where $m$ divisible goods are bundled and sold. The mixed signals are crucial for this characterization. The alternative model may be defined as follows: The auctioneer wishes to sell $m$ heterogeneous divisible goods to $n$ bidders. She has $1$ unit of each of the goods (for example, she has $1$ kilogram of each of $m$ exotic spices). Each bidder $i$ has a linear valuation of $\psi_{i,j}$ for each unit of good $j$, so bidder $i$ has a utility of $\sum_j x_j \psi_{i,j}$ if he receives $x_j$ units of each good $j$. The auctioneer sells her goods by \emph{bundling} several goods together. More precisely, she uses a bundling scheme $(\mathcal{S}, \varphi)$, where in each bundle $S \in \mathcal{S}$, she places $\varphi_{j,S}$ units of good $j$, and then she runs a $2$nd price auction for each bundle. We assume that bidders follow their dominant strategy of bidding their valuation for the bundle for sale in each of these auctions. The analogy between signaling in the model of one good of $m$ different types, and bundling in the model of $m$ divisible goods, is clear. Given a probabilistic single-item auction with $n$ bidders and $m$ types, we can define a divisible goods auction with $n$ bidders and $m$ goods by letting $\psi_{i,j} = p_j v_{i,j}$. Conversely, given a divisible goods auction with $n$ bidders and $m$ goods, we can define a probabilistic single-item auction with $n$ bidders and $m$ types by letting $(p_j)_{j=1}^m$ be an arbitrary probability distribution with $p_j > 0$ for each $j$ and letting $v_{i,j} = \psi_{i,j}/p_j$. Also, mixed signaling schemes in the probabilistic single-item auction and bundling schemes in the divisible goods auction are syntactically the same objects. Finally, it is readily checked that the expected revenue in the probabilistic single-item auction is identical (up to a scaling factor) to the revenue in the corresponding divisible goods auction. Therefore, finding an optimal mixed signaling scheme in the first model is equivalent to finding an optimal bundling scheme in the latter. As a result of the above, we allow ourselves the liberty to alternate between the two models. \section{Earning More by Sending Mixed Signals} \label{sec:earnmore} A simple example where the best mixed signaling scheme outperforms the best pure signaling scheme is the following.
Assume $m=n=3$, the item is equally likely to be any one of the three types, and the valuations are the identity matrix (bidder $i$ wants only an item of type $i$, so $v_{i,i} = 1$, and no other type, so $v_{i,j} = 0$ when $i\neq j$). A pure signaling scheme is forced to pair two of the three types, and results in expected revenue of $\frac{1}{3}$: the paired signal is declared with probability $\frac{2}{3}$ and fetches a $2$nd price of $\frac{1}{2}$, while the remaining singleton signal fetches nothing. In contrast, a mixed signaling scheme may use all $3$ signals $\{1,2\}, \{1,3\}, \{2,3\}$, and declare each signal that type $j$ belongs to with equal probability. (E.g., if the item type is $1$, then with probability $\frac{1}{2}$ the auctioneer declares $\{1,2\}$ and with probability $\frac{1}{2}$ she declares $\{1,3\}$.) Now, no matter what cluster $\{j, j'\}$ was declared, both bidder $j$ and bidder $j'$ know there is a $50\%$ chance that the item is of their desired type, resulting in a bid of $\frac{1}{2}$ from both bidder $j$ and bidder $j'$. Thus, the auctioneer gains revenue of $\frac{1}{2}$ with a mixed signaling scheme, exhibiting a gap of $1.5$ between the best mixed signaling scheme and the best pure signaling scheme. By a slightly more complex construction, we can get a gap of 2: \begin{theorem} For any even number $k$, there is a probabilistic single-item auction with $n=k+1$ bidders and $m=k+1$ types so that the optimal mixed signaling scheme has an expected revenue which is twice as big as that of the optimal pure signaling scheme. \end{theorem} \begin{proof} Consider the auction with valuations as given in Figure~\ref{fig:fractional_vs_integral} and with nature choosing the type uniformly at random. \begin{figure} \begin{center} \fbox{\parbox{4in}{ \begin{tabular*}{4in}{r|c|c|c|c|c|c|c} & type $0$ & type $1$ & type $2$ & .. & .. & .. & type $k$ \cr \hline bidder $0$ & $m$ & 0 & 0 & .. & .. & .. & 0 \cr bidder $1$ & 0 & 1 & 0 & .. & .. & .. & 0 \cr bidder $2$ & 0 & 0 & 1 & .. & .. & .. & 0 \cr . & & & & $\ddots$ & & & \cr . & & & & & $\ddots$ & & \cr . & & & & & & $\ddots$ & \cr bidder $k$ & 0 & 0 & 0 & .. & .. & .. & 1 \cr \end{tabular*} }} \end{center} \caption{\label{fig:fractional_vs_integral} \scriptsize Valuations of an auction in which the optimal mixed signaling scheme has twice the revenue of the optimal pure signaling scheme.} \end{figure} A pure signaling scheme can pair at most $k$ of the $k+1$ types among themselves, so in the best such scheme, with probability $\frac k {k+1}$ the auctioneer declares a paired signal whose $2$nd price is $\frac{1}{2}$, for an expected revenue of $\frac{k}{2(k+1)}$ (bundling type $0$ with one of the other types does no better). In contrast, consider the mixed signaling scheme with signals $\{0,j\}_{j=1,2,\ldots, k}$ where for every $j$, $\Pr[ \{0,j\} | \ \textrm{type }0] = \frac 1 k$ and $\Pr[ \{0,j\} | \ \textrm{type }j] = 1$. Now, every signal $\{0,j\}$ is declared with probability $\frac{1}{k}$; given it, bidder $0$ bids $m\cdot\frac{1}{k+1}=1$ and bidder $j$ bids $\frac{k}{k+1}$, so for every signal the auctioneer gains revenue of $\frac k {k+1}$, twice the pure optimum. \end{proof}
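As a quick numerical sanity check (ours, not part of the formal development; the function and variable names below are purely illustrative), the following sketch evaluates the expected revenue of both constructions above, reproducing the $\frac13$ vs.\ $\frac12$ gap and the factor-$2$ gap of the theorem for $k=4$:
\begin{verbatim}
import numpy as np

def expected_revenue(p, v, phi):
    # p: prior over m types; v: n-by-m valuations;
    # phi[j, s]: probability of declaring signal s on an item of type j.
    revenue = 0.0
    for s in range(phi.shape[1]):
        pr_s = p @ phi[:, s]                      # Pr[S]
        if pr_s > 0:
            posterior = p * phi[:, s] / pr_s      # Pr[j | S]
            bids = v @ posterior                  # adjusted valuations
            revenue += pr_s * np.sort(bids)[-2]   # 2nd highest bid
    return revenue

# The m = n = 3 identity example: a pure pairing vs. the mixed scheme.
p3, v3 = np.full(3, 1/3), np.eye(3)
pure3  = np.array([[1, 0], [1, 0], [0, 1]], float)           # {1,2}, {3}
mixed3 = np.array([[.5, .5, 0], [.5, 0, .5], [0, .5, .5]])   # all 3 pairs
print(expected_revenue(p3, v3, pure3))    # 0.333...
print(expected_revenue(p3, v3, mixed3))   # 0.5

# The theorem's construction for k = 4 (so m = n = k+1 = 5).
k = 4
p5, v5 = np.full(k + 1, 1 / (k + 1)), np.eye(k + 1)
v5[0, 0] = k + 1                          # bidder 0 values type 0 at m
pure5 = np.zeros((k + 1, 3))              # signals {0}, {1,2}, {3,4}
pure5[0, 0] = pure5[1, 1] = pure5[2, 1] = pure5[3, 2] = pure5[4, 2] = 1
mixed5 = np.zeros((k + 1, k))             # signals {0,j}, j = 1..k
mixed5[0, :] = 1 / k
for j in range(1, k + 1):
    mixed5[j, j - 1] = 1
print(expected_revenue(p5, v5, pure5))    # 0.4 = k/(2(k+1))
print(expected_revenue(p5, v5, mixed5))   # 0.8 = k/(k+1): twice as much
\end{verbatim}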
\section{Working Less by Sending Mixed Signals} \label{sec:opt-mixed-singals-LP} We now turn to showing that the mixed signaling scheme generating the largest revenue is polynomial-time computable. To that end, we construct a linear program whose solution is the optimal signaling scheme. In order to devise the LP, we provide several observations, leading the way to the formalization of the LP. But before proceeding to the LP and these observations, we introduce some notation. \subsection{Notation.} \label{subsec:notation} Given a signal $S$, we denote by $w_1(S)$ the winner of $S$ (the bidder with the highest bid), and by $w_2(S)$ the $2$nd highest bidder. Formally (recall that we identify $S$ and its support) \begin{eqnarray*} & w_1(S) & = \arg\max_i \{ \sum_{j\in S} \psi_{i,j}\varphi_{j,S} \} \cr & w_2(S) & = \arg\max_{i\neq w_1(S)} \{ \sum_{j\in S} \psi_{i,j}\varphi_{j,S} \} = \arg\ensuremath{\mathop{\rm max2}}_{i} \{ \sum_{j\in S} \psi_{i,j}\varphi_{j,S} \} \end{eqnarray*} We call a signal $S$ a \emph{singleton} if $|S|=1$. Whenever $S$ is a singleton, $S=\{j\}$ for some $j$, we abbreviate these to $w_1(j), w_2(j)$. Given $i$, we denote by $d(i)$ the set of types for which bidder $i$ has the highest bid: $d(i) = \{j: w_1(j) = i\}$. Given a signal $S$, we denote by $\ensuremath{\mathrm{rev}}(S)$ the revenue the auctioneer gets from a $2$nd price auction over $S$. I.e., $\ensuremath{\mathrm{rev}}(S) = \sum_{j\in S} \varphi_{j,S}~ \psi_{w_2(S),j}$, the bid of bidder $w_2(S)$. As before, we abbreviate singletons to $\ensuremath{\mathrm{rev}}(j)$ instead of $\ensuremath{\mathrm{rev}}(\{j\})$. The overall revenue of the auctioneer from $\varphi$ is defined as $\ensuremath{\mathrm{rev}}(\varphi) = \sum_{S\in\mathcal{S_\varphi}} \ensuremath{\mathrm{rev}}(S)$. We break ties arbitrarily, but in a consistent manner. \subsection{Na\"ive LP.} \label{subsec:naive_LP} Our first observation is that the problem of finding the optimal signaling scheme can be formalized as an LP, with potentially many variables. Assume $\varphi^*$ is an optimal signaling scheme. We claim $\varphi^*$ has only a finite number of signals. The key observation is that the auctioneer has no need for two signals $S$ and $T$ s.t. $\ensuremath{\mathrm{supp}}(S) = \ensuremath{\mathrm{supp}}(T)$ and in both $S$ and $T$ bidder $i_1$ is the winning bidder ($w_1(S)=w_1(T)=i_1$) and bidder $i_2$ is the $2$nd highest bidder ($w_2(S) = w_2(T) = i_2$). We prove this observation rigorously. \begin{claim} \label{clm:altering_a_signaling_scheme} Let $\varphi$ be a signaling scheme, and assume that there exist $S$ and $T\neq S$ s.t. $\ensuremath{\mathrm{supp}}(S) = \ensuremath{\mathrm{supp}}(T)$, and both $w_1(S) = w_1(T)$ and $w_2(S) = w_2(T)$. We define a new signaling scheme $\varphi'$ by ``merging'' $S$ and $T$ into a single signal $S'$, and keeping all other signals unchanged. Formally, let $\varphi'$ be the signaling scheme s.t. \begin{eqnarray*} \forall j, && \varphi'(j,S') = \varphi(j,S) + \varphi(j,T), \qquad \varphi'(j,S) = \varphi'(j,T) = 0 \cr \forall j, S_0 \neq S, T, && \varphi'(j,S_0) = \varphi(j,S_0) \end{eqnarray*} Then for every bidder $i$ and item type $j$, the probability that $i$ gets the item of type $j$ is identical in $\varphi$ and $\varphi'$, and $\ensuremath{\mathrm{rev}}(\varphi) = \ensuremath{\mathrm{rev}}(\varphi')$. \end{claim} \begin{proof} The probability of $i$ winning an item of type $j$ is exactly the probability that the auctioneer sees that the item is of type $j$ and then declares a signal for which $i$ has the winning bid. This clearly holds for all bidders but $w_1(S)$ ($=w_1(T)$), as all signals for which the winner is not $w_1(S)$ are declared with the same probability in $\varphi$ and in $\varphi'$. The claim then follows from showing that $w_1(S)$ also has the winning bid for $S'$. First, observe that under the signal $S'$, the bidders bid $\E[v_i |\ S'] = \sum_j v_{i,j} \Pr[j |\ S'] = \frac {\sum_j p_j v_{i,j} \varphi'(j,S')} {\Pr[S']}$. Therefore, the order of the bids is determined by the numerator in the last term, as the denominator is the same for all bidders.
By definition, for every $i$ we have that $\sum_j p_j v_{i,j} \varphi'(j,S') = \sum_j p_j v_{i,j} (\varphi(j,S)+\varphi(j,T))$, and so $w_1(S)$ has the winning bid for $S'$ and $w_2(S)$ has the second highest bid in $S'$. This allows us to deduce the first part of the claim. As for revenue, it is evident that $\ensuremath{\mathrm{rev}}(\varphi')-\ensuremath{\mathrm{rev}}(\varphi) = \ensuremath{\mathrm{rev}}(S') - \left(\ensuremath{\mathrm{rev}}(S) + \ensuremath{\mathrm{rev}}(T)\right)$, and it is also simple to see that \[\ensuremath{\mathrm{rev}}(S') = \sum_{j\in S'} \varphi'(j,S')~ \psi_{w_2(S),j} = \sum_{j\in S} \varphi(j,S)~ \psi_{w_2(S),j} + \sum_{j\in T} \varphi(j,T)~ \psi_{w_2(T),j} = \ensuremath{\mathrm{rev}}(S) +\ensuremath{\mathrm{rev}}(T)\] because $S$, $T$ and $S'$ all have the same support, and because $w_2(S') = w_2(S) = w_2(T)$. We deduce $\ensuremath{\mathrm{rev}}(\varphi')-\ensuremath{\mathrm{rev}}(\varphi) = 0$. \end{proof} Following Claim~\ref{clm:altering_a_signaling_scheme}, it is evident that the number of signals in an optimal signaling scheme can be upper bounded by the number of combinations of a subset of types and a pair of bidders, so $|\mathcal{S}| \leq 2^m n^2$. Furthermore, constraining $i_1$ to be the winning bidder and $i_2$ to be the second highest bidder for signal $S$ amounts to linear constraints. Therefore, by having a variable per signal and pair of winning bidders, we get that the optimal signaling scheme is the solution of the following (exponential) LP: \begin{equation} \max \sum_{S\subset [m]}\sum_{i_1\neq i_2}\ \sum_{j\in S} x_j(S,i_1, i_2) ~\psi_{i_2,j} \label{eq:naive_LP}\end{equation} \begin{align} &\textrm{under constraints:}\cr & \forall S, \forall i_1\neq i_2, \ \forall i\neq i_1, i_2 & \sum_{j\in S} x_j(S,i_1, i_2) ~\psi_{i_1,j} \geq \sum_{j\in S} x_j(S,i_1, i_2) ~\psi_{i,j} \cr & & \sum_{j\in S} x_j(S,i_1, i_2) ~\psi_{i_2,j} \geq \sum_{j\in S} x_j(S,i_1, i_2) ~\psi_{i,j} \label{constraint:i_1_and_i_2_are_best}\\ & \forall S, \forall i_1\neq i_2, & \sum_{j\in S} x_j(S,i_1, i_2) ~\psi_{i_1,j} \geq \sum_{j\in S} x_j(S,i_1, i_2) ~\psi_{i_2,j} \label{constraint:i_1_wins}\\ & \forall j, & \sum_{S: j\in S} \sum_{i_1\neq i_2} x_j(S,i_1, i_2) \leq 1 \cr & \forall j, \forall S, \forall i_1\neq i_2, & x_j(S,i_1, i_2) \geq 0 \notag \end{align} where the constraints in \eqref{constraint:i_1_and_i_2_are_best} assure that $i_1$ and $i_2$ are the two highest bidders for $S$, and the constraint in \eqref{constraint:i_1_wins} assures that $i_1$ wins for $S$. The last two constraints assure that $\varphi$ indeed induces a probability for every $j$. Therefore, our goal in the remainder of this section is to show that the number of variables in the LP \eqref{eq:naive_LP} can be reduced to a polynomial number. We comment that the same principles as in the proof of Claim~\ref{clm:altering_a_signaling_scheme} will be repeatedly applied in future claims. From now on, we omit the rigorous description of $\varphi'$, and merely refer to $\varphi'$ as the result of merging signals into a single signal / splitting a single signal into multiple signals. \subsection{Reducing the Number of Variables in the LP.} \label{subsec:improve_LP} Our goal is to show that the number of subsets we need to consider in the abovementioned LP can be reduced to a number polynomial in $n$ and $m$. To show this, we follow a series of observations. In order to bound the number of signals needed, we would ideally like to show that every signal can be split.
That is, we would like to take any non-singleton signal $S$ in $\varphi^*$, and have the auctioneer declare a few signals of smaller support rather than declaring $S$. If such a thing is always possible, then we can recursively split signals until we are left with only singleton signals. \begin{definition} \label{def:splittable_signal} Given a signaling scheme $\varphi$, we call a signal $S\in \mathcal{S}_\varphi$ \emph{splittable} if there exists a partition $S = S_1 \cup S_2 \cup \ldots \cup S_t$ s.t. $\sum_{k=1}^t \ensuremath{\mathrm{rev}}(S_k) \geq \ensuremath{\mathrm{rev}}(S)$, where each part $S_k$ inherits from $S$ the probabilities $\varphi_{j,S}$ for $j\in S_k$. We call a signal \emph{singleton-splittable} if the signal is splittable w.r.t.\ the partition of the signal into $|S|$ singleton signals, that is, $\sum_{j\in S} \ensuremath{\mathrm{rev}}(j) = \sum_{j\in S} \varphi_{j,S}~\psi_{w_2(j),j} \geq \ensuremath{\mathrm{rev}}(S)$. \end{definition} Unfortunately, such a split is not always possible -- some signals are non-splittable. Our claims characterize exactly the cases in which this split causes the auctioneer to lose revenue. \begin{claim} \label{clm:same_winners} Let $S$ be a signal in the optimal signaling scheme which is \emph{not} singleton-splittable. That is, $\ensuremath{\mathrm{rev}}(S) > \sum_{j\in S} \varphi_{j,S}~\psi_{w_2(j),j}$. Then both $w_1(S)$ and $w_2(S)$ belong to the set of bidders that win the items of $S$: $\{w_1(j) : \ j\in S\}$. \end{claim} \begin{proof} Assume that $w_1(S)$ does not belong to the set $\{w_1(j) : \ j\in S\}$. It follows that for every $j$, the $2$nd highest bid on the singleton $\{j\}$ cannot be smaller than the bid of $w_1(S)$, and so we achieve the contradiction \[\sum_{j\in S} \varphi_{j,S}~\psi_{w_2(j),j} \geq \sum_{j\in S} \varphi_{j,S}~\psi_{w_1(S),j} \geq \sum_{j\in S}\varphi_{j,S}~\psi_{w_2(S),j} = \ensuremath{\mathrm{rev}}(S)\] Similarly, if $w_2(S)$ is not the winner of any $j\in S$, then for every $j$ the bid of the bidder $w_2(j)$ is no less than the bid of $w_2(S)$. The inequality follows: $\sum_{j\in S}\varphi_{j,S}~\psi_{w_2(j),j} \geq \sum_{j\in S}\varphi_{j,S}~\psi_{w_2(S),j} = \ensuremath{\mathrm{rev}}(S)$. \end{proof} The proof of Claim~\ref{clm:same_winners} gives the following as an immediate corollary. \begin{corollary} \label{cor:simply_splittable_signals} Let $S$ be a signal s.t. the set $\{w_1(j) : \ j\in S\}$ contains a single bidder. Then $S$ is singleton-splittable. \end{corollary} \begin{proof} If $S$ were not singleton-splittable, then by Claim~\ref{clm:same_winners} the set $\{w_1(j) : \ j\in S\}$ would contain the two distinct bidders $w_1(S)$ and $w_2(S)$. \end{proof} Corollary~\ref{cor:simply_splittable_signals} allows us to deduce that the non-splittable signals must contain at least two distinct bidders in their set of winners. We next show that non-splittable signals must contain at most two distinct bidders in this set. \begin{claim} \label{clm:two_winners} There does not exist a non-splittable signal $S$ with $|\{w_1(j) : j\in S\}| \geq 3$. \end{claim} \begin{proof} Assume the existence of a non-splittable signal $S$ with a set of winners, $\{w_1(j)\}_{j\in S}$, containing at least $3$ distinct bidders. From Claim~\ref{clm:same_winners} we know $w_1(S), w_2(S)$ belong to this set, and wlog we denote them simply as bidders $1 = w_1(S)$ and $2=w_2(S)$. This allows us to write $\ensuremath{\mathrm{rev}}(S) = \sum_{j\in S} \psi_{2,j} \varphi(j,S)$, where the winning bid for $S$ is $\sum_{j\in S} \psi_{1,j} \varphi(j,S)$. We now show $S$ is splittable.
Let us denote the following two disjoint subsets: $S_1 = S\cap d(1), S_2 = S\cap d(2)$, i.e., the sets of types in $S$ that bidder $1$ (resp., bidder $2$) covets the most. Observe that by assumption, some types in $S$ are not in $S_1\cup S_2$, so we can consider the partition $S = \big(S_1 \cup S_2\big) \cup \bigcup_{j \in S\setminus(S_1\cup S_2)} \{j\}$. I.e., we partition $S$ into $|S\setminus(S_1\cup S_2)| + 1$ signals: $|S\setminus(S_1\cup S_2)|$ singleton signals, and one signal for all types in $S_1 \cup S_2$. First, we consider the revenue of the auctioneer from the singleton signals: $\sum_{j \notin S_1\cup S_2} \ensuremath{\mathrm{rev}}(j)$. On all such types $j$, neither bidder $1$ nor bidder $2$ has the highest bid, so the $2$nd highest bid is at least as high as the bid of bidder $1$ and the bid of bidder $2$. Therefore, on $S\setminus(S_1\cup S_2)$, the auctioneer's revenue is \[\sum_{j \notin S_1\cup S_2} \ensuremath{\mathrm{rev}}(j)\geq \max \{ \sum_{j\in S\setminus(S_1\cup S_2)} \varphi_{j,S} \psi_{1,j}, \sum_{j\in S\setminus(S_1\cup S_2)} \varphi_{j,S} \psi_{2,j} \}\] Now we consider the revenue of the auctioneer from the signal $S_1\cup S_2$, where \emph{at least one} of the bidders $\{1,2\}$ does not have the winning bid. Therefore, the $2$nd highest bid is at least as high as the bid of bidder $1$ or the bid of bidder $2$. As a result, $\ensuremath{\mathrm{rev}}(S_1\cup S_2) \geq \sum_{j\in S_1\cup S_2} \varphi(j,S) \psi_{1,j}$ or $\ensuremath{\mathrm{rev}}(S_1\cup S_2) \geq \sum_{j\in S_1\cup S_2} \varphi(j,S) \psi_{2,j}$. It follows that the abovementioned partition of $S$ has revenue which is either $\ensuremath{\mathrm{rev}}(S_1\cup S_2) + \sum_{j\in S\setminus(S_1\cup S_2)} \ensuremath{\mathrm{rev}}(j) \geq \sum_{j\in S} \varphi(j,S) \psi_{1,j}$ or $\ensuremath{\mathrm{rev}}(S_1\cup S_2) + \sum_{j\in S\setminus(S_1\cup S_2)} \ensuremath{\mathrm{rev}}(j) \geq \sum_{j\in S} \varphi(j,S) \psi_{2,j}$. Observe that $\sum_{j\in S} \varphi(j,S) \psi_{1,j}$ is the winning bid of $S$, so $\sum_{j\in S} \varphi(j,S) \psi_{1,j} \geq \sum_{j\in S} \varphi(j,S) \psi_{2,j} = \ensuremath{\mathrm{rev}}(S)$, and we deduce that in any case $\ensuremath{\mathrm{rev}}(S_1\cup S_2) + \sum_{j\in S\setminus(S_1\cup S_2)} \ensuremath{\mathrm{rev}}(j) \geq \ensuremath{\mathrm{rev}}(S)$, contradicting the assumption that $S$ is non-splittable. \end{proof} Combining Corollary~\ref{cor:simply_splittable_signals} and Claim~\ref{clm:two_winners} we deduce the following. \begin{corollary} \label{cor:support_containment} Let $S$ be a non-splittable signal in $\varphi^*$. Then $\{w_1(j) :\ j\in S\} = \{w_1(S), w_2(S)\}$; equivalently, $\ensuremath{\mathrm{supp}}(S) \subset d(w_1(S)) \cup d(w_2(S))$. \end{corollary} Using Corollary~\ref{cor:support_containment} we deduce the existence of an optimal signaling scheme with exactly two types of signals: singleton signals, and non-splittable signals. Now, using Claim~\ref{clm:altering_a_signaling_scheme} we can take any two non-splittable signals $S,T$ such that $w_1(S)=w_1(T)$ and $w_2(S) = w_2(T)$ and merge them. This follows from the fact that we can always think of $S$ and $T$ as two signals over $d(w_1(S)) \cup d(w_2(S))$, with some types having $0$ probability of declaring $S$ (or $T$). Using Claims~\ref{clm:altering_a_signaling_scheme} and~\ref{clm:same_winners} and Corollary~\ref{cor:support_containment}, we deduce that there exists an optimal signaling scheme that has at most $m + n(n-1)$ different signals: the singleton signals, and the signals composed from pairing $d(i)$ and $d(i')$ for any two bidders $i, i'$.
Observe that $d(1), d(2), \ldots, d(n)$ partition the $m$ different types into disjoint sets, so there can be at most $\min\{m,n\}$ non-empty elements in the partition. We therefore deduce that the optimal signaling scheme has at most $N = m + \min\{m(m-1), n(n-1) \} \leq m^2$ signals. We can therefore reduce our LP to have $N$ variables: variables $x_j$, indicating the probability that the auctioneer sees an item of type $j$ and declares the singleton cluster $\{j\}$; and variables $y_j(i_1, i_2)$, indicating the probability that the auctioneer sees an item of type $j \in d(i_1)\cup d(i_2)$ and declares a signal in which $i_1$ has the highest bid, and $i_2$ has the second highest bid. Formally, we solve: \begin{equation} \max \sum_j x_j ~\psi_{w_2(j),j} + \sum_{i_1}\sum_{i_2\neq i_1} \sum_{j \in d(i_1)\cup d(i_2)} y_j(i_1, i_2) ~\psi_{i_2,j} \label{eq:LP}\end{equation} \begin{align*} &\textrm{under constraints:}\\ & \forall i_1,\ \forall i_2\neq i_1, & \sum_{j\in d(i_1)\cup d(i_2)} y_{j}(i_1,i_2)~\psi_{i_1,j} \geq \sum_{j\in d(i_1)\cup d(i_2)} y_{j}(i_1,i_2) ~\psi_{i_2,j}\\ & \forall i_1,\ \forall i_2\neq i_1,\textrm{ and } i \neq i_1, i_2, & \sum_{j\in d(i_1)\cup d(i_2)} y_{j}(i_1,i_2) ~\psi_{i_1,j} \geq \sum_{j\in d(i_1)\cup d(i_2)} y_{j}(i_1,i_2) ~\psi_{i,j} \\ && \sum_{j\in d(i_1)\cup d(i_2)} y_{j}(i_1,i_2) ~\psi_{i_2,j} \geq \sum_{j\in d(i_1)\cup d(i_2)} y_{j}(i_1,i_2) ~\psi_{i,j} \\ & \forall j, & x_{j} + \sum_{i_1}\sum_{\substack{i_2\neq i_1 \\ \textrm{s.t. } j\in d(i_1)\cup d(i_2)}} y_{j}(i_1,i_2) \leq 1 \\ & \forall j,\ \forall i_1,\ \forall i_2\neq i_1, & x_{j} \geq 0, \qquad y_{j}(i_1, i_2) \geq 0 \end{align*} \subsection{An Additional Observation} \label{subsec:additional_observation} Note that for every $i_1\neq i_2$ and every $j\in d(i_1)\cup d(i_2)$, we have two $y$-variables in the LP~\eqref{eq:LP}, one for $i_1$ winning and $i_2$ coming second, and one for $i_2$ winning and $i_1$ coming second. We now show that it is enough to use just one variable, indicating a signal in which \emph{both} $i_1$ and $i_2$ give the highest bid. \begin{observation} \label{obs:equal_first_and_second_bid} There exists an optimal signaling scheme in which, for each non-singleton signal $S$, the first and the second highest bids are identical. \end{observation} \begin{proof} Assume that for a certain signal $S$, the bid of $w_1(S)$ is strictly greater than the bid of $w_2(S)$. Wlog, denote bidder $1$ as $w_1(S)$ and bidder $2$ as $w_2(S)$. We split $S$ into two disjoint, non-empty sets $S_1 = S \cap d(1)$ and $S_2 = S\cap d(2)$. (If either $S_1$ or $S_2$ is empty, then Corollary~\ref{cor:simply_splittable_signals} shows $S$ can be split into singleton signals.) Define \[g = \frac {\sum_{j\in S_2} \varphi_{j,S}~(\psi_{2,j} -\psi_{1,j}) } {\sum_{j\in S_1} \varphi_{j,S}~(\psi_{1,j} -\psi_{2,j}) }\] (Note that both the numerator and the denominator are positive.)
By assumption, we have \begin{align*} & \sum_{j\in S} \varphi(j,S) ~\psi_{1,j} > \sum_{j\in S} \varphi(j,S) ~\psi_{2,j} && \Leftrightarrow \cr & \sum_{j\in S_1} \varphi_{j,S}~\psi_{1,j} + \sum_{j\in S_2} \varphi_{j,S}~\psi_{1,j} > \sum_{j\in S_1} \varphi_{j,S}~\psi_{2,j} + \sum_{j\in S_2} \varphi_{j,S}~\psi_{2,j} && \Leftrightarrow \cr & \sum_{j\in S_1} \varphi_{j,S}~(\psi_{1,j} -\psi_{2,j}) > \sum_{j\in S_2} \varphi_{j,S}~(\psi_{2,j} -\psi_{1,j}) && \Leftrightarrow \ \ g < 1 \end{align*} So now, define $\varphi'$ to be the signaling scheme where for any $j\in S_1$, the probability of giving the signal $S$ decreases: $\varphi'(j,S) = g\cdot \varphi_{j,S}$, and as a result, the probability of giving the singleton signal $\{j\}$ increases: $\varphi'(j, \{j\}) = \varphi_{j, \{j\}} + (1-g)\cdot \varphi_{j,S}$. In $\varphi'$, the above derivation shows that the bids of bidder $1$ and bidder $2$ are identical. Furthermore, by increasing the probability mass on the singleton signals, the auctioneer can only increase her revenue: each unit of a type $j\in S_1$ that is moved to the singleton $\{j\}$ now fetches $\psi_{w_2(j),j} \geq \psi_{2,j}$, which is what it fetched under $S$. \end{proof} Following Observation~\ref{obs:equal_first_and_second_bid}, we deduce that the number of variables in the LP (and the number of signals in our signaling scheme) can be bounded by $N = m + \min\{\binom{n}{2}, \binom{m}{2}\}$. Furthermore, Observation~\ref{obs:equal_first_and_second_bid} justifies the fact that we repeatedly identify a signal with its support. \section{Competitiveness Against a Benchmark} \label{sec:approximating-benchmark} We show a lower bound on the revenue against a benchmark, and first discuss which benchmarks are reasonable. It is quite clear, especially when the problem is viewed as selling $m$ divisible goods, that the auctioneer cannot get more than $\sum_j \max_i \psi_{i,j}$. As we are restricted to running a 2nd price auction in the end, this quantity is in general unapproachable, since some bidder might have valuations that are so high that they overshadow all other valuations of all other bidders. We thus define our benchmark as the outcome of ``taking a bidder out of the picture''. That is, we ignore the bids of some bidder $i'$, and sum the maximum bid for each type separately. Formally, \[\ensuremath{\mathcal{B}} = \min_{i'} \left(\sum_j \max_{i\neq i'} \psi_{i,j}\right) \ \ = \min_{i'} \left(\sum_{j\in d(i')} \ensuremath{\mathop{\rm max2}}_i \psi_{i,j} + \sum_{j\notin d(i')} \max_{i} \psi_{i,j}\right) \] Before showing our algorithm is competitive with $\mathcal{B}$, let us first discuss the motivation for this benchmark. A classical benchmark for comparison in other prior-free settings is the one that results from omitting the bidder with the highest bid (for the same reasoning mentioned above), see Goldberg {\em et al.} \cite{Goldberg}. Therefore, one might suggest that the right benchmark for the problem is the result of ignoring the bid of the one bidder who covets her set of item types the most. Formally, this other benchmark for the problem is: $\tilde \ensuremath{\mathcal{B}} = \sum_j (\max_{i\neq i^*} \psi_{i,j})$ where $i^* = \arg \max_i \left(\sum_{j\in d(i)} \psi_{i,j}\right)$. First, observe that $i^*$ and $i_0$, the bidder for which the benchmark $\ensuremath{\mathcal{B}}$ is obtained, are not necessarily the same, as the example in Figure~\ref{fig:example} demonstrates. But let us show that they are closely related.
\begin{figure} \begin{center} \fbox{\parbox{2.5in}{ \begin{tabular*}{2.5in}{r|c|c|c} & type $1$ & type $2$ & type $3$ \cr \hline bidder $1$ & 500 & 500 & 0 \cr bidder $2$ & 499 & 498 & 1 \cr bidder $3$ & 7 & 3 & 999 \cr \end{tabular*} }} \caption{\label{fig:example} \scriptsize An example demonstrating that $\ensuremath{\mathcal{B}}$ and $\tilde\ensuremath{\mathcal{B}}$ are different. $\tilde\ensuremath{\mathcal{B}}$ requires we ignore bidder $1$, as her bids are the highest. $\ensuremath{\mathcal{B}}$ requires we ignore bidder $3$, as ignoring bidder $3$ results in the biggest decrease in maximal bids.} \end{center} \end{figure} \begin{claim} \label{clm:comparing_benchmarks} \[ \tilde\ensuremath{\mathcal{B}}/2 \leq \ensuremath{\mathcal{B}} \leq \tilde\ensuremath{\mathcal{B}} \] \end{claim} \begin{proof} The inequality $\ensuremath{\mathcal{B}} \leq \tilde\ensuremath{\mathcal{B}}$ follows from the definition of $\ensuremath{\mathcal{B}}$, so we turn to proving $\ensuremath{\mathcal{B}} \geq \frac 1 2 \tilde\ensuremath{\mathcal{B}}$. Denote by $i_0$ the bidder on which the minimum of $\ensuremath{\mathcal{B}}$ is obtained. Obviously, if $i^* = i_0$, we are done, as both benchmarks are the same. So we assume $i^*\neq i_0$, and we have \begin{eqnarray*} &&\ensuremath{\mathcal{B}} = \sum_{j} (\max_i \psi_{i,j}) - \left(\sum_{j\in d(i_0)} (\max_i \psi_{i,j}) - (\ensuremath{\mathop{\rm max2}}_i \psi_{i,j})\right) \geq \sum_{j\in d(i^*)} (\max_i \psi_{i,j}) \cr &&\tilde\ensuremath{\mathcal{B}} = \sum_{j} (\max_i \psi_{i,j}) - \left(\sum_{j\in d(i^*)} (\max_i \psi_{i,j}) - (\ensuremath{\mathop{\rm max2}}_i \psi_{i,j})\right) \cr \end{eqnarray*} and since, by definition of $i^*$, we have that $\sum_{j\in d(i^*)} \psi_{i^*,j} \geq \sum_{j\in d(i_0)} \psi_{i_0,j}$ (note that for $j\in d(i)$, the maximal bid is $\psi_{i,j}$), it holds that \begin{eqnarray*} &\tilde\ensuremath{\mathcal{B}} - \ensuremath{\mathcal{B}} &= \left(\sum_{j\in d(i_0)} (\max_i \psi_{i,j}) - (\ensuremath{\mathop{\rm max2}}_i \psi_{i,j})\right) - \left(\sum_{j\in d(i^*)} (\max_i \psi_{i,j}) - (\ensuremath{\mathop{\rm max2}}_i \psi_{i,j}) \right)\cr && \leq \sum_{j\in d(i^*)} (\ensuremath{\mathop{\rm max2}}_i \psi_{i,j}) + \left( \sum_{j\in d(i_0)} (\max_i \psi_{i,j}) - \sum_{j\in d(i^*)} (\max_i \psi_{i,j}) \right) \cr && \leq \sum_{j\in d(i^*)} (\ensuremath{\mathop{\rm max2}}_i \psi_{i,j}) \leq \sum_{j\in d(i^*)} (\max_i \psi_{i,j}) \leq \ensuremath{\mathcal{B}} \end{eqnarray*} \end{proof} We comment that the example in Figure~\ref{fig:example} also demonstrates that the $2$-factor of Claim~\ref{clm:comparing_benchmarks} is essentially tight. Now, having established the connection between $\ensuremath{\mathcal{B}}$ and $\tilde\ensuremath{\mathcal{B}}$, we compare our signaling scheme with the benchmark $\ensuremath{\mathcal{B}}$. \begin{theorem} \label{thm:2-approximation} For any set of valuations $\psi_{i,j}$, the revenue of our signaling scheme is $\geq \ensuremath{\mathcal{B}}/2$. \end{theorem} \begin{proof} The proof follows from breaking the revenue of the signaling scheme into two terms: the revenue from singleton signals, and the revenue from non-singleton signals.
Given a signaling scheme $\varphi$, we denote \begin{eqnarray*} &&\ensuremath{\mathrm{rev}}^{\rm S}(\varphi) = \sum_j \ensuremath{\mathrm{rev}}(j) = \sum_j \varphi(j,\{j\}) ~\psi_{w_2(j),j} \cr &&\ensuremath{\mathrm{rev}}^{\rm NS}(\varphi) = \sum_{S:\ |S|\geq 2} \ensuremath{\mathrm{rev}}(S) = \sum_{S:\ |S|\geq 2} \sum_{j\in S} \varphi(j,S)~ \psi_{w_2(S),j} \end{eqnarray*} We now denote by $\varphi^*$ the optimal signaling scheme we get from solving the LP in (\ref{eq:LP}). Let us fix $S$ to be some non-singleton signal in our scheme. So $S$ corresponds to a pair of bidders, $i$ and $i'$, such that $S\subset d(i)\cup d(i')$ and $i$ has the highest bid on $d(i)$, whereas $i'$ has the highest bid over the items in $d(i')$. The revenue the auctioneer gets from signal $S$ is exactly $\ensuremath{\mathrm{rev}}(S) = \sum_{j\in d(i)\cup d(i')} y_{j}(i,i') \psi_{i,j} = \sum_{j\in d(i)\cup d(i')} y_{j}(i,i') \psi_{i',j}$ (by Observation~\ref{obs:equal_first_and_second_bid} the two highest bids are equal). Therefore, \[2\cdot\ensuremath{\mathrm{rev}}(S) = \sum_{j\in d(i)\cup d(i')} y_{j}(i,i') (\psi_{i,j} + \psi_{i',j}) \geq \sum_{j\in d(i)\cup d(i')} y_{j}(i,i')(\max_i\psi_{i,j})\] Summing up the revenue of the auctioneer from all non-singleton signals, we have \begin{eqnarray} \label{eq:profit_non_singletons} & \ensuremath{\mathrm{rev}}^{\rm NS}(\varphi^*) & \geq \frac 1 2 \sum_{S:\ |S|\geq 2} \sum_{j\in S} (\max_i\psi_{i,j})y_{j}(i,i') = \frac 1 2 \sum_j (\max_i\psi_{i,j}) \sum_{\substack{S:\ |S|\geq 2\\ j\in S}} \varphi^*(j,S) \cr && = \frac 1 2 \sum_j (\max_i \psi_{i,j}) \Pr[\textrm{Given }j,\ \varphi^*\textrm{ declares a non-singleton signal}] \cr && = \frac 1 2 \sum_j (\max_i \psi_{i,j})\big(1-\varphi^*(j,\{j\})\big) \end{eqnarray} We now turn to bound the revenue from singleton signals, that is, the term $\ensuremath{\mathrm{rev}}^{\rm S}(\varphi^*)$. Let us consider the following procedure, which converts one signaling scheme $\varphi$ into a different one, $\varphi'$. \begin{enumerate} \item Let $j = \arg\min \{\varphi(j,\{j\}) ~\psi_{w_1(j),j} : \ \varphi(j,\{j\})>0\}$. \item Fix some $j'$ s.t. $\varphi(j',\{j'\}) >0$ and s.t. $w_1(j) \neq w_1(j')$. \item Define $\lambda = \frac{ \varphi(j,\{j\}) ~ \psi_{w_1(j),j} } {\varphi(j',\{j'\}) ~ \psi_{w_1(j'),j'}}~$ (obviously, $\lambda \leq 1$). \item Alter $\varphi$ in the following manner. Introduce a new signal $S_{\rm new} = \{j, j'\}$ and set \begin{align*} &\varphi'(j,S_{\rm new}) = \varphi(j,\{j\}) && \varphi'(j,\{j\}) = 0 \\ &\varphi'(j',S_{\rm new}) = \lambda \varphi(j',\{j'\}) &&\varphi'(j',\{j'\}) = (1-\lambda)\varphi(j',\{j'\}) \end{align*} \end{enumerate} Now, the effect of applying this procedure on a signaling scheme is that $\ensuremath{\mathrm{rev}}^{\rm S}$ decreases, yet $\ensuremath{\mathrm{rev}}^{\rm NS}$ increases: $\ensuremath{\mathrm{rev}}^{\rm S}(\varphi') - \ensuremath{\mathrm{rev}}^{\rm S}(\varphi) = - \varphi_{j,j} ~\psi_{w_2(j),j} - \lambda\varphi_{j',j'} ~\psi_{w_2(j'), j'}$, whereas $\ensuremath{\mathrm{rev}}^{\rm NS}(\varphi') - \ensuremath{\mathrm{rev}}^{\rm NS}(\varphi) = \ensuremath{\mathrm{rev}}(S_{\rm new})$. But now, because of $\lambda$, the bid of $w_1(j)$ and the bid of $w_1(j')$ are identical for $S_{\rm new}$, and therefore, just as shown above, $\ensuremath{\mathrm{rev}}(S_{\rm new}) = \frac 1 2 \big(\varphi(j,\{j\}) ~\psi_{w_1(j),j} + \lambda\varphi(j',\{j'\})~\psi_{w_1(j'),j'}\big)$. Given a signaling scheme, we denote $J_\varphi = \{j: \ \varphi(j,\{j\})>0\}$, and $I_\varphi = \{w_1(j) : j\in J_\varphi\}$.
It is evident that the above procedure is applicable as long as $I_\varphi$ contains at least two distinct bidders. So, imagine we take $\varphi^*$ and apply the abovementioned procedure repeatedly, until it is no longer applicable. (Note that every time we apply the procedure we decrease the number of singleton signals by at least $1$, so within $m$ iterations we must terminate.) Denote the signaling scheme which we end with by $\bar\varphi$, and assume $I_{\bar\varphi} $ contains a single bidder, $i_0$ (the case $I_{\bar\varphi} =\emptyset$ is even simpler). Denote by $J_{\rm remain}$ the set of types that appear as singletons in $\bar\varphi$ (and obviously in $\varphi^*$), and observe that $J_{\rm remain}\subset d(i_0)$. Repeating the derivation from \eqref{eq:profit_non_singletons}, we get that \begin{eqnarray*} & \ensuremath{\mathrm{rev}}^{\rm NS}(\bar\varphi) & \geq \frac 1 2 \sum_j (\max_i \psi_{i,j})\big(1-\bar\varphi(j,\{j\})\big) \cr && = \frac 1 2 \sum_{j\notin J_{\rm remain}} (\max_i \psi_{i,j}) + \frac 1 2 \sum_{j\in J_{\rm remain}} (\max_i \psi_{i,j})\big(1-\bar\varphi(j,\{j\})\big) \cr && \geq \frac 1 2 \sum_{j\notin J_{\rm remain}} (\max_i \psi_{i,j}) + \frac 1 2 \sum_{j\in J_{\rm remain}} (\ensuremath{\mathop{\rm max2}}_i \psi_{i,j})\big(1-\bar\varphi(j,\{j\})\big) \end{eqnarray*} Since $\ensuremath{\mathrm{rev}}^{\rm S}(\bar\varphi) = \sum_{j\in J_{\rm remain}} \bar\varphi(j, \{j\}) (\ensuremath{\mathop{\rm max2}}_i \psi_{i,j})$ we can conclude and deduce that \begin{eqnarray*} & \ensuremath{\mathrm{rev}}(\varphi^*) \geq \ensuremath{\mathrm{rev}}(\bar\varphi) &= \ensuremath{\mathrm{rev}}^{\rm NS}(\bar\varphi) + \ensuremath{\mathrm{rev}}^{\rm S}(\bar\varphi) \cr && \geq \frac 1 2 \sum_{j\notin J_{\rm remain}} (\max_i \psi_{i,j}) + \frac 1 2 \sum_{j\in J_{\rm remain}} (\ensuremath{\mathop{\rm max2}}_i \psi_{i,j}) \cr && \geq \frac 1 2 \sum_{j\notin d(i_0)} (\max_i \psi_{i,j}) + \frac 1 2 \sum_{j\in d(i_0)} (\ensuremath{\mathop{\rm max2}}_i \psi_{i,j}) \cr && = \frac 1 2 \sum_{j} (\max_{i\neq i_0} \psi_{i,j}) \geq \frac 1 2 \ensuremath{\mathcal{B}} \end{eqnarray*} \end{proof} We comment that if $\varphi^*$ or $\bar\varphi$ contains no singleton signals, then we have $\ensuremath{\mathrm{rev}}(\varphi^*)\geq \ensuremath{\mathrm{rev}}(\bar\varphi) = \ensuremath{\mathrm{rev}}^{\rm NS}(\bar\varphi) \geq \frac 1 2 \sum_j \max_{i} \psi_{i,j}$. In words: if the optimal signaling scheme contains no singleton clusters, then the revenue of the auctioneer is at least half the sum of highest bids over all types. We also comment that the example in the introduction, the one where $m=n$ and the valuations form the identity matrix, exhibits a case where the $2$-factor in Theorem~\ref{thm:2-approximation} is asymptotically tight. \section{Discussion and Open Problems} \label{sec:conclusion} We have shown that in probabilistic single item auctions, mixed signaling schemes outperform pure ones, both with respect to revenue and with respect to computational complexity. Furthermore, Observation~\ref{obs:equal_first_and_second_bid} gives us insight into the characterization of the optimal signaling / bundling scheme. The auctioneer leverages her informational advantage to bundle goods in a way that \emph{maximizes competition} among bidders -- her non-singleton bundles are exactly those where two (or more) bidders have identical highest bids.
In that aspect, our model allows us to \emph{precisely} quantify the extent to which the seller can shape the demand in order to increase her revenue (rather than the usual concern of truthfully eliciting the demand in the non-full-information setting). Needless to say, the notion that an increase in the demand leads to an increase in revenue is a basic principle of microeconomics (e.g.~\cite{mascolell1995mt}). Similarly, Observation~\ref{obs:equal_first_and_second_bid} also demonstrates the connection between our signaling scheme and the fractional knapsack problem (see~\cite{KelPfePis04}). In fact, one may view the problem as a version of the knapsack problem -- for every pair of bidders $(i,i')$ there are numerous ways of bundling the goods s.t. the bids of $i$ and $i'$ are the same. The auctioneer is therefore faced with the problem of picking a subset of these potential bundles (subject to having at most one unit of each good) in order to maximize her profit. And, much like the fact that the fractional knapsack problem is polynomial-time solvable, so is the mixed signals problem. Finally, we suggest some interesting open problems: \begin{itemize} \item{}Are there instances where the optimal mixed signaling scheme generates strictly more than twice the revenue of the optimal pure signaling scheme? \item{}In Bayesian variants of the setup (see \cite{Emek}), how well does the signaling + 2nd price auction approach approximate the optimal auction (in the sense of Myerson \cite{Myer})? \item{}Is it possible to find an optimal (or approximately optimal) signaling scheme when $m$ is exponentially large? Consider the case where each type can be described using $d$ attributes, and the bidders' valuations for the item are functions of these $d$ attributes. Can one extend the LP of~\eqref{eq:LP} to handle such valuations? \end{itemize}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} Current constraints, from a comparison of standard, homogeneous, primordial nucleosynthesis calculations with light-element abundances by Walker {et al.}\/ (1991, see also Olive {et al.}\/ 1990 and Peebles {et al.}\/ 1991), place tight limits on the baryon density parameter of the Universe, $\Omega_{\rm b}=0.05\pm0.01h_{50}^{-2}$ ($H_{0}=50h_{50}\hbox{${\rm\thinspace km}{\rm\thinspace s}^{-1}\,$}\Mpc^{-1}$). This implies a mean baryon density of less than about 6 per cent of the critical closure density if $h_{50}=1$, which is much lower than the previous upper limit of $\Omega_{\rm b}<0.19h_{50}^{-2}$ determined by Yang {et al.}\/ (1984). As first pointed out by S.~White \& Frenk (1991), the recent calculation of the mean baryon fraction conflicts with the X-ray determinations of the gas fraction of mass in clusters if $\Omega_0=1$. This fraction should equal $\Omega_{\rm b}/\Omega_0$ if dark matter is distributed similarly to the X-ray emitting gas. They, and more recently S.~White {et al.}\/ (1993), noted that hot gas contributes approximately 20 per cent to the total mass of the Coma cluster within approximately $3{\rm\thinspace Mpc}$, indicating that $\Omega_{\rm b}/\Omega_0\sim0.3$. Baryon fraction estimates for the Coma cluster have been extended to $5{\rm\thinspace Mpc}$, utilizing the `unlimited' field of view of the {\it ROSAT} All-Sky Survey (Briel, Henry, \& B\"{o}hringer 1992), where the gas mass content is then 30 per cent. Thus at face value, either $\Omega_0\sim0.2$, dark matter and baryons have different distributions on cluster scales, or the abundance measurements and/or calculations of $\Omega_{\rm b}$ are incorrect. This last possibility is unlikely since it involves the best-understood physics. Over a decade ago, Ku {et al.}\/ (1983) determined that the baryon fraction within $1.9{\rm\thinspace Mpc}$ of CA$0340-538$ was greater than 10 per cent, while Stewart {et al.}\/ (1984) found variations between 3 and 20 per cent within the central $0.5{\rm\thinspace Mpc}$ of 36 clusters, and that a significant number of clusters have baryon fractions of at least 10 per cent. Stewart {et al.}\/ also noted that the baryon fraction increases with radius, a result supported by the analysis of {\it Einstein Observatory} \/ data by Forman \& Jones (1984) which showed that the scale-height of the gas distribution in clusters is generally larger than that of the gravitational mass. Edge \& Stewart (1991b) also determined baryon fractions of up to 20 per cent from {\it EXOSAT} observations of 36 clusters of galaxies. However, none of these studies noted any conflict between the X-ray determinations of the baryon fraction and the constraints from standard primordial nucleosynthesis; the calculated limit at that time was $\Omega_{\rm b}<0.19h_{50}^{-2}$. A recent study of the Shapley supercluster by Fabian (1991), where gas mass and luminosity relations from Forman \& Jones (1984) are extrapolated, indicates that the baryon fraction there is greater than 18 per cent over a region of $37{\rm\thinspace Mpc}$ in radius. This over-density of a factor of 3 implies that the baryons must have been accumulated from a region that is at least 40 per cent larger in radius, if $\Omega_0=1$. This then implies that the Shapley region must be bound, or has at least retarded the Hubble flow over this region, and creates problems for the theory of the formation of large-scale structure in currently favoured models.
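The arithmetic behind these over-density statements is elementary; the following minimal sketch (ours, included purely for illustration) makes it explicit:
\begin{verbatim}
# Illustrative sketch of the baryon over-density arithmetic quoted above.
omega_b_max = 0.06      # standard nucleosynthesis limit (h50 = 1)
omega_0 = 1.0           # assumed total density parameter

f_baryon = 0.18         # baryon fraction inferred for the Shapley region
overdensity = f_baryon / (omega_b_max / omega_0)
print(overdensity)              # 3.0
# With Omega_0 = 1 the baryons must have been gathered from a sphere
# larger in radius by the cube root of the over-density:
print(overdensity ** (1 / 3))   # 1.44, i.e. at least 40 per cent larger

# Conversely, a cluster baryon fraction of 30 per cent is consistent
# with omega_b_max only if Omega_0 ~ 0.06 / 0.30 = 0.2:
print(omega_b_max / 0.30)
\end{verbatim}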
If baryon over-densities are common in clusters, as we aim to show in this paper, then perhaps the most obvious solution is that $\Omega_0<1$ ({\it e.g.\ } for baryon fractions of 30 per cent, $\Omega_{\rm b}/\Omega_0\approxlt0.06/0.2=0.3$ --- see also S.~White \& Frenk 1991, and S.~White {et al.}\/ 1993). However, this solution disagrees with the strong evidence for $\Omega_0=1$ from cluster evolution and substructure studies (Richstone, Loeb \& Turner 1992), and the estimates obtained from the {\sc POTENT} analysis of {\it IRAS} galaxy density fields and peculiar velocities (Nusser \& Dekel 1993, Dekel {et al.}\/ 1993, Dekel \& Rees 1994). If $\Omega_0=1$, there are several possible solutions to, or implications of, the baryon over-density problem. \begin{enumerate} \item{\label{point:mgrav} The X-ray emitting gas has been concentrated with respect to the dark matter and the total cluster masses are much higher, or there is clustering of dark matter on larger scales, {\it e.g.\ } in a mixed dark matter Universe.} \item{\label{point:clumping} The X-ray determined gas masses are over-estimated, {\it e.g.\ } due to clumping of the X-ray gas.} \item{\label{point:anomoly} The current examples of large baryon over-densities in clusters are unrepresentative of clusters in general.} \item{\label{point:lambda} The cosmological constant, $\Lambda$, is non-zero and contributes to the density parameter such that $\Omega_0=\Omega_{\rm matter}+\Omega_{\Lambda}=1$ ({\it e.g.\ } see the review by Carroll, Press \& Turner 1992).} \item{\label{point:ibbns} $\Omega_{\rm b}$ can be higher if the Universe is inhomogeneous at the time of nucleosynthesis.} \item{\label{point:bbns} The new calculations of standard primordial nucleosynthesis or the primordial abundance measurements are incorrect, or are much less tightly constrained than believed, allowing $\Omega_{\rm b}\sim0.3h_{50}^{-2}$ ({\it e.g.\ } if some abundance determinations are incorrect).} \end{enumerate} Recent work on inhomogeneous nucleosynthesis (Jedamzik, Fuller, \& Mathews 1994) now shows that solution \ref{point:ibbns} is not viable, and solution \ref{point:bbns} seems unlikely given that the physics involved is well understood. Solution \ref{point:clumping} is unlikely to cause a significant discrepancy, as shown later in this paper (see also McHardy {et al.}\/ 1990). We shall discuss solution \ref{point:mgrav}, which implies that there are significant masses of gravitating matter outside the regions of the X-ray emitting gas. The main focus of this paper is to investigate point \ref{point:anomoly} by determining gas masses to large radii in a number of clusters. Our determinations have been made from X-ray image-deprojection analysis of 19 clusters of galaxies which were observed with the {\it Einstein Observatory} \/ Imaging Proportional Counter (IPC). These are X-ray luminous ($\hbox{$L_{\rm X}\,$}_{\rm bol}\approxgt5\times10^{44}\hbox{${\rm\thinspace erg}{\rm\thinspace s}^{-1}\,$}$) and moderately distant ($z>0.05$) clusters which are easily covered by the IPC field of view, and which have no strong sources contaminating their surface brightness profiles. This enables their gas masses to be well determined out to between 1 and $2.5{\rm\thinspace Mpc}$. \section{Deprojection analysis and results}\label{section:analysis} We have used the X-ray image deprojection technique (pioneered by Fabian {et al.}\/ 1981). This method assumes a spherical geometry for the cluster, and enables the volume count emissivity from the hot intracluster gas to be determined as a function of radius.
The properties of the intracluster medium (ICM) can then be determined after corrections for attenuation of the cluster emission due to absorption from intervening material, and assumptions on the form of the underlying gravitational potential of the cluster. Detailed descriptions and recent examples of the current analysis method and procedure can be found in the description of the analysis of {\it ROSAT} Position Sensitive Proportional Counter (PSPC) and High Resolution Imager (HRI) data, on A478, by Allen {et al.}\/ (1993) and D.~White {et al.}\/ (1994), respectively. This analysis differs from previous deprojection analyses, which have concentrated on investigation of the cooling flow properties of clusters, as such studies required relatively small radial bins to resolve the cooling time of the intracluster gas at the very centre of the cluster. We are interested in accurate determinations of the total gas masses in clusters to large radii, and so we use large radial bins, which improve the signal-to-noise ratio of the data and enable deprojection to greater radii. The clusters in our sample were selected to be bright and moderately distant so that the emission can be followed to sufficient radius within the field of view of the IPC. The X-ray emission of the cluster should also be relatively symmetrical, with no contamination from sources which would produce significant errors in the gas mass determinations. Most of the clusters which we selected do appear fairly spherically symmetric and smooth in the central regions, although there are some clusters where the emission in the outer regions is less regular (notably A1763, A3186, A3266 and A3888). However, as these results are not significantly different from those for the clusters which appear very regular (such as A85, A478, A644, A1795 and A2009), we believe the results provide a good statistical indication of the baryon fraction in clusters, regardless of morphological details. With the above criteria we formed our sample of 19 clusters, and obtained surface-brightness profiles from C.~Jones, W.~Forman and C.~Stern at the Harvard-Smithsonian Center for Astrophysics. We selected IPC rather than HRI data for this analysis due to its larger field of view and superior quantum efficiency. The poorer spatial resolution of the IPC is, as we have noted, inconsequential. The point-spread function of the IPC, which is approximately 1 arcmin (Gaussian width), corresponds to relatively large radii at the moderate redshifts of the clusters in our sample. The data were corrected for the effects of the telescope vignetting, and the background contributions were estimated from the region just outside the maximum radius of each deprojection. This ensures that our estimates of the cluster gas masses are conservative, as there may be some cluster emission outside the selected maximum radius. The deprojection of the surface brightness profile of each cluster requires additional information, shown in Table~\ref{table:input_data}: the column density along the line-of-sight, and the X-ray temperature and velocity dispersion of each cluster. The total attenuation of X-rays from the cluster is dependent on the absorption (note we use photoelectric absorption cross-sections given by Morrison \& McCammon 1983) within our Galaxy and intrinsic absorption.
D.~White {et al.}\/ (1991b) have shown that many clusters appear to have intrinsic absorption, but as we do not have such information for our whole sample we use estimates for the Galactic contribution taken from the $21{\rm\thinspace cm}$ determinations by Stark {et al.}\/ (1992). The effect of possible excess absorption will be shown in Section~\ref{section:gas_masses}, but we note here that our prescription leads to conservative gas mass estimates. \begin{table*} \begin{center} \small \caption{Input Data. \label{table:input_data} } \begin{tabular}{rcccllcc} \\ \hline \multicolumn{1}{c}{No.} & \multicolumn{1}{c}{Cluster} & \multicolumn{1}{c}{$z$} & \multicolumn{1}{c}{Galactic $21{\rm\thinspace cm}$} & \multicolumn{2}{c}{Temperature $({\rm\thinspace keV})$} & \multicolumn{2}{c}{Gravitational Potential} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$N_{\rm H}$ $(10^{21}\hbox{$\cm^{-2}\,$})$} & \multicolumn{1}{c}{Reference} & \multicolumn{1}{c}{Deprojected} & \multicolumn{1}{c}{$r_{\rm core}$ $({\rm\thinspace Mpc})$} & \multicolumn{1}{c}{$\sigma$ $(\hbox{${\rm\thinspace km}{\rm\thinspace s}^{-1}\,$})$} \\ \\ 1. & A85 & $ 0.0521 ^{\S} $ & 0.30 & $~^{\star} 6.2^{+ 0.4 ( 0.2)}_{ -0.5 ( 0.3)} $&$ ( 6.1^{+ 1.4}_{ -0.3}) $ & 0.10 & $ 749 ^{\S} $\\ 2. & A401 & $ 0.0748 ^{\infty} $ & 1.11 & $~^{\star} 7.8^{+ 1.1 ( 0.6)}_{ -0.9 ( 0.6)} $&$ ( 8.3^{+ 1.2}_{ -0.7}) $ & 0.60 & $ {\em 1112} ^{T_{\rm X}} $ \\ 3. & A478 & $ 0.0881 ^{\S} $ & 1.36 & $~^{\dagger} 6.8^{+ 1.1 ( 0.6)}_{ -1.0 ( 0.6)} $&$ ( 7.2^{+ 1.8}_{ -0.8}) $ & 0.20 & $ 904 ^{\S} $ \\ 4. & A545 & $ 0.1530 ^{\infty} $ & 1.14 & $~^{\star} 5.5^{+\infty ( 6.2)}_{ -2.8 ( 2.1)} $&$ ( 6.8^{+ 0.7}_{ -4.3}) $ & 0.65 & $ {\em 925} ^{T_{\rm X}} $ \\ 5. & A644 & $ 0.0704 ^{\infty} $ & 0.73 & $~^{\star} 7.2^{+ 3.0 ( 1.1)}_{ -1.2 ( 0.8)} $&$ ( 7.2^{+ 0.7}_{ -2.5}) $ & 0.40 & $ {\em 1017} ^{T_{\rm X}} $ \\ 6. & A665 & $ 0.1816 ^{\infty} $ & 0.42 & $~^{\star} 8.2^{+ 1.0 ( 0.6)}_{ -0.8 ( 0.4)} $&$ ( 9.4^{+ 0.2}_{ -1.8}) $ & 1.00 & $ 1201 ^{\infty} $ \\ 7. & A1413 & $ 0.1427 ^{\infty} $ & 0.20 & $~^{\star} 8.9^{+ 0.5 ( 0.3)}_{ -0.5 ( 0.3)} $&$ (10.0^{+ 1.7}_{ -2.3}) $ & 0.50 & $ {\em 1193} ^{T_{\rm X}} $ \\ 8. & A1650 & $ 0.0840 ^{\infty} $ & 0.15 & $~^{\star} 5.5^{+ 2.7 ( 1.3)}_{ -1.5 ( 1.0)} $&$ ( 6.1^{+ 0.6}_{ -0.8}) $ & 0.35 & $ {\em 925} ^{T_{\rm X}} $ \\ 9. & A1689 & $ 0.1810 ^{\infty} $ & 0.19 & $~^{\star} 10.1^{+ 2.7 ( 5 4)}_{ -1.5 ( 1.0)} $&$ (10.9^{+ 0.2}_{ -0.9}) $ & 0.40 & $ {\em 1275} ^{T_{\rm X}} $ \\ 10. & A1763 & $ 0.1870 ^{\diamondsuit} $ & 0.09 & $~^{\star} 6.9^{+\infty }_{ -3.6 ( 1.9)} $&$ ( 7.1^{+ 0.6}_{ -3.2}) $ & 0.70 & $ {\em 1043} ^{T_{\rm X}} $ \\ 11. & A1795 & $ 0.0621 ^{\S} $ & 0.12 & $~^{\dagger} 5.1^{+ 0.4 ( 0.2)}_{ -0.5 ( 0.3)} $&$ ( 5.6^{+ 0.1}_{ -0.8}) $ & 0.20 & $ 773 ^{\S} $ \\ 12. & A2009 & $ 0.1530 ^{\infty} $ & 0.33 & $~^{\star} 7.8^{+\infty ( 4.4)}_{ -2.9 ( 2.1)} $&$ ( 8.0^{+ 0.5}_{ -0.5}) $ & 0.40 & $ {\em 1112} ^{T_{\rm X}} $ \\ 13. & A2029 & $ 0.0765 ^{\S} $ & 0.24 & $~^{\star} 7.8^{+ 1.4 ( 0.8)}_{ -1.0 ( 0.7)} $&$ ( 8.3^{+ 1.0}_{ -3.1}) $ & 0.30 & $ {\em 1112} ^{T_{\rm X}} $ \\ 14. & A2142 & $ 0.0899 ^{\infty} $ & 0.39 & $~^{\dagger} 11.0^{+ 2.0 ( 1.2)}_{ -0.7 ( 0.4)} $&$ (10.4^{+ 1.0}_{ -4.9}) $ & 0.40 & $ 1295 ^{\bullet} $ \\ 15. & A2163 & $ 0.2030 ^{\clubsuit} $ & 1.10 & $~^{\star} 13.9^{+ 1.1 ( 0.7)}_{ -0.8 ( 0.5)} $&$ (14.2^{+ 1.1}_{-11.3}) $ & 0.60 & $ {\em 1509} ^{T_{\rm X}} $ \\ 16. 
& A2319 & $ 0.0559 ^{\spadesuit} $ & 0.86 & $~^{\star} 9.9^{+ 1.4 ( 0.8)}_{ -1.1 ( 0.7)} $&$ (11.9^{+ 1.3}_{ -3.0}) $ & 0.60 & $ {\em 1261} ^{T_{\rm X}} $ \\ 17. & A3186 & $ 0.1270 ^{\clubsuit} $ & 0.60 & $~^{\ddagger} 5.9^{ }_{ } $&$ ( 6.7^{+ 1.6}_{ -3.0}) $ & 0.50 & $ {\em 960} ^{T_{\rm X}} $ \\ 18. & A3266 & $ 0.0545 ^{\heartsuit} $ & 0.30 & $~^{\star} 6.2^{+ 0.6 ( 0.5)}_{ -0.6 ( 0.4)} $&$ ( 6.8^{+ 0.6}_{ -1.1}) $ & 0.80 & $ {\em 985} ^{T_{\rm X}} $ \\ 19. & A3888 & $ 0.1680 ^{\heartsuit} $ & 0.11 & $~^{\ddagger} 7.9^{ }_{ } $&$ ( 7.9^{+ 0.3}_{ -1.0}) $ & 0.50 & $ {\em 1120} ^{T_{\rm X}} $ \\ \hline \end{tabular} \newline \parbox[]{17.75cm}{ \noindent This table contains the input data required for the cluster deprojections. The first temperatures given are reference values (with 5th and 95th percentile confidence limits and $1\sigma$ standard deviations in the brackets) obtained from the literature. In the next column are the spatially-averaged emission-weighted ($0.4-4{\rm\thinspace keV}$) temperatures from the deprojected temperature profiles (these are median values with 10th and 90th percentile limits given in brackets). A comparison of these two columns shows the accuracy of the deprojection calibration with respect to the reference temperatures. The velocity dispersion values written in italics with the superscript ${T_{\rm X}}$ refer to values interpolated from the X-ray temperature values using the equation stated in the main text (equation~\ref{equation:cdisp}). Note that we have used X-ray temperature interpolated velocity dispersions for A401, A2009 and A2029, as we were unable to obtain a flat temperature profile using the literature values. The velocity dispersion for A401 was reduced from $1290^\infty\hbox{${\rm\thinspace km}{\rm\thinspace s}^{-1}\,$}$, while for A2009 and A2029 the velocity dispersions were increased from $804^\infty\hbox{${\rm\thinspace km}{\rm\thinspace s}^{-1}\,$}$ and $786^{\S}\hbox{${\rm\thinspace km}{\rm\thinspace s}^{-1}\,$}$, respectively. The superscripts refer to: ${\star}$ David {et al.}\/ (1993); ${\dagger}$ Edge \& Stewart (1991a); ${\ddagger}$ Forman \& Jones (private communication); ${\S}$ Zabludoff, Huchra \& Geller (1990); ${\infty}$ Struble \& Rood (1991); ${\bullet}$ Quintana \& Lawrie (1982); ${\clubsuit}$ Arnaud {et al.}\/ (1992); ${\diamondsuit}$ Noonan (1981); ${\heartsuit}$ Abell, Corwin \& Olowin (1989); and ${\spadesuit}$ Stocke {et al.}\/ (1991). } \end{center} \end{table*} \normalsize \doublespace The {\it Einstein Observatory} IPC data do not have sufficient combined spatial and spectral resolution to enable an accurate empirical determination of the temperature profile, from which the gravitational potentials of the clusters may be determined. This means that the deprojection technique, which could otherwise be used to directly determine the total gravitational mass of the cluster as a function of radius, actually requires the form of the gravitational potential to be specified. The deprojection results are then calibrated using the only widely X-ray observed property of the intracluster medium --- {\it i.e.\ } spatially averaged cluster temperatures from broad-beam detectors (Edge \& Stewart 1991a and David {et al.}\/ 1993).
\begin{table*} \begin{center} \small \caption{Results \label{table:results} } \begin{tabular}{rccccccc} \\ \hline \multicolumn{1}{c}{No.} & \multicolumn{1}{c}{Cluster} & \multicolumn{1}{c}{$R_{\rm 0}$} & \multicolumn{1}{c}{d$R$} & \multicolumn{2}{c}{Mass ($10^{14}\hbox{$\rm\thinspace M_{\odot}$}$)} & \multicolumn{2}{c}{Mass Ratio ($\hbox{$M_{\rm gas}\,$}/\hbox{$M_{\rm grav}\,$}$\%)} \\ \multicolumn{1}{r}{} & \multicolumn{1}{r}{} & \multicolumn{1}{c}{(${\rm\thinspace Mpc}$)} & \multicolumn{1}{c}{(${\rm\thinspace Mpc}$)} & \multicolumn{1}{c}{Gas} & \multicolumn{1}{c}{Grav} & \multicolumn{1}{c}{$(R\le 1{\rm\thinspace Mpc})$} & \multicolumn{1}{c}{$(R\le R_{\rm 0})$} \\ \\ 1. & A85 & 1.415 & 0.101 & $ 0.87\pm0.06$ & $ 4.64$ & $17.3\pm1.1$ & $18.8\pm1.3$ \\ 2. & A401 & 1.265 & 0.141 & $ 1.32\pm0.07$ & $ 10.1$ & $12.8\pm0.4$ & $13.0\pm0.7$ \\ 3. & A478 & 1.951 & 0.163 & $ 2.38\pm0.21$ & $ 9.28$ & $23.1\pm0.9$ & $25.6\pm2.2$ \\ 4. & A545 & 1.815 & 0.259 & $ 1.91\pm0.25$ & $ 10.6$ & $17.1\pm1.6$ & $18.1\pm2.4$ \\ 5. & A644 & 1.198 & 0.133 & $ 0.95\pm0.06$ & $ 9.06$ & $10.6\pm0.6$ & $10.5\pm0.6$ \\ 6. & A665 & 2.376 & 0.297 & $ 4.37\pm0.46$ & $ 22.1$ & $18.1\pm1.0$ & $19.8\pm2.1$ \\ 7. & A1413 & 1.715 & 0.245 & $ 1.83\pm0.23$ & $ 15.9$ & $10.8\pm1.1$ & $11.5\pm1.4$ \\ 8. & A1650 & 1.090 & 0.156 & $ 0.75\pm0.08$ & $ 6.37$ & $11.8\pm1.2$ & $11.8\pm1.2$ \\ 9. & A1689 & 1.481 & 0.296 & $ 2.12\pm0.16$ & $ 15.5$ & $13.0\pm0.5$ & $13.7\pm1.0$ \\ 10. & A1763 & 1.823 & 0.304 & $ 2.61\pm0.22$ & $ 13.2$ & $17.6\pm1.2$ & $19.8\pm1.7$ \\ 11. & A1795 & 1.426 & 0.119 & $ 1.13\pm0.08$ & $ 5.49$ & $18.7\pm1.1$ & $20.6\pm1.5$ \\ 12. & A2009 & 1.297 & 0.259 & $ 1.44\pm0.10$ & $ 10.6$ & $13.4\pm0.5$ & $13.6\pm0.9$ \\ 13. & A2029 & 1.291 & 0.143 & $ 1.26\pm0.11$ & $ 10.3$ & $11.9\pm0.8$ & $12.3\pm1.1$ \\ 14. & A2142 & 1.931 & 0.276 & $ 2.84\pm0.15$ & $ 20.1$ & $11.9\pm0.3$ & $14.1\pm0.6$ \\ 15. & A2163 & 2.264 & 0.323 & $ 5.46\pm0.49$ & $ 32.5$ & $14.4\pm1.0$ & $16.8\pm1.5$ \\ 16. & A2319 & 1.402 & 0.108 & $ 1.73\pm0.12$ & $ 14.2$ & $11.9\pm0.8$ & $12.2\pm0.8$ \\ 17. & A3186 & 1.508 & 0.188 & $ 1.76\pm0.23$ & $ 9.50$ & $15.6\pm1.9$ & $18.5\pm2.4$ \\ 18. & A3266 & 1.420 & 0.114 & $ 1.42\pm0.07$ & $ 9.07$ & $15.4\pm0.4$ & $15.7\pm0.8$ \\ 19. & A3888 & 1.118 & 0.279 & $ 1.20\pm0.15$ & $ 8.66$ & $13.9\pm1.8$ & $13.9\pm1.8$ \\ \hline \end{tabular} \newline \parbox[]{17.75cm}{ \noindent This table summarizes the deprojection results, where $R_0$ is the outer radius of the deprojection and d$R$ is the bin size. The gas and gravitational results are plotted against $R_0$ in Fig.~\ref{figure:masses}. The baryon fractions within $1{\rm\thinspace Mpc}$ and within the total region of each deprojection are given in the last two columns. Note that the observational errors in the velocity dispersion are not available for all the clusters, and so are not quoted. The uncertainties in \hbox{$M_{\rm gas}\,$} and \hbox{$M_{\rm gas}\,$}/\hbox{$M_{\rm grav}\,$} are $1\sigma$ standard deviation values, resulting from the statistical uncertainty in the X-ray data. } \end{center} \end{table*} \normalsize \doublespace The form of the gravitational potential that we have chosen is that of a true isothermal sphere. This produces comparatively conservative mass estimates (compared to a King-law distribution), and can be parameterised using observational data, such as the optical velocity dispersion.
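As a rough consistency check on the scale of the numbers in Table~\ref{table:results}, the following sketch (ours, purely illustrative) approximates the gravitational mass with a \emph{singular} isothermal sphere, $M(<R)\approx2\sigma^{2}R/G$, rather than the nonsingular two-component potential actually used in the deprojection, so only order-of-magnitude agreement is expected. It also checks the velocity dispersion interpolation of equation~(\ref{equation:cdisp}) below:
\begin{verbatim}
# Crude check of the A85 entries in Table 2, assuming a singular
# isothermal sphere, M(<R) ~ 2 sigma^2 R / G.  (The analysis itself
# uses a nonsingular two-component potential, so exact agreement
# is not expected.)
G    = 6.674e-11          # m^3 kg^-1 s^-2
MPC  = 3.086e22           # metres per Mpc
MSUN = 1.989e30           # kg

sigma = 749e3             # A85 velocity dispersion (m/s), Table 1
R0 = 1.415 * MPC          # outer radius of the A85 deprojection

m_grav = 2 * sigma**2 * R0 / G / MSUN
print(m_grav / 1e14)      # ~3.7; Table 2 quotes 4.64 (same order)

m_gas = 0.87e14           # A85 gas mass from Table 2 (solar masses)
print(m_gas / m_grav)     # ~0.24, cf. the 18.8 per cent in Table 2

# Velocity dispersion interpolated from the X-ray temperature
# (equation 1): sigma = 376 (T_X / keV)^0.528 km/s.
print(376 * 7.8**0.528)   # ~1112 km/s, the italicised A401 entry
\end{verbatim}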
In our standard deprojection model we used a two-component true-isothermal potential, each component parameterised by a velocity dispersion and core-radius, with one potential for the cluster and another for a central cluster galaxy. Only the cluster potential was varied; the galaxy potential was fixed with a galaxy velocity dispersion of $350\hbox{${\rm\thinspace km}{\rm\thinspace s}^{-1}\,$}$ and a core-radius of $2{\rm\thinspace kpc}$. The effect of uncertainties in the cluster velocity dispersion, the effect of the mass from a central galaxy, and the use of different gravitational mass distributions on the results were all investigated, and are discussed in Section~\ref{section:grav_masses}. First we discuss the choice of cluster velocity dispersions and core radii. The cluster velocity dispersions were chosen from the literature where available. However, when we could find no suitable value, or there appeared to be some problem obtaining a satisfactory deprojection result, we obtained a value from the following relationship between the velocity dispersion and observed X-ray temperature: \begin{equation}\label{equation:cdisp} \sigma=376\left[\hbox{$T_{\rm X}\,$}({\rm\thinspace keV})\right]^{0.528}\hbox{${\rm\thinspace km}{\rm\thinspace s}^{-1}\,$}. \end{equation} This relationship was determined (D.~White {et al.}\/ in preparation) using the `orthogonal distance regression' algorithm (see the ODRPACK V2.01 software by Boggs {et al.}\/ 1990, discussed in relation to astronomical data analysis by Feigelson \& Babu 1992), and accounts for errors in both axes of the data --- an essential feature when the errors in both dimensions are significant. The final velocity dispersions that were used, and the source of these values, are given in Table~\ref{table:input_data}. Suitable values for cluster core radii are more difficult to obtain than velocity dispersion values. Although they are available from the X-ray surface brightness profiles of clusters, and are less prone than optical values to contamination from sub-structure within the cluster, they can be affected by the presence of a cooling flow (which enhances the X-ray emission within the central $200$ to $300{\rm\thinspace kpc}$, as shown by Forman \& Jones 1984). Therefore we have not used the values for core-radii given in the literature, but have treated the core radius as a free parameter, because it can significantly alter the shape of the gravitational mass distribution. As the best and most widely available cluster temperatures for most of our sample are only spatially-averaged values for the whole cluster, determined from broad-beam detectors, we vary the core-radius and outer pressure to produce a temperature profile that is consistent with the observed value over as large a radius of the cluster as possible, {\it i.e.\ } a flat temperature profile. This tends to overestimate the temperature at the centre of a cooling flow cluster, but will lead to conservative estimates of the gas mass, as $\hbox{$M_{\rm gas}\,$}\propto\hbox{$T_{\rm X}\,$}^{-1/4}$. The final selections of core radius used for each cluster are given in Table~\ref{table:input_data}. Note that we do not assign particular significance to the core-radii used in this analysis; the core radius was essentially a parameter used to obtain a flat deprojected temperature profile for each cluster. This also produces conservative gas mass estimates, because the temperature at the centre will be hotter than expected in a cooling flow cluster.
Suitable values for cluster core radii are more difficult to obtain than velocity dispersion values. Although they are available from the X-ray surface brightness profiles of clusters, and are less prone than optical values to contamination from sub-structure within the cluster, they can be affected by the presence of a cooling flow (which enhances the X-ray emission within the central $200$ to $300{\rm\thinspace kpc}$, as shown by Forman \& Jones 1984). Therefore we have not used the values for core-radii given in the literature, but have treated the core radius as a free parameter, because it can significantly alter the shape of the gravitational mass distribution. As the best and most widely available cluster temperatures for most of our sample are only spatially-averaged values for the whole cluster, determined from broad-beam detectors, we vary the core-radius and outer pressure to produce a temperature profile that is as consistent with the observed value over as large a radius of the cluster as possible, {\it i.e.\ } a flat temperature profile. This tends to overestimate the temperature at the centre of a cooling-flow cluster, but will lead to conservative estimates of the gas mass, as $\hbox{$M_{\rm gas}\,$}\propto\hbox{$T_{\rm X}\,$}^{-1/4}$. The final selections of core radius and outer pressure used in each cluster are given in Table~\ref{table:results}. Note, we do not assign a particular significance to the core-radii used in this analysis; the core radius was essentially a parameter used to obtain a flat deprojected temperature profile for each cluster. This also produces conservative gas mass estimates, because the assumed temperature at the centre will be hotter than expected in a cooling-flow cluster. A flat profile may not represent the true form of the temperature profile, and our resulting core-radii may therefore be somewhat misleading. This may explain some of the large core-radii, although they may also be due to unresolved physical substructure in the X-ray emission. We also note that the baryon fraction varies with radius according to the core radius used, as can be seen in Fig.~\ref{figure:core}. We have therefore quoted our results at the maximum radius of each deprojection to ensure the results are not affected by the core-radii that were used. The baryon fractions that we determine from the deprojected value of $\hbox{$M_{\rm gas}\,$}/\hbox{$M_{\rm grav}\,$}$ at the maximum radii are given in Table~\ref{table:results}. They do not include the stellar contribution to the baryon content (perhaps an extra 5 per cent). The results indicate that there is a wide variation between approximately 10 and 30 per cent, although some of this variation is due to an apparent trend for increasing baryon fraction with radius, as shown in Fig.~\ref{figure:mass_ratios}. A linear regression to the data in this diagram (shown by the dashed line) indicates that the baryon fraction may be consistent with $\Omega_{\rm b, max}\le0.06$ only at the very centre; the mean value of the data points is much higher than the standard primordial nucleosynthesis value. We note that Fig.~\ref{figure:mass_ratios} does not account for errors in the gravitational potential from the velocity dispersion, but we shall address this point in Section~\ref{section:grav_masses}. The effect of the core radius, and other parameters, on the determination of the baryon fractions has been assessed using the Abell 478 data as a control data set. The results of these tests, which will be discussed in the following section and shown in Table~\ref{table:tests}, indicate that the gravitational potential of the cluster provides the main uncertainty in the baryon fraction determinations.
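The trend in Fig.~\ref{figure:mass_ratios} can be reproduced approximately from Table~\ref{table:results} alone. The sketch below (ours) performs a simple unweighted least-squares fit to the $(R_{0},\hbox{$M_{\rm gas}\,$}/\hbox{$M_{\rm grav}\,$})$ pairs, so its coefficients need not match the quoted error-weighted fit of $0.0579+0.0556R$ exactly:
\begin{verbatim}
import numpy as np

# (R0 in Mpc, baryon fraction at R0) pairs taken from Table 1.
r0 = np.array([1.415, 1.265, 1.951, 1.815, 1.198, 2.376, 1.715, 1.090,
               1.481, 1.823, 1.426, 1.297, 1.291, 1.931, 2.264, 1.402,
               1.508, 1.420, 1.118])
fb = np.array([18.8, 13.0, 25.6, 18.1, 10.5, 19.8, 11.5, 11.8, 13.7,
               19.8, 20.6, 13.6, 12.3, 14.1, 16.8, 12.2, 18.5, 15.7,
               13.9]) / 100.0

slope, intercept = np.polyfit(r0, fb, 1)         # unweighted linear fit
print(f"f_b ~ {intercept:.4f} + {slope:.4f} R")  # cf. 0.0579 + 0.0556 R
\end{verbatim}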
\section{Baryon fraction uncertainties} As the deprojection estimates of the cluster baryon fraction are given by $\hbox{$M_{\rm gas}\,$}/\hbox{$M_{\rm grav}\,$}$, we have estimated the susceptibility of the deprojection results to uncertainties in $\hbox{$M_{\rm gas}\,$}$ and $\hbox{$M_{\rm grav}\,$}$ resulting from changes in the input parameters for an individual cluster. We have also estimated the uncertainties in the baryon fraction due to \hbox{$M_{\rm grav}\,$} using the observational errors in the X-ray temperatures. \subsection{Gas mass uncertainties}\label{section:gas_masses} The deprojection method produces gas mass estimates that are statistically very well determined. We assume that the emission in the outer regions of clusters arises from thermal emission rather than non-thermal processes, as there is no evidence from the radio waveband for significant non-thermal emission at large radii. The main uncertainty in the gas masses arises from the intrinsic X-ray luminosity of a cluster, {\it i.e.\ } through the estimate of the distance to the cluster, intervening absorption, spherical symmetry, and the effect of clumping in the intracluster gas. All these points are addressed below. The effect of ellipticity in the cluster X-ray emission has been investigated by D.~White~{et al.}\/ (1994) in their analysis of {\it ROSAT} \/ HRI data on A478. They found that an ellipticity of $(1-b/a)=0.2$ in the X-ray emission produced an average (and $1\sigma$) value of $\hbox{$M_{\rm gas}\,$}=(4.6\pm0.5)\times10^{13}\hbox{$\rm\thinspace M_{\odot}$}$ (within $0.5{\rm\thinspace Mpc}$) from the deprojection of four sectors, as compared to $\hbox{$M_{\rm gas}\,$}=(4.8\pm0.2)\times10^{13}\hbox{$\rm\thinspace M_{\odot}$}$ from an azimuthal average. Thus, within the errors, the effect of the spherical symmetry assumption is negligible. We also note that, although a cluster may appear spherically symmetric in projection, it may be extended in the line of sight. However, for a constant luminosity $\hbox{$L_{\rm X}\,$}\propto\hbox{$M_{\rm gas}\,$}^2/V$, the volume $V$ would have to be increased by a factor of 16 to eliminate baryon over-densities of 4. Similarly, the background subtraction, which affects the luminosity estimate, would have to be wrong by a factor of 16 to reduce a baryon fraction of 25 per cent to the universal value of $\le6$ per cent. We therefore do not consider spherical asymmetries, either tangential or elongation along the line of sight, or background subtraction uncertainties, to be important effects in the baryon overdensities in clusters, especially if the baryon overdensities are shown to be common in a statistical sample of clusters such as ours. \begin{figure} \small \epsfxsize=0.48\textwidth \noindent \caption{ \label{figure:core} } This diagram shows the differing mass fraction profiles obtained with differing core radii ($0.2$, $0.5$ and $1.0{\rm\thinspace Mpc}$) for the gravitational mass distribution. This example is for the Abell 478 data, where the core radius used to give a flat temperature profile was $0.2{\rm\thinspace Mpc}$. This is also approximately the core radius determined from a comparison of a deprojection and spectral analysis of {\it ROSAT} PSPC data (Allen {et al.}\/ 1993). We note that outside the core region of each potential the mass fraction profiles are approximately flat, and more importantly, tend to the same result. \epsfxsize=0.48\textwidth \noindent \caption{ \label{figure:mass_ratios} } This diagram shows the baryon fraction ($\hbox{$M_{\rm gas}\,$}/\hbox{$M_{\rm grav}\,$}$) at the outer radius of each deprojection. The dashed line shows a best-fitting linear function of $\hbox{$M_{\rm gas}\,$}/\hbox{$M_{\rm grav}\,$}=0.0579+0.0556R$, which also shows an increase in the baryon fraction with radius. This is clearly inconsistent with the standard nucleosynthesis value of $<6$ per cent, indicated by the dot-dashed line. Observational errors on \hbox{$M_{\rm grav}\,$} are not included in this plot but the effect on \hbox{$M_{\rm gas}\,$}/\hbox{$M_{\rm grav}\,$} is estimated in Section~\ref{section:grav_masses} from Fig.~\ref{figure:kt_mass_ratios}.
\end{figure} \normalsize \doublespace \begin{table*} \begin{center} \tiny \caption{Test Parameters \label{table:tests} } \begin{tabular}{rcccccccccccccc} \\ \hline \multicolumn{1}{r}{Test} & \multicolumn{3}{c}{Cosmology} & \multicolumn{1}{c}{$N_{\rm H}$} & \multicolumn{1}{c}{$\hbox{$T_{\rm X}\,$}$} & \multicolumn{1}{c}{$\phi$} & \multicolumn{1}{c}{${\rm d}M/{\rm d}R$} & \multicolumn{1}{c}{$\sigma$} & \multicolumn{1}{c}{$r_{\rm core}$} & \multicolumn{1}{c}{$P_0$} & \multicolumn{1}{c}{$\hbox{$M_{\rm gas}\,$}$} & \multicolumn{1}{c}{$\hbox{$M_{\rm grav}\,$}$} & \multicolumn{1}{c}{$\hbox{$M_{\rm gas}\,$}/\hbox{$M_{\rm grav}\,$}$} \\ \multicolumn{1}{r}{No.} & \multicolumn{1}{c}{$H_{\rm 0}$} & \multicolumn{1}{c}{$q_{\rm 0}$} & \multicolumn{1}{c}{$z$} & \multicolumn{1}{c}{($10^{21}\hbox{$\cm^{-2}\,$}$)} & \multicolumn{1}{c}{(${\rm\thinspace keV}$)} & \multicolumn{1}{c}{G-C} & \multicolumn{1}{c}{$(\hbox{$\rm\thinspace M_{\odot}$}\kpc^{-1})$} & \multicolumn{1}{c}{$(\hbox{${\rm\thinspace km}{\rm\thinspace s}^{-1}\,$})$} & \multicolumn{1}{c}{$({\rm\thinspace Mpc})$} & \multicolumn{1}{c}{$(10^{4}{\rm\thinspace K}\hbox{$\cm^{-3}\,$})$} & \multicolumn{1}{c}{($10^{14}\hbox{$\rm\thinspace M_{\odot}$}$)} & \multicolumn{1}{c}{($10^{14}\hbox{$\rm\thinspace M_{\odot}$}$)} & \multicolumn{1}{c}{$(\times100\%)$} \\ \\
0.& 50 & 0.0 & 0.0881 & 1.36 & 6.8 & ISO-ISO & N/A & 904 & 0.20 & 1.5 & $2.38\pm0.21$ & $ 9.28$ & $25.6\pm2.2$ \\
& & & & & & & & & & & & & \\
1.& 100 & & & & & & & & 0.10 & & $0.84\pm0.08$ & $ 4.65$ & $18.2\pm1.6$ \\
2.& & 0.5 & & & & & & & & & $2.30\pm0.20$ & $ 9.11$ & $25.2\pm2.2$ \\
3.& & & 0.0890 & & & & & & & & $2.44\pm0.21$ & $ 9.35$ & $26.1\pm2.3$ \\
4.& & & 0.0872 & & & & & & & & $2.32\pm0.20$ & $ 9.21$ & $25.2\pm2.2$ \\
5.& & & & 2.50 & & & & & & & $2.59\pm0.23$ & $ 9.28$ & $28.0\pm2.5$ \\
6.& & & & & 7.9 & & & & & 2.0 & $2.38\pm0.20$ & $ 9.28$ & $25.7\pm2.2$ \\
7.& & & & & 5.8 & & & & & 1.0 & $2.38\pm0.21$ & $ 9.28$ & $25.6\pm2.3$ \\
8.& & & & & & KNG-KNG& & & & 2.0 & $2.38\pm0.21$ & $ 7.08$ & $33.6\pm2.9$ \\
9.& & & & & & NO-ISO & & & & & $2.38\pm0.21$ & $ 8.10$ & $29.4\pm2.6$ \\
10.& & & & & & NO-LM & 5.0 & N/A & N/A & & $2.38\pm0.21$ & $ 10.2$ & $23.3\pm2.0$ \\
11.& & & & & & & & 1165 & 0.50 & 1.0 & $2.39\pm0.21$ & $ 17.1$ & $13.9\pm1.2$ \\
12.& & & & & & & & 764 & 0.15 & 3.0 & $2.39\pm0.20$ & $ 6.67$ & $35.8\pm3.0$ \\
\hline \end{tabular} \newline \small \parbox[]{17.75cm}{ \noindent This table summarizes the effects of uncertainties in various input parameters used in the deprojection analysis on the results (shown in the last three columns). The tests have been applied to the A478 data, and the variations should be compared with the standard results shown in the first row (test number 0). The largest reduction in the mass ratio is produced by raising the velocity dispersion to the $1\sigma$ upper limit given by Zabludoff, Huchra \& Geller (1990) (test 11). The parameter labeled $\phi$ indicates the galaxy-cluster combined potential used; ISO indicates a true isothermal potential, KNG a King Law potential, NO a null contribution, and LM indicates a linear mass model. The numbers for the gravitational potentials are: $\sigma$ for the velocity dispersion of the cluster and $r_{\rm core}$ for the core radius, or ${\rm d}M/{\rm d}R$ for the amount of mass in the linear mass model. $P_0$ is the pressure used at $R_0$ to obtain the correct deprojected temperature profile (in conjunction with the core radius where applicable).
N/A indicates the entry was not applicable to the potential used in that test. } \end{center} \end{table*} \normalsize \doublespace The uncertainty in the gas masses from the distance is obviously dependent on the cosmological parameters and the cluster redshift (we have adopted a Hubble constant of $H_{\rm 0}=50h_{50}\hbox{${\rm\thinspace km}{\rm\thinspace s}^{-1}\,$}\Mpc^{-1}$ in the general analysis). The expected dependences of the masses on $H_{\rm 0}$ are approximately $\hbox{$M_{\rm gas}\,$}\propto h_{50}^{-5/2}$ (for a constant radial density profile in the cluster) and $\hbox{$M_{\rm grav}\,$}\propto h_{50}^{-1}$ (at radii outside the core of an isothermal sphere), and therefore the baryon fraction should change as $\hbox{$M_{\rm gas}\,$}/\hbox{$M_{\rm grav}\,$}\propto h_{50}^{-3/2}$. However, we have found that a deprojection with a larger Hubble constant requires a gravitational potential with a proportionately smaller core radius to obtain the same temperature profile. Test number 1 of Table~\ref{table:tests} shows that with $h_{50}=2$ the change in $\hbox{$M_{\rm grav}\,$}$ is in agreement with that expected for a cluster that is half as distant and has a core-radius half as large. The corresponding change in \hbox{$M_{\rm gas}\,$} is less than the expected value of $0.42\times10^{14}\hbox{$\rm\thinspace M_{\odot}$}$, because the required change in core radius, for a flat temperature profile, results in a larger X-ray luminosity and gas content in the central regions of the cluster. Therefore, because changes in the Hubble constant force a recalibration of the deprojection results, the Hubble constant uncertainties lead to smaller changes in the baryon mass fraction than would be expected. The uncertainties in $q_0$ and the redshift of a cluster produce comparatively small changes, as shown in test 2 for $\qO{\frac{1}{2}}$, or tests 3 and 4 for the statistical uncertainties in the redshift of A478. Although the Hubble constant provides the greatest uncertainty in the gas mass determinations, it does not eliminate the large baryon over-densities. An unreasonably small Hubble constant would be required to reduce them to the standard primordial nucleosynthesis values, because the latter also depend on $H_{\rm 0}$ as $\Omega_{\rm b}\le0.06h_{50}^{-2}$. However, as Steigman (1987, 1989) has noted, a more useful limit may be obtained by requiring that the gas mass does not exceed the total mass of the cluster. Assuming that $\hbox{$M_{\rm gas}\,$}/\hbox{$M_{\rm grav}\,$}\propto h_{50}^{-3/2}$, we obtain a lower limit on the Hubble constant of $\HO{22}$.
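These scalings, and the Steigman-style limit, can be verified with a small sketch (ours), using the A478 standard result from Table~\ref{table:tests}:
\begin{verbatim}
# Naive distance scalings quoted above (i.e. before recalibrating the
# core radius): Mgas ~ h50^(-5/2), Mgrav ~ h50^(-1), f_b ~ h50^(-3/2).
def rescale(m_gas, m_grav, h50):
    m_gas_h = m_gas * h50 ** -2.5
    m_grav_h = m_grav * h50 ** -1.0
    return m_gas_h, m_grav_h, m_gas_h / m_grav_h

# A478 standard result (test 0): Mgas = 2.38e14, Mgrav = 9.28e14 (h50 = 1).
print(rescale(2.38e14, 9.28e14, 2.0))
# -> Mgas ~ 0.42e14, the 'expected value' in the text; the deprojection
#    itself returns 0.84e14 because the core radius must also shrink.

# Steigman-style limit: requiring f_b * h50^(-3/2) <= 1 bounds H0 below.
f_b = 0.30                            # a fraction near the top of the range
print(50.0 * f_b ** (2.0 / 3.0))      # ~22 km/s/Mpc, as quoted
\end{verbatim}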
The gas mass determinations also depend on the estimate of the absorption of X-rays emitted from the cluster. We have already noted that intrinsic absorption may occur in some clusters, but we have used Galactic column densities determined from $21{\rm\thinspace cm}$ measurements throughout to give a consistent sample of column density determinations. As these represent minimum estimates for the total column densities, we have tested for the effect of excess absorption on the baryon fraction. We expect that the baryon fraction should increase with the intrinsic luminosity, and therefore that gas masses will be larger rather than smaller. A478 provides an ideal example, as the excess absorption in this cluster has been well studied (D.~White {et al.}\/ 1991b, Johnstone {et al.}\/ 1992, Allen {et al.}\/ 1993). In test 5 of Table~\ref{table:tests} we show that an addition of $1.1\times10^{21}\hbox{$\cm^{-2}\,$}$ above the Stark {et al.}\/ value (giving the total of $2.5\times10^{21}\hbox{$\cm^{-2}\,$}$ determined by Allen {et al.}\/ from their spectral fits of the {\it ROSAT} \/ PSPC data on A478) produces approximately a 10 per cent increase in the gas mass. \begin{figure} \small \epsfxsize=0.48\textwidth \epsfxsize=0.48\textwidth \noindent \caption{ \label{figure:clump_7kev} } These plots show: (a) the error in the gas mass estimate, and (b) the emission-weighted temperature, when the X-ray emission is assumed to be from a single-phase medium but there are actually two phases. The single-phase temperature is assumed to be ${\rm k}T_{\rm ref}=7{\rm\thinspace keV}$. The main-phase temperature is ${\rm k}T_1=7{\rm\thinspace keV}$ (with an abundance of $Z_1=0.4\hbox{$\rm\thinspace Z_{\odot}$}$), and the secondary-phase temperature is varied between ${\rm k}T_2=(0.01-10)\times {\rm k}T_{\rm ref}$. The separate lines are for volume filling factors of the secondary phase of $V_2$: 0.0 -- solid (flat), 0.01 -- dash, 0.05 -- dash-dot, 0.10 -- dot, 0.70 -- dash-dot-dot-dot, 0.5 -- solid. \end{figure} \normalsize \doublespace \begin{figure} \small \epsfxsize=0.48\textwidth \epsfxsize=0.48\textwidth \noindent \caption{ \label{figure:clump_15kev} } These plots are similar to those in Fig.~\ref{figure:clump_7kev}, where the single-phase temperature is assumed to be ${\rm k}T_{\rm ref}=7{\rm\thinspace keV}$, but here the main-phase temperature is actually ${\rm k}T_1=15{\rm\thinspace keV}$ and the secondary-phase abundance is $Z_2=2\hbox{$\rm\thinspace Z_{\odot}$}$. \end{figure} \normalsize \doublespace One further point in the determination of gas masses from X-ray data is the effect of clumping in the intracluster gas. We have estimated the error in the determination of the gas mass that could arise when the gas is assumed to be a single-phase medium but is actually multiphase. Two phases have been considered, and the combined emission is forced to produce a fixed total number of counts $F_{0.4-4{\rm\thinspace keV}}$ in the $0.4-4{\rm\thinspace keV}$ waveband ({\it i.e.\ } a top-hat approximation to the response of the IPC). We then select a reference temperature for the single-phase estimate and compare this mass with the mass that we would estimate if the gas had two phases with different temperatures and volume filling-factors. The masses in the two phases are given by solving the following equation, assuming pressure equilibrium between the two phases: \begin{eqnarray} \label{equation:mass} F_{0.4-4{\rm\thinspace keV}}\propto n^2_1V_1\int^{4{\rm\thinspace keV}}_{0.4{\rm\thinspace keV}}\frac{\Lambda({\rm k}T_1)}{E}dE+ \nonumber\\ n^2_2V_2\int^{4{\rm\thinspace keV}}_{0.4{\rm\thinspace keV}}\frac{\Lambda({\rm k}T_2)}{E}dE~, \end{eqnarray} where the subscripts refer to the two phases, $n$ is the electron number density, ${\rm k}T$ is the temperature, $V$ is the volume, and $E$ is the photon energy in the integral that evaluates the total number of counts from the cooling function $\Lambda$ in the specified waveband. We note that equation~\ref{equation:mass} takes no account of absorption or the effect of cluster redshifts. The assumption of pressure equilibrium is required, as otherwise a mechanism would be needed to prevent the cooler gas from expanding and mixing into the hotter gas.
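For illustration, the following sketch (ours) evaluates equation~(\ref{equation:mass}) with a pure thermal-bremsstrahlung stand-in for the cooling function $\Lambda$; since it omits the line emission that dominates below $\sim2{\rm\thinspace keV}$, it only indicates the trend of the full calculations behind Figs~\ref{figure:clump_7kev} and \ref{figure:clump_15kev}:
\begin{verbatim}
import numpy as np

KEV = np.linspace(0.4, 4.0, 400)   # IPC-like 0.4-4 keV top-hat band

def counts_per_em(kt):
    """Count emissivity per unit emission measure (n^2 V) in the band.

    Pure thermal bremsstrahlung stand-in for Lambda(kT)/E; line emission,
    which dominates below ~2 keV, is deliberately ignored here."""
    spec = np.exp(-KEV / kt) / np.sqrt(kt) / KEV
    return np.trapz(spec, KEV)

def mass_error(kt_ref, kt1, kt2, v2, f_counts=1.0, v_tot=1.0):
    """Ratio of the single-phase mass estimate to the true two-phase mass.

    Pressure equilibrium (n2*T2 = n1*T1) links the phase densities; the
    two phases jointly reproduce the observed counts f_counts."""
    v1 = (1.0 - v2) * v_tot
    # f = n1^2 [ v1*C(T1) + (T1/T2)^2 * V2*C(T2) ]  ->  solve for n1
    n1 = np.sqrt(f_counts / (v1 * counts_per_em(kt1) +
                             (kt1 / kt2) ** 2 * v2 * v_tot * counts_per_em(kt2)))
    n2 = n1 * kt1 / kt2
    m_true = n1 * v1 + n2 * v2 * v_tot
    m_ref = np.sqrt(f_counts / (v_tot * counts_per_em(kt_ref))) * v_tot
    return m_ref / m_true

# A 1 keV phase filling 10 per cent of the volume inflates the 7 keV
# single-phase mass estimate by ~50 per cent even with this crude Lambda;
# the line emission in the full calculation strengthens the effect.
print(mass_error(7.0, 7.0, 1.0, 0.1))
\end{verbatim}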
In Fig.~\ref{figure:clump_7kev}(a) we show the error in the gas mass determination when a single phase of temperature ${\rm k}T_{\rm ref}=7{\rm\thinspace keV}$ is assumed. The lines show the mass error when there is one component of temperature ${\rm k}T_1=7{\rm\thinspace keV}$ and a secondary phase whose temperature is varied between ${\rm k}T_2=(0.01-10)\times {\rm k}T_{\rm ref}$. The different lines show the mass error for volume fractions of the secondary phase of $V_2=0-0.5$. It can be seen that a baryon over-density of a factor of 2 could be eliminated by over-estimates in the gas mass determination if the secondary phase filled less than approximately 40 per cent of the total volume and had a temperature between approximately $0.8$ and $1{\rm\thinspace keV}$ (depending on the exact value of $V_2$). However, from Fig.~\ref{figure:clump_7kev}(b), we can see that the corresponding emission-weighted temperature from the combined medium could only be as high as approximately $1.5{\rm\thinspace keV}$, irrespective of $V_2$, so it is unlikely that such errors in the gas mass could be made when the temperature is assumed to be ${\rm k}T_1=7{\rm\thinspace keV}$; observational uncertainties would usually rule out such a large discrepancy. {}From a slightly different perspective, one can ask whether sufficient gas mass overestimates can be obtained when the emission-weighted temperature from the combined emission is close to that expected from a single-phase gas. In Fig.~\ref{figure:clump_15kev} we show the results when the gas is thought to have a single-phase temperature of ${\rm k}T_{\rm ref}=7{\rm\thinspace keV}$, but there is actually a component at ${\rm k}T_1=15{\rm\thinspace keV}$ (of the same abundance of $Z_1=0.4\hbox{$\rm\thinspace Z_{\odot}$}$) and a second component, again between ${\rm k}T_2=(0.01-10)\times {\rm k}T_{\rm ref}$ (this time with an abundance of $Z_2=2.0\hbox{$\rm\thinspace Z_{\odot}$}$). Very large overestimates can be produced, but a factor of two overestimation is not obtained unless the emission-weighted temperature is allowed to be as low as approximately $5{\rm\thinspace keV}$ (for $V_2=0.01$). In this case the average abundance would be about $0.5-0.6\hbox{$\rm\thinspace Z_{\odot}$}$, which is not unreasonable compared to the $Z_{\rm ref}=Z_1=0.4$ that would be assumed, and the fraction of mass in the cooler phase is approximately 10 per cent (the luminosity contribution is about 70 per cent). We can apply this scenario of significantly different temperature phases to a cluster of a similar emission-weighted temperature. A1763 has an emission-weighted temperature of ${\rm k}T\sim7{\rm\thinspace keV}$ and a deprojected gas mass of $2.6\times10^{14}\hbox{$\rm\thinspace M_{\odot}$}$ (within a $1.8{\rm\thinspace Mpc}$ radius). Therefore, from our example, we would expect $1.3\times10^{13}\hbox{$\rm\thinspace M_{\odot}$}$ in a cooler phase to produce a factor of two overestimate of the gas mass. This amount of cooler gas cannot be contained within the interstellar medium of giant elliptical galaxies (which have the required temperature of approximately $1{\rm\thinspace keV}$), as the mass in the cool gas is equivalent to the interstellar gas content of approximately a thousand giant elliptical galaxies, which is clearly unreasonable. Thus the majority of the cooler gas would have to be in the intracluster medium, isolated from the destructive processes of the hotter phase by magnetic fields.
The problem with this scenario is that observations already appear to rule out temperature variations of more than a factor of two in the intracluster gas, as we discuss below. \begin{figure} \small \epsfxsize=0.48\textwidth \noindent \caption{ \label{figure:potentials} } This diagram shows the different gravitational mass distributions. The standard deprojection results employ the true isothermal potentials (solid line). We note that the King law underestimates the mass outside 8 to 10 core-radii (the core radius being $0.2{\rm\thinspace Mpc}$ in this example). \end{figure} \normalsize \doublespace In summary, large gas mass overestimations can occur when there are significant amounts of cooler gas at ${\rm k}T\approxlt1{\rm\thinspace keV}$, whose line emission enables the same count emissivity to be produced by a smaller mass of gas. As the effect is due to the lines, the abundance of the intracluster gas also influences the possibility of mass determination errors. However, the emission-weighted temperature also decreases with abundance, as most of the emission comes from the cooler phase, and the resulting effect of abundance variations is that the gas mass over-estimates are very nearly constant for a given range of the emission-weighted temperature. We note that in a spectral analysis a contribution from a cool phase should be easily discernible; however, our deprojection analysis is a broad-band analysis and cannot discriminate between combined spectra of various temperatures which produce similar count emissivities. Although we cannot rule out such disparate temperatures from our imaging analysis, a spectral analysis of {\it Ginga} and {\it EXOSAT} data on the Perseus cluster (Allen {et al.}\/ 1992) only allows temperature variations of a factor of approximately two. Also, a spectral analysis of the A478 cluster out to $2{\rm\thinspace Mpc}$ (Allen {et al.}\/ 1993) indicates that a $1{\rm\thinspace keV}$ component cannot be significant in this cluster, as the best-fit emission-weighted temperature is consistent with the broad-beam value ($6.8{\rm\thinspace keV}$), and is above $4{\rm\thinspace keV}$ at the 90 per cent confidence level. Only in the central regions of the cooling flow, and between $1-2{\rm\thinspace Mpc}$, is the lower limit around $1{\rm\thinspace keV}$ (but the best fit is around $4{\rm\thinspace keV}$). Thus, within $1{\rm\thinspace Mpc}$, where the temperature is well constrained and there is still a baryon over-density problem, the results indicate that a cool component is not significant. We expect {\it ASCA} to be able to rule out such variations to much larger radii. One further point is that the baryon overdensities are common to the whole sample and do not appear to depend on the Galactic column density. If clumping were responsible for gas mass overestimates then we would have expected clusters such as A478, which have large Galactic column densities, to have smaller than average baryon over-densities, because we would see little of the sub-$1{\rm\thinspace keV}$ emission that would be responsible for the overestimations. {}From our investigations into the required conditions for significant gas mass overestimations, and from spectral observations of specific clusters, we conclude that clumping cannot explain the baryon overdensities in clusters.
\subsection{Gravitational mass uncertainties}\label{section:grav_masses} We have shown that the gas mass uncertainties are unlikely to reduce the cluster baryon fractions to the 6 per cent upper limit obtained from standard primordial nucleosynthesis. However, the gravitational potential is the most uncertain component in the calculation, and we now discuss its uncertainties. The deprojection results are changed by altering the gravitational potential to give a temperature that is consistent with the observed broad-beam values. In tests 6 and 7 of Table~\ref{table:tests} we can see that the statistical uncertainties in the temperature (for A478) produce comparatively small changes in the results, so that individual baryon fraction uncertainties will probably be dominated by the form of the potential that is chosen to obtain this temperature, rather than by errors in the temperature determination. \begin{figure} \small \epsfxsize=0.48\textwidth \epsfxsize=0.48\textwidth \noindent \caption{ \label{figure:kt_mass_ratios} } These plots show how we have estimated the effect of uncertainties in the gravitational potential using the errors in the observed X-ray temperatures (from the 13 clusters where the temperature errors are measured and symmetric to within $2{\rm\thinspace keV}$). The uncertainty in the gravitational mass has been estimated by propagating the ($1\sigma$) errors in the observed X-ray temperature to the baryon fraction at $1{\rm\thinspace Mpc}$, as shown in (a). Assuming that the errors are symmetric and Gaussian we have then determined the cumulative probability, as shown in (b), from which we estimate that the cluster baryon fraction at $1{\rm\thinspace Mpc}$ has a median value of 13.8 per cent, with 5th and 95th percentile limits of 10.0 and 22.3 per cent. \end{figure} \normalsize \doublespace We have investigated the effect of changes in the gravitational potential: using a King-law density distribution, removing the gravitational contribution from the central galaxy, changing the form of the cluster potential, and varying the optical velocity dispersion of the cluster within its statistical uncertainties. In the first case, test 8 shows that replacing both the galaxy and cluster potentials with King-law distributions increases the baryon fraction estimate. The reason for this is shown in Fig.~\ref{figure:potentials}, where we have plotted several different mass distributions (appropriate for Abell 478, {\it i.e.\ } a velocity dispersion of $904\hbox{${\rm\thinspace km}{\rm\thinspace s}^{-1}\,$}$ and the core-radius of $0.2{\rm\thinspace Mpc}$ which we used). The King approximation provides a good description of the mass distribution within 10 core-radii ($<2{\rm\thinspace Mpc}$), but outside this region the King law clearly underestimates the total gravitational mass compared to the true isothermal potential. Although we have no particular reason to believe the cluster should follow a true isothermal potential at large radii (especially if the cluster is not relaxed), we use the true isothermal potential to provide conservative baryon fraction estimates. In test 9 we show that when the mass of the central galaxy is neglected, a larger baryon fraction is estimated for the cluster. When the remaining cluster potential is changed to a linear mass distribution (test 10), results similar to the standard result are obtained.
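The behaviour shown in Fig.~\ref{figure:potentials} can be sketched as follows (ours); the singular isothermal relation $M(<r)=2\sigma^{2}r/G$ stands in for the true isothermal potential (which it approaches outside the core), and the analytic King profile $\rho\propto[1+(r/r_{\rm core})^{2}]^{-3/2}$ is normalised to the standard King central density $9\sigma^{2}/(4\pi G r_{\rm core}^{2})$:
\begin{verbatim}
import numpy as np

G, MPC = 6.674e-11, 3.086e22
SIGMA, RC = 904e3, 0.2 * MPC          # A478-like: 904 km/s, 0.2 Mpc core

def m_iso(r):
    """Singular isothermal sphere: M(<r) = 2 sigma^2 r / G."""
    return 2.0 * SIGMA ** 2 * r / G

def m_king(r):
    """Analytic King profile, normalised via rho0 = 9 sigma^2/(4 pi G rc^2)."""
    rho0 = 9.0 * SIGMA ** 2 / (4.0 * np.pi * G * RC ** 2)
    x = r / RC
    return (4.0 * np.pi * rho0 * RC ** 3
            * (np.arcsinh(x) - x / np.sqrt(1.0 + x ** 2)))

for r_mpc in (0.5, 1.0, 2.0, 4.0):
    r = r_mpc * MPC
    print(r_mpc, round(m_king(r) / m_iso(r), 2))
# The ratio falls below unity outside roughly 8-10 core radii, as in the
# figure: the King law underestimates the mass at large radius.
\end{verbatim}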
The major change in the gravitational mass estimates, and therefore in the baryon fraction, actually arises from the uncertainties in the velocity dispersion, as shown in tests 11 and 12. The statistical uncertainties (for A478; Zabludoff, Huchra \& Geller 1990) indicate that the cluster baryon fraction can be reduced from 26 per cent to 14 per cent when the ($1\sigma$) upper limit on the velocity dispersion is used ($+261\hbox{${\rm\thinspace km}{\rm\thinspace s}^{-1}\,$}$), but it is then very difficult to obtain a flat temperature profile, and the baryon fraction has still not been reduced to less than 6 per cent. Reducing all the baryon overdensities to $<6$ per cent would require high velocity dispersions for all the clusters, and would then produce unsatisfactory temperature profiles. This seems an unlikely solution to the baryon overdensity problem, especially as optical velocity dispersions are, if anything, usually overestimated due to substructure in clusters. We have attempted to estimate the uncertainty in the results due to the gravitational mass using the errors in the reference (observed) temperatures. To estimate the effect of uncertainties in the velocity dispersion we have plotted the baryon fraction within a consistent radius of $1{\rm\thinspace Mpc}$ (see Table~\ref{table:results}) against the observed X-ray temperature (see Table~\ref{table:input_data}). We have only included those data where the X-ray temperature has been measured and its uncertainties (the standard deviation errors) are reasonably symmetric ({\it i.e.\ } the positive and negative errors are similar to within $2{\rm\thinspace keV}$). This eliminates A545, A1689, A1763, A2009, A3186 and A3888. Using this refined sample of 13 clusters we have propagated the uncertainty in the temperature onto the uncertainty in the baryon fraction, as shown in Fig.~\ref{figure:kt_mass_ratios}(a). (Note, \hbox{$M_{\rm gas}\,$} is not significantly affected by uncertainties in the X-ray temperature.) To then estimate confidence limits on the baryon fraction within $1{\rm\thinspace Mpc}$ we have treated the errors as Gaussian, and determined the cumulative probability as a function of baryon fraction, as shown in Fig.~\ref{figure:kt_mass_ratios}(b). The diagram clearly shows that, although there is a wide variation in the baryon fraction, it is very unlikely ({\it i.e.\ } a probability of $10^{-4}$) that even one cluster from a similar sample would have a baryon fraction at $1{\rm\thinspace Mpc}$ of $\Omega_{\rm b, max}\le0.06$. We estimate that the median baryon fraction at $1{\rm\thinspace Mpc}$ is 13.8 per cent, with 5th and 95th percentile confidence limits of 10.0 and 22.3 per cent. \begin{figure} \small \epsfxsize=0.48\textwidth \noindent \caption{ \label{figure:masses} } The gas masses (squares) at the outer radius of the deprojection of each cluster are plotted together with the total gravitational masses (triangles). The {\em solid\/} line shows a fit to the gas masses, $\hbox{$M_{\rm gas}\,$}=6.7\times10^{13}R_{\rm Mpc}^{2}$, which predicts gravitational masses ({\em dot-dash\/} line) of $\hbox{$M_{\rm grav}\,$}(\Omega_{\rm b}=0.06)=1.1\times10^{15}R_{\rm Mpc}^{2}$, if $\Omega_{\rm b}/\Omega_0=0.06$.
The actual gravitational masses in the deprojection results are fitted with $\hbox{$M_{\rm grav}\,$}'=4.7\times10^{14}R_{\rm Mpc}^{2}$ ({\em dashed\/} line) if the same index as the gas mass is used, or $\hbox{$M_{\rm grav}\,$}''=5.3\times10^{14}R_{\rm Mpc}^{1.79}$ if the power law has a free-fit index ({\em dotted\/} line). (Errors on the data points are 1 standard deviation.) \end{figure} \normalsize \doublespace \section{Discussion} The results from our deprojection analysis of 19 clusters of galaxies, as shown in Fig.~\ref{figure:mass_ratios}, indicate that the baryon fraction in clusters is inconsistent with the mean baryon fraction of the Universe predicted from standard primordial nucleosynthesis calculations, if $\Omega_0=1$, as first noted by S.~White \& Frenk (1991) for the Coma cluster. The diagram also shows a trend for increasing baryon fraction with radius, and indicates that the cluster baryon fraction could be consistent with the universal value of $<6$ per cent at the very centre of clusters, but not further out. Our determinations of the baryon content in clusters are not compromised by the uncertainties in our analysis. The gas masses are extremely well determined, and overestimates due to clumping appear unable to simultaneously reduce the baryon fractions significantly and produce observationally consistent emission-weighted temperatures. An unreasonably small $H_0$ would be required to reduce the cluster baryon fractions to $<6$ per cent ($\Omega_0=1$), and a lower limit of $\HO{22}$ is obtained by allowing all the mass of the clusters to be in gas (see also Steigman 1987, 1989). Possible excess absorption in clusters only increases the gas mass estimates. The main uncertainty in the cluster baryon fractions probably arises from the uncertainties in the total gravitational mass, which are dominated by the cluster velocity dispersion values. The optical determinations of all the velocity dispersions would have to be underestimated, which is somewhat contrived, and is also contrary to the overestimates expected from optical determinations if the clusters have undetected substructure. Since our results indicate that baryon fractions at $1{\rm\thinspace Mpc}$ are typically $10-20$ per cent in clusters, the simplest solution to the conflict with standard primordial nucleosynthesis may be that $\Omega_0\approxlt0.3$. As there is evidence for $\Omega_0=1$ (see the Introduction), we shall first discuss the implications that arise from assuming standard primordial nucleosynthesis when $\Omega_0=1$. (Note we ignore the fact that $\qO{\frac{1}{2}}$ when $\Omega_0=1$, whereas our results are for $\qO{0}$; test~2 in Table~\ref{table:tests} indicates that $q_0$ has little effect on the results.) Using the results given in Table~\ref{table:results} we have plotted, in Fig.~\ref{figure:masses}, the gas and gravitational masses against the maximum radius of each deprojection. We note that if all the cluster deprojections are extended to the surface brightness of the background, then we would expect the gas masses at the maximum radii to follow an $R_0^2$ dependence, and indeed fitting a power-law function to the gas masses at the maximum radius gives an index of $2.2$ with 90 per cent confidence limits of $\pm0.17$. We have therefore obtained gas masses approaching the maximum detectable radii for these data.
Forcing an index of 2 and fitting a power law to the gas masses gives $\hbox{$M_{\rm gas}\,$}=6.7\times10^{13}R_{\rm Mpc}^{2}$, shown by the {\em solid\/} line. The corresponding total masses used in the deprojection analysis, fitted with the same radial dependence, give $\hbox{$M_{\rm grav}\,$}'=4.7\times10^{14}R_{\rm Mpc}^{2}$, shown as the {\em dashed\/} line. If the gravitational masses are fitted with the radial dependence as a free parameter, then we find $\hbox{$M_{\rm grav}\,$}''=5.3\times10^{14}R_{\rm Mpc}^{1.79}$, shown as the {\em dotted\/} line. This again indicates that the baryon fraction increases with radius, as already found in Fig.~\ref{figure:mass_ratios}. If we then assume that $\Omega_{\rm b}/\Omega_0=0.06$ in clusters, and return to the same radial dependence as the gas masses, then the expected total gravitational mass is given by $\hbox{$M_{\rm grav}\,$}(\Omega_{\rm b}\le0.06)=1.1\times10^{15}R_{\rm Mpc}^{2}$, shown as the {\em dot-dash\/} line. {}From this we can see that, if we use the mean baryonic fraction of $<6$ per cent to predict the total gravitational masses from the gas masses, then we overpredict masses with respect to the virial values; {\it e.g.\ } for A665 the predicted mass is approximately $5.7\times10^{15}\hbox{$\rm\thinspace M_{\odot}$}$ within $2.4{\rm\thinspace Mpc}$. This is larger than expected from current theories of cluster formation, which would give a total mass of $2.8\times10^{15}\hbox{$\rm\thinspace M_{\odot}$}$ for A665 [from the relation ${\rm k} T/(4{\rm\thinspace keV})=(M/10^{15}\hbox{$\rm\thinspace M_{\odot}$})^{2/3}$ in Henry {et al.}\/ 1992]. The predicted mass is more in line with that expected for a very much hotter cluster, such as A2163 at $13.9{\rm\thinspace keV}$.
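The comparison with the Henry {et al.}\/ (1992) relation is easily sketched (ours):
\begin{verbatim}
# Mass-temperature relation of Henry et al. (1992), as used above:
#   kT / (4 keV) = (M / 1e15 Msun)^(2/3)
def kt_from_mass(m_1e15):
    """Temperature (keV) implied by a total mass in units of 1e15 Msun."""
    return 4.0 * m_1e15 ** (2.0 / 3.0)

def mass_from_kt(kt_kev):
    """Total mass (1e15 Msun) implied by a temperature in keV."""
    return (kt_kev / 4.0) ** 1.5

print(kt_from_mass(2.8))    # ~7.9 keV: the virial-like mass of A665
print(mass_from_kt(13.9))   # ~6.5 (x1e15 Msun): an A2163-like temperature,
                            # close to the 5.7e15 Msun predicted for A665
\end{verbatim}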
We also note that current theories of the formation of large-scale structure and clusters of galaxies may have problems explaining the apparently common occurrence of large baryon overdensities. For the median and the 5th and 95th percentile confidence limits that we have placed on the baryon fraction within $1{\rm\thinspace Mpc}$, the overdensity is probably at least $2\Omega_{\rm b}$. If clusters are truly overdense in baryons, then, as highlighted by Fabian (1991) from the Shapley Supercluster data, how are the extra baryons accumulated from the surrounding volume at the maximum mean density of 6 per cent for the Universe? In A665 the gas mass is approximately $5.4\times10^{14}\hbox{$\rm\thinspace M_{\odot}$}$ within $2.4{\rm\thinspace Mpc}$, and therefore the size of the region with the equivalent mass of baryons, for a Universe of density $\Omega_{\rm b}\le0.06$, is $31{\rm\thinspace Mpc}$ --- a factor of 13 in radius, or greater than $2\times10^{3}$ in volume. Such source regions may be too large for the concentration of baryons to occur within a Hubble time, which would rule out self-gravitational accumulation as a valid mechanism to concentrate the baryons. This is the real problem of baryon overdensities in clusters, as it is independent of the uncertainties in the gravitational mass estimates in this analysis. However, even if we assume that sufficient baryons {\em can\/} be accumulated within the cluster, we still have to explain how the baryons come to be concentrated at the centre of a cluster with respect to the overall dark matter distribution. In Fig.~\ref{figure:schematic} we show a schematic diagram of the gas and gravitational mass distributions that could give rise to large baryon fractions within the central $\sim3{\rm\thinspace Mpc}$, decreasing to a baryon fraction consistent with the universal average at larger radius. [We note that both the mass fractions determined from the deprojection results and the $\beta$ values for clusters (Forman \& Jones 1984) indicate that the gas to gravitational mass fraction increases with radius over the observed regions of clusters, {\it i.e.\ } $\approxlt3{\rm\thinspace Mpc}$.] We can envisage two ways to create the distribution shown in Fig.~\ref{figure:schematic}: through evolution, or through an uneven distribution of baryonic and non-baryonic material in the early Universe ($z\sim5$). First, an evolutionary process may produce a central concentration of gas surrounded by an `extended halo' of dark matter, through the infall process which forms the cluster and/or the subsequent infall of sub-clusters. For example, if a gas-rich cluster fell into a larger cluster, the gaseous component would be stripped from it in the dense central regions of the larger cluster, in a manner similar to the ram-pressure stripping of the hot gas from the elliptical galaxy M86 in the Virgo cluster ({\it e.g.\ } D.~White {et al.}\/ 1991a), while the collisionless dark matter would pass through unhindered to the other side of the cluster. This process would produce an atmosphere of gas slightly more extended than the virial core of the cluster, due to shock heating, surrounded at larger radius by a halo of dark matter. This dark matter, if bound, may remain at large radius for a relatively long period of time before falling again towards the core of the cluster. Thus, within the framework of hierarchical merging, infalling sub-clusters may deposit significant amounts of dark matter at large cluster radii. This scenario requires that clusters are more massive than generally considered in current theories of cluster formation, approaching $10^{16}\hbox{$\rm\thinspace M_{\odot}$}$, and would result in large peculiar velocities around massive clusters of galaxies. Other methods in which the dark matter could be distributed on larger scales rely on different clustering properties of the dark matter, {\it e.g.\ } if the Universe is composed of a mixture of mostly hot with some cold dark matter, or if $\Lambda$ is non-zero. Alternatively, if the central concentration of baryons with respect to the dark matter does not occur in the evolutionary scenario, then the gas needs to have been distributed differently before the formation of clusters. However, as the gas is more concentrated than the gravitational matter, gravitational effects cannot have been responsible, and the baryons must have been pushed together to form regions of higher density. This could have happened if there was a population of active quasars with strong winds or radiation pressure which produced voids in the early Universe before cluster formation. The baryonic material would have been forced together at the interfaces between voids, at the sites of cluster formation, while the dark matter would have been less compressed. Clusters would then have inherited the distributions of baryonic and non-baryonic material. This scenario leads to the prediction that there should be a population of objects at the centres of voids.
None of the above solutions to the baryon over-densities resulting from standard primordial nucleosynthesis is very elegant or without problems. Perhaps the most damning fact is that it appears extremely difficult to accumulate enough baryons, from a region with a baryon density of at most 6 per cent, to provide the overdensity seen to be common in our sample. As there is still evidence for $\Omega_0=1$ on large scales, {\it e.g.\ } from studies of the structure in clusters (Richstone, Loeb \& Turner 1992) and the {\sc POTENT} analysis of {\it IRAS} galaxies (Nusser \& Dekel 1993, Dekel {et al.}\/ 1993, Dekel \& Rees 1994), we do not appeal to low values of $\Omega_0$, but assume that the dark matter in clusters is spread over a larger radius than the baryons. This means that clusters are several times more massive than is canonically assumed. \begin{figure*} \begin{center} \small \epsfxsize=0.8\textwidth \parbox[]{17.75cm}{ \noindent \caption{ \label{figure:schematic} } This schematic figure shows how the observational results, which indicate baryon fractions approaching 30 per cent, may be reconciled with the mean baryon fraction for the Universe of $<6$ per cent ($\Omega_0=1$ and $h_{50}=1$) for the cluster as a whole. The solid lines are the cluster gas and gravitational mass distributions, and the dotted line shows the mass expected within the same volume for a critical density of material and a baryon fraction of 6 per cent ($\Omega_0=1$). The reasoning for the more extended nature of the dark matter is given in the main text. } \end{center} \end{figure*} \normalsize \doublespace \section{Conclusion} Our deprojection analysis of 19 moderately luminous and distant clusters, observed with the {\it Einstein Observatory} IPC, shows that the cluster baryon fractions are all inconsistent with the mean value for the Universe of $\Omega_{\rm b}=0.05\pm0.01h_{50}^{-2}$, as calculated from standard, homogeneous, primordial nucleosynthesis (Olive {et al.}\/ 1990, Walker {et al.}\/ 1991). The deprojection method produces well-determined gas masses, such that the main uncertainty in the gas mass lies in the value of the Hubble constant, while the overall main uncertainty in the baryon fraction determinations lies in the gravitational masses. However, even this cannot produce a significant enough effect to reconcile the cluster determinations with the mean value predicted from standard primordial nucleosynthesis. We find that the baryon fractions of the clusters in our refined sample of 13 lie between 10 and 22 per cent (the 5th and 95th percentiles). {\it ASCA} should reduce the uncertainties in the gas and gravitational mass determinations, by enabling accurate temperature measurements (with adequate spatial resolution) to be made, from which the total masses and baryon fractions of clusters will be accurately determined. As there is still strong evidence that $\Omega_0=1$ on large scales, we have considered the implications of the conflict between the baryon fractions in clusters and the mean baryon fraction predicted from standard primordial nucleosynthesis. The solutions, which imply that clusters are much more massive than generally thought, require halos of dark matter outside the main X-ray extent of the cluster. \section{Acknowledgements} We thank Gary Steigman, Steven Allen, Niel Brandt and Alastair Edge for many useful points and discussions. D.A.~White and A.C.~Fabian thank the P.P.A.R.C. and the Royal Society for support, respectively.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Dense subgraph mining is a fundamental task in graph analytics. Studying dense subgraphs can reveal important information about the connectivity, centrality, and robustness of a network. For instance, we can find sub-communities of users with close relationships in social networks, locate highly active pathways in gene expression networks, and identify clusters of suspicious accounts with possible money laundering behaviours in networks of transaction histories. There exist different definitions of dense subgraphs, such as $k$-cliques, $k$-plexes, and $n$-clubs, which are intractable to compute~\cite{bonchi2014core}. Core decomposition is a popular notion of dense subgraphs due to its cohesive structure and the fact that it can be computed in polynomial time. It can also be used to compute other dense subgraphs such as maximal cliques~\cite{bonchi2014core}. The $k$-core is defined as the largest subgraph in which each vertex has a degree of at least $k$ within the subgraph. The collection of all $k$-cores for different values of $k$ forms the core decomposition of the graph. The highest value of $k$ for which a vertex belongs to a $k$-core subgraph is called the core number (or coreness) of the vertex. Core decomposition has been used in several applications such as text summarization~\cite{antiqueira2009complex}, exploring collaboration in software teams~\cite{wolf2009mining}, and describing biological functions of proteins in protein-protein interaction networks~\cite{li2010computational}. The extension of core decomposition to probabilistic graphs has recently been introduced in the literature~\cite{bonchi2014core}. Due to the intrinsic uncertainty in many real-world networks such as social, biological, and communication networks (cf.~\cite{cheng2015reachability}), it is important to study core decomposition in probabilistic contexts. Probabilistic graphs are graphs in which each edge is assigned a probability of existence. In social and trust networks, an edge can be weighted by the probability of influence or trust between the two users that the edge connects~\cite{korovaiko2013trust}. In biological networks of protein-protein interactions (cf.~\cite{Genome}), an edge can be assigned a probability value representing the strength of the prediction that a pair of proteins will interact in a living organism~\cite{sharan2007network}. We use the notion of $(k,\eta)$-core introduced by Bonchi \emph{et al.}~\cite{bonchi2014core}. Specifically, we aim to compute the largest subgraph in which each vertex has at least $k$ neighbours within that subgraph with probability no less than a user-specified threshold $\eta$. To compute core decomposition in probabilistic graphs, the $\eta$-degree or probabilistic degree of a vertex was introduced in~\cite{bonchi2014core}. The standard approach for computing the $(k,\eta)$-core decomposition is the peeling process, which is based on iteratively removing the vertices with $\eta$-degree less than $k$~\cite{bonchi2014core,esfahani2019efficient}. When a vertex is removed, its core number is set to its $\eta$-degree at the time of removal, and the $\eta$-degrees of all its neighbours are recomputed and updated. The peeling process is repeated after incrementing $k$ until no vertices remain, which results in finding all $(k,\eta)$-cores for different values of $k$ and the user-defined threshold $\eta$.
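Before moving to the probabilistic setting, the deterministic analogue of this peeling process can be sketched as follows (a minimal illustration of ours, not the implementation of~\cite{esfahani2019efficient}); the probabilistic version replaces the degree with the $\eta$-degree:
\begin{verbatim}
def core_numbers(adj):
    """Deterministic peeling: repeatedly remove a minimum-degree vertex and
    record its degree at removal time; adj maps each vertex to a set of
    neighbours."""
    deg = {u: len(adj[u]) for u in adj}
    removed, core, k = set(), {}, 0
    while len(removed) < len(adj):
        u = min((v for v in adj if v not in removed), key=deg.get)
        k = max(k, deg[u])      # core numbers never decrease in removal order
        core[u] = k
        removed.add(u)
        for w in adj[u]:
            if w not in removed:
                deg[w] -= 1     # peel: each neighbour loses one degree
    return core

# Toy graph: a triangle (2-core) with one pendant vertex (1-core).
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(core_numbers(g))          # {3: 1, 0: 2, 1: 2, 2: 2}
\end{verbatim}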
Core decomposition in probabilistic graphs is challenging due to the combinatorial nature of $\eta$-degree computation in such graphs. Esfahani \emph{et al.}~\cite{esfahani2019efficient} improved the peeling process by using easy-to-compute lower-bounds on the $\eta$-degrees of vertices, based on the Lyapunov Central Limit Theorem (CLT) in statistics~\cite{lyapunov-clt}, and by designing efficient array structures for storing important bookkeeping information~\cite{esfahani2019efficient}. However, this graph peeling algorithm still starts from the lowest-degree vertices and spends considerable time working its way up to the vertices that are more valuable in terms of information abundance. Our motivation is that, compared to low-degree vertices in small cores, more valuable information can be found by focusing directly on denser cores with high activity. In this work, we present a more efficient, dense-core-focused, multi-stage peeling algorithm. Since it is based on our previously developed algorithm, we refer to the previous algorithm as \textit{PA} and to our proposed algorithm as \textit{M-PA} in the rest of the paper. In M-PA, a two-stage filtering procedure is added before PA to screen out vertices in smaller cores and focus on denser sub-communities. The idea is that, after filtration, we have effectively decomposed the large graph into smaller subgraphs on which core decomposition can be performed even more efficiently. \section{Background} Let $G=(V,E)$ be an undirected graph, where $V$ and $E$ are the set of vertices and the set of edges in $G$, respectively. Given a vertex $u \in V$, let $N_{G}(u)$ be the set of all neighbours of $u$, i.e. $N_{G}(u) = \{v: (u,v) \in E\}$. Then, $\left| N_{G}(u) \right|$ is the deterministic degree of $u$ in $G$. \subsection{Core Decomposition in Deterministic Graphs} Given a graph $G$, the $k$-core of $G$ is defined as the largest subgraph $H \subseteq G$ in which each vertex has a degree of at least $k$ in $H$. The set of all $k$-cores forms the core decomposition of $G$, where $ 0 \leq k \leq d_{\max}(G)$, and $d_{\max}(G)$ is the maximum vertex degree in $G$. Given a vertex $u$, the largest value of $k$ for which $u$ belongs to a $k$-core is called the core number of $u$. \subsection{Probabilistic Graphs} A probabilistic graph $\mathcal G = (V, E, p)$ is defined over a set of vertices $V$, a set of edges $E$, and a probability function $p : E \rightarrow (0,1]$ which assigns an existence probability $p(e)$ to every edge $e \in E$. In the literature, the existence probability of each edge is assumed to be independent of those of the other edges~\cite{bonchi2014core}. The \textit{possible worlds} of $\mathcal{G}$ are deterministic graph instances of $\mathcal G$, which are used for analyzing probabilistic graphs. In each possible world, only a subset of the edges appears. For each possible world $G = (V, E_G) \sqsubseteq \mathcal G $, where $E_G \subseteq E$, the probability of observing that possible world is $\text{Pr}(G) = \prod_{e \in E_G} p(e) \prod_{e \in E\setminus E_G}(1-p(e))$.
\begin{figure}[H] \centering \subfloat[]{ \label{probgraph1} \begin{tikzpicture}[auto, node distance=2cm, every loop/.style={}, thick,main node/.style={scale=0.6, circle,draw,font=\sffamily\small\bfseries}] \node[main node,fill={orange}] (1) {\textcolor{white}{1}}; \node[main node,fill={orange}] (2) [right of = 1] {\textcolor{white}{2}}; \node[main node,fill={orange}] (3) [below of = 1] {\textcolor{white}{3}}; \node[main node,fill={orange}] (4) [below of = 2] {\textcolor{white}{4}}; \node[main node,fill={green!70!blue}] (0) [below left of = 1] {\textcolor{white}{0}}; \node[main node,fill={green!70!blue}] (5) [above right of = 4] {\textcolor{white}{5}}; \path[every node/.style={scale=0.6, font=\sffamily\small}] (0) edge[] node [left, pos=0.6, font=\small\bfseries] {0.3} (1) (1) edge[] node [above, font=\small\bfseries] {0.4} (2) edge[] node [pos = 0.5, left, font=\small\bfseries] {0.6} (3) (2) edge[] node [right,pos=0.5, font=\small\bfseries] {0.6} (4) (3) edge node [above, font=\small\bfseries] {0.4} (4) (4) edge node [ right, font=\small\bfseries] {0.5} (5) ; \end{tikzpicture} } \hspace{0.5cm} \subfloat[]{ \label{probgraph2} \begin{tikzpicture}[auto, node distance=2cm, every loop/.style={}, thick,main node/.style={scale=0.6, circle,draw,font=\sffamily\small\bfseries}] \node[main node,fill={orange}] (1) {\textcolor{white}{1}}; \node[main node,fill={orange}] (2) [right of = 1] {\textcolor{white}{2}}; \node[main node,fill={orange}] (3) [below of = 1] {\textcolor{white}{3}}; \node[main node,fill={orange}] (4) [below of = 2] {\textcolor{white}{4}}; \path[every node/.style={scale=0.6, font=\sffamily\small}] (1) edge[] node [above, font=\small\bfseries] {0.4} (2) edge[] node [pos = 0.5, left, font=\small\bfseries] {0.6} (3) (2) edge[] node [right,pos=0.5, font=\small\bfseries] {0.6} (4) (3) edge node [above, font=\small\bfseries] {0.4} (4) ; \end{tikzpicture} } \caption{\color{black} a) Probabilistic graph $\mathcal G$, b) (2,0.2)-core $\mathcal{H}$ of $\mathcal G$.} \label{exam1} \end{figure} Let $u$ be a vertex in $\mathcal{G}$. The probability that $u$ has degree at least $t$ in $\mathcal G$ can be expressed as $\text{Pr}[ \textsf{deg}_{\mathcal G}(u) \geq t]=\sum_{G \sqsubseteq \mathcal G }\text{Pr}(G) \cdot \mathbbm{1}(G,u,t)$, where $\mathbbm{1}(G,u,t)$ is an indicator function which takes on 1 if the degree of $u$ in possible world $G$ is at least $t$. It should be noted that as $t$ decreases (increases), $\text{Pr}[\textsf{deg}_{\mathcal G}(u) \geq t]$ increases (decreases). Given a user-defined threshold $\eta \in [0,1]$, the $\eta$-degree of $u$~\cite{bonchi2014core}, denoted by $\eta$-$\textsf{deg}_{\mathcal G}(u)$, is defined as the maximum integer $t \in [0,d_u]$ for which $\text{Pr}[ \textsf{deg}_{\mathcal G}(u) \geq t] \geq \eta$, where $d_u$ is the number of edges incident to $u$, which is equal to the deterministic degree of $u$. \subsection{Core Decomposition in Probabilistic Graphs} We use the notion of $(k,\eta)$-core from~\cite{bonchi2014core} for core decomposition in probabilistic graphs. Let $\mathcal{G}=(V,E,p)$ be a probabilistic graph, and let $\eta \in [0,1]$ be a user-specified threshold. The $(k,\eta)$-\textit{core} is the largest subgraph $\mathcal{H}$ of $\mathcal{G}$ in which each vertex $u$ has $\eta$-degree no less than $k$, i.e. $\eta$-$\textsf{deg}_{\mathcal{H}}(u) \geq k$. {\em Core decomposition} of $\mathcal G$ is the set of all $(k,\eta)$-cores, for $k \in [0,k_{\max, \eta}]$, where $k_{\max,\eta}=\max_{u} \{ \eta$-$\textsf{deg}_{\mathcal G}(u) \}$.
The {\em core number} of a vertex $u$, $\kappa_{\eta}(u)$, is the largest integer $k$ for which $u$ belongs to a $(k,\eta)$-core. \begin{examp} Consider Fig.~\ref{probgraph1}, vertex $u=1$, and $\eta = 0.2$. We have $\text{Pr}[ \textsf{deg}_{\mathcal G}(u) \geq 3] = 0.3 \cdot 0.4 \cdot 0.6 = 0.072$ (the product of the probabilities that edges $(0,1)$, $(1,2)$, and $(1,3)$ exist), and $\text{Pr}[ \textsf{deg}_{\mathcal G}(u) \geq 2] = 0.396$. Since $0.396$ is greater than $\eta$, $\eta$-$\textsf{deg}_{\mathcal G}(u) = 2$. \smallskip Fig.~\ref{probgraph2} shows the $(2,0.2)$-core $\mathcal{H}$ of $\mathcal G$. Each vertex $u \in \mathcal{H}$ has degree $2$ with probability $0.24 \geq \eta$, and thus $\eta$-degree $2$. \smallskip Consider $u=1$ and $\eta=0.2$. Vertex $u$ is in the $(1,0.2)$-core ($\mathcal G$ itself) and the $(2,0.2)$-core ($\mathcal{H}$). There is no $(3,0.2)$-core; thus, $\kappa_{\eta}(u)=2$. \end{examp} \subsection{$\eta$-degree Computation Using Dynamic Programming (DP)} We have $\text{Pr}[ \textsf{deg}_{\mathcal G}(u) \geq t] = \text{Pr}[ \textsf{deg}_{\mathcal G}(u) \geq t-1] - \text{Pr}[ \textsf{deg}_{\mathcal G}(u) = t-1]$. Thus, to find the $\eta$-degree of each vertex $u$, we need to compute the probabilities $\text{Pr}[ \textsf{deg}_{\mathcal G}(u) = t]$. These probabilities can be computed using dynamic programming (DP) as proposed in~\cite{bonchi2014core}. The main idea of the DP is as follows~\cite{bonchi2014core}. Given a vertex $u$ in a probabilistic graph $\mathcal{G}$ and an edge $e$ incident to $u$, the event that $u$ has degree equal to $t$ is the union of two mutually exclusive events: (1) edge $e$ exists and $u$ has degree $t-1$ in $\mathcal{G}_{\setminus \{ e \}}$, and (2) edge $e$ does not exist and $u$ has degree $t$ in $\mathcal{G}_{\setminus \{ e \}}$, where $\mathcal{G}_{\setminus \{ e \}}$ is the subgraph of $\mathcal{G}$ in which edge $e$ does not exist. As a result, $\text{Pr}[ \textsf{deg}_{\mathcal G}(u) = t]$ can be written as the sum of the probabilities of these two events, and a recursive formula is obtained. The above reasoning extends to any subgraph of $\mathcal{G}$; \cite{bonchi2014core} provides a thorough formulation. \section{Related Work} Core decomposition is one of the most popular notions of cohesive subgraphs~\cite{li2010computational,malliaros2020core,ugander2012structural}. It can be used for computing other definitions of dense subgraphs such as maximal cliques~\cite{eppstein2010listing}. In deterministic graphs, core decomposition has been studied extensively in different settings~\cite{batagelj2003m,montresor2013distributed,wen2016efficient,aridhi2016distributed}. For probabilistic graphs, the notion of $(k,\eta)$-core was introduced by Bonchi \emph{et al.}~\cite{bonchi2014core}. The authors propose an algorithm that is based on iteratively removing the vertex of smallest $\eta$-degree and updating the $\eta$-degrees of its neighbours. In~\cite{bonchi2014core}, techniques based on dynamic programming are developed for computing $\eta$-degrees. More efficient algorithms are proposed by Esfahani \emph{et al.}~\cite{esfahani2019efficient}, which can handle large graphs that do not fit in main memory. A different probabilistic core decomposition model, $(k,\theta)$-cores, is proposed by Peng \emph{et al.}~\cite{peng2018efficient}, which is based on finding subgraphs whose vertices have a high probability of being deterministic $k$-core members in the possible worlds of a probabilistic graph $\mathcal{G}$.
Additionally, Yang \emph{et al.}~\cite{yang2019index} defined an index-based structure for processing core decomposition in probabilistic graphs. Truss decomposition is another notion of dense substructures. For probabilistic graphs, the notion of local $(k,\eta)$-truss was introduced by Huang \emph{et al.} in \cite{huang2016truss}. The authors propose an algorithm for computing the local $(k,\eta)$-truss which is based on iteratively peeling edges with support less than $k-2$ and updating the support of affected edges. Moreover, the notion of global $(k,\eta)$-truss is proposed in~\cite{huang2016truss}, based on the probability of each edge belonging to a connected $k$-truss in a possible world. An approximate algorithm for the local truss decomposition is proposed by Esfahani \emph{et al.} in~\cite{esfahani2019fast} to efficiently compute the tail probability of edge supports in the peeling process described in~\cite{huang2016truss}. \section{Proposed Approach}\label{sec:method} As mentioned before, in M-PA we add two data screening stages before PA. The goal of the added data filtering steps is to reduce the number of vertices in the graph and speed up the follow-up analyses. In particular, we wish to remove a large proportion of low-connectivity vertices (i.e. vertices with small $\eta$-degree) that we know will not likely be members of dense sub-communities in the graph. \subsection{Data Screening Based on Degree Expectation} Here we briefly explain the methodology behind the first stage of data screening. Given a probabilistic graph $\mathcal G=(V,E,p)$, for each vertex $v \in V$, we have a set of edges incident to $v$, and each edge is accompanied by an existence probability $p_i$ that is independent of the other edge probabilities in $\mathcal G$. For vertex $v$, $\textsf{deg}_{\mathcal G}(v)$ can be interpreted as the sum of independent Bernoulli random variables $X_i$ with different success probabilities $p_i$ \cite{esfahani2019efficient}, where: \begin{equation} X_i = \begin{cases} 1, & \text{if edge $e_i$ incident to $v$ exists in the graph}\\ 0, & \text{otherwise} \end{cases} \end{equation} and $\textsf{deg}_{\mathcal G}(v)$ follows a Poisson binomial distribution with $E[\textsf{deg}_{\mathcal G}(v)]=\sum E[X_i]=\sum p_i$. We will use $\sum p_i$ as the first screening criterion, since $\sum p_i$ is the expectation of $\textsf{deg}_{\mathcal G}(v)$. Thresholds are user-defined, so any non-negative value is accepted. However, we recommend that the first threshold be set greater than or equal to 5 (for example, if the first threshold is set to 5, all vertices with degree expectation less than 5 are removed, and only those with $\sum p_i\ \ge\ 5$ and $\sum (1-p_i)\ \ge\ 5$ are kept). The purpose of this step is to screen out vertices that are rarely connected with others and hence are not likely to be part of any highly connected sub-network. For example, if a vertex $u$ has $\sum p_i$ less than 5, its realized degree $\textsf{deg}_{\mathcal G}(u)$ will also likely stay below 5, with only slight variations, and therefore $u$ will not appear in highly active cores (i.e. among vertices with large coreness). Note that when the first threshold is set lower, more vertices will be retained. On the one hand, the threshold should be high enough to speed up subsequent analyses; on the other hand, it should not be so high that potentially highly connected vertices are removed.
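As an illustration, a minimal sketch of this first screening stage might look as follows (our own sketch, assuming the graph is given as a map from each vertex to the list of existence probabilities of its incident edges):

\begin{verbatim}
def first_stage_screening(incident_probs, threshold=5.0):
    # incident_probs: dict mapping each vertex to the list of existence
    # probabilities p_i of its incident edges. A vertex is kept only if
    # both sum(p_i) and sum(1 - p_i) reach the user-defined threshold,
    # where sum(p_i) = E[deg(v)] of the Poisson binomial degree.
    kept = set()
    for v, probs in incident_probs.items():
        if (sum(probs) >= threshold
                and sum(1.0 - p for p in probs) >= threshold):
            kept.add(v)
    return kept
\end{verbatim}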
In our experiment, we empirically chose a conservative number, 5, as the default first threshold, but other threshold values could be used. \subsection{Data Screening Based on Lower-bounds of \textit{$\eta$-degree}} In this section, we introduce the second data screening step before PA. For the remaining vertices that passed the first stage of data screening, we calculate lower-bounds of their $\eta$-degree using the Lyapunov Central Limit Theorem (CLT)~\cite{Lyapunov-Nouvelle}. Given a vertex $v \in V$, based on the Lyapunov CLT, $Z = \frac{1}{\sigma} \sum_{i=1}^{d_v} (X_i-\mu_i)$ is approximately standard normal, where $\mu_i = E[X_i] = p_i$, and $ \sigma = \sqrt{\sum_{i=1}^{d_v}p_i(1-p_i)}$. An approximation of $\text{Pr}[ \textsf{deg}_{\mathcal G}(v) \geq t] = \text{Pr}[ \sum_{i=1}^{d_v} X_i \geq t]$ can be obtained by subtracting the mean $\sum_{i=1}^{d_v}\mu_i$ from the sum of the $X_i$'s and dividing by $\sigma$. As a result, we have: \begin{equation}\label{clt} \Pr\left[\sum_{i=1}^{d_v}X_i \geq t\right] = \Pr\left[\frac{1}{\sigma} \sum_{i=1}^{d_v} (X_i - \mu_i) \geq \frac{1}{\sigma}\left(t- \sum_{i=1}^{d_v} \mu_i\right)\right] \end{equation} Since $Z$ is approximately standard normal, we can find the maximum value of $t$ such that the right-hand side of Equation~\ref{clt} is no less than $\eta$; explicitly, $t = \big\lfloor \sum_{i=1}^{d_v}\mu_i + \sigma\,\Phi^{-1}(1-\eta) \big\rfloor$ (restricted to $[0,d_v]$), where $\Phi$ is the standard normal distribution function. We then use the second user-defined threshold to further select applicable vertices. The procedure for the second data screening stage is described in Algorithm~\ref{euclid}. Note that once graph peeling starts, vertex $\eta$-degrees also start to decrease, so in this last data filtering stage we select based only on the vertices' initial $\eta$-degree lower-bounds. \begin{algorithm} \caption{Selection based on $\eta$-degree lower-bounds}\label{euclid} \begin{algorithmic}[1] \Procedure{SecondStageScreening ()}{} \State $\textit{nodelist} \gets \text{list of remaining nodes in }\textit{network}$ \State $init\_\eta\_degree \gets \{\}$ \Comment{empty hash table} \ForAll {$v \in \textit{nodelist}$} \State $init\_\eta\_degree[v] \gets \text{compute initial }\eta\text{-}\textsf{deg}(v)$ \EndFor \ForAll {$v \in init\_\eta\_degree.keys()$} \If {$init\_\eta\_degree[v] < \textit{threshold}$} \State delete $init\_\eta\_degree[v]$ \Comment{delete $v$ from hash table keys} \EndIf \EndFor \Return $init\_\eta\_degree.keys()$ \Comment{return hash table keys} \EndProcedure \end{algorithmic} \end{algorithm} \subsection{Remaining Parts of M-PA} Core decomposition based on the peeling algorithm includes three important steps: (1) removing the vertex $u$ of smallest $\eta$-degree, (2) assigning the core number of $u$ to be equal to its $\eta$-degree, and (3) recomputing the $\eta$-degrees of $u$'s neighbours. Vertices should be kept sorted by their current $\eta$-degree at all times during the process. This process is challenging in probabilistic graphs as it involves many recomputations of $\eta$-degrees. It should be noted that computing the $\eta$-degree of a vertex $u$ using dynamic programming takes $O(d_u^2)$ time. As a result, in~\cite{esfahani2019efficient}, an efficient version of the peeling algorithm is proposed which uses efficient array structures and lazy updates of the $\eta$-degrees of vertices. In M-PA, which is based on the PA proposed in~\cite{esfahani2019efficient}, we utilize data screening for detecting and removing non-promising vertices. The core computation part of the M-PA approach is given in Algorithm~\ref{bz1}.
Let $\text{V}_{\text{alive}}$ be the set of vertices which remain after the second data screening phase, i.e. \textit{SecondStageScreening} (Line~\ref{valive}). The vertices are labelled by numbers 0 to $n-1$. Array \textbf{d} initially stores, for each vertex, the lower-bound on its $\eta$-degree; by the end of the iterations, array \textbf{d} holds the output core numbers. The lower-bounds are obtained using the CLT. Array \textbf{A} stores vertices in ascending order of their lower-bounds. Array \textbf{gone} keeps track of the removed vertices at each step of the algorithm. Array \textbf{valid} records, for each vertex $v$, whether the exact $\eta$-degree of $v$ is currently stored in $\textbf{d}[v]$. These two arrays are initially set to all-false vectors (Lines~\ref{gone}-\ref{valid}), since all the vertices are on their lower-bounds at the beginning of the algorithm and none of the vertices has been removed yet. The algorithm starts processing the vertices based on their (lower-bound on) $\eta$-degree. When a vertex $v$ is being processed, the algorithm checks whether the exact $\eta$-degree of $v$ is available or $v$ is still on its lower-bound (Line~\ref{validitycheck}). If the exact value is not available, the $\eta$-degree of $v$ is computed using DP, stored in array \textbf{d}, and the vertex is swapped to its correct position in \textbf{A} (Lines~\ref{swapright1}-\ref{swapright2}). It should be noted that two additional arrays are defined to keep array \textbf{A} sorted at all times during the algorithm. One array stores the position of each vertex in \textbf{A}, and the other one stores the index boundaries of the vertex blocks having the same $\eta$-degree (exact or lower-bound) in \textbf{d} (a detailed discussion on the array structures can be found in~\cite{esfahani2019efficient}). These arrays help to swap vertex $v$ efficiently to its proper place in \textbf{A}. Otherwise, if $\textbf{d}[v]$ equals the exact $\eta$-degree of $v$, $v$ is removed (Line~\ref{removed}), and the value $\textbf{d}[u]$ of each of its neighbours $u \in \text{V}_{\text{alive}}$ with $\textbf{d}[u] > \textbf{d}[v] $ is decremented by one (just as in deterministic graphs, where the degree of a vertex decreases by one when a neighbour of that vertex is removed). Then, $u$ is swapped to a proper place in \textbf{A} (Lines~\ref{swapleft}-\ref{swapleft2}). Lines~\ref{extra1}-\ref{extra2} make sure that the algorithm does not go below the current minimum lower-bound which is being processed. It should be noted that during the main algorithm cycle, the $\eta$-degree computation for each vertex is done with respect to its neighbours $u'$ such that $u' \in \text{V}_{\text{alive}}$ and \textbf{gone}$[u'] = \textit{false}$. \noindent The numerical stability, lower-bound accuracy, and correctness of the CLT-based peeling algorithm have been discussed extensively by Esfahani \emph{et al.} in~\cite{esfahani2019efficient}, so we will not reiterate those aspects in this paper.
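For concreteness, the two $\eta$-degree routines that Algorithm~\ref{bz1} relies on, i.e. the CLT-based lower-bound used to initialize \textbf{d} and the exact dynamic program invoked on demand, can be sketched as follows (a simplified sketch of ours, not the implementation of~\cite{esfahani2019efficient}; we assume Python's \texttt{statistics.NormalDist} for the normal quantile and $0<\eta<1$):

\begin{verbatim}
from statistics import NormalDist

def eta_degree_lower_bound(probs, eta):
    # Inverting Pr[Z >= (t - mu)/sigma] >= eta for a standard normal Z
    # (the CLT standardization above) gives t <= mu + sigma*Phi^{-1}(1-eta).
    mu = sum(probs)
    sigma = sum(p * (1.0 - p) for p in probs) ** 0.5
    if sigma == 0.0:                 # all p_i in {0,1}: degree is deterministic
        return round(mu)
    t = mu + sigma * NormalDist().inv_cdf(1.0 - eta)
    return max(0, min(len(probs), int(t)))

def eta_degree_exact(probs, eta):
    # O(d^2) dynamic program of Section 2.3: pdf[t] = Pr[deg = t],
    # updated one incident edge at a time.
    pdf = [1.0]                      # degree 0 before any edge is considered
    for p in probs:
        nxt = [0.0] * (len(pdf) + 1)
        for t, q in enumerate(pdf):
            nxt[t] += q * (1.0 - p)  # edge absent: degree unchanged
            nxt[t + 1] += q * p      # edge present: degree increases by one
        pdf = nxt
    tail = 0.0                       # suffix sums give Pr[deg >= t]
    for t in range(len(pdf) - 1, -1, -1):
        tail += pdf[t]
        if tail >= eta:
            return t                 # maximum t with Pr[deg >= t] >= eta
    return 0
\end{verbatim}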
\begin{algorithm} \caption{M-PA core decomposition function}\label{bz1} \begin{algorithmic}[1] \Function{CoreCompute}{$\text{Graph}$ $\mathcal{G}$, $\eta$} \State {$ \text{V}_{\text{alive}} \gets \text{SecondStageScreening ()}$} \label{valive} \State {{\textit{initialize}} \textbf{A}, and \textbf{d} based on $v \in \text{V}_{\text{alive}}$ } \State \textbf{gone} $ \gets $ \textbf{False} \Comment{all-false vector} \label{gone} \State \textbf{valid} $ \gets $ \textbf{False} \Comment{all-false vector} \label{valid} \State $i \gets 0$ \While {$i<n$} \State $v \gets \textbf{A}[i]$ \If {\textbf{valid}$[v] = \textit{true}$} \label{validitycheck} \State \textbf{gone}$[v] \gets \textit{true}$ \label{removed} \ForAll {$u: (u,v) \in \mathcal{N}_v$ and $ u \in \text{V}_{\text{alive}}$} \If {$\textbf{d}[u] = \textbf{d}[v]$} \label{extra1} \If {\textbf{valid}$[u] = \textit{false}$} \State {Compute $\eta\text{-}\textsf{deg}(u)$, \textbf{d}$[u] \gets \eta\text{-}\textsf{deg}(u)$} \State{swap $u$ to a correct place in \textbf{A} } \EndIf \label{extra2} \EndIf \If {$\textbf{d}[u] > \textbf{d}[v]$} \State {\textbf{d}$[u]--$, swap $u$ to a correct position in \textbf{A}} \label{swapleft} \State{\textbf{valid}$[u] \gets \textit{false}$} \label{swapleft2} \EndIf \EndFor \State $i++$ \Else \State{\text{Compute} $\eta$-$\textsf{deg}(v)$, $ \textbf{d}[v] \gets \eta\text{-}\textsf{deg}(v)$} \label{swapright1} \State{\text{swap $v$ to a correct place in \textbf{A}}} \label{swapright2} \EndIf \EndWhile \State \textbf{return} $\textbf{d}$ \EndFunction \end{algorithmic} \end{algorithm} \section{Experiments} In this section, we present results from the running time comparisons between the M-PA approach and the original PA approach. We also compute the probabilistic density and probabilistic clustering coefficient for the outcomes of PA and M-PA for cohesiveness comparison. Both PA and M-PA are implemented in Java and the experiments are conducted using the WestGrid \footnote{\href{www.westgrid.ca}{www.westgrid.ca}} Graham cluster from Compute Canada \footnote{\href{www.computecanada.ca}{www.computecanada.ca}}. \subsection{Efficiency Comparison} We use the Flickr, DBLP, Biomine, and ljournal-2008 datasets used in \cite{esfahani2019efficient} and three more datasets (itwiki-2013, uk-2014-tpd, enwiki-2013) from the Laboratory for Web Algorithmics (LAW)~\cite{BoVWFI,BRSLLP}. The dataset statistics are presented in Table \ref{table:01} and a description of each dataset is given below. The smallest dataset is Flickr with less than 30\ 000 vertices; the dataset with the most vertices is ljournal-2008, with more than 5 million vertices, while enwiki-2013 has the most edges, with nearly 92 million. \def\arraystretch{1.4} \begin{table}[h] \centering \caption{Dataset statistics} \begin{tabular}{cccc} \hline Name & $|V|$ & $|E|$ & $P_{avg}$\\ \hline\hline Flickr & 24 125 & 300 836 & 0.13\\ \hline DBLP & 684 911 & 2 284 991 & 0.26\\ \hline Biomine & 1 008 201 & 6 722 503 & 0.27\\ \hline itwiki-2013 & 1 016 867 & 23 429 644 & 0.50\\ \hline uk-2014-tpd & 1 766 010 & 15 283 718 & 0.50\\ \hline enwiki-2013 & 4 206 785 & 91 939 728 & 0.50\\ \hline ljournal-2008 & 5 363 260 & 49 514 271 & 0.50\\ \hline \end{tabular} \label{table:01} \end{table} \begin{itemize} \item \textbf{Flickr}: snapshot of the Flickr online photo sharing community. The edge probability between any two nodes (users) is computed based on the Jaccard coefficient of the groups the users belonged to~\cite{bonchi2014core}. \item \textbf{DBLP}: snapshot of the DBLP database.
Two authors (nodes) are linked if they have coauthored a publication together, and the edge probability is computed based on the number of collaborations~\cite{bonchi2014core}. \item \textbf{Biomine}: snapshot of the Biomine probabilistic database. Biomine integrates indexes from several biological databases (Entrez Gene, STRING, UniProt, etc.) and a probability is calculated for all edges (i.e. cross-references)~\cite{eronen2012biomine}. \item \textbf{itwiki-2013}, \textbf{enwiki-2013}, \textbf{uk-2014-tpd}: snapshots of the Italian and English parts of Wikipedia in 2013, and a snapshot of top private .uk domains in 2014~\cite{BoVWFI,BRSLLP}. We generated $[0, 1]$ uniformly distributed probabilities for the edges. \item \textbf{ljournal-2008}: snapshot of the LiveJournal social network in 2008; each node is a user, and an edge from node $x$ to node $y$ indicates that $x$ registered $y$ as its friend~\cite{BoVWFI,BRSLLP}. We generated probability values uniformly distributed in $[0, 1]$ for the edges. \end{itemize} For each dataset, we record the running time for PA and M-PA separately. To prevent our benchmark task from competing for memory bandwidth with other jobs on the cluster, we requested an entire 32-core compute node on Graham with two Intel(R) Xeon(R) E5-2683 v4 CPUs @ 2.1GHz and 125GB RAM. In addition, we set up the input and output of the algorithm to communicate directly with the compute node, to avoid any impact on the benchmark results from the parallel file system used by Graham's login nodes. As discussed before, M-PA takes the same arguments as the original PA plus two more user-defined thresholds for data screening ($threshold_1$, $threshold_2$). The first threshold is not affected by the choice of $\eta$, but the second threshold is related to $\eta$ because it applies to the initial $\eta$-degree. There are many possible threshold selection methods, and in practice, depending on the specific dataset or purpose, it may be necessary to adjust the chosen thresholds for better outcomes. In this section, we set $\eta$ to 5 different values: 0.1, 0.3, 0.5, 0.7, and 0.9. For each dataset and $\eta$, we performed exploratory analyses to determine the data screening thresholds. For the first threshold, we calculate $\sum p_i$ for all vertices. If the results' 75th percentile is less than or equal to 5, we use 5 as the threshold for the first data filtering step. Otherwise, we assume that the distribution of $\sum p_i$ has an inflection point where the value of $\sum p_i$ quickly grows, and we use piecewise regression (segmented regression) to detect this change point and set it to be the first threshold. If more than one inflection point is discovered, the highest one is used. We explained the rationale for choosing 5 as the default first threshold in Section~\ref{sec:method}: we wish to remove low-connectivity nodes in the network that are not eligible to be part of any highly connected dense subgraphs, but at the same time the default threshold should not be so high that valuable information is lost. For the determination of the second data screening threshold, for convenience we assume all the vertices have passed the screening of the first step; we then calculate the initial $\eta$-degree of the current list of vertices in the graph.
If more than 80\% of the results (i.e. the 80th percentile of the initial $\eta$-degrees of the current vertices in the graph) are less than or equal to 10, then we choose 10 as the threshold for the second data screening step; otherwise, we again apply segmented regression to locate the second threshold. The reason for choosing the number 10 as the default second data screening threshold is that if a vertex has at least 10 edges incident to it before peeling, we can consider it a hotspot suited for the subsequent high-activity subgraph mining. If, in the full graph, a vertex is not connected to at least 10 other vertices, there is little point in retaining it for core decomposition, as we only focus on dense sub-communities. Note that the $\eta$-degree is related to the choice of $\eta$. Typically, the higher the $\eta$, the lower the $\eta$-degree. For example, if we run M-PA with different $\eta$ on the same dataset and the same two-stage screening thresholds, a higher $\eta$ results in more nodes being screened out, and this should also be taken into account when determining the data screening thresholds. Ultimately, the choice of threshold combination depends on the application; e.g., if the volume of the dataset is too big to reason about and we wish to reduce its size significantly, then setting a higher $\eta$ (0.5, 0.7, 0.9) and higher data-screening thresholds would certainly help. The thresholds we obtained for our experiment are presented in Table \ref{table:02}. The majority of the data screening threshold combinations are $(5,10)$, owing to the distribution of edge existence probabilities in the datasets that we used. In addition, using different threshold selection methods would result in threshold combinations different from ours. \def\arraystretch{1.4} \begin{table*} \centering \caption{Thresholds for data screening} \begin{tabular}{ccccccccc} \hline & $\eta$ & Flickr & DBLP & Biomine & itwiki-2013 & uk-2014-tpd & enwiki-2013 & ljournal-2008 \\ \hline\hline \multicolumn{2}{l}{$threshold_1$} & 5 & 5 & 5 & 41 & 5 & 30 & 21\\ \hline \multirow{5}{*}{$threshold_2$} & 0.1 & 10 & 10 & 10 & 39 & 10 & 34 & 14\\ & 0.3 & 10 & 10 & 10 & 36 & 10 & 32 & 10 \\ & 0.5 & 10 & 10 & 10 & 36 & 10 & 30 & 10 \\ & 0.7 & 10 & 10 & 10 & 34 & 10 & 29 & 10 \\ & 0.9 & 10 & 10 & 10 & 32 & 10 & 27 & 10 \\ \hline \end{tabular} \label{table:02} \end{table*} For itwiki-2013, uk-2014-tpd, enwiki-2013, and ljournal-2008, the threshold combinations start to vary, and we use the case of ljournal-2008 as an example. For ljournal-2008, the third quartile of $\sum p_i$ is 6.52 ($\approx 1.87$ in log-scale, as shown in Fig.~\ref{fig2}a), which is greater than 5. We therefore performed segmented regression and found 21 ($\approx 3.04$ in log-scale) to be the first threshold, as illustrated in Fig.~\ref{fig2}a. Piecewise regression was also applied to the initial $\eta$-degree results of ljournal-2008 with $\eta=0.1$, since their 80th percentile, 11, is greater than 10. As shown in Fig.~\ref{fig2}b, we found 14 ($\approx 2.64$ in log-scale) to be the second threshold for the $\eta=0.1$ case of the ljournal-2008 dataset. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.55]{fig12.png} \end{center} \caption{a) Distribution of $\sum p_i$ for ljournal-2008, b) Distribution of initial $\eta$-degree for ljournal-2008 with $\eta=0.1$.} \label{fig2} \end{figure} With the two extra threshold arguments determined for M-PA, we conducted algorithm efficiency experiments; the results are illustrated in Fig.~\ref{runningtime}.
It can be seen that in an overwhelming majority of cases, M-PA is faster than PA. The worst cases are Biomine with $\eta=0.1$ and $\eta=0.3$, where M-PA finished computation in the same time as PA. This is possibly due to the choice of screening thresholds: to ensure fair comparisons, we used only a single fixed method for threshold exploration. As stated before, in real cases, for different datasets and $\eta$, we might need to adjust the screening thresholds using different methods in order to achieve optimal results for M-PA. \begin{figure*} \centering \subfloat{ \begin{tikzpicture} \begin{axis}[width=3.8cm,height=3.5cm, xtick pos=left, ytick pos=left, title={\textbf{Flickr}}, xlabel={\textbf{$\eta$}}, xlabel style={font=\fontsize{7}{7}\selectfont}, symbolic x coords={0.1,0.3,0.5,0.7,0.9}, xticklabel style={rotate=-45}, ytick distance =1, xtick={0.1,0.3,0.5,0.7,0.9}, ylabel={\textbf{Time (s)}}, ylabel near ticks, ticklabel style = {font=\fontsize{7}{6}\selectfont}, legend style={draw=none,nodes={scale=0.7}}, ] \addplot[ color=blue, mark=square, ] coordinates { (0.1,6)(0.3,4)(0.5,4)(0.7,5)(0.9,3) }; \addplot[ color=red, mark=*, ] coordinates { (0.1,3)(0.3,3)(0.5,2)(0.7,2)(0.9,2) }; \end{axis} \end{tikzpicture} } \hspace{0.01cm} \subfloat{ \begin{tikzpicture} \begin{axis}[width=3.8cm,height=3.5cm, xtick pos=left, ytick pos=left, title={\textbf{DBLP}}, xlabel={\textbf{$\eta$}}, xlabel style={font=\fontsize{7}{7}\selectfont}, symbolic x coords={0.1,0.3,0.5,0.7,0.9}, xticklabel style={rotate=-45}, xtick={0.1,0.3,0.5,0.7,0.9}, ylabel near ticks, ticklabel style = {font=\fontsize{7}{6}\selectfont}, legend style={draw=none,nodes={scale=0.7}}, ] \addplot[ color=blue, mark=square, ] coordinates { (0.1,8)(0.3,8)(0.5,8)(0.7,8)(0.9,8) }; \addplot[ color=red, mark=*, ] coordinates { (0.1,3)(0.3,3)(0.5,2)(0.7,3)(0.9,3) }; \end{axis} \end{tikzpicture} } \hspace{0.01cm} \subfloat{ \begin{tikzpicture} \begin{axis}[width=3.8cm,height=3.5cm, xtick pos=left, ytick pos=left, title={\textbf{Biomine}}, xlabel={\textbf{$\eta$}}, xlabel style={font=\fontsize{7}{7}\selectfont}, ymin=50, ytick distance =2, symbolic x coords={0.1,0.3,0.5,0.7,0.9}, xticklabel style={rotate=-45}, ticklabel style = {font=\fontsize{7}{6}\selectfont}, xtick={0.1,0.3,0.5,0.7,0.9}, ylabel near ticks, legend style={draw=none,nodes={scale=0.7}}, ] \addplot[ color=blue, mark=square, ] coordinates { (0.1,53)(0.3,55)(0.5,56)(0.7,58)(0.9,60) }; \addplot[ color=red, mark=*, ] coordinates { (0.1,53)(0.3,55)(0.5,54)(0.7,56)(0.9,52) }; \end{axis} \end{tikzpicture} } \hspace{0.01cm} \subfloat{ \begin{tikzpicture} \begin{axis}[width=3.8cm,height=3.5cm, xtick pos=left, ytick pos=left, title={\textbf{itwiki-2013}}, xlabel={\textbf{$\eta$}}, xlabel style={font=\fontsize{7}{7}\selectfont}, symbolic x coords={0.1,0.3,0.5,0.7,0.9}, xticklabel style={rotate=-45}, xtick={0.1,0.3,0.5,0.7,0.9}, ylabel near ticks, ticklabel style = {font=\fontsize{7}{6}\selectfont}, legend style={draw=none,nodes={scale=0.7}}, ] \addplot[ color=blue, mark=square, ] coordinates { (0.1,97)(0.3,98)(0.5,99)(0.7,98)(0.9,102) }; \addplot[ color=red, mark=*, ] coordinates { (0.1,63)(0.3,64)(0.5,64)(0.7,62)(0.9,62) }; \end{axis} \end{tikzpicture} } \hspace{0.01cm} \subfloat{ \begin{tikzpicture} \begin{axis}[width=3.8cm,height=3.5cm, xtick pos=left, ytick pos=left, title={\textbf{uk-2014-tpd}}, xlabel={\textbf{$\eta$}}, xlabel style={font=\fontsize{7}{7}\selectfont}, symbolic x coords={0.1,0.3,0.5,0.7,0.9}, xticklabel style={rotate=-45}, xtick={0.1,0.3,0.5,0.7,0.9}, ylabel={\textbf{Time (s)}},
ylabel near ticks, ticklabel style = {font=\fontsize{7}{6}\selectfont}, legend style={draw=none,nodes={scale=0.7}}, ] \addplot[ color=blue, mark=square, ] coordinates { (0.1,81)(0.3,79)(0.5,81)(0.7,81)(0.9,81) }; \addplot[ color=red, mark=*, ] coordinates { (0.1,77)(0.3,77)(0.5,76)(0.7,75)(0.9,75) }; \end{axis} \end{tikzpicture} } \hspace{0.01cm} \subfloat{ \begin{tikzpicture} \begin{axis}[width=3.8cm,height=3.5cm, xtick pos=left, ytick pos=left, title={\textbf{enwiki-2013}}, xlabel={\textbf{$\eta$}}, xlabel style={font=\fontsize{7}{7}\selectfont}, symbolic x coords={0.1,0.3,0.5,0.7,0.9}, xticklabel style={rotate=-45}, xtick={0.1,0.3,0.5,0.7,0.9}, ylabel near ticks, ticklabel style = {font=\fontsize{7}{6}\selectfont}, legend style={draw=none,nodes={scale=0.7}}, ] \addplot[ color=blue, mark=square, ] coordinates { (0.1,351)(0.3,336)(0.5,357)(0.7,358)(0.9,355) }; \addplot[ color=red, mark=*, ] coordinates { (0.1,233)(0.3,236)(0.5,224)(0.7,226)(0.9,228) }; \end{axis} \end{tikzpicture} } \hspace{0.01cm} \subfloat{ \begin{tikzpicture} \begin{axis}[width=3.8cm,height=3.5cm, legend pos=outer north east, xtick pos=left, ytick pos=left, title={\textbf{ljournal-2008}}, xlabel={\textbf{$\eta$}}, xlabel style={font=\fontsize{7}{7}\selectfont}, xticklabel style={rotate=-45}, xtick={0.1,0.3,0.5,0.7,0.9}, ticklabel style = {font=\fontsize{7}{6}\selectfont}, ylabel near ticks, legend entries={\textbf{PA},\textbf{M-PA}}, legend style={font=\fontsize{7.5}{6}\selectfont,draw=none,nodes={scale=0.6}}, ] \addplot[ color=blue, mark=square, ] coordinates { (0.1,158)(0.3,166)(0.5,168)(0.7,170)(0.9,178) }; \addplot[ color=red, mark=*, ] coordinates { (0.1,116)(0.3,122)(0.5,121)(0.7,121)(0.9,118) }; \end{axis} \end{tikzpicture} } \caption{Running time of probabilistic core decomposition: PA vs M-PA.} \label{runningtime} \end{figure*} \subsection{Quality Evaluation} In this section, we evaluate results from PA and M-PA in terms of graph cohesiveness. The metrics we used are probabilistic density (PD) and probabilistic clustering coefficient (PCC)~\cite{huang2016truss}. These metrics are defined as follows: \begin{equation} \mathrm{PD}(\mathcal{G})=\frac{\sum_{e \in E} p_e}{\frac{1}{2}|V| \cdot(|V|-1)} \end{equation} \begin{equation} \operatorname{PCC}(\mathcal{G})=\frac{3 \sum_{\Delta_{u v w} \in \mathcal{G}} p(u, v) \cdot p(v, w) \cdot p(u, w)}{\sum_{(u, v),(u, w), v \neq w} p(u, v) \cdot p(u, w)} \end{equation} Simply put, PD is the sum of all edge probabilities in the graph divided by the maximum possible number of edges in the graph. PCC, on the other hand, measures the degree to which nodes in the graph tend to cluster together. We use $\eta=0.1$, $\eta=0.5$, and $\eta=0.9$, and we report the PD and PCC results for the maximum core (i.e. the densest subgraph) obtained from running PA and M-PA on Flickr, DBLP, Biomine, itwiki-2013, uk-2014-tpd, enwiki-2013, and ljournal-2008 in Table~\ref{table:03}. Sometimes, for a given core number, we discover several connected components in the results, so we report the average PD and PCC instead of the maximum.
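For reference, a short sketch of these two metrics follows (our illustration, assuming the subgraph is given as a map from vertex pairs to edge probabilities). Summing the triangle term over all wedge centers counts each triangle three times, which realizes the factor 3 in the PCC numerator:

\begin{verbatim}
def pd_pcc(n, edge_probs):
    # n: number of vertices; edge_probs: dict mapping frozenset({u, v})
    # to the existence probability p(u, v).
    pd = sum(edge_probs.values()) / (0.5 * n * (n - 1))
    nbrs = {}
    for e, p in edge_probs.items():
        u, v = tuple(e)
        nbrs.setdefault(u, {})[v] = p
        nbrs.setdefault(v, {})[u] = p
    tri = wedge = 0.0
    for u, nb in nbrs.items():       # enumerate wedges centered at u
        vs = list(nb)
        for i in range(len(vs)):
            for j in range(i + 1, len(vs)):
                v, w = vs[i], vs[j]
                pw = nb[v] * nb[w]
                wedge += pw
                p_vw = edge_probs.get(frozenset((v, w)))
                if p_vw is not None:  # the wedge closes into a triangle
                    tri += pw * p_vw
    pcc = tri / wedge if wedge > 0.0 else 0.0
    return pd, pcc
\end{verbatim}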
\definecolor{Blue1}{HTML}{2e1f96} \definecolor{Blue2}{HTML}{6d62b6} \definecolor{Blue3}{HTML}{978fcb} \definecolor{Blue4}{HTML}{d5d2ea} \def\arraystretch{1.4} \begin{table*} \vspace{0.3cm} \centering \caption{Cohesiveness statistics from the original PA (O) and M-PA (M) on Flickr, DBLP, Biomine, itwiki-2013, uk-2014-tpd, enwiki-2013, and ljournal-2008} \begin{tabular}{ccccc} \hline Graph & $\eta$ & $Coreness_{O\_max}$/$Coreness_{M\_max}$ & $PD_{O\_avg}$/$PD_{M\_avg}$ & $PCC_{O\_avg}$/$PCC_{M\_avg}$ \\ \hline\hline Flickr & 0.1 & 46/27 & \cellcolor{Blue4!95}1.0/0.871 & \cellcolor{Blue4!95}1.0/0.872 \\ & 0.5 & 46/25 & \cellcolor{Blue4!95}1.0/0.871 & \cellcolor{Blue4!95}1.0/0.872 \\ & 0.9 & 46/23 & \cellcolor{Blue4!95}1.0/0.871 & \cellcolor{Blue4!95}1.0/0.872 \\ DBLP & 0.1 & 26/26 & \cellcolor{Blue1!46}0.264/0.264 & \cellcolor{Blue1!46}0.317/0.317 \\ & 0.5 & 21/21 & \cellcolor{Blue1!46}0.264/0.264 & \cellcolor{Blue1!46}0.317/0.317 \\ & 0.9 & 16/16 & \cellcolor{Blue1!46}0.419/0.419 & \cellcolor{Blue1!46}0.441/0.441 \\ Biomine & 0.1 & 79/79 & \cellcolor{Blue1!46}0.212/0.212 & \cellcolor{Blue1!46}0.218/0.218 \\ & 0.5 & 70/70 & \cellcolor{Blue1!46}0.227/0.227 & \cellcolor{Blue1!46}0.230/0.230 \\ & 0.9 & 60/60 & \cellcolor{Blue1!46}0.216/0.216 & \cellcolor{Blue1!46}0.221/0.221 \\ itwiki-2013 & 0.1 & 118/117 & \cellcolor{Blue2!46}0.203/0.202 & \cellcolor{Blue4!95}0.035/0.031 \\ & 0.5 & 110/108 & \cellcolor{Blue2!46}0.202/0.199 & \cellcolor{Blue4!95}0.036/0.031 \\ & 0.9 & 102/101 & \cellcolor{Blue2!46}0.203/0.202 & \cellcolor{Blue4!95}0.035/0.031 \\ uk-2014-tpd & 0.1 & 257/257 & \cellcolor{Blue1!46}0.359/0.359 & \cellcolor{Blue1!46}0.361/0.361 \\ & 0.5 & 244/244 & \cellcolor{Blue1!46}0.359/0.359 & \cellcolor{Blue1!46}0.361/0.361 \\ & 0.9 & 231/231 & \cellcolor{Blue1!46}0.359/0.359 & \cellcolor{Blue1!46}0.361/0.361 \\ enwiki-2013 & 0.1 & 78/77 & \cellcolor{Blue3!46}0.122/0.111 & \cellcolor{Blue3!46}0.122/0.114 \\ & 0.5 & 70/70 & \cellcolor{Blue2!46}0.112/0.113 & \cellcolor{Blue2!46}0.115/0.116 \\ & 0.9 & 62/62 & \cellcolor{Blue2!46}0.110/0.111 & \cellcolor{Blue2!46}0.113/0.114 \\ ljournal-2008 & 0.1 & 156/156 & \cellcolor{Blue1!46}0.375/0.375 & \cellcolor{Blue1!46}0.378/0.378 \\ & 0.5 & 147/147 & \cellcolor{Blue1!46}0.379/0.379 & \cellcolor{Blue1!46}0.381/0.381 \\ & 0.9 & 138/138 & \cellcolor{Blue1!46}0.379/0.379 & \cellcolor{Blue1!46}0.381/0.381 \\ \hline \end{tabular} \label{table:03} \begin{tabular}{cccccccc} \textcolor{Blue1!46}{$\blacksquare$} & identical & \textcolor{Blue2!46}{$\blacksquare$} & within 2\% change& \textcolor{Blue3!46}{$\blacksquare$} & within 10\% change& \textcolor{Blue4!88}{$\blacksquare$} & within 15\% change \end{tabular} \vspace{0.3cm} \end{table*} Table~\ref{table:03} is coloured based on the level of change in the PD and PCC results between PA and M-PA, i.e. whether the PD/PCC results are identical for PA and M-PA, or within a 2\%, 10\%, or 15\% level of change. It can be seen that M-PA produces PD/PCC results very close, if not identical, to those of PA. The maximum level of change is within 15\%, and for DBLP, Biomine, uk-2014-tpd, and ljournal-2008, PA and M-PA produced identical PD and PCC scores. This is expected, since the final goal underlying our modification to the original peeling algorithm is to make it focus more on dense cores and remove the nodes that do not belong to them. Ideally, when we select and remove nodes below the user-defined thresholds, the dense cores in the graph are not affected and the algorithm runs faster.
In the case of Flickr, itwiki-2013, and enwiki-2013, M-PA gives slightly different PD and PCC scores: e.g., for the $\eta=0.5$ case of Flickr, PA's maximum core is 12.9\% denser than M-PA's, while for the $\eta=0.5$ case of enwiki-2013, M-PA's maximum core is 0.89\% denser than PA's. Flickr is the smallest dataset of the seven and has the smallest average edge existence probability. Given this, the thresholds we used might be too strict for the Flickr dataset. However, PD and PCC scores of 0.87 are still very good, and a density of nearly 0.9 is completely acceptable. As for itwiki-2013, the PD score is nearly identical (within 2\% change) between PA and M-PA, and the PCC score is also very close. However, the PCC scores are extremely low, which could indicate that the nodes of this specific dataset do not tend to cluster together, hence the slightly larger relative difference in the PCC scores. Lastly, for enwiki-2013, at $\eta=0.1$ M-PA produced a slightly smaller maximum coreness, but the PD/PCC scores are still close to those of PA. Additionally, at $\eta=0.5$ and $\eta=0.9$, M-PA was able to produce a denser subgraph than PA. \section{Conclusion} We presented a multi-stage probabilistic graph peeling algorithm (M-PA) for core decomposition. A two-stage data filtering procedure was added to the original peeling algorithm (PA) to reduce the complexity of input graphs and increase the algorithm's efficiency. We compared M-PA and PA in terms of speed and showed that M-PA is generally faster than PA. After evaluating the cohesiveness of the results of M-PA and PA, we concluded that M-PA, when equipped with proper data screening thresholds, produces subgraph density very comparable, if not identical, to that of the original PA, while being more efficient. \balance \section*{Acknowledgment} This work was enabled in part by support provided by WestGrid and Compute Canada. \bibliographystyle{ieeetr}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:intro} The fundamental nature of non-baryonic Dark Matter is one of the most pressing topics in contemporary Particle Physics. While Dark Matter is invoked in most cosmological models, and while its existence, essentially inferred through gravitational effects, as well as its distribution and total abundance on cosmological scales, are quantitatively established with increasing and remarkable accuracy, little is known about the elementary constituent of this elusive and yet so substantial component of the cosmic budget \cite{dmreviews}. Dark Matter candidates have been proposed in several extensions of the Standard Model of particle physics, or in purely phenomenological settings, motivated, for instance, by astrophysical observations with no obvious known source counterparts; the range of particle masses proposed in the literature spans many orders of magnitude (see, {\em e.g.}, \cite{Baltz:2004tj}). A sensible rationale to distinguish among different Dark Matter candidates emerges in the nature of the process invoked to explain why a given candidate should have a relic abundance close to the abundance of Dark Matter we infer from observations today. Along this line of reasoning, weakly interacting massive particles (WIMPs) stand as excellent prototypes. Once in thermal equilibrium with the primordial particle thermal bath in the Early Universe, WIMPs undergo a {\em freeze-out} of their number density as their pair annihilation rate becomes smaller than the expansion rate of the Universe. After decoupling, the WIMP number density per comoving volume remains substantially constant up to the present epoch; their final relic abundance crucially depends upon the WIMP pair annihilation rate. Be it a fortuitous coincidence or not, when the WIMP pair annihilation cross section is comparable to a typical weak-interactions cross section, the estimated WIMP relic abundance $\Omega_\chi h^2$ is remarkably close to the actual inferred Dark Matter abundance, $\Omega_{\rm CDM}h^2\simeq 0.110$ \cite{Spergel:2006hy} (here and in the remainder of the paper, $\Omega_i$ indicates the ratio of the mean density of species $i$ over the critical density, and $h$ the normalized value of the Hubble expansion rate today, in units of 100 km/s/Mpc). The detailed dynamics of WIMP freeze-out depends crucially, however, on the specific nature of the particle physics setup at hand. For instance, the relic abundance of the lightest neutralino in minimal supersymmetric extensions of the Standard Model (MSSM), a paradigmatic WIMP, varies over several orders of magnitude, questioning whether achieving $\Omega_\chi\simeq\Omega_{\rm CDM}$ is in fact ``natural'' at all. Further, the chemical decoupling of WIMPs in the Early Universe can be complicated by the concomitant, and possibly {\em coupled}, decoupling of other particle species. The occurrence of the latter scenario, known in the jargon as {\em coannihilation} \cite{coan,Griest:1990kh}, can lead to very significant effects on the final WIMP relic abundance. The net effect of coannihilations crucially depends upon the size of the thermally averaged annihilation and coannihilation cross sections of the extra degrees of freedom participating in the freeze-out of the lightest, stable species, averaged over the total number of degrees of freedom (see the next section for a more detailed and quantitative description of coannihilations).
On top of this, since the abundance of non-relativistic species in thermal equilibrium approximately follows a Maxwell-Boltzmann distribution, the importance of coannihilation processes is exponentially suppressed by the relative mass splitting between the coannihilating particles' masses and the stable particle's mass. Evidently, depending upon the particle physics setup, coannihilations can be responsible for either an {\em increase} or a {\em decrease} in the final relic abundance of the stable species. In the widely studied context of minimal supergravity (mSUGRA)~\cite{msugra}, a special realization of the MSSM, coannihilations are often thought to be synonymous with a {\em suppression} of the neutralino relic abundance. In mSUGRA the neutralino is Bino-like ({\em i.e.} the lightest mass eigenstate almost coincides with the fermionic superpartner of the hypercharge gauge boson) over most of the parameter space of the theory; Binos pair annihilate rather inefficiently in the Early Universe, as they feature a suppressed coupling to gauge bosons, and the pair annihilation into fermion-antifermion states is helicity suppressed. As noted in numerous publications, one of the few regions of the mSUGRA parameter space where $\Omega_\chi\simeq\Omega_{\rm CDM}$ is possible lies at low values of the universal scalar soft supersymmetry breaking parameter $m_0$, where the lightest neutralino is close in mass to the lightest stau \cite{elliscoan,nihei}. There, stau coannihilations enhance the effective stau-neutralino annihilation rate around neutralino freeze-out. Since staus annihilate {\em more efficiently} than Binos, the final Bino relic abundance is {\em lower} than without stau coannihilations, and can be such that $\Omega_\chi\simeq\Omega_{\rm CDM}$. Slepton coannihilations were first explicitly studied in Ref.~\cite{elliscoan}, where the (co-)annihilation cross section was approximated, in the low-velocity expansion limit, in powers of the mass-over-temperature ratio. A more accurate calculation, taking into account the role of slepton mixing and the exact computation of the effective pair annihilation cross section, was then presented in Ref.~\cite{nihei}. Ref.~\cite{Edsjo:2003us} gave a nice example, again in the context of the mSUGRA paradigm, where slepton coannihilations {\em increase} the Bino relic abundance: if Binos resonantly annihilate through the $s$-channel resonant exchange of a heavy Higgs, adding coannihilating slepton degrees of freedom makes the total effective cross section, averaged over all degrees of freedom, smaller than without coannihilations. When Binos annihilate efficiently, sleptons act as {\em parasite degrees of freedom} at the lightest supersymmetric particle (LSP) freeze-out. Another illustrative example of the role of coannihilations in {\em enhancing} the final relic abundance of the stable particle comes from Universal Extra Dimensions (UED) \cite{Appelquist:2000nn}. In UED, the particle mass spectrum of Kaluza-Klein (KK) states is naturally highly degenerate, lying around a mass scale set by the inverse compactification radius $R^{-1}$. Coannihilations of the stable lightest KK particle (LKP) are therefore expected to play an important role.
As pointed out in Ref.~\cite{Servant:2002aq} and in subsequent refined analyses \cite{Kong:2005hn}, for realistic spectra the effect of coannihilations is to significantly {\em increase} the relic abundance of the LKP, corresponding, in those setups, to the $B^{(1)}$ (to a good approximation the $n=1$ KK excitation of the hypercharge gauge boson). This results in a reduction of the value of $R^{-1}$ such that $\Omega_{B^{(1)}}\simeq\Omega_{\rm CDM}$ by factors as large as 2, depending upon the details of the KK spectrum. Coannihilations with KK states featuring a smaller annihilation cross section than that of the $B^{(1)}$ itself, and close in mass to it, such as right-handed KK leptons, are responsible for this effect. In the present note we point out that, unlike the generic case of a Bino (barring fortuitous resonant annihilation channels), when the LSP is dominated by its Higgsino or Wino components, {\em i.e.} when the lightest mass eigenstate approximately corresponds to the fermionic superpartner of the neutral Higgses or of the SU(2) neutral gauge boson, the effect of slepton coannihilations is to {\em increase} the LSP relic abundance. The increase in $\Omega_\chi$ depends upon various circumstances (next-to-lightest neutralino and/or chargino coannihilations, sizable couplings to gauge bosons) that contribute to making the total effective Higgsino and Wino annihilation cross sections larger than those in the presence of slepton coannihilations. As a result, the mass $m_\chi$ of neutralinos such that $\Omega_\chi\simeq\Omega_{\rm CDM}$ is pushed to smaller values, and the pair annihilation cross sections $\langle\sigma v\rangle$ to larger ones. In turn, this implies {\em larger indirect Dark Matter detection rates}, as the latter are in general proportional to the combination $\langle\sigma v\rangle/m_\chi^2$. We show that the occurrence of slepton coannihilations in scenarios where the LSP is Higgsino- or Wino-like is perfectly viable, and actually takes place in several well motivated theoretical setups, where the induced degree of ``fine-tuning'' is generically not larger than that invoked in the context of the stau coannihilation region of mSUGRA. We start our analysis with a quantitative discussion of coannihilation processes, which leads us to a guiding analytical formula. We then focus on particular phenomenological MSSM setups, motivated by several examples of GUT scale completions which can lead to similar spectra, and determine the combinations of LSP masses and mass splittings between the LSP and the coannihilating sleptons such that the thermal neutralino relic abundance saturates the Dark Matter abundance (sec.~\ref{sec:thermal}). Finally, we make use of the models featuring $\Omega_\chi\simeq\Omega_{\rm CDM}$ with slepton coannihilations to estimate the resulting enhancement in various indirect Dark Matter detection rates (sec.~\ref{sec:detect}), and summarize our conclusions (sec.~\ref{sec:conclude}).
\section{Neutralino Thermal Relic Abundance and Slepton Coannihilations}\label{sec:thermal} The effective annihilation cross section for a system of $N$ (co-)annihilating particles $i$ of mass $m_i$, featuring a relative mass splitting, with respect to the stable lightest species $\chi$ of mass $m_\chi$, of \be \Delta_i\equiv\frac{m_i-m_\chi}{m_\chi} \ee is given by the expression \cite{Griest:1990kh} \be\label{eq:sigeff} \sigma_{\rm eff}=\sum_{i,j=1}^N\ \sigma_{ij}\frac{g_ig_j}{g_{\rm eff}^2}\left(1+\Delta_i\right)^{3/2}\left(1+\Delta_j\right)^{3/2}{\rm e}^{-x(\Delta_i+\Delta_j)}, \ee where $x\equiv m_\chi/T$, $T$ is the temperature, the $\sigma_{ij}$'s represent the various cross sections for annihilation of particles $i$ and $j$ into Standard Model particles, $g_i$ stands for the number of internal degrees of freedom associated with particle $i$, and \be\label{eq:geff} g_{\rm eff}\equiv\sum_{i=1}^{N}g_i\left(1+\Delta_i\right)^{3/2}{\rm e}^{-x\Delta_i}. \ee Eqs.~(\ref{eq:sigeff}) and (\ref{eq:geff}) illustrate quantitatively the two points we alluded to in the Introduction: (1) the effective annihilation cross section relevant for the relic $\chi$ abundance can be increased or decreased as a result of extra coannihilating partners, according to the relative size of the $\chi$ pair annihilation cross section and the (co-)annihilation cross sections of the coannihilating partners; (2) the effect of coannihilations depends exponentially upon the ratio $\Delta_i$, times a factor accounting for the actual $\chi$ freeze-out temperature. In the present context, we deal with a situation where the effective $\chi$ pair annihilation cross section is larger than that of its coannihilating partners, the sleptons. The Wino and Higgsino effective annihilation cross sections without slepton coannihilations actually result from a combination of the various contributing (co-)annihilation cross sections of the lightest neutralino and the lightest chargino, as well as, for the case of a Higgsino LSP, of the next-to-lightest neutralino (this depends on the neutralino $\chi_i$ and chargino $\chi^\pm_i$ mass spectrum: in the case of a Wino LSP, $m_{\chi_1}\simeq m_{\chi^\pm_1}\simeq M_2$, and in the case of a Higgsino LSP $m_{\chi_1}\simeq m_{\chi_2}\simeq m_{\chi^\pm_1}\simeq \mu$). In most MSSM realizations, the resulting overall Wino and Higgsino effective cross section is {\em larger} than the slepton pair annihilation cross sections and than the slepton-neutralino and slepton-chargino coannihilation cross sections. In this context, it is easy to draw a rough theoretical estimate of the relative enhancement of the thermal relic abundance $\Omega_\chi$ in the presence of parasite degrees of freedom associated to a set of coannihilating particles $\tilde L$ (in our case, the sleptons), all assumed to lie at the same mass scale $m_{\tilde L}$, for simplicity. We shall hereafter indicate the relative mass splitting as $\Delta_{\tilde L}\equiv(m_{\tilde L}-m_\chi)/m_\chi$. Suppose the total effective neutralino annihilation cross section (including, in the case of Higgsinos and of Winos, the contribution of the next-to-lightest neutralino and/or of the lightest chargino) without the contribution of parasite particles $\tilde L$ ($\Delta_{\tilde L}\gg1$) is given by $\sigma^0_{\rm eff}=\sigma_{\chi\chi}$.
The assumption that the extra coannihilating degrees of freedom associated to $\tilde L$ act as ``{\em parasite}'' degrees of freedom quantitatively amounts to having $\sigma_{\chi\chi}\gg\sigma_{\chi\tilde L},\sigma_{\tilde L\tilde L}$, where we indicate with $\sigma_{\chi\tilde L},\sigma_{\tilde L\tilde L}$ the $\tilde L$ coannihilation and self-annihilation effective cross sections, respectively. Denoting with $g^0_{\rm eff}$ the effective degrees of freedom when the $\tilde L$ particles are much heavier than the LSP ($\Delta_{\tilde L}\gg1$), the new effective total annihilation cross section $\sigma_{\rm eff}$ can be expressed as a function of the effective degrees of freedom $g_{\rm eff}$ including the $\tilde L$ particles as \be \sigma_{\rm eff}\simeq\sigma_{\rm eff}^0\left(\frac{g^0_{\rm eff}(x_{\rm f.o.})}{g_{\rm eff}(x_{\rm f.o.})}\right)^2 \ee where $x_{\rm f.o.}$ corresponds to temperatures around the $\chi$ freeze-out, $T_{\rm f.o.}\approx m_\chi/25$. Conversely, the relative enhancement in the $\chi$ relic abundance will be approximately given by \be \frac{\Omega_\chi}{\Omega^0_\chi}\simeq\left(\frac{g_{\rm eff}(x_{\rm f.o.})}{g^0_{\rm eff}(x_{\rm f.o.})}\right)^2\approx\left(\frac{g^0_{\rm eff}(x_{\rm f.o.})+ g_{\tilde L} \left(1+\Delta_{\tilde L}\right)^{3/2}{\rm e}^{-x_{\rm f.o.}\Delta_{\tilde L}}}{g^0_{\rm eff}(x_{\rm f.o.})}\right)^2,\label{eq:master} \ee where we indicate with $g_{\tilde L}$ the total number of internal degrees of freedom associated with the $\tilde L$ particles. In the case under investigation here, $g^0_{\rm eff}(x_{\rm f.o.})\approx6,8$ in the Wino and Higgsino case, respectively (recalling that every neutralino carries 2 internal degrees of freedom, while every chargino carries 4, and neglecting the mass splitting between the lightest neutralino and the next-to-lightest neutralino and lightest chargino), while $g_{\tilde L}=2,4,18$ when only the SU(2) singlet third generation slepton, the SU(2) doublet third generation sleptons, or all the sleptons are coannihilating, respectively (for conciseness, we shall indicate in the figure labels throughout the present paper the quantity $g_{\tilde L}$ simply with $g$). To make quantitative estimates of the neutralino relic density enhancement, we need a specific MSSM setup; we then compute $\Omega_\chi h^2$ numerically, making use of publicly available codes (namely, {\tt DarkSUSY} \cite{Gondolo:2004sc} and {\tt micrOMEGAs} \cite{Belanger:2006is}). To this end, we consider, for the Higgsino-like neutralino case, a value of $\mu=800$ GeV, $M_1=5\mu$ and an mSUGRA-motivated hierarchy among the gaugino masses at the low-energy scale ($M_2=2M_1$, $M_3=6M_1$). For the Wino-like case, we resort instead to a minimal anomaly-mediated supersymmetry breaking (mAMSB) inspired setting \cite{amsb} for the gaugino masses ($M_1=3M_2$, $M_3=8M_2$), and set $M_2=1.2$ TeV and $\mu=5M_2$. In both cases we set $\tan\beta=20$ and $m_A,\ m_{\rm Squarks}\approx 10\times m_{\chi}$. The latter choice is motivated by avoiding, in the following discussion, spurious effects deriving from squark coannihilations. Unlike sleptons, strongly interacting squarks potentially feature a larger effective cross section than the neutralino/chargino systems of Wino- and Higgsino-like neutralinos; therefore the effect we discuss here does not apply when the LSP coannihilates with squarks.
Moreover, we take a large value for $m_A$ in order to forbid resonant neutralino annihilations through $s$-channel $A$ exchange diagrams, which could potentially blur the effect under investigation in the present analysis. The masses of the sleptons which are not assumed to coannihilate are also set to the same value as $m_{\rm Squarks}$. The setup we refer to is motivated by several theoretical studies discussed in the literature. In the context of mSUGRA, the LSP can be Higgsino-like in the Focus Point/Hy\-per\-bo\-lic Branch region, at very large values of the common supersymmetry breaking scalar mass $m_0$ \cite{Baer:2005ky}. In this case, however, sleptons are very heavy and cannot coannihilate with the LSP. Going beyond mSUGRA, and relaxing some of the universality assumptions on the soft supersymmetry breaking terms, drastically changes the situation. Ref.~\cite{Baer:2005bu} addressed the case of non-universality in the soft breaking Higgs masses (non-universal Higgs mass, NUHM, model); in that context, a Higgsino-like LSP is naturally achieved for arbitrarily small sfermion masses, and slepton coannihilations with Higgsinos can very well take place. As pointed out in \cite{Baer:2005bu}, in NUHM models the usual mSUGRA hierarchy $m_{\tilde \tau_R}<m_{\tilde \tau_L}$ between right and left handed sfermions can be subverted, and left-handed sleptons can be lighter than their right-handed counterparts, see {\em e.g.} their Fig.~11. Special values of the Higgs soft breaking masses even allow for a quasi-degeneracy of the full slepton spectrum. In this respect, one can therefore expect several slepton coannihilation scenarios: the right-handed stau alone ($g\approx2$), all right-handed sleptons ($g\approx6$), left-handed third generation sleptons ($g\approx 4$), all left-handed sleptons ($g\approx12$) or even the extreme situation of all sleptons ($g\approx18$). Relaxing the universality of gaugino masses at the grand unification (GUT) scale, again within mSUGRA, also naturally leads to a Higgsino LSP \cite{nugmh}, as well as to a Wino-like LSP \cite{nugmw} (non-universal gaugino mass (NUGM) models; see also \cite{othernugm}). Values of $\mu$ smaller than $M_{1,2}$ can be achieved setting $M_3$ smaller than $M_1=M_2=M_{1/2}$ at the GUT scale (where $M_{1/2}$ stands for the mSUGRA universal gaugino soft supersymmetry breaking mass parameter), through renormalization group evolution \cite{nugmh}. Retaining the universality assumption in the scalar sector, in the NUGM model the lightest slepton is the right handed stau, with the lightest (right-handed) smuon and selectron relatively close in mass. One therefore expects a value of $g$ between 2 and 6 for full slepton coannihilations. \FIGURE[!t]{ \mbox{\hspace*{-0.5cm}\epsfig{file=figures/hgraph1.eps,width=7.5cm}\qquad \epsfig{file=figures/wgraph1.eps,width=7.5cm}} \caption{\label{fig1}The LSP thermal relic abundance, as a function of the percent mass splitting between the LSP and the coannihilating sleptons. The case of a $m_\chi=800$ GeV Higgsino-like neutralino is shown in the left panel, while that of a $m_\chi=1200$ GeV Wino-like neutralino is featured in the right panel. The horizontal bands indicate the range of $\Omega_\chi h^2$ corresponding to the abundance of cold Dark Matter inferred by the WMAP team for a $\Lambda$CDM cosmology at 2-$\sigma$ \cite{Spergel:2006hy}.
The label $g$ stands for the number of coannihilating slepton degrees of freedom (see the text for more details).}} Our reference setup for Wino-like neutralinos will however be that of mAMSB \cite{amsb}. The nature of the LSP in mAMSB scenarios is determined by the gaugino soft supersymmetry breaking masses being proportional to the associated gauge group beta functions times the gravitino mass, and features, typically, a Wino-like LSP (a Higgsino-like LSP is also possible, through the analogue of the focus point effect in mSUGRA). The problem of negative slepton squared masses is solved, in the context of mAMSB, through a common phenomenological scalar mass parameter $m_0^2$. Within this setup, the lightest sfermion is the right-handed stau, although in some parameter space regions the two staus can be significantly close in mass. Selectrons and smuons always tend to be very close in mass. The absolute value of $m_0^2$ allows one to naturally obtain slepton coannihilations with a Wino-like LSP. Relaxing, here, the assumption of universality for the phenomenological parameter $m_0^2$ easily entails all possible slepton coannihilation patterns, suitably adjusting {\em e.g.} the values of the left and right handed parameters $(m_0^2)_{L,R}$ for the slepton sector. We thus conclude that slepton coannihilations with Higgsinos and Winos are a perfectly viable possibility in several theoretically motivated supersymmetric setups. For computational ease, we resort here to a handier low-energy scale parameterization, which, however, captures the main features of the general problem in more generic scenarios. In Fig.~\ref{fig1} we show the neutralino relic density, computed with the {\tt micrOMEGAs} code, in the case of a Wino ($g^0_{\rm eff}(x_{\rm f.o.})=6$) and in the case of a Higgsino ($g^0_{\rm eff}(x_{\rm f.o.})=8$), as a function of $\Delta_{\tilde L}$, for the two Higgsino- and Wino-like neutralino reference supersymmetric setups discussed above. In both cases the injection of the parasite slepton degrees of freedom (we focus, here and in what follows, on the cases $g=2,4$ and 18) enhances the thermal relic density up to values in the 2-$\sigma$ WMAP allowed region. The increase in the relic abundance, down to a relative mass splitting of the order of 1\%, can be as large as a factor of 5, when all sleptons participate in the coannihilation process. \FIGURE[!t]{ \mbox{\hspace*{-0.5cm}\epsfig{file=figures/hgraph2.eps,width=7.5cm}\qquad \epsfig{file=figures/wgraph2.eps,width=7.5cm}} \caption{\label{fig2} The thermal relic abundance $\Omega_\chi h^2$ as a function of the lightest neutralino mass in the extreme case of vanishing mass splitting between the LSP and the (coannihilating) sleptons, for a Higgsino-like neutralino (left panel) and a Wino-like neutralino (right panel). The horizontal band indicates the range of $\Omega_\chi h^2$ corresponding to the abundance of cold Dark Matter inferred by the WMAP team \cite{Spergel:2006hy}.}}
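Before examining the accuracy of the analytical estimate, it is instructive to evaluate Eq.~(\ref{eq:master}) numerically; the short sketch below (ours, purely illustrative) does so for the Higgsino-like case with all sleptons coannihilating at a 1\% mass splitting, and, as anticipated, it overshoots the factor $\sim5$ found numerically in Fig.~\ref{fig1}:

\begin{verbatim}
from math import exp

def relic_enhancement(g0_eff, g_slep, delta, x_fo=25.0):
    # Relative enhancement of the relic abundance, Omega/Omega_0, from
    # the approximate master formula: parasite slepton degrees of
    # freedom g_slep at relative mass splitting delta, x_fo = m_chi/T_fo.
    g_eff = g0_eff + g_slep * (1.0 + delta) ** 1.5 * exp(-x_fo * delta)
    return (g_eff / g0_eff) ** 2

# Higgsino-like LSP (g0_eff ~ 8), all sleptons (g = 18), 1% splitting:
print(relic_enhancement(8.0, 18.0, 0.01))  # ~ 7.7
\end{verbatim}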
In particular, in the theoretical estimate of Eq.~(\ref{eq:master}) we neglect the annihilation and coannihilation cross sections for sleptons, hence overestimating the thermal relic density enhancement; secondly, we set $g^0_{\rm eff}=6$ (or $8$) while the real value is generically smaller; lastly, assuming a putative value $x_{\rm f.o.}=25$ does not always match the actual numerical value for the freeze-out temperature. In any case, we stress that Eq.~(\ref{eq:master}) provides us with a useful analytical insight and a qualitative prediction for the effect we are focusing on in the present analysis. We also point out that in the right panel of Fig.~\ref{fig1}, {\em i.e.} for the Wino case, we find a non-negligible enhancement also for values of the mass splitting well beyond the level of $\approx10$\%, up to which one would expect effects related to coannihilations from the discussion above and from Eq.~(\ref{eq:master}). This fact is traced back to the kinematic enhancement in the chargino and neutralino pair annihilation cross sections in the presence of lighter sleptons, and has actually nothing to do with slepton coannihilations. \FIGURE[!t]{ \mbox{\hspace*{-0.5cm}\epsfig{file=figures/hgraph3.eps,width=7.5cm}\qquad \epsfig{file=figures/wgraph3.eps,width=7.5cm}} \caption{\label{fig3} Isolevel curves of the lightest neutralino relic abundance corresponding to a neutralino thermal relic abundance $\Omega_\chi h^2$ equal to the central value for $\Omega_{\rm CDM}h^2\simeq 0.110$, in the plane defined by the LSP mass versus the relative mass splitting between the LSP and the (coannihilating) sleptons, for a Higgsino-like neutralino (left panel) and for a Wino-like neutralino (right panel).}} In Fig.~\ref{fig2} we show the neutralino thermal relic abundance as a function of the LSP mass in the extreme case of coannihilating particles completely degenerate, in mass, with the lightest neutralino. Hereafter, the neutralino mass is varied keeping the ratios between $\mu$ and the gaugino soft supersymmetry breaking masses fixed at the values corresponding to our two reference models. The other supersymmetric parameters are kept fixed. In this way, spurious effects originating from the details of the neutralino composition are expected to be minimized, while our hypotheses on the supersymmetric setup are kept simple enough. We notice that the upper bound on the LSP mass from its thermal relic abundance is significantly lowered. For example, even in the case of a Higgsino-like neutralino coannihilating with the third generation right handed slepton alone, the upper limit on the LSP mass is about 20\% smaller with respect to the case without coannihilations; in the most extreme case of coannihilations with all sleptons, the effect amounts to a suppression in the upper limit on the LSP mass of a factor close to 4. We clarify and detail this point in Fig.~\ref{fig3}, where we plot, in the $(m_\chi,\,\Delta_{\tilde L})$ plane, the isolevel curves at $\Omega_{\rm CDM} h^2\simeq\Omega_\chi h^2\simeq0.110$. Points on the curves shown feature the ``right'' neutralino thermal relic abundance. In the case of a mass splitting of 1\% and all sleptons coannihilating, the upper bound on the LSP mass is about one half of the LSP mass value without slepton coannihilations, both for the Higgsino- and the Wino-like case; this effect is less spectacular, but also appreciable, in the case of third generation right handed slepton coannihilations ($g=2$) or third generation left handed slepton coannihilations ($g=4$).
\section{The Enhancement of Indirect Dark Matter Detection Rates}\label{sec:detect} Numerous theoretical and experimental efforts have been directed in recent years toward the possibility of inferring the existence of galactic (or even extra-galactic) particle Dark Matter through the presence of exotic ``signatures'' in the stable end-products of Dark Matter pair annihilations (for reviews on the topic see {\em e.g.} Ref.~\cite{dmreviews}). In particular, Dark Matter pair annihilations in the Galactic Halo can yield sizable positron and antiproton fluxes, which might be disentangled from the cosmic ray secondary and tertiary backgrounds (see {\em e.g.} \cite{Hooper:2004bq, Profumo:2004ty}); low-energy antideuterons are also among the stable hadronization products of pair annihilations of neutralinos, or other WIMPs, in the Halo, and suffer from a relatively small background \cite{Baer:2005tw}. Neutralinos captured in the core of the Sun or of the Earth through scattering with ordinary matter and subsequent gravitational collapse can pair annihilate and produce a coherent and possibly detectable flux of energetic neutrinos \cite{dmreviews}. Finally, gamma rays from the decay of hadrons produced in pair annihilations of neutralinos, or promptly produced at a monochromatic energy in loop-suppressed processes, are also among the promising indirect detection channels \cite{dmreviews}. \FIGURE[!t]{ \mbox{\hspace*{-0.5cm}\epsfig{file=figures/hgraph4.eps,width=7.5cm}\qquad \epsfig{file=figures/wgraph4.eps,width=7.5cm}} \caption{\label{fig4} The relative enhancement (with respect to the asymptotic value with decoupled heavy sleptons) in the quantity $\Theta\equiv\langle\sigma v\rangle/m_\chi^2$, relevant for all indirect Dark Matter detection rates, as a function of the slepton-LSP percent mass splitting, for the case of a Higgsino-like neutralino ({left panel}) and a Wino-like neutralino ({right panel}). The models displayed are those at $\Omega_\chi h^2=0.110$ singled out in Fig.~\protect{\ref{fig3}}, with the same sample choice of parameters and the same line-type and color coding.}} The most crucial particle physics quantity entering generic indirect particle Dark Matter detection rates is the pair-annihilation rate today ({\em i.e.} at ``zero temperature''), which multiplies integrals involving the number density of Dark Matter pairs. In turn, this latter quantity, for a fixed Dark Matter {\em density} profile, scales with the inverse square of the Dark Matter particle mass. In Fig.~\ref{fig4} we show the enhancement of the quantity $\Theta=\left<\sigma v\right>/m^2_\chi$, computed with {\tt DarkSUSY}~\cite{Gondolo:2004sc}, with respect to the case without slepton coannihilations, as a function of the relative percent mass splitting between the LSP mass and the coannihilating particle masses. We show, again, the models featuring a neutralino thermal relic abundance $\Omega_\chi h^2\simeq0.110$ determined in the previous Fig.~\ref{fig3}. We wish to emphasize that in the extreme case of all sleptons coannihilating, the generic enhancement with respect to the asymptotic values is remarkable. We therefore expect significant improvements in the prospects for indirect Dark Matter detection within the present setup. In what follows, we briefly review the size of the enhancement expected for several different indirect detection techniques.
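Before turning to the individual channels, the scaling just described can be written out explicitly (a one-line rearrangement; the superscript $0$ denotes the asymptotic benchmark without slepton coannihilations, in our notation): for a fixed halo profile $\rho(\vec{x})$, any annihilation-induced rate obeys
\[
\Gamma_{\rm ann}\,\propto\,\langle\sigma v\rangle \int {\rm d}V\,\left(\frac{\rho(\vec{x})}{m_\chi}\right)^2
\qquad\Longrightarrow\qquad
\frac{\Gamma_{\rm ann}}{\Gamma^0_{\rm ann}}=\frac{\Theta}{\Theta^0}=\frac{\langle\sigma v\rangle}{\langle\sigma v\rangle^0}\left(\frac{m^0_\chi}{m_\chi}\right)^2,
\]
so the $\sim2$ reduction in $m_\chi$ found above already contributes a factor $\sim4$ to the enhancement, on top of the increase in the pair-annihilation cross section itself.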
\FIGURE[!t]{ \mbox{\hspace*{-0.5cm}\epsfig{file=figures/hgrapneubis.eps,width=7.5cm}\qquad \epsfig{file=figures/wgraphneubis.eps,width=7.5cm}} \caption{\label{fig5} The relative enhancement (with respect to the asymptotic value with decoupled heavy sleptons) in the muon flux induced by energetic neutrinos from the Sun ($E_\mu>50$ GeV), as a function of the slepton-LSP percent mass splitting, for the case of a Higgsino-like neutralino ({left panel}) and a Wino-like neutralino ({right panel}). The models displayed are those at $\Omega_\chi h^2=0.110$ singled out in Fig.~\protect{\ref{fig3}}, with the same sample choice of parameters and the same line-type and color coding.}} In Fig.~\ref{fig5} we show the enhancement of the muon flux $\Phi_{\mu}$ induced by neutralinos annihilating in the core of the Sun and producing a flux of energetic neutrinos, with respect to the case without slepton coannihilations, again as a function of the relative percent mass splitting. We employ a relatively large muon energy threshold, namely 50 GeV, appropriate for ${\rm km}^3$ neutrino telescopes such as IceCube~\cite{Achterberg:2005fs}. The enhancement in the signal, shown in Fig.~\ref{fig5}, is larger than the enhancement in the annihilation cross section. To understand this effect, we recall that the magnitude of the neutrino flux depends upon two quantities: the Sun capture rate, mostly driven by the spin-dependent LSP-nucleon scattering cross section, and the flux of neutrinos produced per neutralino annihilation; the total enhancement accounts for both these factors. When the neutralino mass is reduced, the role of the off-diagonal entries related to electro-weak symmetry breaking effects in the neutralino mass matrix and in the LSP composition becomes more and more important (intuitively, the relevance of the mixing induced by the mentioned entries roughly scales as $(m_W/m_\chi)^2$). As a result, a larger gaugino-higgsino mixing is expected at smaller neutralino mass. In particular, this results in a net increase in the quantity $|N_{13}|^2-|N_{14}|^2$, which enters the $\chi\chi Z^0$ vertex, and drives an enhancement, at small neutralino masses, of more than one order of magnitude in the neutralino spin-dependent cross section off nucleons. The gaugino fraction, in the Higgsino case, and the Higgsino fraction, in the Wino case, are however always greatly suppressed, typically lying around $10^{-4}$. In the particular models we consider here, neither the asymptotic nor the fully enhanced values of the muon flux give a signal detectable with IceCube~\cite{Achterberg:2005fs}; this mostly depends upon the size of the product of the Higgsino and gaugino fractions of the lightest neutralino: to avoid spurious effects ({\em e.g.} a Bino component, and the consequent extra neutralino degrees of freedom, in the Higgsino-like case) we picked models with a suppressed spin-dependent coupling to matter. However, we explicitly checked that, allowing for a larger Higgsino-gaugino mixing, the enhancement in the flux of muons in neutrino telescopes caused by the occurrence of slepton degrees of freedom at neutralino freeze-out can indeed be crucial, and can make models that would otherwise give a hopelessly small neutrino flux from the Sun detectable with IceCube.
\FIGURE[!t]{ \mbox{\hspace*{-0.5cm}\epsfig{file=figures/hgraphpb.eps,width=7.5cm}\qquad \epsfig{file=figures/wgraphpb.eps,width=7.5cm}} \caption{\label{fig6} The relative enhancement (with respect to the asymptotic value with decoupled heavy sleptons) in the quantity $I(\Phi_{\bar p})$, proportional to the expected $\chi^2$ of an antiproton flux with a supersymmetric contribution added on top of the background, as a function of the slepton-LSP percent mass splitting, for the case of a Higgsino-like neutralino ({left panel}) and a Wino-like neutralino ({right panel}). The models displayed are those at $\Omega_\chi h^2=0.110$ singled out in Fig.~\protect{\ref{fig3}}, with the same sample choice of parameters and the same line-type and color coding. The horizontal lines indicate the sensitivities of the PAMELA experiment \cite{pamela} after three years of data taking for a cuspy~\cite{n03} and a cored~\cite{burkert} Dark Matter halo profile.}} The prospects for the indirect detection of an exotic signature from galactic Dark Matter annihilations with the recently launched space-based PAMELA experiment~\cite{pamela} are shown in Fig.~\ref{fig6}. We indicate, on the $y$ axis, the enhancement in the antiproton ``{\em Visibility Ratio}'' $I(\Phi)/I(\Phi)_{\rm No\ Coann.}$, where the quantity $I(\Phi)$, first introduced in Ref.~\cite{Profumo:2004ty}, is defined as \begin{equation} I(\Phi) \equiv \int_{E_{\rm min}}^{E_{\rm max}} {\rm d}E \, \frac{\left[\Phi_s(E)\right]^2}{\Phi_b(E)}\;.\label{eq:visibility} \end{equation} $\Phi_s(E)$ and $\Phi_b(E)$ are the signal and background antiproton fluxes, respectively, at a kinetic antiproton energy $E$, while $I(\Phi)_{\rm No\ Coann.}$ corresponds to the asymptotic case without slepton coannihilations. The quantity $I(\Phi)$ approximates the projected $\chi^2$ of the signal plus background expected with an exotic contribution providing an antiproton flux $\Phi_s(E)$, in the limit of a large number of energy bins \cite{Profumo:2004ty}; $E_{\rm min,\ max}$ indicate the minimal and maximal experimentally accessible antiproton kinetic energies. The treatment of antiproton galactic propagation, diffusion and solar modulation (projected for the actual period of PAMELA data-taking) follows Ref.~\cite{Profumo:2004ty}, to which the reader is directed for further details. As in the previous plots, the $x$ axis of Fig.~\ref{fig6} shows the percent mass splitting between the LSP and the coannihilating particles. As shown in Ref.~\cite{Profumo:2004ty}, a model gives a statistically significant departure from the background alone, after three years of data-taking and at the 95\% confidence level, if the computed value of $I(\Phi)$ is larger than $3.2\times10^{-8}\,{\rm cm}^{-2}\,{\rm sr}^{-1}\,{\rm s}^{-1}$. We show this sensitivity limit with two horizontal lines, corresponding to a cuspy profile (the adiabatic contraction of the N03 halo model of Ref.~\cite{n03}) and to a cored profile (the Burkert profile \cite{burkert}); for more details on these halo models the reader is directed to Refs.~\cite{Profumo:2004ty,Provenza:2006hr}.
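Since Eq.~(\ref{eq:visibility}) is a one-dimensional integral, it translates directly into a short numerical routine. The sketch below (our illustration; the toy fluxes and the energy grid are invented placeholders, while the actual signal and background computations follow Ref.~\cite{Profumo:2004ty}) evaluates $I(\Phi)$ on tabulated fluxes and compares it to the three-year PAMELA reach quoted above.
\begin{verbatim}
import numpy as np

def visibility_ratio(E, phi_s, phi_b):
    """I(Phi) = int dE [Phi_s(E)]^2 / Phi_b(E), as defined in the text.

    E     : antiproton kinetic energies [GeV], ascending grid (Emin..Emax)
    phi_s : signal antiproton flux on the grid [cm^-2 sr^-1 s^-1 GeV^-1]
    phi_b : background flux, same grid and units
    Returns I(Phi) in cm^-2 sr^-1 s^-1.
    """
    return np.trapz(phi_s**2 / phi_b, E)

I_SENS = 3.2e-8  # 95% C.L., three years of PAMELA data taking (see text)

# Invented power-law toy fluxes, for illustration only:
E = np.linspace(0.1, 50.0, 500)
phi_b = 2e-6 * E**-2.7
phi_s = 5e-8 * E**-2.0 * np.exp(-E / 30.0)

r = visibility_ratio(E, phi_s, phi_b) / I_SENS
print(f"I(Phi)/I_sens = {r:.2f} -> " + ("visible" if r > 1 else "below reach"))
\end{verbatim}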
As shown in Fig.~\ref{fig6}, we find a very large enhancement for the case with all sleptons almost degenerate with the lightest neutralino and, assuming a cuspy Dark Matter halo profile~\cite{n03}, PAMELA will be able to statistically disentangle such a signal; even with the choice of a cored halo~\cite{burkert}, the detection potential of the PAMELA experiment could be sufficient to discriminate an exotic signal, assuming, for instance, a boost factor in the signal flux generated by Dark Matter substructures, or clumps, in the galactic halo, as large as $\approx5$ \cite{substructures}. We moreover wish to point out that we find a similar enhancement in the detection prospects for both positrons and antideuterons, which we do not show here for conciseness, since it would not add further crucial information to the present discussion. Finally, we also computed the expected enhancement in the flux of gamma rays from neutralino pair annihilations; in this case, we find enhancements very similar to those shown in Fig.~\ref{fig4} for the quantity $\Theta=\langle\sigma v\rangle/m_\chi^2$, integrating the total gamma-ray signal flux in the energy range $E_\gamma>1$ GeV. The question of the actual feasibility of distinguishing a gamma-ray signal originating from neutralino annihilations from the various astrophysical backgrounds relies on several critical assumptions on the Dark Matter distribution and on hypotheses about the background itself, from a given direction in the Sky (see {\em e.g.} the recent discussion given in Ref.~\cite{Zaharijas:2006qb} concerning the case of the Galactic Center). Suffice it to say that if slepton coannihilations are active in the Early Universe at the LSP freeze-out, and if the neutralino is not Bino-like, the shift in the neutralino mass giving the ``right'' thermal relic abundance implies a sizable enhancement (close to what we show in Fig.~\ref{fig4}) in the expected gamma-ray flux as well. \section{Conclusions}\label{sec:conclude} In this paper we studied the effects of slepton coannihilations on the thermal relic abundance of Higgsino- or Wino-like lightest neutralinos. We pointed out that, unlike the well-known case of a Bino-like neutralino, coannihilations with sleptons yield a larger Higgsino and Wino relic abundance. The effect on the relic abundance amounts to an increase ranging from a few percent up to a factor of 5. Requiring that the neutralino relic abundance lies in the range of values preferred for the abundance of Dark Matter entails, in the presence of slepton coannihilations, a reduced mass for Winos and Higgsinos, and a larger pair annihilation cross section. Quantitatively, we find that the neutralino mass can be reduced by up to a factor between 2 and 3, depending on the particular setup at hand. We showed that smaller values of the neutralino mass and larger pair annihilation cross sections produce potentially very large enhancements in the rates expected in indirect Dark Matter search experiments. In some cases, we showed that the occurrence of slepton coannihilations, and the resulting reduction of the neutralino mass needed to produce the right amount of relics, is crucial to produce signals that might allow one to indirectly probe the occurrence of galactic Dark Matter annihilations. \acknowledgments We thank Piero Ullio for valuable discussions and suggestions. The work of S.P. was supported by the U.S. Department of Energy grant numbers DE-FG03-92-ER40701 and FG02-05ER41361, and by NASA grant number NNG05GF69G. The work of A.P.
was supported by the Italian INFN under the project ``Fisica Astroparticellare'' and by the MIUR PRIN ``Fisica Astroparticellare''.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} There is a vast literature on branching processes. Here we cite the monographs \cite{AsmussenHering,AthreyaNey,Harris}; moreover, we also cite the monographs \cite{Mode} for the multitype case, \cite{Guttorp}, which focuses on statistical inference, and \cite{Jagers} and \cite{KimmelAxelrod} for applications in biology. The simplest example of a branching process is the Galton--Watson process. We consider the case of a population that has a unique individual at the beginning, and all the individuals (of all generations) live for a unitary time; moreover, at the end of its lifetime, every individual of the population (of every generation) produces a random number of new individuals, acting independently of all the rest, according to a specific fixed distribution. So, if we consider a sequence of random variables $\{V_n:n\geq0\}$ such that $V_n$ is the population size at time $n$ (for all $n\geq0$), we have $V_0=1$ and \[ V_n:=\sum_{k=1}^{V_{n-1}}X_{n,k} \quad (\mbox{for}\ n\geq1), \] where $\{X_{n,i}:n,i\geq1\}$ is a family of nonnegative integer-valued i.i.d.\ random variables. In other words, $X_{n,1},\ldots,X_{n,V_{n-1}}$ represent the offspring generated at time $n$ by each of the $V_{n-1}$ individuals that live at time $n-1$. We recall some other preliminaries on the Galton--Watson process in Section~\ref{sec:preliminaries}, where, in particular, we consider a slightly different notation to allow for a random initial population (instead of the unitary initial population mentioned before). In this paper, we present large deviation results. The theory of large deviations is a collection of techniques that give asymptotic estimates of small probabilities on an exponential scale (see, e.g., \cite{DemboZeitouni} as a reference). We recall some preliminaries in Section~\ref{sec:preliminaries}. The literature on large deviations for branching processes is extensive. Here we essentially recall some references with results concerning the Galton--Watson process. In several references, the large-time behavior in the supercritical case is studied, namely the case where the offspring mean $\mu$ is strictly larger than one (in such a case, the extinction probability is strictly less than one). Here we recall \cite{Athreya} (see also \cite{AthreyaVidyashankar} for the multitype case); \cite{BigginsBingham}, where the main object is the study of the tails of $W:=\lim_{n\to\infty}V_n/\mu^n$; \cite{NeyVidyashankar2003}, with a careful analysis based on harmonic moments of $\{V_n:n\geq0\}$; \cite{NeyVidyashankar2004} (and \cite{NeyVidyashankar2006}), with some conditional large deviation results based on some local limit theorems; and \cite{FleischmannWachtel}, where the central role of some \lq\lq lower deviation probabilities\rq\rq\ is highlighted for the study of the asymptotic behavior of the Lotka--Nagaev estimator $V_{n+1}/V_n$ of $\mu$. Other references study the most likely paths to extinction at some time $n_0$ when the initial population $k$ is large. The idea is to consider the representation of a branching process with initial population equal to $k$ as a sum of $k$ i.i.d.\ replications of the process with a unitary initial population; in this case, Cram\'{e}r's theorem for empirical means of i.i.d.\ random variables (on $\mathbb{R}^{n_0}$) plays a crucial role.
A most likely path to extinction in \cite{KlebanerLiptser2006} (see also \cite{KlebanerLiptser2008}) is a trajectory that minimizes the rate function among the paths that reach the level 0 at time $n_0$. A generalization of this concept to the most likely paths to reach a level $b\geq0$ can be found in \cite{HamzaKlebaner}. In this paper, we are interested in a different direction. Namely, we are interested in the empirical means of i.i.d.\ replications of the total progeny of a~Galton--Watson process. The total progenies of branching processes are studied in several references: here we cite the classical references \cite{Dwass,Kennedy,Pakes} for the Galton--Watson process, and \cite{GonzalezMolina} (see Section~2.2) among the references concerning different branching processes. The total progeny of a Galton--Watson process is an almost surely finite random variable when extinction occurs almost surely, and therefore the supercritical case will not be considered. Some relationships between the offspring distribution and the total progeny distribution of a Galton--Watson process are well known (see \eqref{eq:link-pmf} for the probability mass functions and \eqref{eq:link-pgf} for the probability generating functions). A new relationship is provided by Proposition \ref{prop:main-unitary-initial-population}, where we illustrate how the rate function for the empirical means of total progenies can be expressed in terms of the analogous rate function for the empirical means of a single progeny. This is quite a natural problem in the investigation of large deviations, and, as one can expect, \eqref{eq:link-pgf} plays an important role in the proof; in fact, the large deviation rate function for empirical means of i.i.d.\ random variables (provided by Cram\'{e}r's theorem recalled below; see Theorem \ref{th:Cramer}) is given by the Legendre transform of the logarithm of the (common) moment generating function of the random variables. Moreover, the relationship provided by Proposition \ref{prop:main-unitary-initial-population} may be of interest in information theory because the involved rate functions can be expressed in terms of suitable relative entropies (or Kullback--Leibler divergences); see, for example, \cite{Varadhan} for a discussion of the rate function expressions in terms of the relative entropy. Another result presented in this paper is Proposition \ref{prop:main}, which is a version of Proposition~\ref{prop:main-unitary-initial-population} where the initial population $V_0$ is a random variable with a suitable distribution. Finally, in Propositions \ref{prop:main-estimators} and \ref{prop:minor-estimators}, we prove large deviation results for some estimators of the offspring mean $\mu$ in terms of i.i.d.\ replications of the total progeny and of the initial population (we consider the case where the initial population $V_0$ is a random variable, as in Proposition \ref{prop:main}). We conclude with the outline of the paper. We start with some preliminaries in Section~\ref{sec:preliminaries}. In Section~\ref{sec:applications-CT}, we prove the results concerning the large deviation rate functions related to Cram\'{e}r's theorem. Finally, in Section~\ref{sec:estimators}, we prove the large deviation results for the estimators of the offspring mean $\mu$. \section{Preliminaries}\label{sec:preliminaries} We start with some preliminaries on the Galton--Watson process. In the second part, we recall some preliminaries on large deviations.
\subsection{Preliminaries on Galton--Watson process} Here we introduce a slightly different notation, and, moreover, we recall some preliminaries in order to define the total progeny of a Galton--Watson process. We start with some notation concerning the offspring distribution (note that $\mu_f$ defined below coincides with $\mu$ in the Introduction): \begin{itemize} \item the probability mass function $p_h:=P(X_{n,i}=h)$ (for all integer $h\geq0$); \item the probability generating function $f(s):=\sum_{h\geq0}s^hp_h$; \item the mean value $\mu_f:=\sum_{h\geq0}hp_h$ (and we have $\mu _f=f^\prime(1)$). \end{itemize} Moreover, we introduce the analogous items for the initial population: \begin{itemize} \item the probability mass function $\{q_r:r\geq0\}$ (see \eqref {eq:pmf-initial-population}); \item the probability generating function $g(s):=\sum_{r\geq0}s^rq_r$; \item the mean value $\mu_g:=\sum_{r\geq0}rq_r$ (and we have $\mu _g=g^\prime(1)$). \end{itemize} So, from now on, we consider the following slightly different notation: \[ \bigl\{V_n^{f,g}:n\geq0 \bigr\} \] (in place of $\{V_n:n\geq0\}$ presented before). More precisely: \begin{itemize} \item the probability generating function of $V_0^{f,g}$ is $g$ (so $V_0^{f,g}$ does not depend on $f$), and therefore \begin{equation} \label{eq:pmf-initial-population} q_r:=P \bigl(V_0^{f,g}=r \bigr)\quad (\mbox{for all integer}\ r\geq0); \end{equation} \item for a family of i.i.d.\ random variables $\{X_{n,i}:n,i\geq 1\}$ with probability generating function $f$, we have \[ V_n^{f,g}:=\sum_{i=1}^{V_{n-1}^{f,g}}X_{n,i} \quad (\mbox{for all}\ n\geq1). \] \end{itemize} \begin{remark}\label{rem:unitary-initial-population} Note that $ \{V_n^{f,g}:n\geq0 \}$ here corresponds to $\{V_n:n\geq0\}$ presented in the Introduction if $q_1=1$ or, equivalently, if $g=\mathrm{id}$ \textup{(}i.e. $g(s)=s$ for all $s$\textup{)}. \end{remark} If we consider the extinction probability \[ p_{\mathrm{ext}}^{f,g}:=P \bigl( \bigl\{V_n^{f,g}=0 \ \mbox{for some}\ n\geq0 \bigr\} \bigr), \] then it is known that we have \[ p_{\mathrm{ext}}^{f,\mathrm{id}}=\min\bigl\{s\in[0,1]:f(s)=s\bigr\}; \] moreover, if $p_0>0$, then we have $p_{\mathrm{ext}}^{f,\mathrm{id}}=1$ if $\mu_f\leq1$ and $p_{\mathrm{ext}}^{f,\mathrm{id}}\in(0,1)$ if $\mu_f>1$. More generally, we have \[ p_{\mathrm{ext}}^{f,g}=q_0+\sum _{n\geq1}\bigl(p_{\mathrm{ext}}^{f,\mathrm {id}} \bigr)^nq_n=g\bigl(p_{\mathrm{ext}}^{f,\mathrm{id}}\bigr), \] and, if $q_0<1$ (we obviously have $p_{\mathrm{ext}}^{f,g}=1$ if $q_0=1$), then we have the following cases: \[ \begin{array}{ll} p_{\mathrm{ext}}^{f,g}=g(0)=q_0&\ \mbox{if}\ p_0=0;\\ p_{\mathrm{ext}}^{f,g}=g(1)=1&\ \mbox{if}\ p_0>0\ \mbox{and}\ \mu_f\leq 1;\\ p_{\mathrm{ext}}^{f,g}\in(q_0,1)&\ \mbox{if}\ p_0>0\ \mbox{and}\ \mu_f>1. \end{array} \] If $p_0>0$ and $\mu_f\leq1$, then the random variable $Y^{f,g}$ defined by \[ Y^{f,g}:=\sum_{i=0}^{\tau-1}V_i^{f,g}, \quad \mbox{where}\ \tau:=\inf \bigl\{ n\geq0:V_n^{f,g}=0 \bigr \}, \] is almost surely finite and provides the total progeny of $ \{V_n^{f,g}:n\geq0 \}$. In view of what follows, we consider the probability generating function \[ \mathcal{G}_{f,g}(s):=\sum_{k\geq0}s^k \pi_k^{f,g}, \] where $ \{\pi_k^{f,g}:k\geq0 \}$ is the probability mass function of the random variable $Y^{f,g}$.
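As a concrete illustration of the objects just introduced, the following minimal simulation sketch (not part of the original development; the Poisson choices for $f$ and $g$ are just for illustration) draws copies of $(Y^{f,g},V_0^{f,g})$ in a subcritical case and compares the empirical mean of $Y^{f,g}$ with the value $\mu_g/(1-\mu_f)$ recalled in the next display.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def total_progeny(mu_f=0.6, mu_g=2.0):
    """One draw of (Y^{f,g}, V_0^{f,g}) with Poisson(mu_f) offspring law f
    and Poisson(mu_g) initial law g; subcritical (mu_f < 1, p_0 > 0), so
    extinction occurs a.s. and Y^{f,g} is a.s. finite."""
    v = v0 = rng.poisson(mu_g)            # V_0^{f,g} ~ g
    y = 0
    while v > 0:
        y += v                            # accumulate V_0 + ... + V_{tau-1}
        v = rng.poisson(mu_f * v)         # sum of v i.i.d. Poisson(mu_f)
    return y, v0

ys = np.array([total_progeny()[0] for _ in range(20000)])
print(f"empirical E[Y] = {ys.mean():.3f}; mu_g/(1-mu_f) = {2.0/(1-0.6):.3f}")
\end{verbatim}
The empirical mean should agree with $\mu_g/(1-\mu_f)=5$ up to the Monte Carlo error; the key simplification in the code is that a sum of $v$ i.i.d.\ Poisson$(\mu_f)$ offspring counts is itself Poisson$(v\mu_f)$-distributed.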
Moreover, we have the mean value \begin{equation} \label{eq:mean-value-total-progeny} \nu^{f,g}:=\sum_{k\geq0}k \pi_k^{f,g},\quad \mbox{and we have}\quad \nu^{f,g}= \frac{\mu_g}{1-\mu_f}; \end{equation} in particular, the equality $\nu^{f,g}=\frac{\mu_g}{1-\mu_f}$ continues to hold, with the obvious convention, even if $\mu_f=1$, namely \[ \nu^{f,g}= \left\{ \begin{array}{@{}ll} \infty&\ \mbox{if}\ \mu_g>0\ (\mbox{and}\ \mu_f=1),\\ 0&\ \mbox{if}\ \mu_g=0\ (\mbox{and}\ \mu_f=1). \end{array} \right. \] Finally, we recall some well-known connections between the total progeny and offspring distributions (see e.g. \cite{Dwass}): for the probability mass functions, we have \begin{equation} \label{eq:link-pmf} \pi_k^{f,\mathrm{id}}=\frac{1}{k}\cdot p_{k-1}^{*k}\quad(\mbox{for all integer}\ k\geq1), \end{equation} where $\{p_h^{*n}:h\geq0\}$ is the $n$th convolution power of $\{p_h:h\geq0\}$; for the probability generating functions, we have \begin{equation} \label{eq:link-pgf} \mathcal{G}_{f,\mathrm{id}}(s)=sf\bigl(\mathcal{G}_{f,\mathrm{id}}(s) \bigr). \end{equation} \subsection{Preliminaries on large deviations} We start with the concept of large deviation principle (LDP). A sequence of random variables $\{W_n:n\geq1\}$ taking values in a topological space $\mathcal{W}$ satisfies the LDP with rate function $I:\mathcal{W}\to[0,\infty]$ if $I$ is a lower semicontinuous function, \[ \liminf_{n\to\infty}\frac{1}{n}\log P(W_n\in O) \geq-\inf_{w\in O}I(w)\quad \mbox{for all open sets}\ O, \] and \[ \limsup_{n\to\infty}\frac{1}{n}\log P(W_n\in C) \leq-\inf_{w\in C}I(w)\quad \mbox{for all closed sets}\ C. \] We also recall that a rate function $I$ is said to be good if all its level sets $\{\{w\in\mathcal{W}:I(w)\leq\eta\}:\eta\geq0\}$ are compact. \begin{remark}\label{rem:closed-sets-with-probability-1} If $P(W_n\in S)=1$ for some closed set $S$ (at least eventually with respect to $n$), then $I(w)=\infty$ for $w\notin S$; this can be checked by taking the lower bound for the open set $O=S^c$. \end{remark} In particular, we refer to Cram\'{e}r's theorem on $\mathbb{R}^d$ (see e.g. Theorems 2.2.3 and 2.2.30 in \cite{DemboZeitouni} for the cases $d=1$ and $d\geq2$), and we recall its statement. We remark that, in this paper, we consider the cases $d=1$ (in such a case, the rate function need not be a good rate function) and $d=2$. Moreover, we use the symbol $\langle\cdot,\cdot\rangle$ for the inner product in $\mathbb{R}^d$. \begin{thm}[Cram\'{e}r's theorem]\label{th:Cramer} Let $\{W_n:n\geq1\}$ be a sequence of i.i.d.\ $\mathbb{R}^d$-valued random variables, and let $\{\bar{W}_n:n\geq1\}$ be the sequence of empirical means defined by $\bar{W}_n:=\frac{1}{n}\sum_{k=1}^nW_k$ \textup{(}for all $n\geq1)$. \textup{(i)} If $d=1$, then $\{\bar{W}_n:n\geq1\}$ satisfies the LDP with rate function $I$ defined by \[ I(w):=\sup_{\theta\in\mathbb{R}} \bigl\{\theta w-\log\mathbb{E} \bigl[e^{\theta W_1} \bigr] \bigr\}. \] \textup{(ii)} If $d\geq2$ and the origin of $\mathbb{R}^d$ belongs to the interior of the set $\{\theta\in\mathbb{R}^d:\log\mathbb{E} [e^{\langle\theta,W_1\rangle } ]<\infty\}$, then $\{\bar{W}_n:n\geq1\}$ satisfies the LDP with good rate function $I$ defined by \[ I(w):=\sup_{\theta\in\mathbb{R}^d} \bigl\{\langle\theta,w\rangle-\log \mathbb{E} \bigl[e^{\langle\theta,W_1\rangle} \bigr] \bigr\}. \] \end{thm} \section{Applications of Cram\'{e}r's theorem}\label{sec:applications-CT} The aim of this section is to prove Propositions \ref{prop:main-unitary-initial-population} and \ref{prop:main}.
In view of this, we recall Lemmas \ref{lem:LDP-offspring} and \ref{lem:LDP-totalprogeny}, which give two immediate applications of Cram\'{e}r's theorem (Theorem \ref{th:Cramer}) with $d=1$; in Lemma \ref{lem:LDP-totalprogeny}, we consider the case with a unitary initial population almost surely (thus, as stated in Remark \ref{rem:unitary-initial-population}, the case with $q_1=1$ or, equivalently, $g=\mathrm{id}$). \begin{lemma}[Cram\'{e}r's theorem for offspring distribution]\label {lem:LDP-offspring} Let $\{X_n:n\geq1\}$ be i.i.d.\ random variables with probability generating function $f$. Let $\{\bar{X}_n:n\geq1\}$ be the sequence of empirical means defined by $\bar{X}_n:=\frac{1}{n}\sum_{k=1}^nX_k$ \textup{(}for all $n\geq1)$. Then $\{\bar{X}_n:n\geq1\}$ satisfies the LDP with rate function $I_f$ defined by $I_f(x):=\sup_{\alpha\in\mathbb{R}}\{\alpha x-\log f(e^\alpha)\}$. \end{lemma} \begin{lemma}[Cram\'{e}r's theorem for total progeny distribution with $g=\mathrm{id}$]\label{lem:LDP-totalprogeny} Assume that $p_0>0$ and $\mu_f\leq1$. Let $\{Y_n:n\geq1\}$ be i.i.d.\ random variables with probability generating function $\mathcal{G}_{f,\mathrm{id}}$. Let $\{\bar{Y}_n:n\geq1\}$ be the sequence of empirical means defined by $\bar{Y}_n:=\frac{1}{n}\sum_{k=1}^nY_k$ \textup{(}for all $n\geq1)$. Then $\{\bar{Y}_n:n\geq1\}$ satisfies the LDP with rate function $I_{\mathcal{G}_{f,\mathrm{id}}}$ defined by $I_{\mathcal{G}_{f,\mathrm{id}}}(y):=\sup_{\beta\in\mathbb{R}}\{\beta y-\log\mathcal{G}_{f,\mathrm{id}}(e^\beta)\}$. \end{lemma} Now we can prove our main results. We start with Proposition \ref{prop:main-unitary-initial-population}, which provides an expression for $I_{\mathcal{G}_{f,\mathrm{id}}}$ in terms of $I_f$. \begin{proposition}\label{prop:main-unitary-initial-population} Let $I_f$ and $I_{\mathcal{G}_{f,\mathrm{id}}}$ be the rate functions in Lemmas \ref{lem:LDP-offspring} and \ref{lem:LDP-totalprogeny}. Then we have $I_{\mathcal{G}_{f,\mathrm{id}}}(y)=yI_f (\frac{y-1}{y} )$ for all $y\geq1$. \end{proposition} \begin{proof} We remark that \[ I_f(x):=\sup_{\alpha\in\mathcal{D}(f)}\bigl\{\alpha x-\log f \bigl(e^\alpha\bigr)\bigr\}, \] where $\mathcal{D}(f):=\{\alpha\in\mathbb{R}:f(e^\alpha)<\infty\}$, and \[ I_{\mathcal{G}_{f,\mathrm{id}}}(y):=\sup_{\beta\in\mathcal{D}(\mathcal {G}_{f,\mathrm{id}})}\bigl\{\beta y-\log \mathcal{G}_{f,\mathrm{id}}\bigl(e^\beta \bigr)\bigr\}, \] where $\mathcal{D}(\mathcal{G}_{f,\mathrm{id}}):=\{\beta\in\mathbb{R}:\mathcal {G}_{f,\mathrm{id}}(e^\beta)<\infty\}$, by Lemmas \ref{lem:LDP-offspring} and \ref{lem:LDP-totalprogeny}, respectively. Moreover, the function $\alpha:\mathcal{D}(\mathcal{G}_{f,\mathrm{id}})\to\mathcal{D}(f)$ defined by \[ \alpha(\beta):=\log\mathcal{G}_{f,\mathrm{id}}\bigl(e^\beta\bigr) \] is a bijection.
This can be checked noting that $\alpha(\beta)\in\mathcal{D}(f)$ (for all $\beta\in\mathcal{D}(\mathcal{G}_{f,\mathrm{id}})$) because $f(e^{\alpha(\beta)})=f(\mathcal{G}_{f,\mathrm{id}}(e^\beta))=\frac {\mathcal{G}_{f,\mathrm{id}}(e^\beta)}{e^\beta}<\infty$ (here we take into account \eqref{eq:link-pgf}); moreover, its inverse $\beta:\mathcal{D}(f)\to\mathcal{D}(\mathcal{G}_{f,\mathrm{id}})$ is defined by \[ \beta(\alpha):=\log\mathcal{G}_{f,\mathrm{id}}^{-1} \bigl(e^\alpha\bigr) \] (where $\mathcal{G}_{f,\mathrm{id}}^{-1}$ is the inverse of $\mathcal{G}_{f,\mathrm{id}}$), and $\beta(\alpha)\in\mathcal{D}(\mathcal{G}_{f,\mathrm{id}})$ (for all $\alpha\in\mathcal{D}(f)$) because $\mathcal{G}_{f,\mathrm{id}}(e^{\beta(\alpha)})=e^\alpha<\infty$. Thus, we can set $\alpha=\log\mathcal{G}_{f,\mathrm{id}}(e^\beta)$ (for $\beta\in\mathcal{D}(\mathcal{G}_{f,\mathrm{id}})$) in the expression of $I_f(x)$, and we get \[ I_f(x)=\sup_{\beta\in\mathcal{D}(\mathcal{G}_{f,\mathrm{id}})}\bigl\{\log \mathcal{G}_{f,\mathrm{id}}\bigl(e^\beta\bigr)x-\log f\bigl( \mathcal{G}_{f,\mathrm {id}}\bigl(e^\beta\bigr)\bigr)\bigr\}. \] Then (we take into account \eqref{eq:link-pgf} in the second equality below) \begin{align*} I_f(x)&=\sup_{\beta\in\mathcal{D}(\mathcal{G}_{f,\mathrm{id}})}\bigl\{\log \mathcal{G}_{f,\mathrm{id}}\bigl(e^\beta\bigr)x-\log\bigl(e^{-\beta}e^\beta f\bigl(\mathcal{G}_{f,\mathrm{id}}\bigl(e^\beta\bigr)\bigr)\bigr)\bigr\} \\ &=\sup_{\beta\in\mathcal{D}(\mathcal{G}_{f,\mathrm{id}})}\bigl\{\log\mathcal {G}_{f,\mathrm{id}} \bigl(e^\beta\bigr)x+\beta-\log\mathcal{G}_{f,\mathrm {id}} \bigl(e^\beta\bigr)\bigr\} \\ &=\sup_{\beta\in\mathcal{D}(\mathcal{G}_{f,\mathrm{id}})}\bigl\{\beta -(1-x)\log\mathcal{G}_{f,\mathrm{id}} \bigl(e^\beta\bigr)\bigr\}, \end{align*} and, for $x\in[0,1)$, we get \[ I_f(x)=(1-x)I_{\mathcal{G}_{f,\mathrm{id}}} \biggl(\frac{1}{1-x} \biggr). \] We conclude by taking $x=\frac{y-1}{y}$ for $y\geq1$ (thus, $x\in[0,1)$), and we obtain the desired equality with some easy computations. \end{proof} Now we present Proposition \ref{prop:main}, which concerns the LDP for the empirical means of i.i.d. bivariate random variables $\{(Y_n,Z_n):n\geq1\}$ distributed as $(Y^{f,g},V_0^{f,g})$. In particular, we obtain an expression for the rate function $I_{\mathcal{G}_{f,g},g}$ in terms of $I_f$ in Lemma \ref{lem:LDP-offspring} and $I_g$ defined by \begin{equation} \label{def:rf-initial-population} I_g(z):=\sup_{\gamma\in\mathbb{R}}\bigl\{\gamma z- \log g\bigl(e^\gamma\bigr)\bigr\}. \end{equation} \begin{proposition}\label{prop:main} Let $\{(Y_n,Z_n):n\geq1\}$ be i.i.d.\ random variables distributed as $(Y^{f,g},V_0^{f,g})$. Assume that $\mathbb{E} [e^{\beta Y^{f,g}+\gamma V_0^{f,g}} ]$ is finite in a neighborhood of $(\beta,\gamma)=(0,0)$. Let $\{(\bar{Y}_n,\bar{Z}_n):n\geq1\}$ be the sequence of empirical means defined by $(\bar{Y}_n,\bar{Z}_n):= (\frac{1}{n}\sum_{k=1}^nY_k,\frac{1}{n}\sum_{k=1}^nZ_k )$ \textup{(}for all $n\geq1)$. Then $\{(\bar{Y}_n,\bar{Z}_n):n\geq1\}$ satisfies the LDP with good rate function $I_{\mathcal{G}_{f,g},g}$ defined by \[ I_{\mathcal{G}_{f,g},g}(y,z)= \left\{ \begin{array}{@{}ll} yI_f (\frac{y-z}{y} )+I_g(z)&\ \mbox{if}\ y\geq z>0,\\ I_g(0)&\ \mbox{if}\ y=z=0,\\ \infty&\ \mbox{otherwise}. \end{array} \right.
\] \end{proposition} \begin{remark}\label{rem:implicit-hypotheses-for-prop-main} We are assuming \textup{(}implicitly\textup{)} that $p_0>0$ and $\mu _f\leq1$; in fact, since we require that $\mathbb{E} [e^{\beta Y^{f,g}+\gamma V_0^{f,g}} ]$ is finite in a neighborhood of $(\beta,\gamma)=(0,0)$, we are assuming that $\mu_f<1$ and $\mu_g<\infty$. \end{remark} \begin{proof} The LDP is a consequence of Cram\'{e}r's theorem (Theorem \ref{th:Cramer}) with $d=2$, and the rate function $I_{\mathcal{G}_{f,g},g}$ is defined by \[ I_{\mathcal{G}_{f,g},g}(y,z):=\sup_{\beta,\gamma\in\mathbb{R}} \bigl\{ \beta y+\gamma z- \log\mathbb{E} \bigl[e^{\beta Y^{f,g}+\gamma V_0^{f,g}} \bigr] \bigr\}. \] Throughout the proof, we restrict our attention to the pairs $(y,z)$ such that $y\geq z\geq0$. In fact, almost surely, we have $Y^{f,g}\geq V_0^{f,g}\geq0$, and therefore $\bar{Y}_n\geq\bar{Z}_n\geq0$; thus, by Remark \ref{rem:closed-sets-with-probability-1} we have $I_{\mathcal{G}_{f,g},g}(y,z)=\infty$ if the condition $y\geq z\geq0$ fails. We remark that $\mathbb{E} [s^{Y^{f,g}}|V_0^{f,g} ]=(\mathcal{G}_{f,\mathrm {id}}(s))^{V_0^{f,g}}$, and therefore \[ \mathbb{E} \bigl[e^{\beta Y^{f,g}+\gamma V_0^{f,g}} \bigr]=\mathbb {E} \bigl[e^{\gamma V_0^{f,g}} \bigl(\mathcal{G}_{f,\mathrm{id}}\bigl(e^\beta \bigr)\bigr)^{V_0^{f,g}} \bigr] =g\bigl(e^\gamma\mathcal{G}_{f,\mathrm{id}}\bigl(e^\beta \bigr)\bigr); \] thus, \[ I_{\mathcal{G}_{f,g},g}(y,z)=\sup_{\beta,\gamma\in\mathbb{R}} \bigl\{ \beta y+\gamma z-\log g\bigl(e^{\gamma+\log\mathcal{G}_{f,\mathrm {id}}(e^\beta)}\bigr) \bigr\}. \] Furthermore, the function \[ (\beta,\gamma)\mapsto\bigl(\beta,\gamma+\log\mathcal{G}_{f,\mathrm {id}} \bigl(e^\beta\bigr)\bigr) \] is a bijection defined on $\mathcal{D}(\mathcal{G}_{f,\mathrm{id}})\times\mathbb{R}$, where \[ \mathcal{D}(\mathcal{G}_{f,\mathrm{id}}):=\bigl\{\beta\in\mathbb{R}:\mathcal {G}_{f,\mathrm{id}}\bigl(e^\beta\bigr)<\infty\bigr\} \] as in the proof of Proposition \ref{prop:main-unitary-initial-population}; then, for $\delta:=\gamma+\log\mathcal{G}_{f,\mathrm{id}}(e^\beta)$, we obtain \[ I_{\mathcal{G}_{f,g},g}(y,z)=\sup_{\beta,\delta\in\mathbb{R}} \bigl\{ \beta y+\bigl(\delta- \log\mathcal{G}_{f,\mathrm{id}}\bigl(e^\beta\bigr)\bigr)z-\log g \bigl(e^\delta\bigr) \bigr\}. \] Thus, we have (note that the last equality holds by Proposition \ref{prop:main-unitary-initial-population}) \begin{align*} I_{\mathcal{G}_{f,g},g}(y,z)&\leq\sup_{\beta\in\mathbb{R}} \bigl\{\beta y-z\log \mathcal{G}_{f,\mathrm{id}}\bigl(e^\beta\bigr) \bigr\}+ \sup _{\delta\in\mathbb{R}} \bigl\{\delta z-\log g\bigl(e^\delta\bigr) \bigr \} \\ &= \left\{ \begin{array}{@{}ll} zI_{\mathcal{G}_{f,\mathrm{id}}}(y/z)+I_g(z)&\ \mbox{if}\ y\geq z>0,\\ I_g(0)&\ \mbox{if}\ y=z=0,\\ \infty&\ \mbox{otherwise.} \end{array} \right. \\ &= \left\{ \begin{array}{@{}ll} yI_f (\frac{y-z}{y} )+I_g(z)&\ \mbox{if}\ y\geq z>0,\\ I_g(0)&\ \mbox{if}\ y=z=0,\\ \infty&\ \mbox{otherwise}. \end{array} \right. \end{align*} We conclude by showing the reverse inequality \begin{equation} \label{eq:inverse-inequality} I_{\mathcal{G}_{f,g},g}(y,z)\geq\sup_{\beta\in\mathbb{R}} \bigl\{\beta y-z\log\mathcal{G}_{f,\mathrm{id}}\bigl(e^\beta\bigr) \bigr\}+\sup _{\delta\in \mathbb{R}} \bigl\{\delta z-\log g\bigl(e^\delta\bigr) \bigr \}.
\end{equation} To this end, we take two sequences $\{\beta_n:n\geq1\}$ and $\{\delta_n:n\geq1\}$ such that \[ \lim_{n\to\infty}\bigl(\beta_ny-z\log\mathcal{G}_{f,\mathrm{id}} \bigl(e^{\beta_n}\bigr)\bigr) =\sup_{\beta\in\mathbb{R}} \bigl\{\beta y-z\log \mathcal{G}_{f,\mathrm{id}}\bigl(e^\beta\bigr) \bigr\} \] and \[ \lim_{n\to\infty}\bigl(\delta_n z-\log g\bigl(e^{\delta_n} \bigr)\bigr)=\sup_{\delta\in\mathbb {R}} \bigl\{\delta z-\log g\bigl(e^\delta \bigr) \bigr\}. \] Then we have \[ I_{\mathcal{G}_{f,g},g}(y,z)\geq\beta_n y+\bigl(\delta_n-\log \mathcal {G}_{f,\mathrm{id}}\bigl(e^{\beta_n}\bigr)\bigr)z-\log g \bigl(e^{\delta_n}\bigr), \] and we get \eqref{eq:inverse-inequality} by letting $n$ go to infinity. \end{proof} \section{Large deviations for estimators of $\mu_f$}\label{sec:estimators} In this section, we prove LDPs for two sequences of estimators of the offspring mean $\mu_f$. Namely, if $\{(\bar{Y}_n,\bar{Z}_n):n\geq1\}$ is the sequence in Proposition~\ref {prop:main} (see also the precise assumptions in Remark \ref{rem:implicit-hypotheses-for-prop-main}; in particular, we have $\mu_f<1$), then we consider: \begin{enumerate} \item$ \{\frac{\bar{Y}_n-\bar{Z}_n}{\bar{Y}_n}:n\geq1 \}$; \item$ \{\frac{\bar{Y}_n-\mu_g}{\bar{Y}_n}:n\geq1 \}$. \end{enumerate} These estimators are well defined only if the denominators $\bar{Y}_n$ are nonzero; hence, in order to have well-defined estimators, we always assume that $q_0=0$ (where $q_0$ is as in \eqref{eq:pmf-initial-population}), and, since in general $I_g(0)=-\log q_0$, we have \[ I_g(0)=\infty. \] Moreover, both sequences converge to $\frac{\nu^{f,g}-\mu_g}{\nu^{f,g}}=\mu_f$ as $n\to\infty$ (see $\nu^{f,g}$ in~\eqref{eq:mean-value-total-progeny}), and they coincide when the initial population is deterministic (equal to $\mu_g$ almost surely). The LDPs of these two sequences are proved in Propositions \ref{prop:main-estimators} and \ref{prop:minor-estimators}. Moreover, Corollary \ref{cor:comparison} and Remark \ref{rem:comparison} concern the comparison between the convergence of the first sequence $ \{\frac{\bar{Y}_n-\bar{Z}_n}{\bar{Y}_n}:n\geq1 \}$ and that of its analogue when the initial population is deterministic (equal to the mean). Propositions \ref{prop:main-estimators} and \ref{prop:minor-estimators} are proved by combining the contraction principle (see e.g. Theorem 4.2.1 in \cite{DemboZeitouni}) and Proposition \ref{prop:main} (note that the rate function $I_{\mathcal{G}_{f,g},g}$ in Proposition \ref{prop:main} is good, as required to apply the contraction principle). We remark that, in the proofs of Propositions \ref{prop:main-estimators} and \ref{prop:minor-estimators}, we take into account that $I_{\mathcal{G}_{f,g},g}(0,0)=\infty$ by Proposition \ref{prop:main} and $I_g(0)=\infty$. At the end of this section, we present some remarks on the comparison between the rate functions in Propositions \ref{prop:main-estimators} and~\ref {prop:minor-estimators} (Remarks \ref{rem:rf-in-propminorestimator-could-be-finite-for-negative-arguments} and \ref{rem:no-offsprings}). We start with the LDP of the first sequence of estimators. \begin{proposition}\label{prop:main-estimators} Assume the same hypotheses as in Proposition \ref{prop:main} and, in addition, $q_0=0$. Let $\{(Y_n,Z_n):n\geq1\}$ be i.i.d. random variables distributed as $(Y^{f,g},V_0^{f,g})$.
Let $\{(\bar{Y}_n,\bar{Z}_n):n\geq1\}$ be the sequence of empirical means defined by $(\bar{Y}_n,\bar{Z}_n):= (\frac{1}{n}\sum_{k=1}^nY_k,\frac{1}{n}\sum_{k=1}^nZ_k )$ \textup{(}for all $n\geq1$\textup{)}. Then $ \{\frac{\bar{Y}_n-\bar{Z}_n}{\bar{Y}_n}:n\geq1 \}$ satisfies the LDP with good rate function $J_{\mathcal{G}_{f,g},g}$ defined by \[ J_{\mathcal{G}_{f,g},g}(x):= \left\{ \begin{array}{@{}ll} -\log g (e^{-\frac{I_f(x)}{1-x}} )&\ \mbox{if}\ x\in[0,1),\\ \infty&\ \mbox{otherwise}. \end{array} \right. \] \end{proposition} \begin{proof} By Proposition \ref{prop:main} and the contraction principle we have the LDP of $ \{\frac{\bar{Y}_n-\bar{Z}_n}{\bar{Y}_n}:n\geq1 \}$ with good rate function $J_{\mathcal{G}_{f,g},g}$ defined by \[ J_{\mathcal{G}_{f,g},g}(x):=\inf \biggl\{I_{\mathcal {G}_{f,g},g}(y,z):y\geq z>0, \frac{y-z}{y}=x \biggr\}. \] The case $x\notin[0,1)$ is trivial because we have the infimum over the empty set. For $x\in[0,1)$, we rewrite this expression as follows (where we take into account the expression of the rate function $I_{\mathcal{G}_{f,g},g}$ in Proposition \ref{prop:main}): \begin{align*} J_{\mathcal{G}_{f,g},g}(x)&=\inf \biggl\{I_{\mathcal{G}_{f,g},g} \biggl(\frac {z}{1-x},z \biggr):z>0 \biggr\} \\ &=\inf \biggl\{\frac{z}{1-x}I_f \biggl(\frac{\frac{z}{1-x}-z}{\frac {z}{1-x}} \biggr)+I_g(z):z>0 \biggr\} \\ &=\inf \biggl\{\frac{z}{1-x}I_f(x)+I_g(z):z>0 \biggr\}\\ &=-\sup \biggl\{-z\frac {I_f(x)}{1-x}-I_g(z):z>0 \biggr\}; \end{align*} thus, since $I_g(z)=\infty$ for $z\leq0$, we obtain $J_{\mathcal{G}_{f,g},g}(x)=-\log g (e^{-\frac{I_f(x)}{1-x}} )$ by taking into account the definition of $I_g$ in \eqref{def:rf-initial-population} and the well-known properties of Legendre transforms (see e.g. Lemma 4.5.8 in \cite{DemboZeitouni}; see also Lemma 2.2.5(a) and Exercise 2.2.22 in \cite{DemboZeitouni} for the convexity and the lower semicontinuity of $\gamma\mapsto\log g(e^\gamma)$). \end{proof} We have an immediate consequence of this proposition that concerns the case of a deterministic initial population equal to $\mu_g$ (almost surely). Namely, if we consider the probability generating function $g_\diamond$ defined by $g_\diamond(s):=s^{\mu_g}$ (for all $s$), then we are in the case $g=g_\diamond$, and therefore: \begin{itemize} \item$V_0^{f,g_\diamond}=\mu_g$ almost surely; thus, $Z_n=\mu_g$ and $\bar{Z}_n=\mu_g$ almost surely (for all $n\geq1$); \item$\{Y_n^{f,g_\diamond}:n\geq1\}$ are i.i.d. random variables distributed as $Y^{f,g_\diamond}$, that is, \[ Y^{f,g_\diamond}:=\mu_g+\sum_{i=1}^\tau V_i^{f,g_\diamond},\quad \mbox {where}\ \tau:=\inf \bigl\{n \geq0:V_n^{f,g_\diamond}=0 \bigr\}; \] \item the rate function $J_{\mathcal{G}_{f,g_\diamond},g_\diamond}$ is \begin{equation} \label{eq:main-estimators-rf-deterministic-initial-population} J_{\mathcal{G}_{f,g_\diamond},g_\diamond}(x)= \left\{ \begin{array}{@{}ll} \mu_g\cdot\frac{I_f(x)}{1-x}&\ \mbox{if}\ x\in[0,1),\\ \infty&\ \mbox{otherwise,} \end{array} \right. \end{equation} by Proposition \ref{prop:main-estimators}. \end{itemize} \begin{cor}[Comparison between $J_{\mathcal{G}_{f,g},g}$ in Proposition \ref{prop:main-estimators} and $J_{\mathcal{G}_{f,g_\diamond},g_\diamond }$]\label{cor:comparison} We have $J_{\mathcal{G}_{f,g},g}(x)\leq J_{\mathcal{G}_{f,g_\diamond},g_\diamond}(x)$ for all $x\in\mathbb{R}$.
Moreover, the inequality turns into an equality if and only if we have one of the following cases: \begin{itemize} \item$x\notin[0,1)$ and $J_{\mathcal{G}_{f,g},g}(x)=J_{\mathcal {G}_{f,g_\diamond},g_\diamond}(x)=\infty$; \item$x=\mu_f$ and $J_{\mathcal{G}_{f,g},g}(x)=J_{\mathcal {G}_{f,g_\diamond},g_\diamond}(x)=0$; \item$V_0^{f,g}$ is deterministic, equal to $\mu_g$, and $J_{\mathcal{G}_{f,g},g}(x)=J_{\mathcal{G}_{f,g_\diamond},g_\diamond}(x)$ for all $x\in\mathbb{R}$. \end{itemize} \end{cor} \begin{proof} The case $x\notin[0,1)$ is trivial. If instead $x\in[0,1)$, then by Jensen's inequality we have \[ -\log g \bigl(e^{-\frac{I_f(x)}{1-x}} \bigr)=-\log\mathbb{E} \bigl[e^{-\frac{I_f(x)}{1-x}\cdot V_0^{f,g}} \bigr]\leq\mu_g\cdot\frac{I_f(x)}{1-x}; \] moreover, the cases where the inequality turns into an equality follow from the well-known equality conditions of Jensen's inequality. \end{proof} \begin{remark}[Comparison between convergence of estimators of $\mu _f$]\label{rem:comparison} Assume that $\mu_f>0$ and that the initial population is not deterministic. Then there exists $\eta>0$ such that \begin{equation} \label{eq:local-strict-inequality-between-rf} 0<J_{\mathcal{G}_{f,g},g}(x)<J_{\mathcal{G}_{f,g_\diamond},g_\diamond }(x)\quad \mbox{for}\ x\in( \mu_f-\eta,\mu_f+\eta)\setminus\{\mu_f\}. \end{equation} Thus, we can say that $ \{\frac{\bar{Y}_n^{f,g_\diamond}-\mu_g}{\bar{Y}_n^{f,g_\diamond }}:n\geq 1 \}$ converges to $\mu_f$ \textup{(}as $n\to\infty)$ faster than $ \{\frac{\bar{Y}_n^{f,g}-\bar{Z}_n}{\bar{Y}_n^{f,g}}:n\geq 1 \}$; in fact, we can find $\varepsilon>0$ such that \[ \lim_{n\to\infty}\frac{P (\llvert \frac{\bar{Y}_n^{f,g_\diamond}-\mu _g}{\bar{Y}_n^{f,g_\diamond}}-\mu_f\rrvert \geq\varepsilon )}{ P (\llvert \frac{\bar{Y}_n^{f,g}-\bar{Z}_n}{\bar{Y}_n^{f,g}}-\mu _f\rrvert \geq\varepsilon )}=0. \] We can repeat the same argument to say that $ \{\frac{\bar{Y}_n^{f,g_\diamond}-\mu_g}{\bar{Y}_n^{f,g_\diamond }}:n\geq 1 \}$ converges to $\mu_f$ \textup{(}as $n\to\infty)$ faster than $\{\bar{X}_n:n\geq1\}$ in Lemma \ref{lem:LDP-offspring}. In fact, we have $V_0^{f,g_\diamond}=\mu_g$ almost surely, $\mu_g$ is an integer, and, since $\mu_g>0$ because $q_0=0$, we have $\mu_g\geq 1$; then we have \[ J_{\mathcal{G}_{f,g_\diamond},g_\diamond}(x)=\mu_g\cdot\frac {I_f(x)}{1-x}>I_f(x)>0 \quad \mbox{for all}\ x\in(0,1)\setminus\{\mu_f\} \] \textup{(}we can also consider the case $x=0$ if $\mu_g>1)$. \end{remark} Now we present the LDP for the second sequence of estimators. \begin{proposition}\label{prop:minor-estimators} Assume the same hypotheses as in Proposition \ref{prop:main} and, in addition, $q_0=0$. Let $\{Y_n:n\geq1\}$ be i.i.d. random variables distributed as $Y^{f,g}$. Let $\{\bar{Y}_n:n\geq1\}$ be the sequence of empirical means defined by $\bar{Y}_n:=\frac{1}{n}\sum_{k=1}^nY_k$ \textup{(}for all $n\geq1)$. Then $ \{\frac{\bar{Y}_n-\mu_g}{\bar{Y}_n}:n\geq1 \}$ satisfies the LDP with good rate function $J_{\mu_g}$ defined by \[ J_{\mu_g}(x):= \left\{ \begin{array}{@{}ll} \inf \{\frac{\mu_g}{1-x}I_f (\frac{\frac{\mu_g}{1-x}-z}{\frac {\mu_g}{1-x}} )+I_g(z):z>0 \}&\ \mbox{if}\ x<1,\\ \infty&\ \mbox{if}\ x\geq1. \end{array} \right. \] \end{proposition} \begin{proof} By Proposition \ref{prop:main} and the contraction principle we have the LDP of $ \{\frac{\bar{Y}_n-\mu_g}{\bar{Y}_n}:n\geq 1 \}$ with good rate function $J_{\mu_g}$ defined by \[ J_{\mu_g}(x):=\inf \biggl\{I_{\mathcal{G}_{f,g},g}(y,z):y\geq z>0, \frac {y-\mu_g}{y}=x \biggr\}.
\] The case $x\geq1$ is trivial because we have the infimum over the empty set (we recall that $\mu_g>0$ because $q_0=0$). For $x<1$, we have \[ J_{\mu_g}(x)=\inf \biggl\{I_{\mathcal{G}_{f,g},g} \biggl(\frac{\mu _g}{1-x},z \biggr):z>0 \biggr\}, \] and we obtain the desired formula by taking into account the expression of the rate function $I_{\mathcal{G}_{f,g},g}$ in Proposition \ref{prop:main}. \end{proof} \begin{remark}[We can have $J_{\mu_g}(x)<\infty$ for some $x<0$]\label {rem:rf-in-propminorestimator-could-be-finite-for-negative-arguments} We know that, for $J_{\mathcal{G}_{f,g},g}$ in Proposition \ref{prop:main-estimators}, we have $J_{\mathcal{G}_{f,g},g}(x)=\infty$ for $x\notin[0,1)$. On the contrary, as we now show, we can have $J_{\mu_g}(x)<\infty$ for some $x<0$. In order to explain this fact, we denote by $r_{\mathrm{min}}$ the minimum value of $r$ such that $q_r>0$; then we have $\mu_g\geq r_{\mathrm{min}}$; moreover, we have $\mu_g>r_{\mathrm{min}}$ if $q_{r_{\mathrm{min}}}<1$. In conclusion, we can say that if $\mu_g>r_{\mathrm{min}}$, then the range of negative values of $x$ such that $J_{\mu_g}(x)<\infty$ is given by \begin{equation} \label{eq:range-of-negative-x} x\geq1-\frac{\mu_g}{r_{\mathrm{min}}}; \end{equation} in fact, for $x<1$, both $I_f (\frac{\frac{\mu_g}{1-x}-z}{\frac{\mu_g}{1-x}} )$ and $I_g(z)$ are finite for $z\in [r_{\mathrm{min}},\frac{\mu_g}{1-x}]$, and therefore we can say that $J_{\mu_g}(x)<\infty$ if $r_{\mathrm{min}}\leq\frac{\mu_g}{1-x}$ or, equivalently, if \eqref{eq:range-of-negative-x} holds. \end{remark} \begin{remark}[Estimators of $\mu_f$ when $\mu_f=0$]\label{rem:no-offsprings} If $\mu_f=0$, that is, $f(s)=1$ for all $s$ or, equivalently, $p_0=1$, then the rate function in Proposition \ref{prop:main-estimators} is \[ J_{\mathcal{G}_{f,g},g}(x)= \left\{ \begin{array}{@{}ll} 0&\ \mbox{if}\ x=0,\\ \infty&\ \mbox{otherwise}. \end{array} \right. \] Then it is easy to check that $J_{\mathcal{G}_{f,g},g}$ coincides with $I_f$, and therefore $J_{\mathcal{G}_{f,g},g}$ coincides with $J_{\mathcal{G}_{f,g_\diamond},g_\diamond}$ in \eqref{eq:main-estimators-rf-deterministic-initial-population} \textup{(}note that, in particular, we cannot have the strict inequalities in \eqref{eq:local-strict-inequality-between-rf} in Remark \ref{rem:comparison}, stated for the case $\mu_f>0$\textup{)}. Finally, if $\mu_f=0$ \textup{(}and, as usual, $q_0=0$, so that $\mu_g>0$\textup{)}, then we have $z=\frac{\mu_g}{1-x}$ in the variational formula of the rate function in Proposition \ref{prop:minor-estimators}, and therefore \begin{equation} \label{eq:rf-prop-minor-estimators-muf=0} J_{\mu_g}(x)= \left\{ \begin{array}{@{}ll} I_g (\frac{\mu_g}{1-x} )&\ \mbox{if}\ 1-\frac{\mu_g}{r_{\mathrm {min}}}\leq x<1,\\ \infty&\ \mbox{otherwise}. \end{array} \right. \end{equation} Note that the rate function in \eqref{eq:rf-prop-minor-estimators-muf=0} can also be derived by combining the contraction principle and the rate function $I_g$ for the empirical means $\{\bar{Z}_n:n\geq1\}$; in fact, we have $ \{\frac{\bar{Y}_n-\mu_g}{\bar{Y}_n}:n\geq 1 \}= \{\frac{\bar{Z}_n-\mu_g}{\bar{Z}_n}:n\geq 1 \}$, and the rate function $I_g$ is good by the hypotheses of Proposition \ref{prop:minor-estimators} \textup{(}see Proposition \ref{prop:main} and Remark \ref{rem:implicit-hypotheses-for-prop-main}\textup{)}. Finally, we also note that inequality \eqref{eq:range-of-negative-x} appears in the rate function expression~\eqref{eq:rf-prop-minor-estimators-muf=0}.
\end{remark} \section*{Acknowledgments} The authors thank a referee for suggesting shorter proofs of Propositions \ref{prop:main-unitary-initial-population} and \ref{prop:main}. The support of GNAMPA (INDAM) is acknowledged.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Named Entity Recognition (NER) is the task of detecting mentions of real-world entities in text and classifying them into predefined types (e.g., locations, persons, organizations). It is a core task in knowledge extraction and is important to various downstream applications such as user interest modeling \citep{karatay2015user}, question answering \citep{khalid2008impact} and dialogue systems \citep{bowden2018slugnerds}. Traditional approaches to NER mainly train statistical sequence models, such as the Hidden Markov Model (HMM) \citep{zhou2002named} and the Conditional Random Field (CRF) \citep{lafferty2001conditional}, based on hand-crafted features. To alleviate the burden of designing hand-crafted features, deep learning models \citep{ma2016end,huang2015bidirectional} have been proposed for NER and have shown strong performance. However, most deep learning methods rely on large amounts of labeled training data. As NER tasks require token-level labels, annotating a large number of documents can be expensive, time-consuming, and prone to human errors. In many real-life scenarios, the lack of labeled data has become the biggest bottleneck that prevents deep learning models from being adopted for NER tasks. To tackle the label scarcity issue, one approach is to use distant supervision to generate labels automatically. In distant supervision, the labeling procedure matches the tokens in the target corpus against concepts in knowledge bases (e.g. Wikipedia\footnote{\url{https://www.wikipedia.org/}} and YAGO\footnote{\url{https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/}}), which are usually easy and cheap to access. Nevertheless, the labels generated by the matching procedure suffer from two major challenges. The first challenge is \emph{incomplete annotation}, which is caused by the limited coverage of existing knowledge bases. Take two common open-domain NER datasets as examples. From Table~\ref{tab:comp_distantlabel}, we find that the coverage of tokens on both datasets is very low (less than 60\%). This issue renders many entity mentions unmatched and produces many false-negative labels, which can hurt subsequent NER model training significantly. The second challenge is \emph{noisy annotation}. The annotation is often noisy due to labeling ambiguity -- the same entity mention can be mapped to multiple entity types in the knowledge bases. For instance, the entity mention '\emph{Liverpool}' can be mapped to both '\emph{Liverpool City}' (type: \texttt{LOC}) and '\emph{Liverpool Football Club}' (type: \texttt{ORG}) in the knowledge base. Existing methods adopt label induction rules based on type popularity, which potentially introduce a matching bias toward popular types; consequently, they can produce many false-positive samples and hurt the performance of NER models. To make matters worse, there is often a trade-off between label accuracy and coverage: generating high-quality labels requires strict matching rules, which may not generalize well to all tokens and thus reduce the coverage and introduce false-negative labels. On the other hand, increasing the coverage of the annotation comes at the cost of an increasing number of incorrect labels due to label ambiguity. In light of the above, it remains very challenging to generate high-quality labels with high coverage for the target corpus.
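To fix ideas, the gazetteer-matching step that produces distant labels can be as simple as the following sketch (a minimal greedy longest-match illustration with an invented toy gazetteer; practical pipelines also employ regular expressions and POS-tag constraints, as discussed in Section 2):
\begin{verbatim}
def distant_bio_labels(tokens, gazetteer):
    """Greedy longest-match of token spans against a {phrase: type}
    gazetteer, emitting BIO tags. Unmatched tokens default to 'O' --
    which is precisely how limited KB coverage turns unmatched entity
    mentions into false-negative labels."""
    labels = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        for j in range(len(tokens), i, -1):          # longest span first
            etype = gazetteer.get(" ".join(tokens[i:j]).lower())
            if etype is not None:
                labels[i] = "B-" + etype
                for k in range(i + 1, j):
                    labels[k] = "I-" + etype
                i = j
                break
        else:                                        # no span matched at i
            i += 1
    return labels

# Invented toy gazetteer; note the 'Liverpool' ambiguity discussed above:
gaz = {"liverpool": "ORG", "new york": "LOC"}
print(distant_bio_labels("Liverpool won in New York".split(), gaz))
# -> ['B-ORG', 'O', 'O', 'B-LOC', 'I-LOC']
\end{verbatim}
Greedy longest match is only one possible convention; the point of the toy output is that every token the gazetteer does not know ends up as '\texttt{O}', regardless of whether it is truly a non-entity.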
Several studies have attempted to address the above challenges in distantly-supervised NER. To address the label incompleteness issue, some works adopt partial-annotation CRFs to consider all possible labels for unlabeled tokens~\citep{yang2018distantly,shang2018learning}, but they still require a considerable amount of annotated tokens or external tools. To address the label noise issue, \citeauthor{ni2017weakly} \cite{ni2017weakly} use heuristic rules to filter out sentences with low matching quality. However, this filtering strategy improves the precision at the expense of lowering the recall. \citeauthor{cao2019low} \cite{cao2019low} attempt to induce labels for entity mentions based on their occurrence popularity in the concept taxonomy, which can suffer from labeling bias and produce mislabeled data. Moreover, most of these methods focus on NER tasks in specific domains (e.g. biomedicine, chemistry, etc.) where the ambiguity of named entities is very low. When the matching ambiguity issue is more severe, such methods become less effective, especially in open-domain scenarios. To date, training \emph{open-domain} NER models with distant supervision remains a challenging problem. We propose our model {\sf BOND}\xspace, short for \textbf{B}ERT-Assisted \textbf{O}pen-Domain \textbf{N}amed entity recognition with \textbf{D}istant Supervision, which learns accurate named entity taggers from distant supervision without any restriction on the domain or the content of the corpora. To address the challenges in learning from distant supervision, our approach leverages the power of pre-trained language models (e.g., ELMo \citep{peters2018deep}, BERT \citep{devlin2018bert}, XLnet \citep{yang2019xlnet}), which are particularly attractive for this task due to the following merits: \emph{First}, they are very large neural networks trained on huge amounts of unlabeled data, which can be cheaply obtained, in a \emph{completely unsupervised manner}; \emph{Second}, due to their massive sizes (usually hundreds of millions or billions of parameters), they have \emph{strong expressive power} to capture general semantic and syntactic information effectively. These language models have achieved state-of-the-art performance on many popular NLP benchmarks with appropriate fine-tuning~\citep{devlin2018bert,liu2019roberta,yang2019xlnet,Lan2020ALBERT,raffel2019exploring}, which demonstrates their strong ability to model text data. To fully harness the power of pre-trained language models for tackling the two challenges, we propose a two-stage training framework. In the first stage, we fine-tune the RoBERTa model~\citep{liu2019roberta} with distantly-matched labels to transfer the semantic knowledge in RoBERTa, which improves the quality of the predictions induced from distant supervision. It is worth noting that we adopt early stopping to prevent the model from overfitting to the incompletely annotated labels\footnote{Here the incompletely annotated labels refer to tokens wrongly labeled as type '\texttt{O}'.} and significantly improve the recall. Then we use the RoBERTa model to predict a set of pseudo soft-labels for all the data. In the second stage, we replace the distantly-matched labels with the pseudo soft-labels and design a \emph{teacher-student} framework to further improve the recall. The \emph{student} model is first initialized by the model learned in the first stage and trained using the pseudo soft-labels.
Then, we update the \emph{teacher} model from the \emph{student} model in the previous iteration to generate a new set of pseudo-labels for the next iteration, which continues the training of the \emph{student} model. This \textit{teacher-student} framework enjoys the merit that it progressively improves the model confidence over the data. In addition, we select samples based on the prediction confidence of the \emph{student} model to further improve the quality of the soft labels. In this way, we can better exploit both the knowledge base information and the language models, and improve the model fitting. Our proposed method is closely related to low-resource NER and semi-supervised learning. We discuss more details in Section~\ref{sec:discussion}.
We summarize the key contributions of our work as follows:
\vspace{0.05in} \noindent $\bullet$ We demonstrate that the pre-trained language model can provide additional semantic information during the training process and reduce the label noise for distantly-supervised named entity recognition. To the best of our knowledge, this is the first work that leverages the power of pre-trained language models for open-domain NER tasks with distant supervision.
\vspace{0.05in} \noindent $\bullet$ We design a two-stage framework to fully exploit the power of language models in our task. Specifically, we refine the distant labels iteratively with the language model in the first stage and improve the model fitting under the teacher-student framework in the second stage, which addresses the challenges of noisy and incomplete annotation.
\vspace{0.05in} \noindent $\bullet$ We conduct comprehensive experiments on 5 datasets for named entity recognition with distant supervision. Our proposed method outperforms state-of-the-art distantly supervised NER competitors on all 5 datasets (4 of them by significant margins).
\section{Preliminaries} \newcommand{\eg}{\emph{e.g.}\xspace}
We briefly introduce the distantly-supervised NER problem and pre-trained language models.
\subsection{Distantly Supervised NER}
NER is the process of locating and classifying named entities in text into predefined entity categories, such as person names, organizations, locations, etc. Formally, given a sentence with $N$ tokens $\bX=[x_{1}, ..., x_{N}]$, an entity is a span of tokens $\bs = [x_i, ..., x_j] \ (1 \leq i \leq j \leq N)$ associated with an entity type. Based on the \texttt{BIO} schema~\citep{li2012joint}, NER is typically formulated as a sequence labeling task of assigning a sequence of labels $\bY = [y_{1}, ..., y_{N}]$ to the sentence $\bX$. Specifically, the first token of an entity mention with type \texttt{X} is labeled as \texttt{B-X}; the other tokens inside that entity mention are labeled as \texttt{I-X}; and the non-entity tokens are labeled as \texttt{O}. For (fully) supervised NER, we are given $M$ sentences that are already annotated at the token level, denoted as $\{(\bX_m,\bY_m)\}_{m=1}^M$. Let $f(\bX;\theta)$ denote an NER model, which computes $N$ probability simplexes for predicting the entity labels of any new sentence $\bX$, where $\theta$ is the parameter of the NER model. We train such a model by minimizing the following loss over $\{(\bX_m,\bY_m)\}_{m=1}^M$:
\begin{align}\label{supervised-NER} \hat\theta = \argmin_{\theta} \frac{1}{M}\sum_{m=1}^{M} \ell(\bY_m, f(\bX_m; \theta)), \end{align}
where $\ell(\cdot, \cdot)$ is the cross-entropy loss.
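As a concrete illustration of this token-level formulation, the following minimal PyTorch sketch computes the cross-entropy loss of Eq.~\eqref{supervised-NER} for one sentence; the shapes and class count are illustrative assumptions, not our implementation:
\begin{verbatim}
import torch
import torch.nn.functional as F

def ner_loss(logits, labels):
    # logits: (N, C) per-token unnormalized scores from f(X; theta)
    # labels: (N,)  integer ids of the BIO labels
    return F.cross_entropy(logits, labels)  # averaged over the N tokens

N, C = 6, 9                         # e.g., 4 entity types in BIO plus O
logits = torch.randn(N, C)
labels = torch.randint(0, C, (N,))
print(ner_loss(logits, labels))
\end{verbatim}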
For distantly-supervised NER, we do not have access to well-annotated true labels, but only \emph{distant labels} generated by matching unlabeled sentences with external gazetteers or knowledge bases (KBs). The matching can be achieved by string matching \citep{giannakopoulos-etal-2017-unsupervised}, regular expressions \citep{DBLP:journals/corr/Fries0RR17} or heuristic rules (e.g., POS tag constraints). Accordingly, we learn an NER model by minimizing Eq. \eqref{supervised-NER} with $\{\bY_m\}_{m=1}^M$ replaced by their distantly labeled counterparts.
\vspace{1ex} \noindent \textbf{Challenges.} The labels generated by distant supervision are often noisy and incomplete. This is particularly true for open-domain NER, where there is no restriction on the domain or the content of the corpora. \citet{DBLP:journals/corr/Fries0RR17} and \citet{giannakopoulos-etal-2017-unsupervised} have proposed distantly-supervised NER methods for specific domains (\eg, the biomedical domain), where the adopted domain-specific gazetteers or KBs are often of high matching quality and yield high-precision and high-recall distant labels. For the open domain, however, the quality of the distant labels is much worse, as there is more ambiguity and limited coverage over entity types in open-domain KBs. Table \ref{tab:comp_distantlabel} illustrates the matching quality of distant labels on open-domain and biomedical-domain datasets. As can be seen, the distant labels for the open-domain datasets suffer from much lower precision and recall. This poses great challenges for training accurate NER models.
\begin{table}[tb!] \centering
\caption{Existing Gazetteer Matching Performance on Open-Domain \citep{sang2003introduction, strauss2016results} and Biomedical-Domain NER Datasets \citep{shang2018learning}. }
\vspace{+0.1in}
\begin{tabular}{|c | c | c | c | c|}
\hline
\multirow{2}{*}{Metric} & \multicolumn{2}{c|}{Open-Domain} & \multicolumn{2}{c|}{Biomedical Domains} \\
\cline{2-5}
 & CoNLL03 & Tweet & BC5CDR & NCBI-Disease \\
\hline
Entity Types & 4 & 10 & 2 & 1 \\
\hline
$F_1$ & 59.61 & 35.83 & 71.98 & 69.32\\
Precision & 71.91 & 40.34 & 93.93 & 90.59 \\
Recall & 50.90 & 32.22 & 58.35 & 56.15\\
\hline
\end{tabular}
\vspace{-0.1in}
\label{tab:comp_distantlabel}
\end{table}
\subsection{Pre-trained Language Models}
Pre-trained language models, such as BERT and its variants (\eg, RoBERTa \citep{liu2019roberta}, ALBERT \citep{Lan2020ALBERT} and T5 \citep{raffel2019exploring}), have achieved state-of-the-art performance in many natural language understanding tasks \citep{jiang2019smart}. These models are essentially massive neural networks based on bi-directional transformer architectures, trained on open-domain data in a completely unsupervised manner. The stacked self-attention modules of the transformer architecture can capture deep contextual information, and their non-recurrent structure enables the training to scale to large amounts of open-domain data. For example, the popular BERT-base model contains 110 million parameters and is trained using the BooksCorpus~\citep{zhu2015aligning} (800 million words) and English Wikipedia (2500 million words). More importantly, many pre-trained language models are publicly available online; one does not need to train them from scratch.
When applying pre-trained language models to downstream tasks, one only needs to slightly modify the model and adapt it through efficient and scalable stochastic gradient-type algorithms.
\section{Two-Stage Framework: {\sf BOND}\xspace}
We introduce our proposed two-stage framework--{\sf BOND}\xspace. In the first stage of {\sf BOND}\xspace, we adapt the BERT model to the distantly supervised NER task. In the second stage, we use a self-training approach to improve the model fitting to the training data. We summarize the {\sf BOND}\xspace framework in Figure~\ref{fig:FlowChart}.
\begin{figure*}[ht] \centering \includegraphics[width=1\textwidth]{figure/methodology/crop_flowchart} \vspace{-0.3in}
\caption{The two-stage {\sf BOND}\xspace framework. In Stage I, the pre-trained BERT is adapted to the distantly supervised NER task with early stopping. In Stage II, a student model and a teacher model are first initialized from the model learned in Stage I. Then the student model is trained using pseudo-labels generated by the teacher model. Meanwhile, the teacher model is iteratively updated by the early-stopped student. }
\label{fig:FlowChart}
\end{figure*}
\subsection{Stage I: BERT-Assisted Distantly Supervised Learning with Early Stopping}
Before proceeding with our proposed method, we briefly describe how we generate distant labels for open-domain NER tasks. Our label generation scheme contains two steps: We first identify potential entities by POS tagging and hand-crafted rules, and query Wikidata using SPARQL \citep{vrandevcic2014wikidata} to identify the types of these entities, as illustrated in Figure~\ref{fig:wikimatch_small}. We then collect gazetteers from multiple online resources to match more entities in the data \citep{sang2003introduction}. Please refer to the appendix for more technical details.
\begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{figure/wikimatch_small} \vspace{-0.15in}
\caption{Illustration of matching entities from Wikidata}
\label{fig:wikimatch_small}
\end{figure}
We now proceed with our proposed method. We use $f(\cdot; \theta)$ to denote the NER model parameterized by $\theta$, $f_{n,c}(\cdot; \cdot)$ to denote the probability of the $n$-th token belonging to the $c$-th class, and $\{(\bX_m,\bD_m)\}_{m=1}^M$ to denote the distantly labeled data, where $\bD_m = [d_{m,1}, ..., d_{m,N}]$ and $\bX_m = [x_{m,1}, ..., x_{m,N}]$. The NER model $f(\cdot; \theta)$ is learned by minimizing the loss over $\{(\bX_m,\bD_m)\}_{m=1}^M$:
\begin{align} \hat\theta = \argmin_{\theta}\frac{1}{M}\sum_{m=1}^{M} \ell(\bD_m, f(\bX_{m}; \theta)), \label{eq:stage1} \end{align}
where $\ell(\bD_m, f(\bX_{m}; \theta)) = \frac{1}{N} \sum_{n=1}^{N} -\log{f_{n,d_{m, n}}(\bX_{m}; \theta)}$. The architecture of the NER model $f(\cdot; \theta)$ is a token-wise NER classifier on top of a pre-trained BERT, as shown in Figure~\ref{fig:Architecture}. The NER classifier takes in the token-wise output embeddings from the pre-trained BERT layers, and gives the prediction of the type for each token. The pre-trained BERT contains rich semantic and syntactic knowledge, and yields high-quality output embeddings. Using such embeddings as the initialization, we can efficiently adapt the pre-trained BERT to the target NER task using stochastic gradient-type algorithms, e.g., ADAM \citep{kingma2014adam,Liu2019}. Following \cite{raffel2019exploring}, our adaptation process updates the entire model, including both the NER classification layer and the pre-trained BERT layers.
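As a rough illustration of this adaptation step (formalized in Algorithm~\ref{alg:main1} below), the following PyTorch-style sketch fine-tunes a token classifier for a fixed number of steps and then stops; the Hugging Face \texttt{transformers} API and the dummy random batches are assumptions made for illustration, not the authors' implementation:
\begin{verbatim}
import torch
from transformers import AutoModelForTokenClassification

C = 9  # e.g., 4 entity types in BIO format plus O
model = AutoModelForTokenClassification.from_pretrained(
    "roberta-base", num_labels=C)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def dummy_batches(batch_size=4, seq_len=32):
    # stand-in for minibatches of distantly labeled sentences
    while True:
        yield {"input_ids": torch.randint(0, 50265, (batch_size, seq_len)),
               "attention_mask": torch.ones(batch_size, seq_len,
                                            dtype=torch.long),
               "labels": torch.randint(0, C, (batch_size, seq_len))}

T1 = 900  # early stopping time: do not train to convergence
for step, batch in zip(range(T1), dummy_batches()):
    loss = model(**batch).loss  # token-level cross-entropy on distant labels
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
# the early-stopped parameters play the role of theta-hat in Stage II
\end{verbatim}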
\begin{figure}[ht] \centering \includegraphics[width=0.7\textwidth]{figure/methodology/crop_transfer_model} \vspace{-0.15in}
\caption{Pre-trained Masked Language Model vs. NER Model}
\label{fig:Architecture}
\end{figure}
\begin{algorithm}[ht]
\KwIn{$M$ unlabeled sentences, $\{\bX_m\}_{m=1}^M$; External KBs including Wikidata and multi-source gazetteers; The NER model with pre-trained BERT layers $f(\cdot; \theta^{(0)})$; The early stopping time $T_1$; The updating formula of ADAM $\mathcal{T}$.}
\textbf{// Distant Label Generation (DLG)}
\begin{align*} \{\bD_m\}_{m=1}^M = \textrm{Matching}(\{\bX_m\}_{m=1}^M; \textrm{External KBs}) \end{align*}
\textbf{// Model Adaptation} \\
\For{$t = 1, 2, ..., T_1$}{
Sample a minibatch $\cB_t$ from $\{(\bX_m,\bD_m)\}_{m=1}^M$.\\
Update the model using ADAM:\\
\quad\quad\quad\quad\quad\quad$ \theta^{(t)} = \cT(\theta^{(t-1)}, \cB_t) . $
}
\KwOut{The early stopped model: $\hat\theta = \theta^{(T_1)}$}
\caption{Stage I: BERT-Assisted Distantly Supervised Learning with Early Stopping}
\label{alg:main1}
\end{algorithm}
\begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{figure/methodology/crop_stage1} \vspace{-0.15in}
\caption{ Illustration of Stage I. Top) The pre-trained semantic knowledge is transferred to the NER task; Middle) Early stopping leverages the pre-trained knowledge and yields better predictions; Bottom) Without early stopping, the model overfits the noise. The token embeddings evolve as we update the pre-trained BERT layers. }
\label{fig:overfit}
\end{figure}
Figure~\ref{fig:overfit} illustrates how the pre-trained BERT embeddings help the model adapt to distantly supervised NER tasks. We highlight that BERT is pre-trained through a masked language model (MLM) task, and is capable of predicting missing words using contextual information. Such an MLM task shares a lot of similarity with the NER task: both are token-wise classification problems and heavily rely on contextual information (see Figure~\ref{fig:Architecture}). This naturally enables the semantic knowledge of the pre-trained BERT to be transferred to the NER task. Therefore, the resulting model can better predict the entity types than those trained from scratch using only the distantly labeled data.
\vspace{1ex}\noindent \textbf{Early Stopping.} One important strategy we use in the adaptation process is early stopping. Due to the large model capacity as well as the limited and noisy supervision (distant labels), our NER model can overfit the noise in the distant labels and forget the knowledge of the pre-trained BERT without any intervention. Early stopping essentially serves as a strong regularization to prevent such overfitting and improves the generalization ability to unseen data.
\begin{remark} Stage I addresses both of the two major challenges in distantly supervised NER tasks: noisy annotation and incomplete annotation. As the semantic knowledge in the pre-trained BERT is transferred to the NER model, the noise is suppressed such that the prediction precision is improved. Moreover, early stopping prevents the model from overfitting to the incompletely annotated labels and further improves the recall. \end{remark}
\subsection{Stage II: Self-Training}
We first describe a teacher-student framework of self-training to improve the model fitting, and then we propose to use high-confidence soft labels to further improve the self-training.
\subsubsection{The Teacher-student Framework}
We use $f(\cdot; \theta_{\textrm{tea}})$ and $f(\cdot; \theta_{\textrm{stu}})$ to denote the teacher and student models, respectively. Given the model learned in Stage I, $f(\cdot; \hat\theta)$, one option is to initialize the teacher model and the student model as: $$\theta_{\textrm{tea}}^{(0)} = \theta_{\textrm{stu}}^{(0)} = \hat\theta,$$ and another option is
\begin{align}\label{re-init} \theta_{\textrm{tea}}^{(0)} = \hat\theta\quad\textrm{and}\quad\theta_{\textrm{stu}}^{(0)}=\theta_{\textrm{BERT}}, \end{align}
where $\theta_{\textrm{BERT}}$ denotes the initial model with the pre-trained BERT layers used in Stage I. For simplicity, we refer to the second option as ``re-initialization''. At the $t$-th iteration, the teacher model generates pseudo-labels $\{\tilde{\bY}^{(t)}_m = [\tilde{y}_{m,1}^{(t)}, ..., \tilde{y}_{m,N}^{(t)}]\}_{m=1}^{M}$ by
\begin{align} \tilde{y}_{m,n}^{(t)} = \argmax_{c}{f_{n,c}(\bX_m; \theta_{\textrm{tea}}^{(t)})}. \label{eq:pseudo} \end{align}
Then the student model fits these pseudo-labels. Specifically, given the teacher model $f(\cdot;\theta_{\textrm{tea}}^{(t)})$, the student model is learned by solving
\begin{align} \hat\theta_{\textrm{stu}}^{(t)} = \argmin_{\theta}\frac{1}{M}\sum_{m=1}^M \ell(\tilde{\bY}_m^{(t)}, f(\bX_{m}; \theta)). \label{eq:self_train1} \end{align}
We then use ADAM to optimize Eq. \eqref{eq:self_train1} with early stopping. At the end of the $t$-th iteration, we update the teacher model and the student model by:
\begin{gather} \theta_{\textrm{tea}}^{(t+1)} = \theta_{\textrm{stu}}^{(t+1)} = \hat\theta_{\textrm{stu}}^{(t)}. \notag \end{gather}
The algorithm is summarized in Algorithm~\ref{alg:main2}.
\begin{algorithm}[ht]
\KwIn{$M$ training sentences, $\{\bX_m\}_{m=1}^M$; The early stopped model obtained in Stage I, $f(\cdot; \hat\theta)$; The number of self-training iterations $T_2$; The early stopping time $T_3$; The updating formula of ADAM $\mathcal{T}$.}
Initialize the teacher model and the student model: \[\theta_{\textrm{tea}}^{(0)} = \theta_{\textrm{stu}}^{(0)} = \hat\theta.\]
\For{$t = 1, 2, ..., T_2$}{
$\theta_{\textrm{stu}}^{(t,0)} = \theta_{\textrm{stu}}^{(t-1)}.$\\
\For{$k = 1, 2, ..., T_3$}{
Sample a minibatch $\cB_k$ from $\{\bX_m\}_{m=1}^M$.\\
Generate pseudo-labels $\{\tilde{\bY}_m\}_{m \in \cB_k}$ by Eq. \eqref{eq:pseudo} using the teacher $\theta_{\textrm{tea}}^{(t-1)}$.\\
Update the student model:\\
\quad\quad\quad$ \theta_{\textrm{stu}}^{(t,k)} = \mathcal{T}(\theta_{\textrm{stu}}^{(t,k-1)},\{(\bX_m, \tilde{\bY}_m)\}_{m \in \cB_k}). $
}
Update the teacher and the student:\\
\quad\quad\quad\quad\quad\quad$ \theta_{\textrm{tea}}^{(t)} = \theta_{\textrm{stu}}^{(t)} = \theta_{\textrm{stu}}^{(t,T_3)}. $
}
\KwOut{The final student model: $\theta_{\textrm{stu}}^{(T_2)}$}
\caption{Stage II: Self-Training}
\label{alg:main2}
\end{algorithm}
\begin{remark} Note that we discard all pseudo-labels from the $(t\textrm{-}1)$-th iteration, and only train the student model using pseudo-labels generated by the teacher model at the $t$-th iteration. Combined with early stopping, such a self-training approach can improve the model fitting and reduce the noise of the pseudo-labels, as illustrated in Figure~\ref{fig:stage2}. With progressive refinement of the pseudo-labels, the student model can gradually exploit the knowledge in the pseudo-labels and avoid overfitting. \end{remark}
\begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{figure/methodology/Explain_stage2.png} \vspace{-0.15in}
\caption{Illustration of self-training.
The self-training can gradually reduce the noise of the pseudo-labels and improve the model fitting. }
\label{fig:stage2}
\end{figure}
\begin{remark} Our teacher-student framework is quite general, and can be naturally combined with other training techniques, e.g., mean teacher \citep{tarvainen2017mean} and virtual adversarial training \citep{miyato2018virtual}. Please refer to Section~\ref{sec:discussion} for more detailed discussions. \end{remark}
\subsubsection{Re-weighted High-Confidence Soft Labels}
The hard pseudo-labels generated by Eq. \eqref{eq:pseudo} keep only the most confident class for each token. To avoid losing too much information about the other classes, we propose to use soft labels with confidence re-weighting. Recall that for the $n$-th token in the $m$-th sentence, the output probability simplex over $C$ classes is denoted as $$[f_{n,1}(\bX_m;\theta),...,f_{n,C}(\bX_m;\theta)].$$ At the $t$-th iteration, the teacher model generates soft pseudo-labels $\{\bS_m^{(t)} = [\bs_{m,n}^{(t)}]_{n=1}^N \}_{m=1}^M$ following~\cite{xie2016unsupervised}:
\begin{align} \bs_{m,n}^{(t)} = [s_{m,n,c}^{(t)}]_{c=1}^{C} = \Bigg[ \frac{f_{n,c}^2(\bX_m;\theta_{\textrm{tea}}^{(t)})/p_{c}}{\sum_{c'=1}^C f_{n,c'}^2(\bX_m;\theta_{\textrm{tea}}^{(t)})/p_{c'}}\Bigg]_{c=1}^{C} \label{eq:soft} \end{align}
where $p_{c} = \sum_{m=1}^M \sum_{n=1}^N f_{n,c}(\bX_m;\theta_{\textrm{tea}}^{(t)})$ is the unnormalized frequency of the tokens belonging to the $c$-th class. As can be seen, the squared re-weighting step in Eq. \eqref{eq:soft} essentially favors the classes with higher confidence. The student model $f(\cdot; \theta_{\textrm{stu}}^{(t)})$ is then optimized by minimizing
\begin{align*} \theta_{\textrm{stu}}^{(t)} &= \argmin_{\theta} \frac{1}{M} \sum_{m=1}^{M} \ell_{\rm KL}(\bS_m^{(t)}, f(\bX_{m}; \theta)), \end{align*}
where $\ell_{\rm KL}(\cdot,\cdot)$ denotes the KL-divergence-based loss:
\begin{align} \ell_{\rm KL}(\bS_m^{(t)}, f(\bX_{m}; \theta))=\frac{1}{N}\sum_{n=1}^N\sum_{c=1}^C - s_{m,n,c}^{(t)} \log f_{n,c}(\bX_{m}; \theta). \label{eq:klloss} \end{align}
\noindent \textbf{High-Confidence Selection.} To further address the uncertainty in the data, we propose to select tokens based on the prediction confidence. Specifically, at the $t$-th iteration, we select a set of high-confidence tokens from the $m$-th sentence by
\begin{align}\label{select-token} H^{(t)}_m = \{n : \max_{c} s_{m,n,c}^{(t)} > \epsilon \}, \end{align}
where $\epsilon\in(0,1)$ is a tuning threshold. Accordingly, the student model $f(\cdot; \theta_{\textrm{stu}}^{(t)})$ can be optimized by minimizing the loss only over the selected tokens:
\begin{align*} \theta_{\textrm{stu}}^{(t)} = \argmin_{\theta} \frac{1}{M}\sum_{m=1}^{M} \frac{1}{|H^{(t)}_m|} \sum_{n\in H^{(t)}_m}\sum_{c=1}^C - s_{m,n,c}^{(t)} \log f_{n,c}(\bX_{m}; \theta). \end{align*}
The high-confidence selection essentially enforces the student model to better fit the tokens with high confidence, and is therefore able to improve the model's robustness against low-confidence tokens.
\section{Experiments} \label{sec:exp}
We conduct a series of experiments to demonstrate the superiority of our proposed method.
\input{setup}
\subsection{Experimental Results}
Our NER model uses RoBERTa-base as the backbone. A linear classification layer is built on top of the RoBERTa-base model. Please refer to the appendix for implementation details.
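Before turning to the results, the following minimal NumPy sketch illustrates the soft-label re-weighting of Eq.~\eqref{eq:soft} and the high-confidence selection of Eq.~\eqref{select-token}; the array shapes and the threshold value are illustrative assumptions, not our implementation:
\begin{verbatim}
import numpy as np

def soft_labels(probs):
    # probs: (num_tokens, C) teacher probabilities over C classes
    p_c = probs.sum(axis=0)            # unnormalized class frequencies
    weighted = probs ** 2 / p_c        # squared re-weighting favours
    return weighted / weighted.sum(axis=1, keepdims=True)  # confident classes

def high_confidence_mask(soft, eps=0.9):
    # keep only tokens whose most confident class exceeds eps
    return soft.max(axis=1) > eps

probs = np.array([[0.70, 0.20, 0.10],
                  [0.40, 0.35, 0.25]])
s = soft_labels(probs)
print(s)
print(high_confidence_mask(s, eps=0.6))  # [ True False]
\end{verbatim}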
\input{exp-main} \input{exp-ablation} \input{exp-parameter} \input{exp-case}
\section{Related Work and Discussion} \label{sec:discussion}
Our work is related to \textbf{low-resource NER}. This line of research focuses on leveraging cross-lingual information to improve model performance. For example, \cite{cotterell-duh-2017-low, ijcai2018-566} consider NER for a low-resource target language. They propose to train an NER model with annotated languages that are closely related to the target language. \cite{xie-etal-2018-neural} propose to use bilingual dictionaries to tackle this challenge. More recently, \cite{DBLP:journals/corr/abs-1902-00193} propose a Bayesian graphical model approach to further improve low-resource NER performance.
\noindent Our work is also relevant to \textbf{semi-supervised learning}, where the training data is only partially labeled. There have been many semi-supervised learning methods, including the popular Mean Teacher and Virtual Adversarial Training methods used in our experiments for comparison \citep{rosenberg2005semi,tarvainen2017mean,miyato2018virtual,meng2018weakly,clark2018semi}. Different from distant supervision, these semi-supervised learning methods usually have a partial set of labeled data. They rely on the labeled data to train a sufficiently accurate model, while the unlabeled data are usually used for inducing certain regularization to further improve the generalization performance. Distant supervision, however, considers the setting with only noisy labels. Existing semi-supervised learning methods such as Mean Teacher and Virtual Adversarial Training can only marginally improve the performance, as shown in the ablation study in our experiments.
\noindent \textbf{Other related works}: \cite{liu2019knowledge} propose a language model-based method --- {\sf KALM}\xspace --- for NER tasks. However, their approach has two drawbacks: (i) Since they design a language model dedicated to NER tasks, they need to first train the language model from scratch, which often requires a large training corpus and enormous computational resources. In contrast, {\sf BOND}\xspace uses general-purpose pre-trained language models, which are publicly available online. (ii) The training of their language model is not fully unsupervised and requires token-level annotations. To address this issue, they resort to distant supervision, which yields incomplete and noisy annotations. Therefore, their language model does not necessarily achieve the desired performance.
\noindent \textbf{Larger Pre-trained Language Models}: To further improve the performance of {\sf BOND}\xspace, we can use larger pre-trained language models such as RoBERTa-large~\citep{liu2019roberta} (three times as big as the RoBERTa-base used in our experiments) and T5~\citep{raffel2019exploring} (thirty times larger than RoBERTa-base). These larger models contain more general semantic and syntactic information, and have the potential to achieve even better performance on NER tasks. Unfortunately, due to the limitation of our computational resources, we are unable to use them in our experiments.
\section*{Appendix} \clearpage
\subsubsection{Ablation Study}
To gain insights into our two-stage framework, we investigate the effectiveness of several components of our method via an ablation study. Table~\ref{tab:ablation} shows the results on both the CoNLL03 and Wikigold datasets.
Our results can be summarized as follows:
\vspace{0.05in} \noindent $\bullet$ For Stage I, \textbf{Pre-trained Language Models} significantly improve both precision and recall on both datasets. Specifically, when training the NER model from scratch, the $F_1$ scores of the output model of Stage I drop from $75.61$ to $36.66$ on CoNLL03, and from $51.55$ to $18.31$ on Wikigold. This verifies that the rich semantic and contextual information in pre-trained RoBERTa has been successfully transferred to our NER model in Stage I.
\vspace{0.05in} \noindent $\bullet$ For Stage I, \textbf{Early stopping} improves both precision and recall on both datasets. When we increase the training iterations from $900$ to $18000$ on CoNLL03 and from $350$ to $7000$ on Wikigold, the $F_1$ scores of the output model of Stage I drop from $75.61$ to $72.11$ on CoNLL03, and from $51.55$ to $49.68$ on Wikigold. This verifies that early stopping mitigates overfitting and improves the generalization ability of our NER model.
\vspace{0.05in} \noindent $\bullet$ For Stage II, \textbf{Soft labels} improve the $F_1$ score and recall on both datasets. Specifically, the $F_1$ scores/recall increase from $77.28/71.98$ to $80.18/78.84$ on CoNLL03, and from $56.90/59.74$ to $58.64/65.79$ on Wikigold. Moreover, the precision on Wikigold is also improved. This verifies that the soft labels preserve more information and yield better-fitted models than the hard labels.
\vspace{0.05in} \noindent $\bullet$ For Stage II, \textbf{High-Confidence Selection} improves the $F_1$ scores on both datasets. Specifically, compared with using soft labels alone, the $F_1$ scores/recall increase from $80.18/78.84$ to $81.48/80.92$ on CoNLL03, and from $58.64/65.79$ to $60.07/68.58$ on Wikigold. Besides, the precision on CoNLL03 is also improved. This verifies that the high-confidence labels help select data and yield more robust performance.
\vspace{0.05in} \noindent $\bullet$ For Stage II, \textbf{Re-initialization} improves both precision and recall only when the hard labels are adopted. We believe that this is because the hard labels lose too much information about the data uncertainty; re-initializing the RoBERTa layers restores semantic and contextual information and can compensate for such loss. In contrast, when soft labels are adopted, \textbf{Re-initialization} deteriorates both precision and recall. We believe that this is because the soft labels retain sufficient information (i.e., the knowledge transferred from RoBERTa and learned from the distant labels). As a result, re-initialization only leads to underfitting on the data.
\begin{table}[htb!]
\caption{Ablation Study: $F_1$ Score (Precision/Recall) (in \%)}
\label{tab:ablation}
\vspace{-0.1in}
\begin{center}
\begin{tabular}{l@{\hspace{0.05in}}c@{\hspace{0.05in}}c}
\toprule
Method & CoNLL03 & Wikigold \\
\hline
\multicolumn{3}{l}{Stage I}\\
\hline
Stage I &$75.61 (83.76/68.90)$&$51.55 (49.17/54.50)$\\
Stage I w/o pre-train &$36.66 (37.49/35.75)$&$18.31 (18.14/18.50)$\\
Stage I w/o early stop &$72.11 (81.65/64.57)$&$49.68 (48.67/50.74)$\\
Stage I w/ MT &$76.30 (82.92/70.67)$&$46.68 (49.82/43.91)$\\
Stage I w/ VAT &$76.38 (82.58/71.04)$&$47.54 (50.02/45.30)$\\
\hline
\multicolumn{3}{l}{Stage I + Stage II}\\
\hline
{\sf BOND}\xspace$^\dagger$ &$77.28 (83.42/71.98)$&$56.90 (54.32/59.74)$\\
{\sf BOND}\xspace w/ soft &$80.18 (81.56/78.84)$&$58.64 (58.29/65.79)$\\
{\sf BOND}\xspace w/ soft+high conf &$81.48 (82.05/80.92)$&$60.07 (53.44/68.58)$\\
{\sf BOND}\xspace w/ reinit &$78.17 (85.05/72.31)$&$58.55 (55.31/62.19)$\\
{\sf BOND}\xspace w/ soft+reinit &$76.92 (83.39/71.38)$&$54.09 (50.72/57.94)$\\
{\sf BOND}\xspace w/ MT &$77.16 (82.79/72.25)$&$57.93 (55.66/60.39)$\\
{\sf BOND}\xspace w/ VAT &$77.64 (85.62/70.69)$&$57.39 (55.05/59.41)$\\
\bottomrule
\end{tabular}
\end{center}
\vspace{-0.1in}
\emph{Note$^\dagger$:} We use {\sf BOND}\xspace to denote our two-stage framework using hard pseudo-labels in this table for clarity.
\end{table}
Moreover, we also consider {\bf Multiple Re-initialization}, and observe similar results; the corresponding learning curves are shown in Figure~\ref{fig:learning_curve}.
\begin{figure*}[!htb] \centering
\begin{tabular}{ @{}c@{ }c@{ } }
\includegraphics[width=0.5\textwidth]{figure/result/crop_f1} & \includegraphics[width=0.5\textwidth]{figure/result/crop_precision} \\
(a) $F_1$ score & (b) Precision \vspace{+0.1in}
\end{tabular}
\begin{tabular}{ @{}c@{}}
\includegraphics[width=0.5\textwidth]{figure/result/crop_recall} \\
(c) Recall
\end{tabular}
\caption{Learning Curves of {\sf BOND}\xspace, {\sf BOND}\xspace (w/ reinit), {\sf BOND}\xspace (w/ soft) and {\sf BOND}\xspace (w/ soft + reinit)}
\label{fig:learning_curve}
\end{figure*}
\noindent $\bullet$ \textbf{Mean Teacher} and \textbf{Virtual Adversarial Training} can be naturally integrated into our versatile teacher-student framework by adding an additional MT teacher or a VAT teacher. \textbf{VAT} marginally improves the $F_1$ scores on both datasets. \textbf{MT} marginally improves the $F_1$ scores on Wikigold, and deteriorates the $F_1$ scores on CoNLL03. We believe that this is because \textbf{MT} and \textbf{VAT} perform well with high-quality labels, whereas the labels in our NER tasks are not very precise.
\subsubsection{Case Study and Error Analysis}
\begin{figure*}[!hbt] \centering
\begin{tabular}{ @{}c@{ }c@{ }c@{} }
\includegraphics[width=0.32\textwidth]{figure/result/crop_hist_distantlabel} & \includegraphics[width=0.32\textwidth]{figure/result/crop_hist_stage1} & \includegraphics[width=0.32\textwidth]{figure/result/crop_hist_stage2} \\
(a) Knowledge Base Matching & (b) Stage I & (c) Stage II
\end{tabular}
\caption{Recall of Knowledge Base Matching and different stages of {\sf BOND}\xspace. The horizontal axis denotes the true entity type. The segments in a bar denote the portions of the entities being classified into different entity types.
}
\label{fig:hist}
\end{figure*}
To demonstrate how {\sf BOND}\xspace improves the recall, we compare the prediction performance of KB matching with the output models of Stage I and Stage II using the Wikigold data. Figure \ref{fig:hist} presents the bar plots for four entity types -- ``\texttt{LOC}'', ``\texttt{PER}'', ``\texttt{ORG}'' and ``\texttt{MISC}''. As can be seen, KB matching yields a large amount of ``\texttt{O}'' (non-entity) labels due to its limited coverage. As a result, the recall is very low ($47.63\%$). In contrast, our Stage I model benefits from the transferred knowledge of pre-trained RoBERTa and is able to correct some wrongly matched \texttt{O}'s to their corresponding entity types. Therefore, it enjoys a better recall ($54.50\%$). Moreover, the self-training in Stage II further improves the recall to $68.58\%$.
\subsubsection{Main Results}
Table~\ref{tab:main_result} presents the $F_1$ scores, precision and recall for all methods. Note that our implementations of the fully supervised NER methods attain performance very close to the state of the art \citep{devlin2018bert,limsopatham2016bidirectional}. Our results are summarized as follows:
\noindent $\bullet$ For all five datasets, our method consistently achieves the best performance under the distant supervision scenario, in $F_1$ score, precision and recall. In particular, our method outperforms the strongest distantly supervised NER baselines by $\{11.74, 21.91, 0.66,$ $14.35, 12.53\}$ in terms of $F_1$ score. These results demonstrate the significant superiority of our proposed method.
\noindent $\bullet$ The standard adaptation of pre-trained language models has already demonstrated remarkable performance. The models obtained by Stage I of our method outperform the strongest distantly supervised NER baselines by $\{5.87, 20.51, 0.42, 7.72, 4.01\}$ in terms of $F_1$ score. Stage II of our method further improves the performance of Stage I by $\{5.87, 1.4, 0.24, 6.63, 8.52\}$.
\noindent $\bullet$ On the CoNLL03 dataset, compared with the baselines which use different sources -- {\sf KALM}\xspace and {\sf ConNET}\xspace -- our model also outperforms them by significant margins. More detailed technical comparisons between our method and these baselines are provided in Section~\ref{sec:discussion}.
\begin{table*}[htb!]
\caption{Main Results on Testing Set: $F_1$ Score (Precision/Recall) (in \%)}
\label{tab:main_result}
\vspace{-0.2in}
\begin{center} \small
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lccccc}
\hline
Method & CoNLL03 & Tweet & OntoNotes5.0 & Webpage & Wikigold \\
\hline
\textbf{Entity Types} & 4 & 10 & 18 & 4 & 4 \\
\hline
KB Matching &$71.40 (81.13/63.75)$&$35.83 (40.34/32.22)$&$59.51 (63.86/55.71)$&$52.45 (62.59/45.14)$&$47.76 (47.90/47.63)$\\
\hline
\multicolumn{6}{l}{\textbf{Fully-Supervised} (Our implementation)}\\
RoBERTa &$90.11 (89.14/91.10)$&$52.19 (51.76/52.63)$&$86.20 (84.59/87.88)$&$72.39 (66.29/79.73)$&$86.43 (85.33/87.56)$\\
BiLSTM-CRF &$91.21 (91.35/91.06)$&$52.18 (60.01/46.16)$&$86.17 (85.99/86.36)$&$52.34 (50.07/54.76)$&$54.90 (55.40/54.30)$\\
\hline
\multicolumn{6}{l}{\textbf{Baseline} (Our implementation)}\\
BiLSTM-CRF &$59.50 (75.50/49.10)$&$21.77 (46.91/14.18)$&$66.41 (68.44/64.50)$&$43.34 (58.05/34.59)$&$42.92 (47.55/39.11)$\\
{\sf AutoNER}\xspace &$67.00 (75.21/60.40)$&$26.10 (43.26/18.69)$&$67.18 (64.63/69.95)$&$51.39 (48.82/54.23)$&$47.54 (43.54/52.35)$\\
{\sf LRNT}\xspace &$69.74 (79.91/61.87)$&$23.84 (46.94/15.98)$&$67.69 (67.36/68.02)$&$47.74 (46.70/48.83)$&$46.21 (45.60/46.84)$\\
\hline
\multicolumn{6}{l}{\textbf{Other Baseline} (Reported Results)}\\
{\sf KALM}\xspace $^\dagger$ &$\hspace{-0.025in}76.00 (\hspace{0.085in}$-\!-\!-$\hspace{0.085in}/\hspace{0.085in}$-\!-\!-$\hspace{0.085in})$&-\!-\!-&-\!-\!-&-\!-\!-&-\!-\!-\\
{\sf ConNET}\xspace$^\diamond$ &$75.57 (84.11/68.61)$&-\!-\!-&-\!-\!-&-\!-\!-&-\!-\!-\\
\hline
\multicolumn{6}{l}{\textbf{Our {\sf BOND}\xspace Framework}}\\
Stage I &$75.61 (83.76/68.90)$&$46.61 (53.11/41.52)$&$68.11 (66.71/69.56)$&$59.11 (60.14/58.11)$&$51.55 (49.17/54.50)$\\
{\sf BOND}\xspace &${81.48} (82.05/80.92)$&${48.01} (53.16/43.76)$&$68.35 (67.14/69.61)$&${65.74} (67.37/64.19)$&${60.07} (53.44/68.58)$\\
\hline
\end{tabular}%
}
\end{center}
\vspace{-0.1in}
\emph{Note:} $^\dagger$: {\sf KALM}\xspace achieves better performance when using extra data. $^\diamond$: {\sf ConNET}\xspace studies NER under a crowd-sourcing setting, where the best human annotator achieves an $F_1$ score of $89.51$.
\end{table*}
\subsubsection{Parameter Study}
We investigate the effects of the early stopping time of Stage I -- $T_1$, the early stopping time of Stage II -- $T_3$, and the confidence threshold $\epsilon$ for selecting tokens, using the CoNLL03 data. The default values are $T_1 = 900, T_3 = 1800, \epsilon = 0.9$. The results are summarized in Figure~\ref{fig:parameter_study}:
\noindent $\bullet$ Both $T_1$ and $T_3$ reflect trade-offs between precision and recall in Stage I and Stage II, respectively. This verifies the importance of early stopping. The model performance is sensitive to $T_1$, and less sensitive to $T_3$.
\noindent $\bullet$ The recall increases along with $\epsilon$. The precision shows a different behavior: it first decreases and then increases.
\noindent $\bullet$ We also consider a scenario where $T_3$ is tuned separately for each iteration of Stage II. This requires more computational resources than the setting where $T_3$ remains the same for all iterations, but can further improve the model performance to $83.49$, $84.09$ and $82.89$ in terms of $F_1$ score, precision and recall, respectively.
\begin{figure*}[!htb] \centering
\begin{tabular}{ @{}c@{ }c@{ }c@{} }
\includegraphics[width=0.32\textwidth]{figure/result/crop_parameter_study_T1} & \includegraphics[width=0.32\textwidth]{figure/result/crop_parameter_study_T3} & \includegraphics[width=0.32\textwidth]{figure/result/crop_parameter_study_conf} \\
(a) The Early Stopping Time & (b) The Early Stopping Time & (c) The Confidence Threshold \\
of Stage I -- $T_1$ & in Stage II -- $T_3$ & of Stage II -- $\epsilon$
\end{tabular}
\caption{Parameter Study using CoNLL03: $F_1$, Precision, Recall on Testing Set (in \%)}
\label{fig:parameter_study}
\end{figure*}
\subsection{Experimental Setup}
\subsubsection{Datasets}
We consider the following NER benchmark datasets: (i) \textbf{CoNLL03} \citep{tjongkimsang2003conll} is a well-known open-domain NER dataset from the CoNLL 2003 Shared Task. It consists of 1393 English news articles and is annotated with four entity types: person, location, organization, and miscellaneous. (ii) \textbf{Twitter} \citep{godin2015multimedia} is from the WNUT 2016 NER shared task. This is an open-domain NER dataset that consists of 2400 tweets (comprising 34k tokens) with 10 entity types. (iii) \textbf{OntoNotes5.0} \citep{weischedel2013ontonotes} contains text documents from multiple domains, including broadcast conversation, P2.5 data and Web data. It consists of around 1.6 million words and is annotated with 18 entity types. (iv) \textbf{Wikigold} \citep{balasuriya2009named} is a set of Wikipedia articles (40k tokens) randomly selected from a 2008 English dump and manually annotated with the four CoNLL03 entity types. (v) \textbf{Webpage} \citep{ratinov2009design} is an NER dataset that contains personal, academic, and computer science conference webpages. It consists of 20 webpages that cover 783 entities belonging to the same four types as CoNLL03. For distant label generation, we match entity types in external KBs, including the Wikidata corpus and gazetteers collected from multiple online sources. The data sources and matching details are described in the appendix.
\subsubsection{Baselines}
We compare our model with different groups of baseline methods.
\noindent $\bullet$ \textbf{KB Matching.} The first baseline performs string matching with external KBs using the mechanism described in the appendix.
\noindent $\bullet$ \textbf{Fully-supervised Methods.} We also include fully-supervised NER methods for comparison, including: (i) \textbf{RoBERTa-base}~\citep{liu2019roberta}, which adopts the RoBERTa model with linear layers to perform token-level prediction; (ii) \textbf{BiLSTM-CRF}~\citep{ma2016end}, which adopts a bi-directional LSTM with a character-level CNN to produce token embeddings, which are fed into a CRF layer to predict token labels.
\noindent $\bullet$ \textbf{Distantly-supervised Methods.} The third group of baselines consists of recent deep learning models for distantly-supervised NER, including: (i) \textbf{BiLSTM-CRF}~\citep{ma2016end}, trained using the distant labels matched from KBs; (ii) \textbf{AutoNER}~\citep{shang2018learning}, which trains the model by assigning ambiguous tokens all possible labels and then maximizing the overall likelihood using a fuzzy LSTM-CRF model; (iii) \textbf{LRNT}~\citep{cao2019low}, the state-of-the-art model for low-resource name tagging, which applies partial CRFs on high-quality data with non-entity sampling. When comparing with these distantly supervised methods, we use the same distant labels as the training data for a fair comparison.
\noindent $\bullet$ \textbf{Baselines with Different Settings}. The following methods also conduct open-domain NER under distant supervision. We remark that they use different KBs and extra training data; therefore, we only compare with the results reported in their papers. (i) {\sf KALM}\xspace~\citep{liu2019knowledge} augments a traditional language model with a KB and uses entity type information to enhance the model. (ii) {\sf ConNET}\xspace~\citep{lan2019learning} leverages multiple crowd annotations and dynamically aggregates them via an attention mechanism. It learns from imperfect annotations from multiple sources.\footnote{For the {\sf KALM}\xspace and {\sf ConNET}\xspace models, the KBs and crowd annotations are not publicly available, and thus we are unable to reproduce their results.}
\noindent $\bullet$ For the \textbf{Ablation Study}, we consider the following methods/tricks: (i) \textbf{MT}~\citep{tarvainen2017mean} uses the Mean Teacher method to average model weights and form a target-generating teacher model. (ii) \textbf{VAT}~\citep{miyato2018virtual} adopts virtual adversarial training to smooth the output distribution and make the model robust to noise. (iii) \textbf{Hard Label} generates pseudo-labels using Eq. \eqref{eq:pseudo}. (iv) \textbf{Soft Label} generates pseudo-labels using Eq. \eqref{eq:soft}. (v) \textbf{Re-initialization} initializes the student and teacher models using Eq. \eqref{re-init}. (vi) \textbf{High-Confidence Selection} selects tokens using Eq. \eqref{select-token}.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro}
Short-lived radioactive nuclei (SLRs) are unstable nuclei with mean lives of $\approx$ 0.1 to 100 Myr. Their abundances can be measured in a variety of locations, both live, via $\gamma$-ray spectroscopy \citep{Diehl2010} and the analysis of deep-sea sediments \citep{Wallner2015}, and extinct, as in the case of their early Solar System (ESS) abundances inferred through the excess of their daughter nuclei in meteoritic samples \citep{Dauphas2011}. Because of their short mean lives relative to the age of the Galaxy, these nuclei represent the fingerprint of current nucleosynthesis; some of them do not even live long enough to travel far away from their site of origin, which results in the decoupling of their abundances from galaxy-wide mixing processes \citep[see, e.g.][]{Diehl2010,Fujimoto2018}. When considering their evolution in the Galaxy, SLRs therefore probe the current galactic star formation rate instead of the star formation history \citep{Clayton1984,Meyer2000,Huss2009} and, as such, are relatively unaffected by the processes that operate over the full timescale of the Galaxy, such as galactic inflows and outflows (e.g., \citealt{Somerville2015,Naab2017,Tumlinson2017}), the build-up of the total stellar mass (e.g., \citealt{Bland2016}), and the mixing and recycling processes (e.g., \citealt{angles2017}). Such sources of uncertainty, instead, significantly affect the stable, or long-lived, reference isotope used to measure the abundance of SLR nuclei in the ESS. In \citet{Cote2019} we considered the impact of these sources of uncertainty on the determination of radioactive-to-stable isotopic ratios in the Galaxy and found that their impact on the ratio results in a variation of at most a factor of 3.5.
There are other sources of uncertainty, however, that must be considered for the evolution of SLRs in the interstellar medium (ISM). As mentioned above, due to their short mean lives, SLRs are not evenly distributed in the Galaxy \citep{Fujimoto2018,Pleintinger2019}. In particular, the evolution of an SLR at a specific location in the Galaxy directly depends on the ratio between its mean life $\tau$ and the average time between enriching events $\langle\delta\rangle$, as well as on the specific statistical distribution of these $\delta$ \citep[see][henceforth Paper I]{Cote2019B}. The reason for this can be understood by analyzing two limiting cases: $\tau \gg \langle\delta\rangle$ and $\tau \ll \langle\delta\rangle$. In the first case, the mean life is much longer than the time between two enriching events. This allows for the build-up of a memory\footnote{Here we define memory as the SLR abundance that remains, not yet decayed, from the enrichment events that occurred before the last event.} of the SLR abundance up to a steady-state (between production and decay) equilibrium value equal to the yield of a single event multiplied by a factor $\tau/\langle\delta\rangle$. In the second case, the expected time between two enriching events is instead long enough to allow for the complete decay of the SLR before the next event, leaving almost no memory. Therefore, in this case, the average abundance remains below the value of the yield.
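The following minimal Python sketch illustrates these two limiting cases under the simplifying assumptions of a constant time $\delta$ between events and a yield of 1 (in arbitrary mass units) per event; the parameter values are purely illustrative:
\begin{verbatim}
import numpy as np

def abundance_after_events(tau, delta, n_events=1000):
    # evolve the SLR abundance: add one yield per event, then decay
    m = 0.0
    for _ in range(n_events):
        m = (m + 1.0) * np.exp(-delta / tau)
    return m

# tau >> delta: memory builds up to roughly yield * tau/delta
print(abundance_after_events(tau=100.0, delta=1.0))    # ~ 99.5
# tau << delta: complete decay between events, no memory
print(abundance_after_events(tau=1.0, delta=100.0))    # ~ 0
\end{verbatim}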
In relation to investigations of the ESS, the first case allows us to calculate the isolation time (T$_{\text{iso}}$), defined as the time between the decoupling of the material that ended up in the solar nebula from the Galactic chemical enrichment processes (in other words, the birth of the colder and denser molecular cloud) and the formation of the first solids in the nebula. The second case instead allows us to calculate the time from the last event (T$_{\text{LE}}$), defined as the time since the last nucleosynthesis event in the Galaxy that contributed a particular SLR to the Solar System matter \citep{Lugaro2014,Lugaro2018}. If T$_{\text{LE}}$ can be calculated, then the SLR may also be used to constrain the features of specific nucleosynthetic events \citep[see][]{Cote2020}.
In Paper I we analysed the SLR abundance distribution resulting from the uneven temporal distribution of nucleosynthetic sources, and derived the uncertainties due to this temporal granularity of the enriching events using a simple statistical model of a given region in the Galaxy affected by several enriching events via a Monte Carlo calculation. We concluded that the interplay between the time between two enriching events and the mean life of the SLR determines both the steady-state equilibrium value and its uncertainty. The uncertainty calculated in Paper I does not affect the abundance of the stable reference nucleus, which is well mixed within 100 Myr \citep[e.g.][]{deavillez02}, and can simply be composed with the uncertainty due to the GCE studied by \citet{Cote2019} to calculate the total uncertainty in the SLR/stable isotopic ratio. This total uncertainty can then be used to deduce information about the isolation time (see Paper I, Sect.~5) or the time from the last enriching event \citep[see][]{Cote2020}.
Here, we use the same methodology as in Paper I to study the effect of heterogeneities due to the temporal granularity of their stellar sources on the behaviour and uncertainty of the ratio of two SLRs. Such a ratio can exhibit a markedly different behaviour from that of a SLR/stable isotope ratio because its evolution depends also on the difference between the two mean lives. We will restrict ourselves to analysing the \emph{synchronous} enrichment scenario, that is, the situation in which both SLRs are always generated in the same events. This means that the evolution of the abundances of the two isotopes is correlated, and the uncertainty of their ratio cannot simply be derived from adding the individual abundance uncertainties of each isotope. We will also assume that the production ratio $P$ of the two SLRs is always the same. The extension to a more general framework in which different events have different production ratios would not fundamentally change our conclusions, as long as both isotopes are always created together. We do not analyse instead the complementary \emph{asynchronous} enrichment scenario, where at least one of the SLRs is created in more than one type of event. This scenario is more complex to analyse with our statistical method because it is not possible to define a single production ratio for this case. Furthermore, the possibility that the two SLRs may have different $\langle\delta\rangle$ values from different sources complicates the general analysis. The outline of the paper is as follows.
In Section~\ref{sec:analyticalSolution}, we assume that $\delta$ is constant, and present the analytical solutions to quantify the abundance and uncertainty of any ratio involving two SLRs, for four different regimes. In Section~\ref{sec:stochasticDelta}, we extend our analysis by accounting for a variable $\delta$, and run Monte Carlo calculations to better quantify the uncertainty on SLR abundance ratios. In Section~\ref{sec:discussion}, we apply our statistical framework to radioactive isotopic ratios relevant for the ESS, and discuss the implications of our work for the derivation of T$_\mathrm{iso}$ and T$_\mathrm{LE}$. The codes used in this work are publicly available on GitHub\footnote{\url{https://github.com/AndresYague/Stochastic_RadioNuclides}}.
\section{The case of $\delta$ = $\delta_c$ = constant} \label{sec:analyticalSolution}
We start with the analysis of the simplest case, which assumes that the time between enriching events $\delta$ is constant. The steady-state abundance (in mass) of a single SLR with mean life $\tau$ is
\begin{equation} M = M_{\text{ej}}\frac{1}{1 - e^{-\delta_c/\tau}}e^{-\Delta t/\tau}, \label{eq:evolOne} \end{equation}
where $M_{\text{ej}}$ is the ejected mass from a single event, $\delta_c$ is the constant time between two successive enrichments, and $\Delta t < \delta_c$ is the time since the last enrichment \citep[see][]{Lugaro2018}. By taking Equation (\ref{eq:evolOne}) for two isotopes $M_1$ and $M_2$ with mean lives $\tau_1$ and $\tau_2$, respectively, the steady-state evolution of their ratio can be described as:
\begin{equation} \frac{M_1}{M_2} = P\,\frac{1 - e^{-\delta_c/\tau_2}}{1 - e^{-\delta_c/\tau_1}}e^{-\Delta t/\tau_\text{eq}}, \label{eq:evolRatio} \end{equation}
where $P$ is the production ratio at the stellar source, and $\tau_\text{eq}$ is the \textit{equivalent} mean life given by
\begin{equation} \tau_\text{eq} = \frac{\tau_1\tau_2}{\tau_2 - \tau_1}, \label{eq:tauEq} \end{equation}
which represents the mean life of the ratio of the radioactive isotopes. Note that $\tau_\text{eq}$ can be negative if $\tau_1 > \tau_2$. Although we generally consider the case where $\tau_\text{eq}$ is positive, we will explain the differences with the negative case, wherever they exist. The time-averaged value of Equation (\ref{eq:evolRatio}) is given by (see Appendix \ref{sec:mathDevelopment})
\begin{equation} \left\langle\frac{M_1}{M_2}\right\rangle = \mu = P\, \frac{\tau_\text{eq}}{\delta_c}\,\frac{1 - e^{-\delta_c/\tau_2}}{1 - e^{-\delta_c/\tau_1}}\,\left(1 - e^{-\delta_c /\tau_\text{eq}}\right), \label{eq:avgRatio} \end{equation}
and the difference between its maximum and minimum values (derived by taking $\Delta t = 0$ and $\Delta t = \delta_c$ in Equation (\ref{eq:evolRatio})) can be written as
\begin{equation} \text{Max} - \text{Min} = \mu\frac{\delta_c}{\tau_\text{eq}}. \label{eq:maxMinRatio} \end{equation}
\begin{figure*} \includegraphics[width=7.0in]{FourRegimesStochastic.pdf}
\caption{Examples of the behaviour of the four regimes explored in this work when $\tau_\text{eq} > 0$. The production ratio $P$ is taken to be $1$. The blue lines show the evolution for a constant time between events $\delta_c$, and the black, dashed lines correspond to the maximum, average and minimum values given by Equations (\ref{eq:avgRatio}) and (\ref{eq:maxMinRatio}), while the red lines show the evolution when $\delta$ is a random variable.
In the figure annotation, $\gamma$ represents the time between the formation of two enrichment source progenitors instead of the time between two actual successive enrichment events, exactly as defined in Paper I. As in that work, we find that $\gamma = \langle \delta \rangle$. The larger uncertainty of the stochastic case relative to the $\delta_c$ case is readily apparent for all of the regimes.}
\label{fig:FourRegimesStochastic}
\end{figure*}
Equation (\ref{eq:avgRatio}) is remarkably similar to that derived in \citet{Lugaro2018} for a SLR/stable isotopic ratio, with the main difference being that the mean life of the radioactive isotope $\tau$ is now substituted by the mean life of the ratio of the radioactive isotopes $\tau_\text{eq}$, and that now the multiplying exponentials do not cancel out\footnote{When considering only one radioactive isotope at the numerator, there is no exponential with $\tau_2$ at the numerator, and $\tau_\text{eq} = \tau_1$, leaving just $\tau/\delta_c$.}. The relative variation, that is, the difference between the maximum and minimum values divided by the average, is otherwise identical to that of the SLR/stable isotopic ratio, provided we substitute the SLR mean life with the equivalent mean life. This means that, qualitatively, we can expect the uncertainty of the ratio between two radioactive isotopes to behave like that of a single radioactive isotope with mean life given by $\tau_\text{eq}$. However, the fact that the average value contains three non-vanishing exponentials means that, depending on the relative values of $\delta_c$, $\tau_1$, $\tau_2$, and $\tau_\text{eq}$, we face four qualitatively distinct regimes for the evolution of the ratio itself. These regimes are exemplified in Figure~\ref{fig:FourRegimesStochastic} and explained below.
\subsection{Regime 1; $\delta_c \gg \tau_\text{eq}, \tau_1, \tau_2$}
We study first the regime where $\delta_c \gg \tau_\text{eq}, \tau_1, \tau_2$. In this case, represented by the example in the top-left panel of Figure~\ref{fig:FourRegimesStochastic}, the average abundance ratio is
\begin{equation} \mu = \begin{cases} \frac{\tau_\text{eq}}{\delta_c}\,P& \text{if } \tau_\text{eq} > 0,\\ \frac{|\tau_\text{eq}|}{\delta_c}\,P\,e^{\delta_c/|\tau_\text{eq}|}& \text{if } \tau_\text{eq} < 0. \end{cases} \label{eq:avgReg1} \end{equation}
Given that the ratio $\tau_\text{eq}/\delta_c$ is small, we expect an average value much lower than the production ratio $P$ when $\tau_\text{eq} > 0$. For a case where $\tau_\text{eq} < 0$, we have an exponential term of $\delta_c/|\tau_\text{eq}|$, which instead yields an average value much larger than $P$. In addition, the ratio will vary between the production ratio $P$ and 0 (or $P$ and $P\exp(\delta_c/|\tau_\text{eq}|)$) for the case of positive (negative) $\tau_\text{eq}$. The intuitive understanding of this regime is that the time between enrichment events is longer than what it takes for both radioactive isotopes and their ratio to decay, which prevents any memory build-up and results in a very large relative uncertainty.
\subsection{Regime 2; $\delta_c \ll \tau_\text{eq}, \tau_1, \tau_2$}
In this regime, $\delta_c \ll \tau_\text{eq}, \tau_1, \tau_2$. This case, represented in the top-right panel of Figure~\ref{fig:FourRegimesStochastic}, has an equilibrium average value of
\begin{equation} \mu = P\,\frac{\tau_1}{\tau_2}.
\label{eq:avgReg2} \end{equation}
The evolution of the ratio of radioactive isotopes is marked by relatively frequent events, and the time between them is shorter than the mean life of any of the isotopes. This means that the abundance of both isotopes retains the memory of the previous events, and the ratio drifts from the production ratio $P$ to oscillate around the equilibrium average with a low relative uncertainty, behaving in a similar fashion to the case of large $\tau/\delta_c$ studied in \citet{Lugaro2018}.
\subsection{Regime 3; $\delta_c \ll \tau_\text{eq}$ and $\delta_c \gg \tau_1, \tau_2$}
In this regime, $\delta_c \ll \tau_\text{eq}$ and $\delta_c \gg \tau_1, \tau_2$. This case, represented in the bottom-left panel of Figure~\ref{fig:FourRegimesStochastic}, has an equilibrium average value of
\begin{equation} \mu = P. \label{eq:avgReg3} \end{equation}
Although the value for the average in this case can be recovered from the formula of Regime 2 by using $\tau_1 \approx \tau_2$, we set this case apart because it represents the specific situation when the equivalent mean life is much larger than $\delta_c$, while the individual mean lives of each isotope are not. This regime only arises when the difference between the mean lives is small enough to make $\tau_\text{eq}$ orders of magnitude larger than them (see Eq.~\ref{eq:tauEq}). Given the short mean lives of the individual SLRs, it is likely that each SLR carries information from the last event only (see Paper I, Fig. 9 and related discussion). At the same time, the variation in the value of the ratio is relatively small because the equivalent mean life is too long for the ratio to change significantly before the next enriching event.
\subsection{Regime 4; $\delta_c \gg \tau_\text{eq}, \tau_1; \delta_c \ll \tau_2$}
In this regime $\delta_c \gg \tau_\text{eq}, \tau_1$, but $\delta_c \ll \tau_2$. The average value in this case, shown in the bottom-right panel of Figure~\ref{fig:FourRegimesStochastic}, is
\begin{equation} \mu = \begin{cases} \frac{\tau_\text{eq}}{\tau_2}\,P& \text{if } \tau_\text{eq} > 0,\\ \frac{|\tau_\text{eq}|\tau_1}{\delta_c^2} e^{\delta_c/|\tau_\text{eq}|}\,P& \text{if } \tau_\text{eq} < 0. \end{cases} \label{eq:avgReg4} \end{equation}
Although the evolution resembles that of the first regime when $\tau_\text{eq} > 0$, the maximum value attained by the ratio of the radioactive isotopes at equilibrium becomes much lower than $P$. This is because, although the evolution of $M_1$ does not retain the memory of the previous events, the evolution of $M_2$ does. We note that in this regime $\tau_\text{eq} \approx \min(\tau_1, \tau_2)$.
\section{The case of variable $\delta$} \label{sec:stochasticDelta}
The cases studied in the previous section for a constant $\delta$ provide an intuition of how the ratio of two radioactive isotopes behaves in general. However, this simple approach produces deceptively small uncertainties relative to the more realistic scenario of a variable $\delta$. This situation was already explored in Paper I for the case of the evolution of a single radioactive isotope, and it is illustrated here in Figure~\ref{fig:FourRegimesStochastic} also for the case of the ratio of two radioactive isotopes. To extend towards a better representation of SLR abundance variations in the ISM, we turn to a Monte Carlo approach where the enriching rate is stochastic, as in Paper I. The set-up for the Monte Carlo experiments is the same as in Paper I. A total of 1000 runs are calculated for 15 Gyr each.
For each run, the progenitors of the enriching events are generated with a constant time interval of $\gamma$. The time between the birth of the progenitor and the associated enriching event is sampled from a source-specific delay-time distribution (DTD). The enriching times are sorted and the random $\delta$ calculated from their consecutive differences (see Figure~2 of Paper I). A minimal illustrative sketch of this set-up is given at the end of this section. Because the value of $\langle\delta\rangle$ is approximately that of $\gamma$, we use the two terms interchangeably in this work. The DTDs used here have equal probability between given initial and final times, and are the same as the ``box'' DTDs of Paper I. We have omitted the ``power law'' DTD because, as concluded in Paper I, the actual $\delta$ distribution is approximately the same for both kinds of DTD for equal initial and final times. As in Paper I, we refer to the uniform distribution between 3 and 50 Myr, 50 Myr and 1 Gyr, and 50 Myr and 10 Gyr as the ``short'', ``medium'', and ``long'' box DTD, respectively. Each of these boxes can be associated with a different kind of progenitor for the enriching event, as described in Paper I. Because in the synchronous case both radioactive isotopes are generated in the same events, the ratios are computed at each timestep of the same run. To explore the different regimes, each of the 1000 runs is repeated using different values of $\tau$. We consider 1000 runs to be enough for the same reasons as in Paper I: different temporal points of different runs are statistically independent and can, therefore, be considered as different experiments for the purposes of statistical derivation. For this reason, we stack together all the values between 10 and 14 Gyr to represent the final distribution of $M_1/M_2$. All the cases studied here have $\tau_1 < \tau_2$. This particular choice is arbitrary; however, cases with $\tau_1 > \tau_2$ result in a positive exponential behavior, so that the abundance ratio is no longer bounded and can diverge towards infinity, which complicates the analysis without adding any insight. \begin{figure*} \includegraphics[width=7.0in]{3Dspread.pdf} \caption{Dependence of the relative spread around the median on $\tau_1$, $\tau_2$ and $\langle\delta\rangle$. The 4 different regimes illustrated in Figure~\ref{fig:FourRegimesStochastic} cluster in different regions of this plot. Regimes 1 (circles) and 4 (stars) are located in the upper- and lower-left-far corner, respectively, with logarithmic relative spread values above 2 dex (100\%). Regime 2 (triangles) is located in the lower-right-far corner, with logarithmic relative spread values between 0 (1\%) and 1.5 dex (32\%), and Regime 3 (inverted triangles) is located on the diagonal contained in the $\tau_1 \approx \tau_2$ plane, with logarithmic relative spread lower than 1 dex. Cases with the same $\tau_\text{eq}$ correspond to vertical lines with constant $\tau_1$ and $\tau_2$. Squares represent combinations that do not fall neatly into any regime and often correspond to a transition between two regimes.} \label{fig:3Dspread} \end{figure*} In Figure~\ref{fig:3Dspread} we show the relative uncertainty (68.2\% of the distribution around the median of the ratio) resulting from the Monte Carlo experiments when varying $\tau_1$, $\tau_2$, and $\gamma$. As the figure shows, Regimes 1 and 4 have extremely large relative uncertainties, mainly due to $M_1$ not building up sufficient memory. Therefore, these regimes can only be treated as additions of individual events, using statistical methods different from those used here.
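The sketch below is our own minimal Python illustration of the Monte Carlo set-up described above; it is not the code used for the experiments of this work. The unit mass injected per event (i.e., $P = 1$), the fixed random seed, and the sampling of the ratio only at event times are simplifying assumptions of ours.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def run_ratio(tau1, tau2, gamma, dtd=(3.0, 50.0), t_end=15_000.0):
    # Progenitors born every gamma Myr; each enriching event is delayed
    # by a time drawn from a uniform ("box") DTD, here the short box.
    births = np.arange(0.0, t_end, gamma)
    events = np.sort(births + rng.uniform(*dtd, size=births.size))
    deltas = np.diff(events)              # the stochastic delta
    m1 = m2 = 0.0
    ratios = []
    for d in deltas:
        # Inject a unit mass of each SLR, then decay to the next event.
        m1 = (m1 + 1.0) * np.exp(-d / tau1)
        m2 = (m2 + 1.0) * np.exp(-d / tau2)
        ratios.append(m1 / m2)
    return events[1:], np.array(ratios)

# Stack late-time values (10-14 Gyr), as done for the final distributions.
t, r = run_ratio(tau1=10.0, tau2=15.0, gamma=1.0)
late = r[(t > 10_000.0) & (t < 14_000.0)]
print(np.median(late))  # close to P*tau1/tau2 ~ 0.67 (Regime 2)
\end{verbatim}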
This is similar to the case of Regime II of Paper I (all the regimes of Paper I and their connections to the present regimes will be described in more detail in Section~\ref{sec:PaperI}). Therefore, from now on we will focus on the cases where $\tau_\text{eq} \gtrsim 3\langle\delta\rangle$, which excludes Regimes 1 and 4. An exception is Regime 3: although neither $M_1$ nor $M_2$ builds up enough memory of previous events, the slowly decaying ratio results in a stable value with a low relative uncertainty. This makes Regime 3 an interesting case where the uncertainty in the ratio of two SLRs is as low as or lower than in Regime 2, while, in a large fraction of cases, the ratio contains only the abundances from the last event. \begin{deluxetable*}{lcccccc} \tablewidth{0pc} \tablecaption{Median values and 68\% confidence intervals for cases belonging to Regime 2, and lower limits encompassing 84\% of the distribution for cases belonging to Regime 3, from the Monte Carlo experiment (for $P = 1$), for different values of $\gamma$, $\tau_1/\gamma$, $\tau_2/\gamma$ and $\tau_\text{eq}/\gamma$, for $\tau_\text{eq}/\gamma > 3$. The results from the large box are identical to those from the medium box DTD. A dash in the Regime column means that the specific case does not fall neatly into one of the regimes. These cases typically fall between Regime 1 or 4 and Regime 3. \label{tab:MCTableVals}} \tablehead{ $\gamma$ [Myr] & $\tau_1/\gamma$ & $\tau_2/\gamma$ & $\tau_\text{eq}/\gamma$ & Small box & Large box & Regime } \startdata $1.00$ & $0.10$ & $0.10$ & $10.10$ & $> 0.83$ & $> 0.83$ & $3$\\ $1.00$ & $1.00$ & $1.01$ & $101.00$ & $> 0.98$ & $> 0.98$ & $3$\\ $1.00$ & $1.00$ & $1.10$ & $11.00$ & $> 0.80$ & $> 0.80$ & $3$\\ $1.00$ & $1.00$ & $1.50$ & $3.00$ & $0.63_{-0.20}^{+0.13}$ & $0.63_{-0.20}^{+0.13}$ & $-$\\ $1.00$ & $10.00$ & $10.10$ & $1010.00$ & $0.99_{-0.00}^{+0.00}$ & $0.99_{-0.00}^{+0.00}$ & $2$\\ $1.00$ & $10.00$ & $11.00$ & $110.00$ & $0.91_{-0.01}^{+0.01}$ & $0.91_{-0.02}^{+0.01}$ & $2$\\ $1.00$ & $10.00$ & $15.00$ & $30.00$ & $0.66_{-0.04}^{+0.03}$ & $0.66_{-0.04}^{+0.04}$ & $2$\\ $1.00$ & $10.00$ & $101.00$ & $11.10$ & $0.10_{-0.02}^{+0.02}$ & $0.10_{-0.02}^{+0.02}$ & $2$\\ $1.00$ & $10.00$ & $110.00$ & $11.00$ & $0.09_{-0.01}^{+0.02}$ & $0.09_{-0.02}^{+0.02}$ & $2$\\ $1.00$ & $10.00$ & $150.00$ & $10.71$ & $0.07_{-0.01}^{+0.01}$ & $0.07_{-0.01}^{+0.01}$ & $2$\\ $1.00$ & $100.00$ & $101.00$ & $10100.00$ & $0.99_{-0.00}^{+0.00}$ & $0.99_{-0.00}^{+0.00}$ & $2$\\ $1.00$ & $100.00$ & $110.00$ & $1100.00$ & $0.91_{-0.00}^{+0.00}$ & $0.91_{-0.00}^{+0.00}$ & $2$\\ $1.00$ & $100.00$ & $150.00$ & $300.00$ & $0.67_{-0.01}^{+0.01}$ & $0.67_{-0.01}^{+0.01}$ & $2$\\ $10.00$ & $0.10$ & $0.10$ & $10.10$ & $> 0.86$ & $> 0.83$ & $3$\\ $10.00$ & $1.00$ & $1.01$ & $101.00$ & $> 0.98$ & $> 0.98$ & $3$\\ $10.00$ & $1.00$ & $1.10$ & $11.00$ & $> 0.83$ & $> 0.80$ & $3$\\ $10.00$ & $1.00$ & $1.50$ & $3.00$ & $0.64_{-0.17}^{+0.12}$ & $0.63_{-0.20}^{+0.13}$ & $-$\\ $10.00$ & $10.00$ & $10.10$ & $1010.00$ & $0.99_{-0.00}^{+0.00}$ & $0.99_{-0.00}^{+0.00}$ & $2$\\ $10.00$ & $10.00$ & $11.00$ & $110.00$ & $0.91_{-0.01}^{+0.01}$ & $0.91_{-0.01}^{+0.01}$ & $2$\\ $10.00$ & $10.00$ & $15.00$ & $30.00$ & $0.67_{-0.02}^{+0.02}$ & $0.66_{-0.04}^{+0.04}$ & $2$\\ $100.00$ & $0.10$ & $0.10$ & $10.10$ & $0.95_{-0.03}^{+0.03}$ & $> 0.83$ & $3$\\ $100.00$ & $1.00$ & $1.01$ & $101.00$ & $0.99_{-0.00}^{+0.00}$ & $> 0.98$ & $3$\\ $100.00$ & $1.00$ & $1.10$ & $11.00$ & $0.90_{-0.03}^{+0.03}$ & $> 0.80$ & $3$\\
$100.00$ & $1.00$ & $1.50$ & $3.00$ & $0.65_{-0.08}^{+0.08}$ & $0.63_{-0.20}^{+0.13}$ & $-$\\ \enddata \end{deluxetable*} The uncertainties from the Monte Carlo calculations are presented in Table~\ref{tab:MCTableVals} for $\tau_\text{eq} > 0$ and $\tau_\text{eq}/\gamma > 3$. When the distribution is approximately symmetric (Regime 2), both an upper and a lower value are given; when the distribution piles up at $P$ (Regime 3), a lower limit for the ratio is given instead. Table \ref{tab:MCTableVals} allows us to calculate uncertainties for ratios of SLRs due to the temporal stochasticity of enrichment events. For any isotopic ratio, we can select the proper $\gamma$, which depends on the source, the best suited $\tau_1/\gamma$ and $\tau_2/\gamma$, and whether a short box (i.e., if the sources are core-collapse supernovae) or a long box (i.e., if the sources are asymptotic giant branch stars or neutron star mergers) describes the source. Afterwards, the corresponding numbers in Column 5 or 6 should be multiplied by the production ratio of the SLR ratio. If there is no exact match to the numbers shown in Table~\ref{tab:MCTableVals}, then Equations~(\ref{eq:approxAvg}) and (\ref{eq:approxSigma2}) or (\ref{eq:approxSigma3}) (described below in Section~\ref{sec:analyticalApproach}) can be used instead. In Sections~\ref{sec:regime2} and \ref{sec:regime3}, we describe in more detail the differences between the constant and random $\delta$ cases in relation to Regimes 2 and 3, respectively. \subsection{Connections and similarities with the regimes defined in Paper I} \label{sec:PaperI} In Paper I we analysed a single SLR and found that 3 different regimes apply depending on the relation between $\tau$ and $\gamma$. Here we report a brief description of them and how they connect with the regimes in this work. For the sake of clarity, the 3 regimes from Paper I are marked in Roman numerals, while Arabic numerals refer to the 4 regimes considered here. Regime I refers to $\tau/\gamma > 2$ and it is similar to Regimes 2 or 3, in that statistics can be calculated because the spread is not much larger than the median value. Regime I is associated with the calculation of the isolation time, T$_\text{iso}$, because in this case the ISM will contain an equilibrium value from which there can be an isolation period before the ESS abundances are set. In the present work, Regime 2 is the one associated with the calculation of T$_\text{iso}$. Regime III covers the region of $\tau/\gamma < 0.3$. In this Regime, there is a large probability that the ISM abundance that decayed into the ESS abundance originated from a single event. Therefore, this Regime is associated with the calculation of the time since the last event, T$_\text{LE}$. Regime 3 of this work is related to Regime III of Paper I in that both most likely carry abundances from only the last event before the formation of the Solar System. The difference is that, while Regime III allows us to calculate T$_\text{LE}$, Regime 3 allows us to also narrowly determine the production ratio of the last event. Regime II falls between the two well-defined cases described above. This regime has $0.3 < \tau/\gamma < 2$, which allows neither for meaningful statistics nor for a clean definition of a last event to which the ISM abundance can be solely or mostly attributed. This Regime does not correspond to any of the regimes in this work, and it may be similar to the region between Regime 2 and Regimes 1 and 4.
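As a practical companion to these definitions, the short sketch below (our own illustrative Python, not part of this work's pipeline) computes the equivalent mean life and assigns one of the four regimes of this work. It assumes $\tau_\text{eq} = \tau_1\tau_2/(\tau_2 - \tau_1)$, consistent with Eq.~(\ref{eq:tauEq}) and with the values tabulated here, and uses a factor of 10 as an arbitrary stand-in for ``much larger/smaller''.
\begin{verbatim}
def tau_eq(tau1, tau2):
    # Equivalent mean life of M1/M2 (negative if tau1 > tau2).
    return tau1 * tau2 / (tau2 - tau1)

def regime(tau1, tau2, delta, much=10.0):
    # Assign one of the four regimes for a mean enrichment interval delta.
    teq = abs(tau_eq(tau1, tau2))
    slow = delta > much * max(tau1, tau2)   # delta >> tau_1, tau_2
    fast = delta < min(tau1, tau2) / much   # delta << tau_1, tau_2
    if slow and delta > much * teq:
        return 1                            # no memory at all
    if fast and delta < teq / much:
        return 2                            # both isotopes keep memory
    if slow and delta < teq / much:
        return 3                            # only the ratio keeps memory
    if delta > much * tau1 and delta < tau2 / much:
        return 4                            # only M2 keeps memory
    return None                             # transition case ("-" in tables)

# Example: 247Cm/129I with tau_1 = 22.5 Myr, tau_2 = 22.6 Myr
print(tau_eq(22.5, 22.6))         # ~5085 Myr
print(regime(22.5, 22.6, 316.0))  # -> 3
\end{verbatim}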
\subsection{Analytical approach}\label{sec:analyticalApproach} We also investigated the possibility of calculating the uncertainties using an analytical approach instead of the full Monte Carlo simulations. The aim is to provide a better understanding of the regimes and their uncertainties, as well as to give an alternative way to calculate approximate numbers without the need for a simulation. To do that, we use the expression for the average given by \begin{equation} \mu \approx P\,\frac{\tau_\text{eq}}{\langle\delta\rangle}\frac{1 - \langle e^{-\delta/\tau_2}\rangle}{1 - \langle e^{-\delta/\tau_1}\rangle} \left(1 - \langle e^{-\delta/\tau_\text{eq}} \rangle \right), \label{eq:approxAvg} \end{equation} derived in Appendix \ref{sec:mathDevelopment}, and for the relative standard deviation we use \begin{equation} \frac{\sigma}{\mu} \approx F\sqrt{\frac{\langle\delta\rangle}{2\tau_\text{eq}}\frac{1 - \langle e^{-2\delta/\tau_\text{eq}} \rangle}{\left(1 - \langle e^{-\delta/\tau_\text{eq}}\rangle\right)^2} - 1}, \label{eq:approxSigma2} \end{equation} where $F$ is a correction factor applied to Equation (\ref{eq:approxSigma1}) and is defined by \begin{equation} F = K\left[1 + \log_{10}\left(\frac{\min(\tau_1, \tau_2)}{\langle\delta\rangle}\right)^2\right], \label{eq:eqForF} \end{equation} where $K = 1$ unless $\min(\tau_1, \tau_2)$ is larger than the span of the DTD, in which case $K = 0.5$. In cases where $\min(\tau_1, \tau_2) < \langle\delta\rangle$, then $F = K$. This factor $F$ was derived from the Monte Carlo experiments and corrects some of the approximations made in the derivation of Equation~(\ref{eq:approxSigma1}) in Appendix \ref{sec:mathDevelopment}. With this correction factor, Equation (\ref{eq:approxSigma2}) becomes an accurate estimate of the results of the Monte Carlo experiment. If the full distribution of $\delta$ is unknown, a further approximation to Equation (\ref{eq:approxSigma2}) can be used instead, yielding \begin{equation} \frac{\sigma}{\mu} \approx F \sqrt{\frac{1}{6}\frac{\langle\delta\rangle^3 + 3\sigma_\delta^2\langle\delta\rangle}{\tau_\text{eq}^2\langle\delta\rangle - \tau_\text{eq}\sigma_\delta^2}}, \label{eq:approxSigma3} \end{equation} with the advantage that only $\langle\delta\rangle$ and $\sigma_\delta$ (the standard deviation of the $\delta$ distribution) have to be known. This formula is much easier to calculate because no sampling of the $\delta$ distribution is needed. \begin{figure} \includegraphics[width=0.5\textwidth]{SpreadStd_SmallBox.pdf} \caption{Predictions of Equations~(\ref{eq:approxSigma2}) and (\ref{eq:approxSigma3}) compared to the 68.2\% confidence interval relative to the median calculated from the Monte Carlo experiments (large black squares) for the small box DTD. The equations themselves calculate just the 34.1\% interval, which is why twice their value is used.} \label{fig:SmallBoxSpread} \end{figure} The validity of Equations (\ref{eq:approxSigma2}) and (\ref{eq:approxSigma3}) can be tested by comparing them to the 68.2\% (1$\sigma$) confidence interval calculated from the Monte Carlo experiments. This comparison is presented in Figure~\ref{fig:SmallBoxSpread}. In the worst case, with the small box DTD, the relative difference between the analytical approximations and the results from the numerical experiments is just above 25\%. These are valid for calculations related to Regime 2. For Regime 3, instead, as seen in Table \ref{tab:MCTableVals}, the fact that the average remains very close to $P$ introduces an asymmetry in the distribution.
In this case, the theoretical $\sigma$ is an average of the lower and upper $1\sigma$ thresholds. When this $\sigma$ is such that $\mu + \sigma > P$, it is better to calculate a lower limit for the distribution with $P - 2\sigma$, because in these cases the distribution piles up at $P$, making any value between $P$ and $\mu$ functionally equiprobable. \subsection{Regime 2; $\delta \ll \tau_\text{eq}, \tau_1, \tau_2$} \label{sec:regime2} In this case the abundances of both SLR nuclei retain significant memory from past events. The average of their ratio, according to Equation~(\ref{eq:approxAvg}), is the same as in the constant case for the same regime, given by Equation~(\ref{eq:avgReg2}). When comparing the uncertainties, however, there is a significant difference between the constant and the stochastic case. As a first order approximation, and taking $\sigma_\delta \approx \langle\delta\rangle$ (see Table 2 of Paper I), we can write Equation (\ref{eq:approxSigma3}) as \begin{equation} \frac{2\sigma}{\mu} \approx 2F \frac{\langle\delta\rangle}{\tau_\text{eq}}, \end{equation} which, when substituting $\langle\delta\rangle$ with $\delta_c$ and dividing by Equation~(\ref{eq:maxMinRatio}), reveals that the stochastic case has an uncertainty larger than that of the constant case by a factor of $2F$. This factor can be shown to be in the range $2F \in [2.5, 35]$ when considering $\tau_\text{eq}/\langle\delta\rangle \in [3, 10^4]$, by using Equation~(\ref{eq:eqForF}) with $K = 1$ and taking $\min(\tau_1, \tau_2) = \tau_\text{eq}$. Therefore, the time-stochastic nature of enrichment events can increase the uncertainty by more than an order of magnitude in this regime. The uncertainty on the ratio of two SLRs in Regime 2 is still relatively low. For example, for the Large Box, with $\tau_1 = 10$ Myr, $\tau_2 = 15$ Myr and $\gamma = 1$ Myr, Table~\ref{tab:MCTableVals} lists a relative uncertainty of $12\%$. For a similar example with $\tau = 10$ Myr and $\gamma = 1$ Myr in Table~3 of Paper I, the relative uncertainty is $45\%$ for the Large Box. Even if we take the case of $\tau = 31.6$ Myr and $\gamma = 1$ Myr, we still have a relative uncertainty of $25\%$ for the SLR/stable isotopic ratio. \subsection{Regime 3; $\delta \ll \tau_\text{eq}$, $\delta \gg \tau_1, \tau_2$} \label{sec:regime3} As discussed for the constant $\delta_c$ scenario, this regime shows a low variation around the average $P$ while retaining no memory of previous events. The difference between the constant and stochastic case is similar to that in Regime 2 (because Equation~\ref{eq:approxSigma3} depends only on $\langle\delta\rangle$ and $\tau_\text{eq}$), that is, a factor of $2F$. The factor $F = K$ is a constant here (since $\min(\tau_1, \tau_2) < \langle\delta\rangle$), equal either to 0.5 or to 1, which means that the uncertainties in the constant and stochastic cases differ by at most a factor of two. Additionally, the stochastic case results in a non-symmetric distribution around the median. The reason is that the ratio is always bounded between 0 and the production factor $P$ (when $\tau_2 > \tau_1$): when the enriching events are more frequent than average, the ratio will remain at $P$, while when the enriching events are less frequent than average the ratio decays away from $P$. In any case, the characteristic of Regime 3 is that the average ratio always remains very close to $P$.
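For readers who want approximate numbers without running a Monte Carlo, the sketch below is our own illustrative Python implementation of Equations~(\ref{eq:eqForF}) and (\ref{eq:approxSigma3}), reading the squared logarithm in Eq.~(\ref{eq:eqForF}) as $[\log_{10}(\cdot)]^2$ and taking $\sigma_\delta \approx \langle\delta\rangle$ in the example:
\begin{verbatim}
import numpy as np

def f_factor(tau1, tau2, mean_delta, K=1.0):
    # Correction factor F of Eq. (eqForF); K = 0.5 if min(tau1, tau2)
    # exceeds the span of the DTD (not checked here).
    tmin = min(tau1, tau2)
    if tmin < mean_delta:
        return K
    return K * (1.0 + np.log10(tmin / mean_delta) ** 2)

def rel_sigma(tau1, tau2, mean_delta, std_delta, K=1.0):
    # Relative standard deviation sigma/mu of Eq. (approxSigma3).
    teq = tau1 * tau2 / (tau2 - tau1)
    num = mean_delta**3 + 3.0 * std_delta**2 * mean_delta
    den = teq**2 * mean_delta - teq * std_delta**2
    return f_factor(tau1, tau2, mean_delta, K) * np.sqrt(num / (6.0 * den))

# Example: tau1 = 10, tau2 = 15, <delta> = sigma_delta = 1 (all in Myr)
print(2 * rel_sigma(10.0, 15.0, 1.0, 1.0))  # ~0.11, cf. the ~12% above
\end{verbatim}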
\section{Discussion} \label{sec:discussion} We apply our general theoretical approach to specific ratios of two SLRs that are either in Regime 2 or Regime 3. Starting from Table~2 of \citet{Lugaro2018}, which lists all the SLRs known to have been present in the ESS, we select SLRs with potentially the same origin (for the synchronous scenario) and with mean lives close enough such that the $\tau_\text{eq}$ of their ratio is potentially larger than the probable $\gamma$ of their source. We find four cases of such ratios of isotopes and present them in Table~\ref{tab:specificValuesSync}, along with the specific Monte Carlo (MC) experiments that reproduce the conditions under which they evolve in the Galaxy, assuming a production ratio $P = 1$. This table categorizes the regime of the selected SLR ratios, highlights the difference in the uncertainties between considering the single SLR/stable (or long-lived) reference isotope ratios (Columns SLR$_{1}$/stable and SLR$_{2}$/stable), and quantifies the ratio of the two SLRs (Column SLR$_{1}$/SLR$_{2}$). In general, the uncertainties significantly decrease when considering ratios of SLRs with similar mean lives, relative to considering their ratio to a stable or long-lived isotope (compare the last column of MC values to the other two columns of MC values). It is worth mentioning that in this comparison we are assuming that the stable isotope carries no uncertainty at all from GCE processes, which by itself can be a factor of up to $5.7/1.6 = 3.6$ \citep{Cote2019}. In addition, the predicted ISM abundances are much closer to the production ratios when considering ratios between two SLRs. Table~\ref{tab:ESSvaluesTiso} shows the subsequent calculations of the isolation time, $T_\text{iso}$ (in roman), and the time since the last event, $T_\text{LE}$ (in italics), for the selected isotopic ratios for which the ESS ratio is available. These correspond to only three out of the four ratios discussed in Table~\ref{tab:specificValuesSync}. We excluded $^{97}$Tc/$^{98}$Tc because only upper limits are available for the corresponding radioactive-to-stable ratios, which means it is not possible to derive any ESS value for their ratio. The other ESS ratios are calculated using the values for the radioactive-to-stable ratios reported in Table~2 of \citet{Lugaro2018} and the solar abundances of the reference isotopes from \citet{Lodders2010} \citep[see also][]{Cote2020}. Furthermore, the selected values for $\gamma$ were limited to those most likely to occur in the Milky Way for the corresponding production sites. \begin{deluxetable*}{lcccccccccc} \tablewidth{0pc} \tablecaption{Regimes and values of the ratios from the Monte Carlo (MC) experiments applied to the specific cases of ratios between two SLRs (Column SLR$_{1}$/SLR$_{2}$) and between the SLRs and their corresponding stable or long-lived reference isotopes (Columns SLR$_1$/stable and SLR$_2$/stable, see the main text for the list of reference isotopes). Production ratios are always 1. Also indicated are $\tau_1$, $\tau_2$, $\tau_\text{eq}$ and the adopted $\gamma$, all in Myr. The values of $\gamma$ are selected such that it is possible to remain within Regimes 2 or 3, for which cases we can model the uncertainties. The Roman numerals correspond to the regimes of Paper I (SLR$_{1,2}$/stable), while the Arabic numerals correspond to the regimes described in this work. The hyphen symbols correspond to cases that do not fit neatly in any of the regimes.
\label{tab:specificValuesSync}} \tablehead{ & \multirow{2}{*}{$\tau_1$} & \multirow{2}{*}{$\tau_2$} & \multirow{2}{*}{$\tau_\text{eq}$} & \multirow{2}{*}{$\gamma$} & \multicolumn{2}{c}{SLR$_1$/stable} & \multicolumn{2}{c}{SLR$_2$/stable} & \multicolumn{2}{c}{SLR$_1$/SLR$_2$}\\ & & & & & Regime & MC values & Regime & MC values & Regime & MC values } \startdata \multirow{6}{*}{$^{247}$Cm/$^{129}$I} & \multirow{6}{*}{22.5} & \multirow{6}{*}{22.6} & \multirow{6}{*}{5085} & 1 & I & $22.37^{+3.45}_{-3.22}$ & I & $22.47^{+3.46}_{-3.23}$ & 2 & $1.00^{+0.00}_{-0.00}$\\ & & & & 3.16 & I & $7.01^{+1.98}_{-1.77}$ & I & $7.04^{+1.99}_{-1.77}$ & 2 & $1.00^{+0.00}_{-0.00}$\\ & & & & 10 & II & $< 3.29$ & II & $< 3.30$ & 3 & $1.00^{+0.00}_{-0.00}$\\ & & & & 31.6 & II & $< 1.29$ & II & $< 1.29$ & 3 & $> 0.99$\\ & & & & 100 & III & $< 0.54$ & III & $< 0.54$ & 3 & $> 0.96$\\ & & & & 316 & III & $< 0.08$ & III & $< 0.08$ & 3 & $> 0.89$\\ \hline \multirow{4}{*}{$^{107}$Pd/$^{182}$Hf} & \multirow{4}{*}{9.4} & \multirow{4}{*}{12.8} & \multirow{4}{*}{35.4} & 1 & I & $9.28^{+2.27}_{-2.05}$ & I & $12.68^{+2.63}_{-2.41}$ & 2 & $0.73^{+0.03}_{-0.04}$\\ & & & & 3.16 & I & $2.86^{+1.32}_{-1.10}$ & I & $3.94^{+1.53}_{-1.31}$ & 2 & $0.73^{+0.06}_{-0.08}$\\ & & & & 10 & II & $< 1.62$ & II & $< 2.06$ & 3 & $0.70^{+0.12}_{-0.19}$\\ & & & & 31.6 & III & $< 0.69$ & III & $< 0.86$ & - & $0.50^{+0.30}_{-0.32}$\\ \hline \multirow{4}{*}{$^{53}$Mn/$^{97}$Tc} & \multirow{4}{*}{5.4} & \multirow{4}{*}{5.94} & \multirow{4}{*}{59.4} & 1 & I & $5.29^{+1.75}_{-1.53}$ & I & $5.83^{+1.83}_{-1.61}$ & 2 & $0.91^{+0.02}_{-0.02}$\\ & & & & 3.16 & II & $< 2.63$ & II & $< 2.84$ & 3 & $0.90^{+0.03}_{-0.05}$\\ & & & & 10 & II & $< 1.04$ & II & $< 1.12$ & 3 & $0.86^{+0.08}_{-0.15}$\\ & & & & 31.6 & III & $< 0.41$ & III & $< 0.45$ & - & $0.68^{+0.22}_{-0.31}$\\ \hline \multirow{5}{*}{$^{97}$Tc/$^{98}$Tc} & \multirow{5}{*}{5.94} & \multirow{5}{*}{6.1} & \multirow{5}{*}{226} & 1 & I & $5.83^{+1.83}_{-1.61}$ & I & $5.99^{+1.85}_{-1.63}$ & 2 & $0.97^{+0.00}_{-0.01}$\\ & & & & 3.16 & II & $< 2.84$ & II & $< 2.91$ & 3 & $0.97^{+0.01}_{-0.02}$\\ & & & & 10 & II & $< 1.12$ & II & $< 1.14$ & 3 & $0.96^{+0.02}_{-0.05}$\\ & & & & 31.6 & III & $< 0.45$ & III & $< 0.47$ & 3 & $0.90^{+0.07}_{-0.13}$\\ & & & & 100 & III & $< 0.05$ & III & $< 0.06$ & - & $0.73^{+0.19}_{-0.29}$\\ \enddata \end{deluxetable*} \begin{deluxetable*}{lccccccccc} \tablewidth{0pc} \tablecaption{Timescales derived by decaying the reported ISM ratios to the ESS ratios in Column 2 for a subset of ratios and $\gamma$ values considered in Table~\ref{tab:specificValuesSync} to represent possible realistic values in the Galaxy for the corresponding production event. Time and $\tau_\text{eq}$ are in Myr. The ISM SLR$_{1,2}$/stable ratios in roman are calculated using the steady-state formula from \citet{Cote2019} and K=2.3. These are cases within Regime I and can provide T$_\text{iso}$ (also in roman). The ISM SLR$_{1,2}$/stable ratios in italics are calculated instead using the last-event formula, i.e., Eqs.~3 (with K=2.3) and 4 (with K=1.2) of \citet{Cote2020} and the selected value of $\gamma=\delta$. These are cases within Regime III and can provide T$_\text{LE}$ (also in italics). The ISM SLR$_{1}$/SLR$_{2}$ values are calculated as $= P (\tau_1/\tau_2)$ (Eq.~\ref{eq:avgReg2}) for roman values and as $= P$ (Eq.~\ref{eq:avgReg3}) for italic values. The production ratios used in all the formulas are reported in the text in each subsection. 
The ``back-decayed'' ratios are calculated by decaying back the ESS ratio by the average of T$_\text{iso}$, or T$_\text{LE}$, from both SLR$_{1,2}$/stable ratios, except for the case of $^{53}$Mn/$^{97}$Tc, where only the times derived from $^{53}$Mn were used. Differences between the values in the last two columns highlight the problems discussed in the text. \label{tab:ESSvaluesTiso}} \tablehead{ & \multirow{2}{*}{ESS ratio} & \multirow{2}{*}{$\tau_\text{eq}$} & \multirow{2}{*}{$\gamma$} & \multicolumn{2}{c}{SLR$_1$/stable} & \multicolumn{2}{c}{SLR$_2$/stable} & \multicolumn{2}{c}{SLR$_1$/SLR$_2$}\\ & & & & ISM ratio & Time & ISM ratio & Time & ISM ratio & back-decayed ratio } \startdata {$^{247}$Cm/$^{129}$I} & {$2.28 \times 10^{-3}$} & {5085} & 316 & {\it $\mathit{9.63 \times 10^{-2}}$} & {\it 171} & {\it $\mathit{1.15 \times 10^{-1}}$ } & {\it 153} & {$\mathit{1.22 \times 10^{-2}}$($^a$)} & \textit{$\mathit{2.35 \times 10^{-3}}$}\\ \hline \multirow{3}{*}{$^{107}$Pd/$^{182}$Hf} & \multirow{3}{*}{$4.25$} & \multirow{3}{*}{35.4} & 1 & \multirow{2}{*}{$3.56 \times 10^{-4}$} & $16^{+2}_{-3}$ & \multirow{2}{*}{$5.20 \times 10^{-4}$} & $21^{+2}_{-3}$ & \multirow{2}{*}{2.41} & \multirow{2}{*}{7.17}\\ & & & 3.16 & & $16^{+4}_{-6}$ & & $21^{+4}_{-6}$ & & \\ & & & 31.6 & {\it $\mathit{1.20 \times 10^{-3}}$} & {\it 27} & {\it $\mathit{1.28 \times 10^{-3}}$} & {\it 32} & {\it 3.28} & {\it 9.78} \\ \hline \multirow{2}{*}{$^{53}$Mn/$^{97}$Tc} & \multirow{2}{*}{$> 1.70 \times 10^5$} & \multirow{2}{*}{59.4} & 1 & $1.58 \times 10^{-4}$ & $17^{+2}_{-2}$ & $3.84 \times 10^{-5}$ & $> 7$ & $1.65 \times 10^6$ & $>2.26 \times 10^5$\\ & & & 31.6 & {\it $\mathit{9.23 \times 10^{-4}}$} & {\it 26} & {\it $\mathit{2.04 \times 10^{-4}}$} & {\it $\mathit{> 17}$} & {\it $\mathit{1.82 \times 10^6}$} & {\it $\mathit{> 2.63 \times 10^5}$} \\ \enddata $^a$We calculated this possible \textit{r}-process production ratio using the average $^{247}$Cm/$^{232}$Th ratio from \citet{Goriely2016} and assuming the solar ratio $^{127}$I/$^{232}$Th of $31$ from \citet{Asplund2009}. This is to avoid using $^{235}$U, which decays much faster than $^{232}$Th (with mean life of roughly 1 Gyr, instead of 20 Gyr) and would complicate the assumption that the produced $^{127}$I/$^{235}$U was solar. \end{deluxetable*} \subsection{The ratio of the \textit{r}-process $^{247}$Cm and $^{129}$I} These two isotopes are made by the \textit{rapid} neutron-capture (\textit{r}) process, and typical estimates of the time interval between the \textit{r}-process nucleosynthetic events that are believed to enrich a parcel of gas in the Galaxy range between 200 and 500 Myr \citep{Hotokezaka15,Tsujimoto17,Bartos19,Cote2020}. Therefore, the case of $^{247}$Cm/$^{129}$I is the best example of Regime 3 since $\tau_\text{eq} = 5085$ Myr (Table \ref{tab:specificValuesSync}) is much larger than $\gamma$, while each $\tau$ ($\simeq$ 22.5 and 22.6 Myr, respectively) is much shorter than $\gamma$. The ratios to the long-lived or stable reference isotopes, $^{247}$Cm/$^{235}$U and $^{129}$I/$^{127}$I, allow us to derive a T$_\text{LE}$, for example for the specific $\gamma$ value of 316 Myr considered in Table~\ref{tab:ESSvaluesTiso}, using typical production ratios of $1.35$ for $^{129}$I/$^{127}$I and $0.3$ for $^{247}$Cm/$^{235}$U.
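As a worked illustration of the back-decay procedure used in Table~\ref{tab:ESSvaluesTiso} (our own arithmetic, using only numbers quoted in that table):
\begin{verbatim}
import numpy as np

# 247Cm/129I for gamma = 316 Myr: undo the free decay of the ESS ratio
# over the average of the two T_LE estimates (171 and 153 Myr).
tau_eq = 5085.0                # Myr
t_le = 0.5 * (171.0 + 153.0)   # Myr
ess_ratio = 2.28e-3            # ESS 247Cm/129I
print(ess_ratio * np.exp(t_le / tau_eq))  # ~2.35e-3, as in the last column
\end{verbatim}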
While our T$_\text{LE}$ values are not perfectly compatible with each other, the more detailed analysis shown by \citet{Cote2020} demonstrates that there is compatibility for T$_\text{LE}$ in the range of 100--200 Myr, depending on the exact choice of the K parameter \citep{Cote2019}, $\gamma$, and the production ratios. The short mean lives of $^{247}$Cm and $^{129}$I ensure that there is no memory from previous events, while the long $\tau_\text{eq}$ of $^{247}$Cm/$^{129}$I instead ensures that this ratio did not change significantly during T$_\text{LE}$ and has a high probability to be within 10\% of the production ratio. Therefore, the production ratio of the last \textit{r}-process event that polluted the ESS material can be accurately determined directly from the ESS ratio. If we assume that the last event produced a $^{247}$Cm/$^{232}$Th ratio similar to the average predicted by \citet{Goriely2016}, and assume a solar ratio for $^{127}$I/$^{232}$Th, then we find an inconsistency between the numbers in the last two columns of Table~\ref{tab:ESSvaluesTiso}. The back-decayed value is more than five times lower than the assumed production ratio, which indicates a weaker production of the actinides in this last event, with respect to the production ratios that we are using here. The number in the last column therefore represents a unique constraint on the nature of the astrophysical sites of the \textit{r} process in the Galaxy at the time of the formation of the Sun, and needs to be compared directly to different possible astrophysical and nuclear models \citep{Cote2020}. \subsection{The ratio of the $s$-process $^{107}$Pd and $^{182}$Hf} If T$_\text{LE}$ for the last \textit{r}-process event is larger than 100 Myr, as discussed in the previous section, the presence of these two SLRs in the ESS should primarily be attributed to the \textit{slow} neutron-capture (\textit{s}) process in asymptotic giant branch (AGB) stars, which are much more frequent events due to the low mass of their progenitors, since the \textit{r}-process contribution would have decayed for a time of the order of 10 times the mean lives of these SLRs \citep{Lugaro2014}. Experimental results on the SLRs $^{107}$Pd ($\tau$=9.4 Myr) and $^{182}$Hf ($\tau$=12.8 Myr) are reported with respect to the stable reference isotopes $^{108}$Pd and $^{180}$Hf, respectively. The ISM ratios reported in Table~\ref{tab:ESSvaluesTiso} are calculated using production ratios of 0.14, 0.15, and 3.28 for $^{107}$Pd/$^{108}$Pd, $^{182}$Hf/$^{180}$Hf, and $^{107}$Pd/$^{182}$Hf, respectively, derived from the 3 M$_{\odot}$ model of \citet{Lugaro2014}. For the short $\gamma$ values considered in Table~\ref{tab:ESSvaluesTiso} (1 and 3.16 Myr) the SLR$_{1,2}$/stable ratios belong to Regime I and the SLR$_{1}$/SLR$_{2}$ ratio belongs to Regime 2. Therefore, we can calculate the T$_\text{iso}$ from all the ratios. As shown in Table~\ref{tab:specificValuesSync}, the ratios relative to the stable reference isotopes suffer from larger uncertainties (40\% or 85\% depending on the $\gamma$, and supposing no uncertainty in the stable isotope abundance) compared to the ratio of the two SLRs (less than 20\%). However, when considering the actual ISM ratios, the uncertainties on the evaluation of T$_\text{iso}$ become comparable because these are relative uncertainties, and the ratio of the two SLRs and the equivalent mean life have a much larger absolute value than the other two ratios.
While the T$_\text{iso}$ values derived from the SLR$_{1,2}$/stable ratios are consistent with each other, the T$_\text{iso}$ value calculated from SLR$_{1}$/SLR$_{2}$ would need to be much shorter. In the last column of Table~\ref{tab:ESSvaluesTiso} we report the back-decayed ratio, i.e., the ISM ratio that would be required to obtain a self-consistent solution. The discrepancy between the ISM and back-decayed values may be due to problems with the stellar production of these isotopes: a main caveat to consider here is that, while the $^{107}$Pd/$^{108}$Pd ratio produced by the \textit{s} process is relatively constant, since it only depends on the inverse of the ratio of the neutron-capture cross sections of the two isotopes, both the $^{182}$Hf/$^{180}$Hf and $^{107}$Pd/$^{182}$Hf production ratios can vary significantly between different AGB star sources. The $^{182}$Hf/$^{180}$Hf ratio is particularly sensitive to the stellar mass \citep{Lugaro2014}, due to the probability of activation of the $^{181}$Hf branching point, which increases with the neutron density produced by the $^{22}$Ne($\alpha$,n)$^{25}$Mg neutron source reaction, which, in turn, increases with temperature and therefore stellar mass. The $^{107}$Pd/$^{182}$Hf ratio involves two isotopes belonging to the mass regions before ($^{107}$Pd) and after ($^{182}$Hf) the magic neutron number of 82 at Ba, La, and Ce. This means that this ratio will also be affected by the total number of neutrons released by the main neutron source $^{13}$C($\alpha$,n)$^{16}$O in AGB stars, which has a strong metallicity dependence \citep[see, e.g.,][]{Gallino1998,Cseh2018}. This means that a proper analysis of these $s$-process isotopes can only be carried out in the framework of full GCE models, where the stellar yields are varied with mass and metallicity. This work has been submitted (Trueman et al., submitted), and the uncertainties calculated here will be included in that complete analysis. For long $\gamma$ values, such as the 31.6 Myr considered in Table~\ref{tab:ESSvaluesTiso}, the $^{107}$Pd/$^{108}$Pd and $^{182}$Hf/$^{180}$Hf ratios would likely mostly reflect their production in one event only (Regime III). In this case we derive a T$_\text{LE}$. Since $^{107}$Pd/$^{182}$Hf is between Regimes 1 and 3, this isotopic ratio changes more significantly during the time interval T$_\text{LE}$ than in the case of the \textit{r}-process isotopes discussed in the previous section. In Table~\ref{tab:ESSvaluesTiso} we report the production value predicted by decaying back the ESS ratio by T$_\text{LE}$. As in the case of the \textit{r}-process isotopes, in this regime this number can be used to determine the stellar yields of the last AGB star to have contributed the \textit{s}-process elements present in the ESS (Trueman et al., submitted). \subsection{The ratio of the \textit{p}-process $^{97}$Tc and $^{98}$Tc} These two SLRs are next to each other in mass and are both \textit{p}-only isotopes, i.e., they are nuclei heavier than Fe that can only be produced by charged-particle reactions or the photodisintegration ($\gamma$) process. While the origin of \textit{p}-only isotopes is currently not well established, especially for those in the light mass region, and the main sites may be both core-collapse and Type Ia supernovae, recent work has shown that the main site of production of the SLRs considered here is probably Chandrasekhar-mass Type Ia supernovae \citep[see, e.g.][]{Travaglio2014,Lugaro2016,Travaglio2018}.
Because their mean lives are remarkably similar ($\tau$=5.94 and 6.1 Myr for $^{97}$Tc and $^{98}$Tc, respectively), their $\tau_\text{eq}$=226 Myr and, as shown in Table \ref{tab:specificValuesSync}, the theoretical uncertainties related to their ratio are very low for values of $\gamma$ up to 31.6 Myr. The full GCE of these isotopes was investigated by \citet{Travaglio2014}. Expanding on that work, in combination with the present results, could provide us with a strong opportunity to investigate both the origin of these \textit{p}-nuclei and the environment of the birth of the Sun. There are many scenarios that could in principle be investigated. If the $\gamma$ value of the originating Type Ia supernova site was around 1 Myr, then we could derive a T$_\text{iso}$ from all the different ratios, and check for self-consistency. If the $\gamma$ value of the originating site was above 30 Myr, instead, we would be in a similar case as the \textit{r}-process isotopes discussed above, and the $^{97}$Tc/$^{98}$Tc ratio would directly give us the production ratio at the original site, to be checked against nucleosynthesis predictions. For $\gamma$ values in between, the $^{97}$Tc/$^{98}$Tc ratio would still provide us with the opportunity to calculate T$_\text{iso}$. Unfortunately, we only have upper limits for the ESS ratio of these two nuclei, relative to their experimental reference isotope $^{98}$Ru, which means that an ESS value for their ratio cannot be given and a detailed analysis needs to be postponed until such data becomes available. \subsection{The ratio of $^{97}$Tc and $^{53}$Mn, also potentially of Chandrasekhar-mass Type Ia supernova origin} From a chemical evolution perspective, the origin of Mn (and therefore $^{53}$Mn) is still unclear \citep{Seitenzahl13,Cescutti17,Eitner20,Kobayashi20,Lach20}. Nevertheless, the $^{53}$Mn/$^{97}$Tc ratio can be assumed to be synchronous, as there are indications that the main site of origin of $^{53}$Mn is the same as that of $^{97}$Tc\footnote{And of $^{98}$Tc; however, we prefer to consider $^{97}$Tc here because both its mean life and its yields are closer to those of $^{53}$Mn.} \citep[see, e.g.,][]{Lugaro2016}. Table~\ref{tab:specificValuesSync} shows that the uncertainty for the ratio of the two SLRs is below 30\% for most cases (and as low as 5\% when $\gamma$=1 Myr), while that for each of the individual isotopes is larger than 60\%. Similar to the $^{97}$Tc/$^{98}$Tc ratio discussed above, the $^{53}$Mn/$^{97}$Tc ratio can also provide the opportunity to investigate T$_\text{iso}$, although only for $\gamma$ values up to 2 Myr: for larger $\gamma$, even though $\tau_\text{eq}$=59.4 Myr, the short mean lives of the individual SLRs do not allow a memory to build up, making this a case of Regime 3, from which T$_\text{iso}$ cannot be derived. The ISM values reported in Table~\ref{tab:ESSvaluesTiso} were calculated with production ratios of $2.39 \times 10^{-2}$ for $^{97}$Tc/$^{98}$Ru, $0.108$ for $^{53}$Mn/$^{55}$Mn, and $1.82 \times 10^{6}$ for $^{53}$Mn/$^{97}$Tc \citep{Lugaro2016,Travaglio2011}. We obtain potentially self-consistent isolation times, mostly determined by the accurate ESS value of $^{53}$Mn/$^{55}$Mn. Consistency between the last two columns of the table, which could inform us on the relative production of nuclei from nuclear statistical equilibrium (such as $^{53}$Mn) and nuclei from the $\gamma$ process in Chandrasekhar-mass Type Ia supernovae (such as $^{97}$Tc), could be found only if the $^{97}$Tc/$^{98}$Ru ratio in the ESS was 7.3 times lower than the current upper limit.
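For reference, the roman (Regime 2) ISM value of $^{53}$Mn/$^{97}$Tc in Table~\ref{tab:ESSvaluesTiso} follows directly from Eq.~(\ref{eq:avgReg2}); a one-line consistency check (our own arithmetic):
\begin{verbatim}
# Regime 2 equilibrium average, mu = P * tau1/tau2 (Eq. avgReg2),
# with the production ratio quoted above.
P, tau1, tau2 = 1.82e6, 5.4, 5.94
print(P * tau1 / tau2)  # ~1.65e6, the ISM ratio listed in the table
\end{verbatim}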
Similarly to the $s$-process case described above, for high values of $\gamma$ (e.g., the 31.6 and 100 Myr shown in Table~\ref{tab:ESSvaluesTiso}), the $^{53}$Mn/$^{55}$Mn and $^{97}$Tc/$^{98}$Ru ratios would record one event only (Regime III) and the derived T$_\text{LE}$ are consistent with each other. The value from $^{53}$Mn/$^{55}$Mn can then be used to decay back the ESS ratio of $^{53}$Mn/$^{97}$Tc and derive a direct constraint on the last \textit{p}-process event that polluted the solar material. Overall, a more precise $^{97}$Tc ESS abundance would allow us to take advantage of the low theoretical uncertainties and give a more accurate prediction of the ISM ratio or the production ratio at the site. \subsection{$^{60}$Fe/$^{26}$Al} Finally, we consider the case of $^{60}$Fe/$^{26}$Al. This ratio is of great interest in the literature because both isotopes are produced by core-collapse supernovae \citep{LimongiChieffi2006} and they can be observed with $\gamma$-rays \citep{Wang2007} as well as in the ESS \citep{Trappitsch2018}. There are strong discrepancies between core-collapse supernova yields and observations, as the yields typically produce a $^{60}$Fe/$^{26}$Al ratio at least three times higher than the $\gamma$-ray observations \citep[e.g.][]{Sukhbold2016}, and orders of magnitude higher than the ESS ratio \citep[see discussion in][]{Lugaro2018}. We cannot apply our analysis to interpret the $\gamma$-ray ratio because it is derived by first measuring the total abundances of $^{60}$Fe and $^{26}$Al separately, and then dividing them. In this case, the average abundance ratio is given simply by the ratio of the averages, mixing the $^{60}$Fe and $^{26}$Al production from several different events, which does not correspond to our synchronous framework. When considering the ESS abundance, however, we can apply our methods, since the ESS ratio represents the abundance ratio at one time and place in the ISM, generated by a synchronous set of events. In this case, $\tau_1 = 3.78$ Myr (for $^{60}$Fe) and $\tau_2 = 1.035$ Myr (for $^{26}$Al) result in $\tau_\text{eq} = -1.45$ Myr. If we consider $\gamma = 1$ Myr for the core-collapse supernova enriching events, we fall somewhere between Regimes 2 and 4, with $^{60}$Fe and $^{26}$Al building memory and almost no memory, respectively, between successive events. As a consequence, when considering our statistical analysis, the average ISM value given by Eq.~(\ref{eq:approxAvg}) predicted for the $^{60}$Fe/$^{26}$Al ratio is a factor of 3.9 of the production ratio. This is $7\%$ higher than the traditional continuous-enrichment steady-state formula $P \tau_1/\tau_2$ (i.e., the limit of Eq.~(\ref{eq:evolRatio}) when $\delta_c,\Delta t \to 0$) used in the literature \citep[see e.g.][]{Sukhbold2016}, since that gives a factor of 3.65 of the production ratio instead. In conclusion, our analysis does not help to solve the problem that core-collapse supernova yields produce much more $^{60}$Fe relative to $^{26}$Al than observed in the ESS. \section{Conclusions and future work} \label{sec:conclusions} We presented a statistical framework to study the uncertainties of ratios of SLRs that were present at the formation time of the Solar System.
We show that this statistical framework is advantageous because: \begin{itemize} \item it removes the GCE uncertainties associated with the stable reference isotopes often used for ESS ratios (i.e., the value of the parameter K investigated by \citealt{Cote2019}); \item it reduces the stochastic uncertainties, i.e., for ratios of two SLRs these uncertainties are typically much lower than those of SLR/stable isotopic ratios, for equivalent regimes; \item it allows us to define a Regime 3 for the ratio of two SLRs, which is qualitatively different from the regimes described in Paper I for SLR/stable ratios, and represents the case where each mean life is much shorter than $\gamma$, while the equivalent mean life of the ratio of the two SLRs is much longer than $\gamma$. In this case the ratio of the two SLRs allows us to constrain the nucleosynthesis inside the last nucleosynthetic events that contributed matter to the Solar System. \end{itemize} We have identified four ratios: $^{247}$Cm/$^{129}$I (from the \textit{r} process), $^{107}$Pd/$^{182}$Hf (from the \textit{s} process), $^{97}$Tc/$^{98}$Tc (from the \textit{p} process), and $^{53}$Mn/$^{97}$Tc (potentially from Type Ia supernovae), which can be used effectively either to reduce the uncertainty in the T$_\text{iso}$ calculation (for relatively small values of $\gamma$), or to predict accurately the production ratio of the last event that enriched the ESS (for relatively large values of $\gamma$). In particular, the inconsistencies we found (see Table~\ref{tab:ESSvaluesTiso}) between the production and the ESS ratios for both the $^{247}$Cm/$^{129}$I and the $^{107}$Pd/$^{182}$Hf ratios can be used to constrain the events in the Galaxy that produced the \textit{r}-process isotopes \citep{Cote2020} and the elements belonging to the first \textit{s}-process peak (Trueman et al., submitted) at the time of the formation of the Sun. While here we have only investigated the simpler synchronous enrichment scenario, where the two SLRs are assumed to originate from the same events, in the future we could also investigate the asynchronous enrichment scenario, for particular cases such as the $^{146}$Sm/$^{244}$Pu ratio. For example, $^{146}$Sm is a \textit{p} nucleus and $^{244}$Pu is produced by the \textit{r} process; therefore, the $\gamma$ for the production events of the two isotopes are probably very different. The mean life of $^{244}$Pu is 115 Myr, while for $^{146}$Sm two different mean lives are reported: 98 Myr \citep{Kinoshita2012} and 149 Myr \citep{Marks2014}, for which $\tau_\text{eq} = 663$ Myr and $\tau_\text{eq} = 504$ Myr, respectively. Since these values are extremely long, the $^{146}$Sm/$^{244}$Pu ratio may provide us with an opportunity to predict its value with an uncertainty much lower than when considering the individual isotopes. Another interesting ratio may be $^{135}$Cs/$^{60}$Fe, with $\tau_\text{eq} = 26$ Myr (from mean lives of 3.3 and 3.78 Myr, respectively). For a frequent enrichment rate ($\gamma \sim 1$ Myr) the relative uncertainty on the predicted abundance ratio in a synchronous scenario is 4.5\%. However, $^{135}$Cs is a product of both the $s$ and the \textit{r} processes, while $^{60}$Fe is ejected mostly by core-collapse supernovae, which would require a complex asynchronous scenario. In general, improvements in the ESS data for any of the SLRs considered here will help us to constrain the stellar nucleosynthesis models.
In particular, these improvements are strongly needed for the \textit{p}-process isotopes $^{97}$Tc and $^{98}$Tc, for which we currently have only upper limits on their ESS abundances. Together with the well-known $^{53}$Mn, these SLRs could provide unique constraints on both Galactic \textit{p}-process nucleosynthesis and the origin of Solar System matter. \acknowledgments We thank the anonymous referee for the careful reading of the paper. This research is supported by the ERC Consolidator Grant (Hungary) funding scheme (Project RADIOSTAR, G.A. n. 724560). BC acknowledges the support from the National Science Foundation (NSF, USA) under grant No. PHY-1430152 (JINA Center for the Evolution of the Elements), and from the Hungarian Academy of Sciences via the Lend\"ulet project LP2014-17. \vspace{5mm} \bibliographystyle{aasjournal}
\newcommand{\draftcomment}[3]{\textcolor{#2}{\bf [#1: #3]}} \definecolor{citecolor}{RGB}{0, 113, 188} \usepackage[pagebackref,breaklinks,colorlinks,citecolor=citecolor]{hyperref} \usepackage[capitalize]{cleveref} \crefname{section}{Sec.}{Secs.} \Crefname{section}{Section}{Sections} \Crefname{table}{Table}{Tables} \crefname{table}{Tab.}{Tabs.} \newenvironment{packed_enum}{ \begin{enumerate} \setlength{\itemsep}{1pt} \setlength{\parskip}{2pt} \setlength{\parsep}{0pt} }{\end{enumerate}} \newenvironment{packed_item}{ \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{2pt} \setlength{\parsep}{0pt} }{\end{itemize}} \makeatletter \newcommand{\printfnsymbol}[1]{% \textsuperscript{\@fnsymbol{#1}}% } \makeatother \section{Introduction}\label{sec: intro_luming} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{images/teaser_eccv.pdf} \vspace{-0.7cm} \caption{ Visual-Prompt Tuning (VPT) vs.~other transfer learning methods. (a) Current transfer learning protocols are grouped based on the tuning scope: Full fine-tuning, Head-oriented, and Backbone-oriented approaches. (b) VPT instead adds extra parameters in the input space. (c) Performance of different methods on a wide range of downstream classification tasks adapting a pre-trained \vit{}-B backbone, with mean and standard deviation annotated. VPT outperforms Full fine-tuning in 20 out of 24 cases while using less than 1$\%$ of all model parameters } \vspace{-0.5cm} \label{fig:teaser} \end{figure} For a variety of recognition applications, the most accurate results are now obtained by adapting large \emph{foundation models} pre-trained on massive curated or raw data, a finding that mirrors developments in natural language processing (NLP)~\cite{bommasani2021opportunities}.\footnote{As pointed out in~\cite{bommasani2021opportunities}, all state-of-the-art models in contemporary NLP are now powered by a few Transformer-based models (e.g., BERT~\cite{devlin-etal-2019-bert}, T5~\cite{2020t5}, BART~\cite{lewis2020bart}, GPT-3~\cite{brown2020gpt3}). This also applies to the vision-language field recently, i.e., CLIP~\cite{radford2021learning}.} At first glance, this is a success story: one can make rapid progress on multiple recognition problems simply by leveraging the latest and greatest foundation model. In practice, however, \emph{adapting} these large models to downstream tasks presents its own challenges. The most obvious (and often the most effective) adaptation strategy is \emph{full fine-tuning} of the pre-trained model on the task at hand, end-to-end. However, this strategy requires one to store and deploy a separate copy of the backbone parameters for every single task. This is an expensive and often infeasible proposition, especially for modern \emph{Transformer}-based architectures, which are significantly larger than their convolutional neural network (ConvNet) counterparts, e.g., ViT-Huge~\cite{dosovitskiy2020vit} (632M parameters) vs.~ResNet-50~\cite{he2016rn} (25M parameters). We therefore ask, \textbf{what is the best way to adapt large pre-trained Transformers to downstream tasks in terms of effectiveness and efficiency}? One straightforward approach is to turn to other strategies that we have perfected for adapting ConvNets to new tasks, as in~\cref{fig:teaser}(a).
A popular approach is to fine-tune only a subset of the parameters, such as the classifier head~\cite{wslimageseccv2018,jia2021exploring,chen2021mocov3} or the bias terms~\cite{cai2020tinytl}. Prior research has also looked at adding additional residual blocks (or \emph{adapters}) to the backbone~\cite{rebuffi2018efficient,zhang2020side}. One could implement similar strategies for Transformers. However, in general these strategies \emph{under-perform} full fine-tuning in accuracy. We explore a different route in this paper. Instead of altering or fine-tuning the pre-trained Transformer itself, we modify the \emph{input} to the Transformer. Drawing inspiration from the recent advances in prompting in NLP~\cite{liu2021pre,li-liang-2021-prefix,lester-etal-2021-power,liu2021p}, we propose a new simple and efficient method to adapt Transformer models for downstream vision tasks (\cref{fig:teaser}(b)), namely \textbf{Visual-Prompt Tuning} (VPT). Our method introduces only a small number of task-specific learnable parameters into the input space while freezing the entire pre-trained Transformer backbone during downstream training. In practice, these additional parameters are simply prepended into the input sequence of each Transformer layer and learned together with a linear head during fine-tuning. On 24 downstream recognition tasks spanning different domains using a pre-trained \vit{} backbone, VPT beats all other transfer learning baselines, even surpassing full fine-tuning in 20 cases, while maintaining the advantage of storing remarkably fewer parameters (less than 1\% of backbone parameters) for each individual task (\cref{fig:teaser}(c)). This result demonstrates the distinctive strength of \emph{visual} prompting, whereas in NLP prompt tuning is only able to \emph{match} full fine-tuning performance under certain circumstances~\cite{lester-etal-2021-power}. VPT is especially effective in the low-data regime, and maintains its advantage across data scales. Finally, VPT is competitive for a range of Transformer scales and designs (ViT-Base/Large/Huge, Swin). Put together, our results suggest that VPT is one of the most effective ways of adapting ever-growing vision backbones. \section{Related Work}\label{sec:related} \noindent\textbf{Transformer} models~\cite{vaswani2017attention} have achieved huge success in NLP~\cite{devlin-etal-2019-bert,2020t5,brown2020gpt3}. The triumph of the Transformer architecture also extends to various computer vision tasks, including image classification~\cite{dosovitskiy2020vit,liu2021swin}, object detection~\cite{carion2020end,li2021benchmarking}, semantic and panoptic segmentation~\cite{strudel2021segmenter,zheng2020rethinking,wang2021max}, video understanding~\cite{girdhar2019video,wang2022bevt,feichtenhofer2022masked} and few-shot learning~\cite{doersch2020crosstransformers}, surpassing previous state-of-the-art approaches. Transformers are also being widely used in recent self-supervised pre-training methods~\cite{chen2021mocov3,he2021mae,bao2021beit}. Given their superior performance and much larger scale compared to ConvNets, how to efficiently adapt Transformers to different vision tasks remains an important open problem. Our proposed VPT provides a promising path forward.
\noindent\textbf{Transfer learning} has been extensively studied for vision tasks in the context of ConvNets~\cite{zhuang2020comprehensive} and many techniques have been introduced, including side tuning~\cite{zhang2020side}, residual adapters~\cite{rebuffi2017learning}, bias tuning~\cite{cai2020tinytl}, etc. Relatively little attention has been paid to the adaptation of vision Transformers, and how well the aforementioned methods perform on this brand new type of architecture remains unknown. On the other hand, given the dominance of large-scale pre-trained Transformer-based Language Models (LM)~\cite{devlin-etal-2019-bert,2020t5,brown2020gpt3}, many approaches~\cite{he2022towards,guo2020parameter,hu2021lora} have been proposed to efficiently fine-tune LMs for different downstream NLP tasks~\cite{wang2018glue,wang2019superglue}. Among them, we focus on the following two representative methods in our experiments for benchmarking purposes: Adapters~\cite{pfeiffer2020AdapterHub} and BitFit~\cite{zaken2021bitfit}. Adapters~\cite{houlsby2019parameter} insert extra lightweight modules inside each Transformer layer. One adapter module generally consists of a linear down-projection, followed by a nonlinear activation function, and a linear up-projection, together with a residual connection~\cite{pfeiffer2020adapterfusion,pfeiffer2020AdapterHub}. Instead of inserting new modules, \cite{cai2020tinytl} proposed to update the bias term and freeze the rest of the backbone parameters when fine-tuning ConvNets. BitFit~\cite{zaken2021bitfit} applied this technique to Transformers and verified its effectiveness on LM tuning. Our study demonstrates that VPT, in general, provides improved performance in adapting Transformer models for vision tasks, relative to the aforementioned two well-established methods in NLP. \noindent\textbf{Prompting}~\cite{liu2021pre} originally refers to prepending language instruction to the input text so that a pre-trained LM can ``understand'' the task. With manually chosen prompts, GPT-3 shows strong generalization to downstream transfer learning tasks even in the few-shot or zero-shot settings~\cite{brown2020gpt3}. In addition to the follow-up works on how to construct better prompting texts~\cite{shin2020autoprompt,jiang2020can}, recent works propose to treat the prompts as task-specific continuous vectors and directly optimize them via gradients during fine-tuning, namely Prompt Tuning~\cite{li-liang-2021-prefix,lester-etal-2021-power,liu2021p}. Compared to full fine-tuning, it achieves comparable performance but with 1000$\times$ less parameter storage. Although prompting has also been applied to vision-language models recently~\cite{radford2021learning,zhou2021learning,ju2021prompting,yao2021cpt,ge2022domain}, prompting is still limited to the input of \emph{text} encoders. Due to the disparity between vision and language modalities, in this paper we ask: can the same method be applied successfully to image encoders? We are the first work (see related concurrent works~\cite{sandler2022fine,wang2022learning,conder2022efficient,bahng2022visual}) to tackle this question and investigate the generality and feasibility of visual prompting via \emph{extensive} experiments spanning multiple kinds of recognition tasks across multiple domains and backbone architectures. \section{Approach}\label{sec:method} We propose Visual-Prompt Tuning (\vprompt{}) for adapting large pre-trained vision Transformer models.
\vprompt{} injects a small number of learnable parameters into the Transformer's input space and keeps the backbone frozen during the downstream training stage. The overall framework is presented in \cref{fig:method}. We first define the notations in~\cref{subsec:method_pre}, then describe VPT formally in~\cref{subsec:method_vp}. \subsection{Preliminaries} \label{subsec:method_pre} For a plain Vision Transformer (ViT)~\cite{dosovitskiy2020vit} with $N$ layers, an input image is divided into $m$ fixed-sized patches $\{I_j\in\R^{3\times h\times w}\mid j\in\N, 1\le j\le m\}$, where $h, w$ are the height and width of the image patches. Each patch is then first embedded into a $d$-dimensional latent space with positional encoding: \begin{align} \label{eq:embed} \vec{e}_0^j = \texttt{Embed}(I_j) &&\vec{e}_0^j\in\R^{d}, j = 1,2, \ldots m \eqdot \end{align} We denote the collection of image patch embeddings, $\vec{E}_{i}=\{\vec{e}_i^j\in\R^d\mid j\in\N, 1\le j\le m\}$, as inputs to the ($i$+$1$)-th Transformer layer ($L_{i+1}$). Together with an extra learnable classification token ($\texttt{[CLS]}$), the whole ViT is formulated as: \begin{align} \label{eq:vit} [\vec{x}_i, \vec{E}_i] &= L_i([\vec{x}_{i-1}, \vec{E}_{i-1}]) &&i=1, 2, \ldots, N \\ \vec{y} &= \texttt{Head}(\vec{x}_N)\eqcomma \end{align} where $\vec{x}_{i}\in\R^{d}$ denotes $\texttt{[CLS]}$'s embedding at $L_{i+1}$'s input space. $[\cdot,\cdot]$ indicates stacking and concatenation on the sequence length dimension, i.e., $[\vec{x}_{i}, \vec{E}_{i}]\in\R^{(1+m)\times d}$. Each layer $L_i$ consists of Multiheaded Self-Attention (MSA) and Feed-Forward Networks (FFN) together with LayerNorm~\cite{ba2016layer} and residual connections~\cite{he2016rn}. A neural classification head is used to map the final layer's $\texttt{[CLS]}$ embedding, $\vec{x}_{N}$, into a predicted class probability distribution $\vec{y}$.\footnote{Some Transformer architectures in vision, such as~\swin{}~\cite{liu2021swin}, do not use $\texttt{[CLS]}$ and treat the globally pooled $\vec{E}_N$ as the input for \texttt{Head}. We follow their designs when adapting VPT to these Transformer variants. See~\cref{supsec:detail} for more details.} \subsection{Visual-Prompt Tuning (\vprompt{})} \label{subsec:method_vp} Given a pre-trained Transformer model, we introduce a set of $p$ continuous embeddings of dimension $d$, i.e.,~\emph{prompts}, in the input space after the \texttt{Embed} layer. Only the task-specific prompts are updated during fine-tuning, while the Transformer backbone is kept frozen. Depending on the number of Transformer layers involved, our approach has two variants, \shallowprompt{} and \deepprompt{}, as shown in~\cref{fig:method}. \para{VPT-Shallow.} Prompts are inserted into the first Transformer layer $L_1$ only. Each prompt token is a learnable $d$-dimensional vector.
\para{VPT-Deep.} Prompts are introduced at \emph{every} Transformer layer's input space. For the ($i$+$1$)-th layer $L_{i+1}$, we denote the collection of input learnable prompts as $\vec{P}_{i}=\{\vec{p}_i^k\in\R^d\mid k\in\N, 1\le k\le p\}$. The deep-prompted ViT is formulated as:
\begin{align} \label{eq:deep} [\vec{x}_i, \underline{\hspace{0.3cm}}, \vec{E}_i] &= \textcolor{prompt_blue}{L_i}([\vec{x}_{i-1}, \textcolor{prompt_red}{\vec{P}_{i-1}},\vec{E}_{i-1}]) &&i=1, 2, \ldots, N\\ \vec{y} &= \textcolor{prompt_red}{\texttt{Head}}(\vec{x}_N)\eqdot \end{align}

\para{Storing Visual Prompts.} \vprompt{} is beneficial in the presence of multiple downstream tasks. We only need to store the learned prompts and the classification head for each task and re-use the original copy of the pre-trained Transformer model, significantly reducing the storage cost. For instance, given a \vit{}-Base with 86 million (M) parameters and $d=768$, 50 shallow prompts and deep prompts yield an additional $p\times d=50\times 768=0.038$M and $N\times p\times d=0.46$M parameters, amounting to only 0.04$\%$ and 0.53$\%$ of all \vit{}-Base parameters, respectively.
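The storage arithmetic above is simple enough to check directly; a minimal sketch, mirroring the ViT-Base numbers quoted in the text:
\begin{verbatim}
# Prompt storage cost for ViT-Base: N = 12 layers, d = 768, p = 50 prompts.
N, p, d = 12, 50, 768
backbone = 86_000_000                      # ~86M ViT-Base parameters

shallow = p * d                            # 38,400  ~= 0.038M (VPT-Shallow)
deep = N * p * d                           # 460,800 ~= 0.46M  (VPT-Deep)
print(f"shallow: {shallow/backbone:.2%}")  # ~0.04% of the backbone
print(f"deep:    {deep/backbone:.2%}")     # ~0.5% of the backbone
\end{verbatim}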
\subsection{Experiment Setup} \label{subsec:evalsetup}
\noindent\textbf{Pre-trained Backbones.} We experiment with two Transformer architectures in vision, Vision Transformers (\vit{})~\cite{dosovitskiy2020vit} and Swin Transformers (\swin{})~\cite{liu2021swin}. All backbones in this section are pre-trained on \imagenet{}-21k~\cite{imagenet_cvpr09}. We follow the original configurations, \emph{e.g.}, the number of image patches, the existence of \texttt{[CLS]}, \emph{etc.} More details are included in~\cref{supsec:detail}.

\noindent\textbf{Baselines.} We compare both variants of VPT with other commonly used fine-tuning protocols:
\begin{enumerate}[nosep, label=(\alph*), font=\small\ttbf,]
\item \fullft{}: fully update \emph{all} backbone and classification head parameters.
\item Methods that focus on the classification head. They treat the pre-trained backbone as a feature extractor, whose weights are fixed during tuning:
\begin{itemize}[leftmargin=0.0em, topsep=0.15mm]
\item \linear{}: only use a linear layer as the classification head.
\item \partialft{}-$k$: fine-tune the last $k$ layers of the backbone while freezing the others, as adopted in~\cite{yosinski2014transferable,zhang2016colorful,noroozi2016unsupervised,he2021mae}. This redefines the boundary between backbone and classification head.
\item \mlp{}-$k$: utilize a multilayer perceptron (MLP) with $k$ layers, instead of a linear layer, as the classification head.
\end{itemize}
\item Methods that update a subset of the backbone parameters or add new trainable parameters to the backbone during fine-tuning:
\begin{itemize}[leftmargin=0.0em, topsep=0.15mm]
\item \sidetune{}~\cite{zhang2020side}: train a ``side'' network and linearly interpolate between the pre-trained features and the side-tuned features before they are fed into the head.
\item \bias{}~\cite{cai2020tinytl,zaken2021bitfit}: fine-tune only the bias terms of the pre-trained backbone.
\item \adapter{}~\cite{houlsby2019parameter,pfeiffer2020adapterfusion,pfeiffer2020AdapterHub}: insert new MLP modules with residual connections inside the Transformer layers.
\end{itemize}
\end{enumerate}

\noindent\textbf{Downstream Tasks.} We experiment on the following two collections of datasets: \textit{FGVC} consists of 5 benchmarked Fine-Grained Visual Classification tasks, including CUB-200-2011~\cite{WahCUB_200_2011}, NABirds~\cite{van2015nabirds}, Oxford Flowers~\cite{nilsback2008automated}, Stanford Dogs~\cite{Khosla_FGVC2011dogs} and Stanford Cars~\cite{gebru2017cars}. If a certain dataset only has {\small\texttt{train}\xspace}{} and {\small\texttt{test}\xspace}{} sets publicly available, we randomly split the training set into {\small\texttt{train}\xspace}{} (90\%) and {\small\texttt{val}\xspace}{} (10\%), and rely on {\small\texttt{val}\xspace}{} to select hyperparameters. \textit{\vtab{}}~\cite{zhai2019vtab} is a collection of 19 diverse visual classification tasks, organized into three groups: \textit{Natural} -- tasks that contain natural images captured using standard cameras; \textit{Specialized} -- tasks that contain images captured via specialized equipment, such as medical and satellite imagery; and \textit{Structured} -- tasks that require geometric comprehension, like object counting. Each VTAB task contains 1000 training examples. Following~\cite{zhai2019vtab}, we use the provided 800-200 split of the {\small\texttt{train}\xspace}{} set to determine hyperparameters and run the final evaluation using the full training data. We report the average accuracy score on the {\small\texttt{test}\xspace}{} set over three runs. We report the average accuracy on the FGVC datasets, and the average accuracy on each of the three groups in VTAB. The individual results on each task are in~\cref{subsec:results_supp}, as are image examples of these aforementioned tasks.

\subsection{Main Results} \label{subsec:exp_results}
\cref{table:main_vitb} presents the results of fine-tuning a pre-trained \vit{}-B/16, averaged across the 4 diverse downstream task groups, comparing VPT to the other 7 tuning protocols. We can see that:
\begin{enumerate}[nosep, leftmargin=5mm]
\item \textbf{VPT-Deep outperforms \fullft{} (\cref{table:main_vitb}\texttt{(a)}) on 3 out of the 4 problem classes} (20 out of 24 tasks), while using significantly fewer total model parameters (1.18$\times$ \emph{vs.} 24.02$\times$). Thus, \emph{even if storage is not a concern}, \vprompt{} is a promising approach for adapting larger Transformers in vision.
Note that this result is in contrast to comparable studies in NLP, where prompt tuning matches, but \emph{does not exceed}, full fine-tuning~\cite{lester-etal-2021-power}.
\item \textbf{VPT-Deep outperforms all the other parameter-efficient tuning protocols (\cref{table:main_vitb}\texttt{(b},\texttt{c)}) across all task groups}, indicating that \deepprompt{} is the best fine-tuning strategy in storage-constrained environments.
\item Although sub-optimal compared to \deepprompt{}, \shallowprompt{} still offers a non-trivial performance gain over the head-oriented tuning methods in \cref{table:main_vitb}\texttt{(b)}, indicating that \shallowprompt{} is a worthwhile choice for deploying multi-task fine-tuned models if the storage constraint is severe.
\end{enumerate}

\subsection{Ablation on Model Design Variants} \label{subsec:ablate}
We ablate different model design choices on the supervised ImageNet-21k pre-trained ViT-Base and evaluate them on VTAB, with the same setup as in \cref{table:main_vitb}. See more in~\cref{supp_analysis}.
\begin{figure}[t] \centering \includegraphics[width=0.95\textwidth]{images/loc_ablate-compressed.pdf} \includegraphics[width=\textwidth]{images/ablateall_loc.pdf} \caption{Ablation on prompt location. We illustrate the different location choices at the top, and present the results at the bottom. For easy comparison, the two blue dashed lines represent the performance of the default \deepprompt{} and \shallowprompt{}, respectively } \label{fig:ablate_loc} \end{figure}

\para{Prompt Location.} An important distinction between \vprompt{} and other methods is the extra learnable parameters introduced as \textit{inputs} for the Transformer layers. \cref{fig:ablate_loc} ablates different choices of how and where to insert prompts in the input space, and how they affect the final performance.

\textit{Prepend or Add?} Instead of prepending prompts to the sequence of image patch embeddings $\vec{E}_i$ as described in~\cref{subsec:method_vp}, another option is to directly \emph{add} prompts element-wise to those embeddings, keeping the Transformer's input sequence length the same as before. Though this variant is competitive with \fullft{} in some cases (\emph{e.g.}, VTAB-\textit{Natural}), its performance generally falls behind the default \texttt{Prepend} in both the deep and shallow settings. More discussion on this phenomenon is in~\cref{app:seq}.

\textit{Latent or pixel space?} Instead of inserting the prompts as latent vectors for the first Transformer layer, one could introduce prompts at the \textit{pixel} level before the \texttt{Embed} layer in~\cref{eq:embed}, \emph{i.e.}, \texttt{Prepend-pixel} and \texttt{Concat-channel}. \cref{fig:ablate_loc} shows that the adaptation performance \emph{decreases} for these two variants. For example, the accuracy score of prepending shallow prompts before the projection layer (\texttt{Prepend-pixel}) drops by 6.9$\%$ on VTAB-\textit{Natural}, compared to the default prepending in the embedding space (\texttt{Prepend}). The performance deteriorates further (a drop of as much as 30 accuracy points on VTAB-\textit{Natural}) if we instead concatenate a new channel to the input image (\texttt{Concat-channel}). These observations suggest that it is easier for prompts to learn condensed task-dependent signals in the latent input space of Transformers.
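The \texttt{Prepend} and \texttt{Add} variants differ only in how the prompts enter the token sequence. Below is a minimal PyTorch sketch of the two; the tiling of $p$ prompts onto $m$ patch positions in the \texttt{Add} branch is our assumption for illustration, as the text does not prescribe a particular mapping.
\begin{verbatim}
import torch

B, p, m, d = 8, 50, 196, 768                    # batch, prompts, patches, dim
prompts = torch.randn(p, d, requires_grad=True) # learnable prompts
patches = torch.randn(B, m, d)                  # patch embeddings E_0

# Default `Prepend`: the sequence grows from m to p + m tokens.
prepended = torch.cat([prompts.expand(B, -1, -1), patches], dim=1)

# `Add` variant: element-wise addition keeps the length at m tokens.
# Tiling the p prompts across the m positions is our assumed mapping.
tiled = prompts.repeat(m // p + 1, 1)[:m]
added = patches + tiled.unsqueeze(0)

print(prepended.shape, added.shape)  # (8, 246, 768) and (8, 196, 768)
\end{verbatim}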
\begin{figure}[t] \centering \includegraphics[width=0.95\textwidth]{images/prompttokens.pdf} \caption{Ablation on prompt length. We vary the number of prompts for \deepprompt{} and show the averaged results for each VTAB subgroup. The averaged best \deepprompt{} results for each task are also shown for easy reference } \label{fig:ablate_length} \end{figure}

\para{Prompt Length.} This is the only additional hyper-parameter that needs to be tuned for VPT compared to full fine-tuning. For easy reference, we also ablate two other baselines on their individual additional hyper-parameters, \emph{i.e.}, the number of layers for \mlp{} and the reduction rate for \adapter{}. As shown in \cref{fig:ablate_length}, the optimal prompt length varies across tasks. Notably, even with as few as \emph{one} prompt, \deepprompt{} still significantly outperforms the other 2 baselines, and remains competitive with, or even better than, full fine-tuning on VTAB-\textit{Structured} and \textit{Natural}.

\begin{figure}[t] \centering \includegraphics[width=0.95\textwidth]{images/ablatelayers.pdf} \caption{Ablation on prompt depth. We select the best prompt length for each variant with the {\small\texttt{val}\xspace}{} sets. $i\rightarrow j$ indicates the Transformer layer indices that prompts are inserted into. The 1-st layer refers to the one closest to the input. ViT-B has 12 layers in total } \label{fig:ablate_depth} \end{figure}

\para{Prompt Depth.} \cref{fig:ablate_depth} ablates which and how many layers to insert prompts into. Each variant reports the best prompt length selected with the {\small\texttt{val}\xspace}{} set. \vprompt{}'s performance is positively correlated with the prompt depth in general. Yet the accuracy drops if we insert prompts from \textcolor{prompt_red}{top to bottom}, suggesting that prompts at earlier Transformer layers matter more than those at later layers.

\begin{figure}[t] \centering \includegraphics[width=0.95\textwidth]{images/outputpool_ablate_compressed.pdf} \includegraphics[width=\textwidth]{images/ablate_output.pdf} \caption{ Ablation on the final output. An illustration of the different strategies is included at the top, and their results are presented in the bottom section. For easy comparison, the blue dashed line represents the performance of the default \deepprompt{} } \label{fig:ablate_output} \end{figure}

\para{Final Output.} Following the original configuration of \vit{}, we use the final embedding of $\texttt{[CLS]}$, \emph{i.e.}, $\vec{x}_N$, as the classification head input, which is also the default setting in our \vit{} experiments. As shown in \cref{fig:ablate_output}, if we instead use average pooling over the image patch output embeddings $\vec{E}_N$ as the final output (\texttt{Image-pool}), the results remain essentially the same (\emph{e.g.}, 82.4 \emph{vs.}~82.3 for VTAB-\textit{Specialized}). However, if the pooling involves the final prompt outputs $\vec{Z}_N$ (\texttt{Prompt-pool} and \texttt{Global-pool}), the accuracy can drop by as much as 8 points.

\section{Experiments}\label{sec:exp}
We evaluate VPT on a wide range of downstream recognition tasks with pre-trained Transformer backbones across scales. We first describe our experimental setup in~\cref{subsec:evalsetup}, including the pre-trained backbones, the downstream tasks, and a brief introduction of alternative transfer learning methods. We then demonstrate the effectiveness and practical utility of our method in \cref{subsec:exp_results}.
We also systematically study how different design choices affect performance (\cref{subsec:ablate}), which leads to an improved understanding of our approach.
\input{main/4_1_setup} \input{main/4_2_main} \input{main/4_2_moremain} \input{main/4_3_ablation}

\section{Analysis and Discussion} \label{sec:ana}
\para{Visualization.} \cref{fig:tsne} shows t-SNE~\cite{van2008visualizing} visualizations of $\vec{x}_N$, \emph{i.e.}, the embeddings of \texttt{[CLS]} after the last Transformer layer and before the classification head, for 3 tasks in VTAB (SVHN~\cite{netzer2011reading}, EuroSAT~\cite{helber2019eurosat}, Clevr/count~\cite{johnson2017clevr}), one for each subgroup. All plots show that \deepprompt{} enables linearly separable representations while using fewer parameters than \fullft{}. We also observe that the extra tunable parameters at every Transformer layer (\deepprompt{}) improve performance compared to \shallowprompt{}, which only inserts prompts into the first layer's input. Interestingly, on Clevr/count (\cref{fig:tsne}(c)), \deepprompt{} and \fullft{} recover the underlying manifold structure of the task (counting objects in images \emph{vs.}{} street number or landscape recognition), unlike \shallowprompt{} and \linear{}.
\input{main/table_semseg}

\para{Apply VPT to more vision tasks.} We explore the feasibility of \vprompt{} beyond visual classification by evaluating the ADE20K~\cite{zhou2019ade20} semantic segmentation task with a Transformer model, SETR-PUP~\cite{zheng2020rethinking}, which adds a standard ConvNet head to the ViT backbone to perform segmentation. The de-facto approach is still to fully fine-tune the pre-trained backbone together with the ConvNet head (\fullft{}). We include two more protocols for comparison: only update the head layers (\textsc{Head Only}), and update the head layers and the bias vectors in the backbone (\bias{}). In~\cref{table:main_seg}, we report {\small\texttt{val}\xspace}{} mIoU results with and without multi-scale inference. Though the parameter-efficient protocols cannot compete with \fullft{}, \vprompt{} is still comparable with \bias{}. Notably, \vprompt{} offers competitive results to a fully fine-tuned state-of-the-art ConvNet model (DeepLab v3+~\cite{chen2018encoder}), while tuning significantly fewer parameters (15M \emph{vs.}{} 64M, respectively).
\input{main/table_main_ssl}

\para{Apply VPT to more pre-training methods.} In addition to the backbones pre-trained with labeled data, we experiment with two self-supervised objectives: \mae{}~\cite{he2021mae} and \moco{}~\cite{chen2021mocov3}. \cref{table:main_ssl} reports the results on \vtab{} with \vit{}-B. We observe that both variants of \vprompt{} surpass \linear{}, yet the comparisons among the other techniques are less conclusive. For \mae{}, other parameter-efficient methods, \emph{e.g.}, \partialft{}-1, outperform both \vprompt{} and \linear{}. In the case of \moco{}, \vprompt{} no longer holds the best performance, though it is still competitive with the others. This suggests that these two self-supervised \vit{}s are fundamentally different from the supervised ones in the previous sections. Exactly why and how these differences arise remain open questions.
\input{main/table_main_rnrnx}

\para{Apply VPT to ConvNets.} We examine the idea of adding trainable parameters in the input space of ConvNets: padding both the height and width of the input image by $p$ learnable prompt pixels. Though this operation seems unconventional, we implement VPT this way given that there is no obvious solution for adding location-invariant prompts analogous to the Transformer counterparts. In fact, this approach has been explored before in the adversarial attack literature~\cite{elsayed2018adversarial}. The value of $p$ in our experiments is 2 orders of magnitude smaller than in previous work: \emph{e.g.}, 5 \emph{vs.}{} 263. Most importantly, we cast this idea through the lens of transfer learning. See~\cref{vpt_ar} for more discussion.
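A minimal sketch of this padding scheme is shown below; the masked-frame implementation and the frozen-backbone call are our assumptions for illustration, and a task head would additionally be replaced and left trainable.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelPromptConvNet(nn.Module):
    # Pad the input image's height and width with p learnable "prompt
    # pixels"; the ConvNet backbone itself stays frozen.
    def __init__(self, backbone, p=5, image_size=224):
        super().__init__()
        self.backbone = backbone
        for param in self.backbone.parameters():
            param.requires_grad = False
        padded = image_size + 2 * p
        self.prompt = nn.Parameter(torch.zeros(1, 3, padded, padded))
        mask = torch.ones(1, 1, padded, padded)             # 1 on the frame,
        mask[:, :, p:p + image_size, p:p + image_size] = 0  # 0 in the center
        self.register_buffer("mask", mask)
        self.p = p

    def forward(self, images):                    # (B, 3, 224, 224)
        x = F.pad(images, [self.p] * 4)           # zero-pad H and W by p
        x = x + self.prompt * self.mask           # learnable border pixels
        return self.backbone(x)                   # e.g., a frozen ResNet-50
\end{verbatim}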
\cref{table:main_rnxrn} presents the results for \rnx{}-B~\cite{liu2022convnet} (pre-trained on \imagenet{}-21k) and \rn{}-50~\cite{he2016rn} (pre-trained on \imagenet{}-1k), respectively. \vprompt{} works well with the larger ConvNet backbone, \rnx{}-B, offering accuracy gains over the other sparse tuning protocols (\texttt{b}, \texttt{c}), and outperforming \fullft{} in 8 out of 19 cases. The advantages of \vprompt{}, however, diminish with the smaller ConvNet (\rn{}-50), as there is no clear winner across all 19 \vtab{} tasks.

\section{Conclusion}
We present Visual Prompt Tuning, a new parameter-efficient approach to leverage large vision Transformer models for a wide range of downstream tasks. VPT introduces task-specific learnable prompts in the input space, keeping the pre-trained backbone fixed. We show that VPT can surpass other fine-tuning protocols (often including full fine-tuning) while dramatically reducing the storage cost. Our experiments also raise intriguing questions about the fine-tuning dynamics of vision Transformers with different pre-training objectives, and about how to transfer to broader vision recognition tasks in an efficient manner. We therefore hope our work will inspire future research on how best to tap the potential of large foundation models in vision.

\section{Implementation Details} \label{supsec:detail}
We use PyTorch~\cite{paszke2017pytorch} to implement all experiments on NVIDIA A100-40GB GPUs.

\subsection{Classification Experiments}
\subsubsection{VPT.} We use the {\small\texttt{val}\xspace}{} set of each dataset to find the best prompt length $p$; see~\cref{subsec:method_vp}. The prompt length is the only VPT-specific hyper-parameter that we tune. For Transformer backbones, the range of $p$ is $\{1, 5, 10, 50, 100, 200\}$ and $\{1, 5, 10, 50\}$ for \vit{} and \swin{}, respectively. The maximum choice of $p$ is approximately equal to the number of image patch tokens within each MSA for both architectures (\vit{}: 196, \swin{}: 49). We also apply a dropout of $0.1$ for~\deepprompt{}. For ConvNets, the range of $p$ is $\{1, 3, 5, 7, 9, 11\}$. Each prompt is randomly initialized with the Xavier uniform initialization scheme~\cite{glorot2010understanding}. We follow the original backbones' design choices, such as the existence of the classification token \texttt{[CLS]}, and whether or not to use the final \texttt{[CLS]} embedding as the classification head input.

\subsubsection{\adapter{}.} Adapters~\cite{houlsby2019parameter} insert extra lightweight modules inside each Transformer layer. One adapter module generally consists of a linear down-projection (with a reduction rate $r$), followed by a nonlinear activation function, and a linear up-projection, together with a residual connection. \cite{pfeiffer2020adapterfusion,pfeiffer2020AdapterHub} exhaustively searched all possible configurations and found that only inserting adapters after the FFN ``Add \& LayerNorm'' sub-layer works the best; we therefore also use this setup in our own implementation. We sweep the reduction rate $r$ in $\{8, 64, 256\}$.
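For completeness, here is a sketch of one such adapter module, matching the down-projection, nonlinearity, up-projection, and residual structure described above; the GELU activation is an illustrative choice.
\begin{verbatim}
import torch
import torch.nn as nn

class Adapter(nn.Module):
    # One adapter module: linear down-projection (reduction rate r),
    # nonlinearity, linear up-projection, and a residual connection.
    # It would be inserted after each FFN "Add & LayerNorm" sub-layer.
    def __init__(self, dim=768, r=64):
        super().__init__()
        self.down = nn.Linear(dim, dim // r)
        self.act = nn.GELU()                      # activation choice assumed
        self.up = nn.Linear(dim // r, dim)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

out = Adapter()(torch.randn(8, 197, 768))         # (B, 1 + m, d) tokens
\end{verbatim}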
\input{supp/table_details} \input{supp/table_data} \input{supp/table_backbones}
\begin{figure} \centering \includegraphics[height=0.95\textheight]{images/dataset_imgs-compressed.pdf} \caption{Dataset examples for all classification tasks evaluated } \label{fig:dataexps} \end{figure}

\subsubsection{Augmentation and other hyper-parameters.} We adopt a standard image augmentation strategy during training: normalize with \imagenet{} means and standard deviation; randomly resized crop to 224$\times$224 with random horizontal flip for the five FGVC datasets; and resize to 224$\times$224 for the \vtab{} suite.\footnote{Following the \href{https://github.com/google-research/task_adaptation/blob/master/task_adaptation/data_loader.py}{default settings} in VTAB, we do not adopt other augmentations} \cref{table:supp_imp} summarizes the optimization configurations we used. Following~\cite{wslimageseccv2018}, we conduct a grid search to find the tuning-specific hyper-parameters, \emph{i.e.}, the learning rate and weight decay values, using the {\small\texttt{val}\xspace}{} set of each task. Following the linear scaling rule~\cite{krizhevsky2014one,goyal2017accurate,chen2021mocov3,he2021mae}, the learning rate is set as \emph{base\_lr}$\times b / 256$, where $b$ is the batch size used for the particular model, and \emph{base\_lr} is chosen from the range specified in~\cref{table:supp_imp}. The optimal hyper-parameter values for each experiment can be found in~\cref{subsec:results_supp}.
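As an illustration of this selection procedure, the sketch below enumerates a grid of (learning rate, weight decay) configurations under the linear scaling rule; the candidate values are assumptions for illustration, and the actual ranges are those in \cref{table:supp_imp}.
\begin{verbatim}
import itertools

batch_size = 64                                   # b for this model
base_lrs = [50.0, 25.0, 10.0, 5.0, 2.5, 1.0,
            0.5, 0.25, 0.1]                       # assumed candidate range
weight_decays = [0.01, 0.001, 0.0001, 0.0]        # assumed candidate range

# Linear scaling rule: lr = base_lr * b / 256.
grid = [{"lr": blr * batch_size / 256, "weight_decay": wd}
        for blr, wd in itertools.product(base_lrs, weight_decays)]

print(len(grid), grid[0])  # each entry is trained and scored on the val set
\end{verbatim}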
\subsubsection{Datasets and pre-trained backbone specifications.} \cref{table:supp_datasets,table:supp_backbone} summarize the statistics and details of the evaluated classification datasets and all the pre-trained backbones used in the paper. \cref{fig:dataexps} includes image examples of all 24 classification tasks evaluated.

\subsection{Semantic Segmentation Experiments}
ADE20K~\cite{zhou2019ade20} is a challenging scene parsing benchmark with 150 fine-grained labels. The training and validation sets contain 20,210 and 2,000 images, respectively. We utilize the public codebase MMSegmentation~\cite{mmseg2020} in our implementation.\footnote{See the \href{https://github.com/open-mmlab/mmsegmentation}{MMSegmentation GitHub page}} The ViT-L backbone is pre-trained with supervision on \imagenet{}-21k.\footnote{\href{https://storage.googleapis.com/vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_strong1-wd_0.1-do_0.0-sd_0.0.npz}{ViT-L/16 checkpoint}} SETR~\cite{zheng2020rethinking} is a competitive segmentation framework using ViT as the encoder. PUP is a progressive upsampling strategy consisting of consecutive convolution layers and bilinear upsampling operations. Among multiple decoder choices, PUP works the best according to MMSegmentation's reproduction; we therefore also use it in our implementation.\footnote{\href{https://github.com/open-mmlab/mmsegmentation/tree/master/configs/setr}{ MMSegmentation's reproduction of SETR}} When applying VPT to SETR-PUP, we only insert prompts into SETR's ViT encoder backbone. For the decoder, only the image patch embeddings are used as inputs and the prompt embeddings are discarded. As with the recognition tasks, only the PUP decoder head and the prompts are learned during training, and the ViT backbone is frozen. For full fine-tuning, we use the same hyper-parameters as in MMSegmentation. For \textsc{HeadOnly}, \bias{}, and VPT, we sweep the learning rate over \{0.05, 0.005, 0.0005, 0.001\}. The optimal learning rate is 0.005 for all methods. We sweep the prompt length $p\in$ \{1, 5, 10, 50, 100, 200\}. For VPT, we also change the learning rate multiplier to 1.0 instead of the default 10.0, so that the decoder head and the prompts share the same learning rate. Other hyper-parameters remain the same as for full fine-tuning.

\section{Extended Analysis} \label{supp_analysis}
\subsubsection{Effect of expanding the input sequence length.} \label{app:seq} As shown in~\cref{table:main_vitb}, by expanding the input sequence with learnable prompts, VPT achieves better performance than \fullft{} on 20 out of the 24 tasks evaluated. To investigate whether the advantage of VPT is due to its enlarged input sequence length, we experiment with two more variants: (1) the prompts are kept frozen during the fine-tuning stage (\texttt{Prompt-Fixed}); (2) only the $\texttt{[CLS]}$ token is tuned (\texttt{[CLS]-Learned}). From \cref{fig:ablate_update} we can see that updating the prompt embeddings (\texttt{Prompt-Learned}) offers significant gains, while \texttt{Prompt-Fixed} yields results comparable to \linear{}. This suggests that the final performance of VPT is mainly contributed by the learned prompt embeddings rather than the enlarged sequence length. Updating the $\texttt{[CLS]}$ token performs similarly to updating 1 prompt ($\texttt{[CLS]}$ \emph{vs.}~\texttt{Learned$_{p=1}$}), but still lags behind the default setting, where we select the best number of prompt tokens based on the \texttt{val} set.

\begin{figure}[t] \centering \includegraphics[width=0.95\textwidth]{images/share_ablate-compressed.pdf} \includegraphics[width=\textwidth]{images/ablateshared.pdf} \caption{Effect of sharing prompts. An illustration of the different strategies is included at the top, and their results are presented in the bottom section. For easy comparison, the blue dashed line represents the performance of the default \deepprompt{} } \label{fig:ablate_shared} \end{figure}

\subsubsection{Sharing prompts.} We examine the effect of sharing prompt parameters in \cref{fig:ablate_shared} by setting the same prompt embedding \emph{within} each Transformer layer (\texttt{Shared-intra}), \emph{among} all layers (\texttt{Shared-inter}), and for all prompts inserted in the Transformer (\texttt{Shared-all}). We can observe that: (1) sharing prompts within a layer (\texttt{Shared-intra}) performs competitively with, or slightly outperforms, using one prompt (\texttt{Default$_{p=1}$}), further demonstrating the value of expanding the input sequence. (2) Although \texttt{Shared-intra} under-performs \texttt{Default} in general, surprisingly, \texttt{Shared-inter} slightly outperforms our default \deepprompt{} while using a similar number of trainable parameters (total number of parameters for all VTAB tasks: 1.14$\times$ \emph{vs.}~1.13$\times$ for \texttt{Shared-inter} \emph{vs.}{} \texttt{Default}, respectively). Closer examination reveals that the optimal prompt length $p$ for \texttt{Shared-inter} is in general larger than for \texttt{Default}; \emph{i.e.}, the average prompt length over all VTAB tasks is 64.58 \emph{vs.}~60.94 for \texttt{Shared-inter} \emph{vs.}{} \texttt{Default}, respectively. (3) Sharing the same prompt embedding both among and within layers (\texttt{Shared-all}) deteriorates performance, but still surpasses the linear probing results across the three VTAB subgroups.

\subsubsection{Prompt initialization.} In NLP, prompt tuning can benefit from more sophisticated prompt initialization, as shown in~\cite{lester-etal-2021-power}.
We investigate whether this is the case for visual prompting as well. We utilize prototype representations of the downstream target classes so that the prompts are initialized with embeddings that enumerate the output space. Since we want the model to produce an output embedding that is close to one of these prototype representations given a test example, initializing prompts in this manner might give the model some hints about the target categories and thus help improve the optimization process. Concretely, we use the averaged final \texttt{[CLS]} embeddings within each target class of the downstream dataset's {\small\texttt{train}\xspace}{} split. Given the pre-trained ViT with $N$ layers, and the downstream {\small\texttt{train}\xspace}{} set with $c$ target classes, for each training example we compute the final \texttt{[CLS]} embedding, $\vec{x}_N \in \R^d$. Then we average these embeddings within each target class to get $\{\hat{\vec{x}}_N^k\in\R^d \mid k\in\N, 1\le k\le c\}$.\footnote{if $c>200$, we further apply $k$-means ($k=200$) to the class-averaged embeddings and use the corresponding 200 centroid embeddings as $\{\hat{\vec{x}}_N^k\in\R^d\}_{k=1}^{k=200}$.} Setting the prompt length $p=c$,\footnote{if $c>200$, we set $p=200$ so that the prompt length does not become too large. In fact, for VTAB, only the Sun397 task in the \textit{Natural} subgroup has over 200 classes. See~\cref{table:supp_datasets}.} we initialize $\vec{P}$ with $\{\hat{\vec{x}}_N^k\}_{k=1}^{k=c}$ for \shallowprompt{}, and initialize each $\vec{P}_i$ with $\{\hat{\vec{x}}_N^k\}_{k=1}^{k=c}$, where $i=0, 1, \ldots, N-1$, for \deepprompt{}. We compare the fine-tuning performance using the above initialization strategy (\texttt{CLS}) against the default random initialization (\texttt{Random}) in~\cref{fig:ablate_init}. We also report results when we fix the prompts during the fine-tuning stage (\texttt{$\cdot$-fixed}). As shown in \cref{fig:ablate_init}, it is quite surprising that our default random initialization (\texttt{Random}) works the best in general, consistently across the different subgroups of VTAB and without the extra pre-processing steps described above (\texttt{CLS}). \texttt{CLS} works comparably in the \textit{Natural} and \textit{Specialized} subgroups.\footnote{ Utilizing the per-class averaged \texttt{[CLS]} features, we also tried several other implementation variants, including using per-layer \texttt{[CLS]} embeddings for \deepprompt{} instead of only the final output \texttt{[CLS]} vector. They perform either the same as or even much worse than the \texttt{CLS} strategy above, and none of them is able to outperform the default \texttt{Random}. }
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{images/ablate_init_final.pdf} \caption{Effect of prompt initialization. For easy comparison, the two blue dashed lines represent the performance of the default \deepprompt{} and \shallowprompt{}, respectively } \label{fig:ablate_init} \end{figure}
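A sketch of this \texttt{CLS} initialization is given below; \texttt{backbone(images)} is assumed to return the final \texttt{[CLS]} embeddings, and the $k$-means capping for $c>200$ is omitted for brevity.
\begin{verbatim}
import torch

@torch.no_grad()
def prototype_prompt_init(backbone, loader, num_classes, dim=768):
    """Average the final [CLS] embedding within each target class and use
    the per-class prototypes as the initial prompts (p = c)."""
    sums = torch.zeros(num_classes, dim)
    counts = torch.zeros(num_classes)
    for images, labels in loader:                 # labels: (B,) int64
        cls_embed = backbone(images)              # (B, d) final x_N (assumed)
        sums.index_add_(0, labels, cls_embed)
        counts.index_add_(0, labels,
                          torch.ones_like(labels, dtype=torch.float))
    return sums / counts.clamp(min=1).unsqueeze(1)  # (c, d) prompts P
\end{verbatim}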
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{images/ablatelayers_all.pdf} \caption{Sensitivity to prompt length for the prompt depth experiments. We select the best prompt length for each variant with the {\small\texttt{val}\xspace}{} sets. We also include the same prompt length for all depth choices. $i\rightarrow j$ indicates the Transformer layer indices that prompts are inserted into. The 1-st layer refers to the one closest to the input. ViT-B has a total of 12 layers } \label{suppfig:ablate_depth_all} \end{figure}

\subsubsection{Prompt depth \emph{vs.}~prompt length.} In~\cref{fig:ablate_depth}, we ablate the number of layers into which prompts are inserted. For each prompt depth variant, \cref{fig:ablate_depth} reports the results using the best prompt length for \textit{each} task (``$\cdot\rightarrow\cdot$ (best)'' in~\cref{suppfig:ablate_depth_all}). Here we adopt another setting where the best prompt length from $1\rightarrow12$ is used for \textit{all} other prompt depth variants. Comparing both ``$\cdot\rightarrow\cdot$ (best)'' and ``$\cdot\rightarrow\cdot$'', we observe that there are varied sensitivities to prompt length for different depths, especially if we insert prompts into ten layers only ($3\rightarrow12$, $12\rightarrow3$).
\input{supp/table_promptbias}
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{images/ensembles.pdf} \caption{ Performance of a five-run ensemble. We report the averaged and the best among the five runs as well. The best performance is bolded in each column } \label{fig:ensemble} \end{figure}

\subsubsection{Combine VPT with Bias Tuning.} Our experiments in the main paper reveal that \bias{} is a competitive parameter-efficient tuning baseline (\emph{e.g.},~\cref{table:main_vitb}\texttt{(c)}). Based on this observation, we explore another protocol where we update both the prompts and the bias terms of the pre-trained backbone, keeping everything else in the backbone frozen (\promptbias{}). As shown in \cref{table:supp_promptbias}, to our surprise, incorporating \bias{} with \vprompt{} does not yield superior results in general, and even undermines \deepprompt{} for all 3 task subgroups. This suggests that these two methods are not necessarily complementary to each other.

\subsubsection{Prompt ensembling.} \cite{lester-etal-2021-power} demonstrated prompts' efficiency in the context of model ensembling. For an ensemble of $k$ models, we only need to store the learnt prompt vectors instead of $k$ copies of the whole fine-tuned model parameters (\emph{e.g.}, $k\times2.5$GB for ViT-H). Furthermore, given one test example during inference time, \emph{only one} forward pass is executed, with a specially-designed batch with replicated original data but varied prompts. Given such advantages, we also investigate VPT's effectiveness for prompt ensembling. We train 5 different prompts for each VTAB task with different random seeds, using the same pre-trained ViT-B backbone and hyper-parameters as in \cref{table:main_vitb}. \cref{fig:ensemble} shows that the ensembled \deepprompt{} outperforms the average and even the best single-prompt counterparts, as well as other ensembled fine-tuning methods including \fullft{}.
\input{supp/table_sig}
\begin{figure} \centering \includegraphics[width=\textwidth]{images/hp_sensitivity.pdf} \caption{Effect of different fine-tuning hyperparameters. Evaluated on the VTAB-\textit{Specialized}: KITTI/Distance task. Other tuning methods are shaded in gray } \label{fig:hp} \end{figure}

\subsubsection{Test of statistical significance.} We conduct a non-parametric paired one-tailed test (the Wilcoxon signed-rank test~\cite{wilcoxon1992individual}) on whether \deepprompt{}'s performance is greater than that of the other fine-tuning methods across the 19 VTAB tasks (the null hypothesis $H_0$ states that the mean VTAB performance difference between \deepprompt{} and the alternate baseline method is zero.
The alternative hypothesis $H_1$ states that \deepprompt{} outperforms the baseline method on VTAB). \cref{table:supp_sig} presents the $p$-values of each test, with the number of observations equal to 19 for each method compared (we use the accuracy scores averaged over 5 runs for the 19 VTAB tasks and all fine-tuning methods). For all of the fine-tuning protocols compared, \deepprompt{}'s improvements are statistically significant ($p<0.05$). We also conduct an unpaired one-tailed $t$-test with unequal variances (Welch's $t$-test~\cite{welch1947generalization}), comparing the individual runs (number of observations = 5) for each VTAB task ($H_0$ states that \deepprompt{} and the other baseline perform the same for a specific VTAB task, while $H_1$ states that \deepprompt{} outperforms the other baseline for that task). \cref{suppfig:sig} presents the $p$-values for each $<$\deepprompt{}, baseline method$>$ pair on each task. We reject $H_0$ in 127 out of $19\times 8=152$ cases ($p<0.05$). Compared to \fullft{}, \deepprompt{} achieves statistically significantly better performance on 11 out of 19 tasks.
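The two tests above map directly onto standard SciPy routines. A minimal sketch on dummy numbers (a recent SciPy is assumed for the one-tailed \texttt{alternative} option; the arrays stand in for the real per-task and per-run accuracies):
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
vpt_deep = rng.normal(75, 5, size=19)     # per-task mean accuracies (dummy)
baseline = vpt_deep - rng.uniform(0, 3, size=19)

# Paired one-tailed Wilcoxon signed-rank test across the 19 VTAB tasks.
w = stats.wilcoxon(vpt_deep, baseline, alternative="greater")
print(f"Wilcoxon p = {w.pvalue:.4f}")

# Unpaired one-tailed Welch's t-test over 5 runs of a single task.
runs_vpt = rng.normal(80, 1, size=5)
runs_base = rng.normal(78, 1, size=5)
t = stats.ttest_ind(runs_vpt, runs_base, equal_var=False,
                    alternative="greater")
print(f"Welch p = {t.pvalue:.4f}")
\end{verbatim}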
\input{supp/table_vtab384}
\subsubsection{Effect of different fine-tuning hyper-parameters.} In \cref{fig:hp}, we present the different tuning protocols' performance under different fine-tuning hyper-parameters, including learning rate and weight decay. For our proposed \deepprompt{}, we also ablate different choices of prompt length $p$, which is the only hyper-parameter that needs to be manually tuned. All experiments are evaluated on the {\small\texttt{val}\xspace}{} set of the KITTI/Distance task (VTAB-\textit{Specialized}). We observe different behaviors between \linear{} and \vprompt{}. Both methods freeze the backbone parameters during the fine-tuning stage. Linear probing is more sensitive to weight decay values in general, whereas VPT is influenced by both learning rate and weight decay values. VPT with a larger prompt length is also less sensitive to the choice of learning rate.

\subsubsection{Effect of image resolution.} The original ViT paper~\cite{dosovitskiy2020vit} found that fine-tuning at higher image resolutions (384$\times$384) is beneficial to downstream recognition tasks. All recognition experiments presented in the main paper are fine-tuned at 224$\times$224 resolution. As shown in \cref{table:supp_384}, we re-run the VTAB experiments with the same setup as in \cref{table:main_vitb} but at the 384 resolution instead of the default 224. We can see that \deepprompt{} still achieves the best performance among all parameter-efficient tuning protocols, and even outperforms full fine-tuning on 15 out of 19 tasks. Although the increase in image resolution does not lead to better full fine-tuning performance in general, it does slightly boost \deepprompt{}'s performance. Another interesting observation from~\cref{table:supp_384} is that with the 224 fine-tuning resolution and a larger value of $p=380$, \vprompt{} can achieve similar or better performance compared to \fullft{} at 384 resolution, while using the same input sequence length yet significantly fewer trainable parameters.

\subsubsection{Empirical computational cost.} One possible limitation of \vprompt{} is the extra input sequence length for Transformers. In theory the complexity of MSA is quadratic w.r.t.{} the input sequence length, but this might not be the case for real-world speed due to hardware details like lane widths and cache sizes~\cite{dosovitskiy2020vit}. In~\cref{table:supp_cost,suppfig:cost}, we study the empirical computational cost, \emph{i.e.}, latency and peak GPU memory usage at both training and inference time, for all the fine-tuning protocols studied. All experiments use the same A100 GPU with a batch size of 64 for both training and inference. We can see that the theoretical quadratic scaling w.r.t.{} sequence length barely applies to VPT in practice. For instance, doubling the length ($p=200$ \emph{vs.}~$m=198$) only leads to roughly 2$\times$ (instead of 4$\times$) inference latency and peak GPU memory w.r.t.{} full fine-tuning. For training, the latency is largely reduced with a smaller number of prompts. An equivalent implementation of \vprompt{} at test time is to directly prepend the parameters to the key and value arrays inside the self-attention modules of the Transformer~\cite{li-liang-2021-prefix} (\vprompt{}-prefix). While we found that such an implementation does not lead to accuracy improvements on the VTAB datasets, it reduces the computational cost during inference. \Cref{suppfig:cost_vpt_prefix} shows the comparison for different values of $p$. \vprompt{}-prefix reduces test-time latency and peak GPU memory by a large margin, especially when $p$ becomes large.
\input{supp/table_cost}

\section{Further Discussion}
\subsubsection{VPT \emph{vs.}~Adversarial Reprogramming (AR).} \label{vpt_ar} The differences are: (1) the number of learnt parameters injected in the input space in the AR literature~\cite{elsayed2018adversarial} is nearly 20 times larger than ours (264k \emph{vs.} 13k); VPT is significantly more parameter-efficient. (2) AR has shown its effectiveness on ConvNets, while VPT can be applied to broader architectures, including ViT and Swin. Furthermore, VPT is more general, with the option of diving into deeper layers of the pre-trained backbone (\cref{fig:method}), whereas AR strictly applies to the \emph{first} input layer of ConvNets. (3) Another distinction is that our setting updates both the prompts and the classification head, while AR~\cite{elsayed2018adversarial} directly uses the pre-trained classification head. Our setup is more general and can be applied to models with a broader range of pre-training objectives (\emph{e.g.}, MAE~\cite{he2021mae}, which does not include a pre-trained classification head) and broader vision tasks (\emph{e.g.}, segmentation).

\subsubsection{Visual prompt \emph{vs.}~textual prompt.} Our paper also discovers discrepancies between visual and textual prompts: we show that VPT can even outperform full-model fine-tuning in 20 out of 24 cases, which is in contrast to the related work in NLP~\cite{lester-etal-2021-power}. We also found that randomly initialized prompts work better (\cref{fig:ablate_init}), and that prompts at earlier layers matter more (\cref{fig:ablate_depth,suppfig:ablate_depth_all}), both of which also differ from observations on the NLP side~\cite{lester-etal-2021-power,liu2021p}. These discrepancies indicate that visual prompting might be fundamentally different from text prompting and is thus in need of further investigation.

\section{Supplementary Results} \label{subsec:results_supp}
\input{supp/table_vtabfgvc}
\subsubsection{Numerical results of~\Cref{table:main_vitb}.} \cref{table:supp_vtab,table:supp_fgvc} present the per-task results for the 24 classification tasks evaluated in~\cref{table:main_vitb}.
\subsubsection{Per-task results on training data ablations.} \cref{fig:supp_fgvcsize} presents the per-task results for the five FGVC datasets. We observe a trend similar to~\cref{fig:main_fgvcsize}: while all parameter-efficient methods outperform full fine-tuning in the small-to-medium data regime, \deepprompt{} consistently surpasses \fullft{} across data scales for the five FGVC tasks.
\begin{figure}[t] \centering \includegraphics[width=0.95\textwidth]{images/fgvc_size_alldata.pdf} \caption{ Effect of downstream data size, for each of the FGVC tasks. The sizes of the markers are proportional to the percentage of tunable parameters on a log scale } \label{fig:supp_fgvcsize} \end{figure}
\begin{figure} \centering \includegraphics[width=\textwidth]{images/tsne_supp-compressed.pdf} \caption{More t-SNE visualizations of the final \texttt{[CLS]} embedding $\vec{x}_N$ for more VTAB tasks. We include tasks that have fewer than or equal to 20 target classes for visualization } \label{fig:more_tsne} \end{figure}
\subsubsection{More t-SNE visualizations.} In \cref{fig:more_tsne}, we present more t-SNE visualizations, similar to~\cref{fig:tsne}, for all VTAB datasets with fewer than or equal to 20 target classes.
\section{Introduction} \label{sec:intro} Located at the knots of the cosmic web, galaxy clusters trace regions of over-density in the large-scale structure of the universe, making them ideal cosmic laboratories. As cosmological probes (see review articles \citealt{Allen:11, Mantz:14}), clusters have been used to study dark energy (e.g., \citealt{Frieman:08, Heneka:18, Bonilla:18, Huterer:18}) and dark matter (e.g., \citealt{Bradac:06, Clowe:06, Bradac:08, Diego:18}), constrain cosmological parameters (e.g., \citealt{Gladders:07, Dunkley:09, Rozo:10, Mantz:10, Mantz:14, deHaan:16, Bocquet:19}), measure the baryonic fraction (e.g., \citealt{Fabian:91, Allen:08, Vikhlinin:09}) and the amplitude of the matter power spectrum (e.g., \citealt{Allen:03}). Crucial to cosmological studies using galaxy clusters is a large, well-defined sample with a complete characterization of the selection function of the observations (e.g., \citealt{Hu:03b, Khedekar:13}). The mass distribution of galaxy clusters (the cluster mass function) provides a connection between the observables and the underlying cosmology, and can constrain structure formation models (e.g., \citealt{Jenkins:01, Evrard:02, Corless:09}). The dynamical, non-linear hierarchical merging growth process of galaxy clusters \citep{Bertschinger:98} introduces variance into astronomical measurements \citep{Evrard:02, Allen:11, Huterer:18}. Understanding the systematic errors and assumptions made when estimating the mass of galaxy clusters is paramount, as these estimates depend on observable astrophysical quantities (e.g., \citealt{Evrard:02, Huterer:18}). With the advent of recent and upcoming large surveys spanning a broad wavelength range, thousands of strong lensing galaxy clusters will be detected out to a redshift of $z\sim 2$ with high completeness and purity. Examples include the surveys from the South Pole Telescope (SPT-3G, \citealt{Benson:14}; SPT-SZ 2500 deg$^2$, \citealt{Bleem:15}), Atacama Cosmology Telescope (ACT, \citealt{Marriage:11,Hilton:18}), Cerro Chajnantor Atacama Telescope (CCAT, \citealt{Mittal:18}), Dark Energy Survey (DES, \citealt{Abbott:18}), Euclid \citep{Laureijs:11,Boldrin:12}, Vera Rubin Observatory Legacy Survey of Space and Time (LSST, \citealt{LSST:09}), ROSAT All-Sky Survey (RASS, \citealt{Ebeling:98,Ebeling:00}), and eROSITA \citep{Pillepich:18}. A thorough characterization of the selection function and the bias implicit in the observations and detections is key. In addition, multi-wavelength coverage of some galaxy clusters will allow for an extensive study of their physical components. Studies of the mass profile of galaxy clusters can provide us with information related to the evolution of structure, formation and feedback processes, and dark matter properties. The methods used to estimate the mass of galaxy clusters include X-ray (e.g., \citealt{Vikhlinin:09, Ettori:19, Mantz:18}), the Sunyaev-Zel'dovich effect (SZ, \citealt{Sunyaev:72, Sunyaev:80}; e.g., \citealt{Reichardt:13, Sifon:13, Planck:16}), richness (e.g., \citealt{Yee:03, Koester:07, Rykoff:16}), dynamics (e.g., \citealt{Gifford:13, Foex:17}), and gravitational lensing (e.g., \citealt{Kneib:11, Hoekstra:13, Sharon:15, Sharon:20}). Gravitational lensing (weak and strong) is the best technique to probe the total projected (baryonic and dark matter) mass density, independent of assumptions on the dynamical state of the cluster or baryonic physics.
At the cores of galaxy clusters, strong gravitational lensing measures mass at the smallest radial scales and most extreme over-densities; when coupled with a mass proxy at large radii, strong lensing can constrain global properties of the mass profile, including the concentration parameter. Advances in strong lens (SL) modeling, including a better understanding of SL systematics \citep{Johnson:16}, their effects on constraining cosmological parameters \citep{Acebron:17} and magnification \citep{Priewe:17,Raney:20}, the consequences of the number of constraints \citep{Mahler:18}, and the use of spectroscopic and photometric redshifts \citep{Cerny:18}, make SL modeling a robust technique to study galaxy clusters and the background universe they magnify. A detailed lens model requires extensive follow-up: (1) imaging to identify multiple images and (2) spectroscopy of the lensed images to obtain spectroscopic redshifts of the sources (e.g., \citealt{Johnson:14, Zitrin:14, Diego:16, Kawamata:16, Lotz:17, Strait:18, Lagattuta:19, Sebesta:19, Sharon:20}). The locations of the multiple images and the spectroscopic redshifts are used as constraints when computing the SL models. Typically, a detailed SL model for a rich galaxy cluster can take weeks to finalize, and it is not an automated process. Given the large numbers of strong lensing galaxy clusters expected from upcoming surveys, an accurate, fast, and well-characterized method of extracting basic strong lensing information is needed. In this paper, we evaluate the use of the geometric Einstein radius to estimate the mass at the core of SL galaxy clusters. We determine the uncertainties in the mass estimate, identify its limitations, investigate dependencies on the shape of the projected mass distribution, and find a possible empirical correction to de-bias the mass estimate. We base our analyses on the state-of-the-art, dark-matter-only `Outer Rim' simulation \citep{Heitmann:19}. The Outer Rim contains a large sample of massive dark matter halos, and has sufficient mass resolution to enable precise and accurate ray-tracing of the strong lensing due to these halos. This paper is organized as follows. In \S \ref{sec:lensing}, we describe the lensing formalism, including a detailed description of the assumptions of the Einstein radius method to compute the enclosed mass. In \S \ref{sec:sim}, we summarize the properties of the `Outer Rim' simulation, the halo sample used in our analysis, and the cosmological framework. In \S\ref{sec:methods}, we detail how we measure the Einstein radius from the ray-traced images and compute both the inferred mass enclosed by the Einstein radius and the true mass from the simulation. In \S \ref{sec:analysis}, we present our analysis of the mass estimate and the systematics that contribute to the scatter and bias. In \S \ref{sec:no_zs}, we investigate the effect of not having the redshift information of the background sources ($z_\mathrm{S}$) on the mass estimate. In \S \ref{sec:emp_cor}, we propose an empirical correction to de-bias the mass estimate. Lastly, we present our conclusions and offer a prescription for applying our findings to real data in \S \ref{sec:conclusion}. For consistency with the simulations, we adopt a \textit{WMAP}-7 \citep{Komatsu:11} Flat $\Lambda$CDM cosmology in our analysis: $\Omega_{\Lambda} = 0.735$, $\Omega_{M} = 0.265$, and $h = 0.71$.
The large-scale masses are reported in terms of M$_{\mathrm{Nc}}$, where M$_{\mathrm{Nc}}$ is defined as the mass enclosed within a radius at which the average density is N times the critical density of the universe at the cluster redshift.

\section{BACKGROUND: Strong Gravitational Lensing} \label{sec:lensing} Gravitational lensing (see \citealt{Schneider:06a, Kneib:11} for reviews about gravitational lensing) occurs when photons deviate from their original direction as they travel to the observer through a locally curved space-time near a massive object, as described by Einstein's General Theory of Relativity. The lensing equation (\ref{eq:lenseq}) maps the image-plane positions of the images of lensed sources to the source-plane locations of the background sources. When multiple solutions to the lensing equation exist, multiply-imaged systems are possible, defining the strong lensing regime. The lensing equation is written as:
\begin{equation} \begin{split} \vect{\beta} & = \vect{\theta} - \vect{\alpha} (\vect{\theta}), \\ \vect{\alpha}(\vect{\theta}) & = \frac{D_\mathrm{LS} (z_\mathrm{L},z_\mathrm{S})}{D_\mathrm{S} (z_\mathrm{S})} \vect{\hat{\alpha}} (\vect{\theta}), \label{eq:lenseq} \end{split} \end{equation}
\noindent where $\vect{\beta}$ is the position of the lensed source in the source plane, $\vect{\theta}$ is the image-plane location of the images, $\vect{\alpha}(\vect{\theta})$ is the deflection angle, $D_\mathrm{LS} (z_\mathrm{L},z_\mathrm{S})$ is the angular diameter distance between the lens and the source, $D_\mathrm{S} (z_\mathrm{S})$ is the angular diameter distance between the observer and the source, $z_\mathrm{L}$ is the redshift of the lens (in our case the redshift of the galaxy cluster), and $z_\mathrm{S}$ is the redshift of the background source. The deflection angle depends on the gravitational potential of the cluster projected along the line-of-sight. The magnification, $\mu$, of a gravitational lens can be expressed through the determinant of the magnification matrix $\mathcal{A}$:
\begin{eqnarray} \label{eq:mag} \mu^{-1} = \det(\mathcal{A}^{-1})=(1-\kappa)^2-\gamma^2, \end{eqnarray}
\noindent where $\kappa$ is the convergence and $\gamma$ is the shear. The locations of theoretically infinite magnification in the image plane are called the tangential and radial critical curves, naming the primary direction along which images (arcs) are magnified. For a circularly symmetric lens with the origin centered at the point of symmetry, the angles $\vect{\alpha}(\vect{\theta})$ and $\vect{\beta}$ are collinear with $\vect{\theta}$. Then the lens equation (eq.~\ref{eq:lenseq}) becomes one-dimensional, $\beta = \theta - \alpha(\theta)$, and the deflection angle is:
\begin{equation} \label{eq:deflec_angle} \begin{split} \alpha(\theta) & = \frac{2}{\theta} \int_{0}^{\theta} \theta' \kappa(\theta') \, d\theta' \\ & = \frac{4GM(<\theta)}{c^2 \theta} \frac{D_\mathrm{LS} (z_\mathrm{L},z_\mathrm{S})}{D_\mathrm{L} (z_\mathrm{L}) D_\mathrm{S} (z_\mathrm{S})} \\ & = \langle \kappa(\theta) \rangle \theta, \end{split} \end{equation}
\noindent where $D_\mathrm{L} (z_\mathrm{L})$ is the angular diameter distance from the observer to the lens, $c$ is the speed of light, and $G$ is the gravitational constant. We can then substitute the deflection angle into the one-dimensional lens equation:
\begin{equation} \label{eq:1d_lenseq} \beta = \theta (1 - \langle \kappa (\theta) \rangle ), \end{equation}
\noindent where the condition $\langle \kappa(\theta)\rangle = 1$ defines the tangential critical curve.
At the tangential critical curve of this circularly symmetric lens, $\beta = 0$ and thus $\alpha(\theta) = \theta$, so \autoref{eq:deflec_angle} becomes
\begin{equation} \theta^2 = \frac{4GM(<\theta)}{c^2} \frac{D_\mathrm{LS} (z_\mathrm{L},z_\mathrm{S})}{D_\mathrm{L} (z_\mathrm{L}) D_\mathrm{S} (z_\mathrm{S})}. \end{equation}
\noindent Last, substituting the critical surface density, $\Sigma_{cr}(z_\mathrm{L},z_\mathrm{S})$,
\begin{equation} \label{eq:s_crit} \Sigma_{cr}(z_\mathrm{L},z_\mathrm{S}) = \frac{c^2}{4\pi G}\frac{D_\mathrm{S}(z_\mathrm{S})}{D_\mathrm{L}(z_\mathrm{L}) D_\mathrm{LS}(z_\mathrm{L},z_\mathrm{S})}, \end{equation}
\noindent we obtain the expression for the Einstein radius \citep{Narayan:96,Schneider:06a,Kochanek:06,Bartelmann:10,Kneib:11}:
\begin{equation} \label{eq:er} \theta_E^2 = \frac{M(<\theta_E)}{\pi \Sigma_{cr}(z_\mathrm{L},z_\mathrm{S}) D_\mathrm{L}^2 (z_\mathrm{L})}. \end{equation}
\noindent Re-arranging \autoref{eq:er}, the total projected mass enclosed by the Einstein radius of a circularly symmetric lens can be computed as:
\begin{eqnarray} \label{eq:m_er} M(<\theta_E)=\Sigma_{cr} (z_\mathrm{L},z_\mathrm{S}) \ \pi \ [D_\mathrm{L}(z_\mathrm{L})\theta_E]^2. \end{eqnarray}
An Einstein ring results from the exact alignment of the source, lens, and observer, as well as the circular symmetry of the lens. This causes an observed ring-like feature to appear around the lens. However, the three-dimensional mass density distribution of both simulated halos and observed clusters is better described by a triaxial ellipsoid \citep{Wang:2009, Despali:14, Bonamigo:15}. Complete Einstein rings are therefore not often observed around clusters, due to their more complex mass distributions; nevertheless, authors often use the clustercentric projected distance to a giant arc as a proxy for the Einstein radius. The mass calculated using \autoref{eq:m_er} is useful for the study of galaxy clusters, since it provides a quick estimate of the mass within the Einstein radius. It was estimated to produce a scatter of $\sim 30\%$ with respect to the true enclosed mass \citep{Bartelmann:96,Schneider:06b}. This uncertainty has been adopted extensively in the literature when estimating the total projected mass enclosed by the Einstein radius (e.g., \citealt{Allam:07, Belokurov:07, Werner:07, Diehl:09, Bettinelli:16, Dahle:16, Nord:16}), despite limited quantification of its accuracy and precision.
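For reference, \autoref{eq:m_er} can be evaluated in a few lines with \texttt{astropy}. The sketch below uses the cosmology adopted in this work, while the lens and source redshifts and the Einstein radius are illustrative values only:
\begin{verbatim}
import numpy as np
from astropy import constants as const
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=71, Om0=0.265)         # cosmology adopted here
z_L, z_S = 0.3, 2.0                             # illustrative redshifts
theta_E = (10.0 * u.arcsec).to(u.rad).value     # illustrative Einstein radius

D_L = cosmo.angular_diameter_distance(z_L)
D_S = cosmo.angular_diameter_distance(z_S)
D_LS = cosmo.angular_diameter_distance_z1z2(z_L, z_S)

# Critical surface density (eq. for Sigma_cr) and enclosed mass (eq. M_ER).
sigma_cr = const.c**2 / (4 * np.pi * const.G) * D_S / (D_L * D_LS)
mass = (sigma_cr * np.pi * (D_L * theta_E)**2).to(u.Msun)
print(f"M(<theta_E) = {mass:.3e}")
\end{verbatim}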
\section{DATA: Simulated Lenses} \label{sec:sim} \subsection{The Outer Rim Simulation} \label{subsec:outer_rim} To assess the accuracy and precision of the enclosed mass inferred from the Einstein radius, we use the state-of-the-art, large-volume, high-mass-resolution, gravity-only, N-body simulation `Outer Rim' \citep{Heitmann:19}, run with the HACC framework \citep{Habib:16} and carried out on the Blue Gene/Q (BG/Q) system Mira at Argonne National Laboratory. The cosmology assumes a Flat-$\Lambda$CDM model, with parameters adopted from WMAP-7 \citep{Komatsu:11}: $\mathrm{h} = 0.71$ and $\Omega_M = 0.264789$. The simulation box is $L = 3000 \ \mathrm{Mpc} \ \mathrm{h}^{-1}$ on a side, and it evolves $10,240^3 \approx 1.1$ trillion particles with a mass resolution of $m_p = 1.85\times \ 10^9 \ \mathrm{\,M_{\odot}} \ \mathrm{h}^{-1}$ and a force resolution in co-moving units of $3 \ \mathrm{kpc} \ \mathrm{h}^{-1}$. The large volume of the simulation allows for many massive halos to be included in the same simulation box, covering the redshift range of interest ($z \sim 0.1 - 0.7$), and the high mass resolution provides excellent projected mass profile distributions for the individual clusters. The large number of massive halos allows for a rigorous statistical analysis, representative of the universe, and is sufficient to enable strong lensing computations without the need for re-simulation. In previous simulation efforts, when only small numbers of massive halos were present in the simulation box, those halos were re-simulated to increase the sample and improve the statistics \citep{Meneghetti:08, Meneghetti:10}. The Outer Rim, amongst other applications, has been used to study dark matter halo profiles and the concentration-mass relation \citep{Child:18} and to construct realistic strong lensing simulated images \citep{Li:16}. The majority of the mass in galaxy clusters is in the form of dark matter. Baryons contribute mostly at the core of the galaxy cluster, where the brightest cluster galaxy (BCG) and the hot intra-cluster medium (ICM) reside. Studies have found non-negligible baryonic effects from the subhaloes of satellite galaxies, as well as from the BCG, at small $\theta_E$ scales \citep{Meneghetti:03, Wambsganss:04, Oguri:06, Hilbert:07, Hilbert:08, Wambsganss:08, Oguri:09}. Fully accounting for these baryonic effects awaits simulations that include baryonic physics in large cosmological boxes.

\subsection{Simulated SPT-like Strong Lensing Sample} \label{subsec:sl_sample} Galaxy cluster halos were identified in the simulation using a friends-of-friends algorithm with a unit-less linking length of $b=0.168$ \citep{Heitmann:19}. The surface mass density was then computed using a density estimator. Extensive testing by \citet{Rangel:16} showed that the mass resolution is robust enough to compute strong lensing for halos with masses $\mathrm{M}_{500} > 2 \times 10^{14} \ \mathrm{\,M_{\odot}} \ \mathrm{h}^{-1}$. Following an SPT-like selection function, halos with $\mathrm{M}_{500} > 2.1 \times 10^{14} \ \mathrm{\,M_{\odot}} \ \mathrm{h}^{-1}$ were selected to form the cluster sample. The simulated halo masses ($M_{500}$, $M_{200}$) and concentrations ($c_{200}$) that we use in this work were calculated by \citet{Li:19} and \citet{Child:18}. We adopt the dynamical state values and definitions from \citet{Child:18}; a cluster is identified as dynamically relaxed when the distance between the dark matter halo center and the spherical over-density center is smaller than $0.7 R_{200}$. When referring to the dynamical state of the galaxy cluster, the center was defined as the center of the potential of all the particles in the simulation corresponding to the particular dark matter halo. To select SL clusters from the mass-limited sample, we first compute $\kappa(\theta)$ for a background source redshift of $z=2$ for each line of sight. We then identify strong lensing clusters as all lines of sight for which the Einstein radius of the critical region satisfying $\langle \kappa(\theta) \rangle = 1$ is larger than a few arcseconds. The resulting sample of SPT-like simulated strong lenses includes $74$ galaxy cluster halos spanning the redshift range $z_L \sim 0.16 - 0.67$.
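A minimal sketch of this selection criterion is given below, assuming a convergence map centered on the halo; the implementation details (centering, pixel accounting) are ours for illustration, not the exact pipeline:
\begin{verbatim}
import numpy as np

def einstein_radius_from_kappa(kappa, pixscale=0.09):
    """Largest radius (arcsec) where the mean enclosed convergence
    <kappa(theta)> >= 1; assumes the map is centered on the halo."""
    ny, nx = kappa.shape
    y, x = np.indices(kappa.shape)
    r = np.hypot(x - nx / 2, y - ny / 2) * pixscale   # radii in arcsec
    order = np.argsort(r.ravel())
    mean_enc = np.cumsum(kappa.ravel()[order]) / np.arange(1, r.size + 1)
    inside = mean_enc >= 1.0
    return r.ravel()[order][inside].max() if inside.any() else 0.0

# A line of sight is kept if the Einstein radius for a z = 2 source plane
# exceeds a few arcseconds, e.g.:
kappa_map = np.zeros((2048, 2048))                    # placeholder map
is_strong_lens = einstein_radius_from_kappa(kappa_map) > 3.0
\end{verbatim}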
\begin{figure*} \center \includegraphics[width=1\textwidth]{Cluster_Sample_Properties_Dist2.pdf} \caption{\textsc{\textbf{Properties of the Simulated Sample.}} \textit{Top-Left}: the total mass ($M_{200}$), \textit{Top-Right}: redshift ($z$), and \textit{Bottom-Left}: concentration ($c_{200}$) distributions of the simulated halos. The mass-limited sample is shown in blue, and strong lenses are in orange. The masses and concentrations were computed by \citet{Li:19} and \citet{Child:18}. The counts are normalized by the total number of halos in each sample. \textit{Bottom-Right}: the mass-redshift distribution ($M_{500}$ - $z$). Orange squares indicate the Outer Rim strong lensing cluster halos; grey crosses are observed clusters from the 2500-Square-Degree SPT-SZ Survey (\citealt{Bleem:15}). The green circles, and the green dotted line in the \textit{Right} panels, are strong lensing galaxy clusters from \citet{Bleem:15}, which were identified from very heterogeneous imaging data and are likely not representative of all the strong lenses in the SPT sample.} \label{fig:cluster_sample} \end{figure*} In \autoref{fig:cluster_sample}, we summarize some of the halo properties of the mass-limited sample and the SL sample. The first three panels show the distributions of masses, redshifts, and concentrations. As can be seen in these panels, the distribution of strong lensing clusters peaks at higher total mass, higher concentration, and lower redshift than the mass-limited sample. Similar trends have been identified in both simulations \citep{Oguri:11,Giocoli:14} and observations \citep{Gralla:11,Oguri:12}. In the fourth panel, we plot the mass-redshift distribution of the simulated clusters and that of the observed clusters from the SPT-SZ 2500 deg$^2$ survey \citep{Bleem:15}. As can be seen in the right panels of \autoref{fig:cluster_sample}, the \citet{Bleem:15} strong lensing sample extends to higher cluster redshifts than our simulated sample. The effective redshift cut in the simulated sample is imposed by the selection of cluster-scale lenses by their lensing efficiency for a $z_\mathrm{S}=2$ source plane. On the other hand, the observational SL clusters have been identified using imaging data from various ground- and space-based observatories. We note that while our simulated sample is statistically inconsistent with the full \citet{Bleem:15} strong lensing sample, when considering only lenses at $z_\mathrm{L} < 0.7$, a Kolmogorov-Smirnov (KS) test does not reject the hypothesis that the simulated and observed SL samples are drawn from the same underlying distribution (KS-statistic $0.264$, p-value $0.159$). Regardless, the results presented in this work do not depend on these samples being drawn from the same underlying distribution. The redshift range of the simulated SL sample, $z_L \sim 0.16 - 0.67$, is similar to that of the Sloan Giant Arc Survey (SGAS; M. Gladders et al., in preparation; \citealt{Bayliss:11, Sharon:20}), which identified lensing clusters from giant arcs in shallow optical SDSS imaging. Future studies will extend to higher redshifts to complement surveys with samples of galaxy clusters out to $z = 1.75$, such as the SPT-SZ 2500-Square-Degree survey \citep{Bleem:15}. \subsection{Ray Tracing and Density Maps} \label{subsec:ray_tracing_sdens} The ray-traced images and the projected mass distributions of the galaxy clusters have a size of $2048 \times 2048$ pixels and a resolution of $dx = 0\farcs09$ per pixel.
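As described in the next subsection, the lensing maps are derived from the surface density with Fourier methods; a minimal sketch of that general approach (assuming periodic boundary conditions and a convergence map in angular units; this is a simplified stand-in, not the exact procedure of \citet{Li:16}) is:
\begin{verbatim}
import numpy as np

def deflection_from_kappa(kappa, dx):
    """Deflection field from a convergence map via FFT.

    Solves nabla^2 psi = 2 kappa in Fourier space and returns
    alpha = grad psi. dx is the pixel scale in radians."""
    ny, nx = kappa.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX ** 2 + KY ** 2
    k2[0, 0] = 1.0  # avoid division by zero at the k = 0 mode
    psi_hat = -2.0 * np.fft.fft2(kappa) / k2
    psi_hat[0, 0] = 0.0
    alpha_x = np.real(np.fft.ifft2(1j * KX * psi_hat))
    alpha_y = np.real(np.fft.ifft2(1j * KY * psi_hat))
    return alpha_x, alpha_y
\end{verbatim}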
For details of the exact procedure used to obtain the lensing maps and the ray-traced images, we refer the reader to \citet{Li:16}. Using the surface density distributions of these clusters, we compute all of the lensing maps, including the deflection angle ($\vect{\alpha}$) using Fourier methods, the convergence ($\kappa$), the shear ($\gamma$), the magnification ($\mu$), and the tangential and radial critical curves. We draw redshifts for 1024 background sources from a distribution ranging from $z \sim 1.2 $ to $z \sim 2.7$, following the observed distribution of \citet{Bayliss:11} (shown in \autoref{fig:zs_sample}). The image plane of each cluster was generated multiple times, resulting in $5 - 24$ ray-tracing realizations for each cluster halo. The background sources were randomly placed in areas of high magnification to produce highly magnified (total magnification $> 5$) arcs easily detected from ground-based observations (e.g., \citealt{Bayliss:11, Sharon:20}). We note that the ray-tracing did not take into account structures along the line-of-sight. Structure along the line-of-sight can boost the total number of lenses observed by increasing the SL cross-section of individual clusters, having a larger effect on the less massive primary lensing halos \citep{Puchwein:09, Bayliss:14, Li:19}. The magnification of the arcs is also affected by the structure along the line-of-sight, requiring particular care when studying the background source properties \citep{Bayliss:14, DAloisio:14, Chiviri:18} and when using strong lensing clusters for cosmological studies \citep{Bayliss:14}. A statistical analysis of how the measurement of the core mass is affected by line-of-sight structure is left for future work. We use the ray-traced images to compute the mass enclosed by the Einstein radius, and the surface density maps as ``true'' mass to characterize the efficacy of this mass estimate. \begin{figure} \includegraphics[width=0.5\textwidth]{zs_dist.pdf} \caption{\textsc{\textbf{Simulated Background Source Redshifts, $z_\mathrm{S}$.}} The distribution is centered at $z=2$, consistent with the observed redshift distribution of highly magnified giant arcs \citep{Bayliss:11}.} \label{fig:zs_sample} \end{figure} \section{Methodology} \label{sec:methods} Our methodology attempts to mirror the procedures that would be used in SL analyses of real data. Even in large surveys such as SPT, this includes a significant component of manual inspection and identification of SL evidence. Manual inspection is also required for targeted spectroscopic follow-up. \subsection{Einstein Radius Measurement} \label{subsec:get_radii} The first step is to measure an Einstein radius from the positions of the lensed images (arcs). To locate the arcs, we examine each of the ray-traced images by eye to identify sets of multiple images using their morphology and expected lensing geometry, mimicking the process of finding multiply-imaged lensed systems in observational data. If multi-band information is available, lens modelers also take advantage of the color information of the lensed images; in this particular case, however, color information is not available from the ray-traced images. Using this process, we created a catalog with flags identifying the tangential and radial arcs, corresponding to the tangential and radial critical curves, respectively (see \S \ref{sec:lensing}). Identified lensed images whose classification (radial or tangential) was unclear were flagged as such. The radial distribution of the identified arcs is shown in \autoref{fig:arc_dist}.
We find that the distributions of tangential and radial arcs match our expectations from lensing geometry: the radial arcs are found near the center, while the tangential arcs are typically found farther out. The distribution we find is qualitatively consistent with \citet{Florian:16}. \begin{figure} \includegraphics[width=0.5\textwidth]{Arc_Id_Dist.pdf} \caption{\textsc{\textbf{Radial Distribution of the Identified Arcs.}} Radial distances are measured with respect to the pixel with the highest projected mass density of the simulated galaxy cluster. We display the distribution of the tangential arcs with an orange dashed line, radial arcs with a green dashed-dotted line, and those images for which we are unsure with a red dotted line. The distribution of the radial and tangential arcs matches our expectation from lensing geometry, having radial arcs closer to the center while tangential arcs are found farther out.} \label{fig:arc_dist} \end{figure} Since the Einstein radius is a representation of the tangential critical curve \citep{Bartelmann:10,Kneib:11}, we only include the tangential arcs when finding the Einstein radius. We fit a circle to the tangential arcs as explained below; the radii of the resulting circles serve as our Einstein radii, $\theta_E$. We explore three alternatives for the centering of the circle; in the first method (hereafter \textit{fixed center}) we fix the center of the circle to the point of highest surface density of the projected mass from the simulated halo. Since in observations we do not a priori know where the center of the dark matter halo is located, in the second method we set the center as a free parameter (hereafter \textit{free center}) with a conservative uniform prior of $\pm 13\farcs5$ from the projected 3-D potential center of the halo. Because the free center requires two more free parameters, the free center minimization was only performed on the cases where 3 or more multiple images were identified as tangential arcs. In the observational realm, the BCG can be, and often is, used as a proxy for the cluster center. The third method (hereafter \textit{fixed center with BCG offset}) mimics fixing the center to an observed BCG. Since the Outer Rim simulation does not include baryonic information, we cannot determine the BCG position directly from it. We therefore turn to studies that investigate the BCG offset from the dark matter center. \citet{Harvey:19} explore the radial offset between the BCG and the dark matter (DM) center as an observable test of self-interacting dark matter (SIDM) models with different dark matter cross-sections. They find that the BCG-DM offset follows a log-normal distribution, with the offsets in the cold dark matter (CDM) case being the smallest ($\mu = 3.8 \pm 0.7$ kpc) and increasing with increasing dark matter cross-section. We use the distribution of the SIDM model with a DM cross-section of $0.3$ cm$^2$/g. This value represents a reasonably conservative upper bound according to recent analyses \citep{Pardo:19, Sagunski:20}. Following this rationale, we fix the center of the circle to a point offset from the center of the dark matter halo, with a radial offset drawn from a log-normal distribution with $\mu = 6.1 \pm 0.7$ kpc, in a random direction.
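The geometry of these centering choices can be illustrated with a simple least-squares circle fit (a hypothetical Python helper; the actual fitting procedure, described next, uses an MCMC):
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def fit_circle(arc_xy, center=None):
    """Fit a circle to 2-D arc positions (N x 2 array).

    center given -> fixed-center method: only the radius is fitted.
    center None  -> free-center method: center and radius are free."""
    pts = np.asarray(arc_xy, dtype=float)
    if center is not None:
        # least-squares radius for a fixed center: mean of the distances
        return np.asarray(center), np.hypot(*(pts - center).T).mean()
    def residuals(p):
        return np.hypot(pts[:, 0] - p[0], pts[:, 1] - p[1]) - p[2]
    c0 = pts.mean(axis=0)
    p0 = [c0[0], c0[1], np.hypot(*(pts - c0).T).mean()]
    cx, cy, r = least_squares(residuals, p0).x
    return np.array([cx, cy]), r
\end{verbatim}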
For the fitting procedure, we use an ensemble-sampler Markov chain Monte Carlo (MCMC) method, implemented in Python using the emcee\footnote{Python emcee \url{https://emcee.readthedocs.io/en/stable/}} \citep{Foreman:13} and lmfit\footnote{Python lmfit \url{https://lmfit.github.io/lmfit-py/index.html}} \citep{Newville:14} libraries, to fit a circle to the tangential arcs. The fitting method minimizes the distance between the 2-D position of the arcs (visually identified morphological features that can be matched between the multiple images) and the nearest point to it on the circle. We use a uniform prior in the radius fitting parameter of $2\farcs25 < \theta_E < 45\farcs0$ for all three of our fitting methods. We note that in the cases where only a single arc is identified, the distance between the fixed center and the arc is used to determine the radius of the circle and no scatter is measured. The distribution of the measured $\theta_E$ is shown in \autoref{fig:ang_r_dist} and the distribution of the standard deviation, $\sigma (\theta_E)$, computed from the covariance matrix of the fit is shown in \autoref{fig:ang_r_unc_dist}. Since the free center fitting procedure is significantly more flexible, the standard deviation on the fitted $\theta_E$ for the free center is about 20 times higher than that of the fixed center and fixed center with BCG offset fits. \begin{figure} \includegraphics[width=0.5\textwidth]{Ang_R_Dist.pdf} \caption{Distribution of the Einstein radii from the fits to the identified tangential arcs utilizing the fixed center (blue), fixed center with BCG offset (orange) and free center (green).} \label{fig:ang_r_dist} \end{figure} \begin{figure*} \center \includegraphics[width=1\textwidth]{Ang_R_Uncertainty_Dist.pdf} \caption{Distribution of the standard deviation of the measured Einstein radii ($\sigma (\theta_E)$) in units of percentage utilizing the fixed center (\textit{left}), fixed center with BCG offset (\textit{middle}), and free center (\textit{right}). We find that the standard deviation of the free center method is about 20 times higher than that of the fixed center and fixed center with BCG offset methods.} \label{fig:ang_r_unc_dist} \end{figure*} \subsection{Inferred Mass} \label{subsec:get_mass} Taking the Einstein radius from \S\ref{subsec:get_radii} and the corresponding lens and source redshifts (\S\ref{subsec:sl_sample}), we compute the enclosed projected mass, $M(<\theta_E)$, via \autoref{eq:m_er}. For our comparison, we use the projected mass distribution from the simulation to measure the true mass enclosed within the same aperture. We refer to this as the ``true'' mass, $M_{sim}(<\theta_E)$. An example of this procedure is shown in \autoref{fig:er_explanation}. \begin{figure*} \center \includegraphics[width=1\textwidth]{Einstein_Radius_Explanation2.pdf} \caption{\textsc{\textbf{Example of the Simulated Images to Illustrate our Methodology.}} \textit{Left}: ray-traced image; the identified lensed images are indicated with magenta symbols, with circles on tangential arcs and squares with a slash through on radial arcs. We fixed the center to the highest surface density point from the projected mass distribution and fit a circle to the tangential arcs of radius of $\theta_E = 15\farcs2$, shown in green. The mass inferred from the Einstein radius is $M(<\theta_E) = 3.38 \times 10^{13} \ \mathrm{M}_\odot \mathrm{h}^{-1}$.
\textit{Right}: projected mass density distribution of the simulated galaxy cluster, where the green circle marks the same aperture as in the lensed image. The color-bar is in units of M$_{\odot}$ Mpc$^{-2}$ h. The ``true'' projected mass enclosed is $M_{sim}(<\theta_E) = 3.00 \times 10^{13} \ \mathrm{M}_\odot \mathrm{h}^{-1}$. We perform our analysis utilizing these two masses, the inferred ($M(<\theta_E)$) and the ``true'' ($M_{sim}(<\theta_E)$).} \label{fig:er_explanation} \end{figure*} \subsection{Statistical Approach to Correctly Represent the Universe} \label{subsec:stats} Our simulated sample consists of a total of $1024$ ray-tracing realizations through 74 strong lensing galaxy clusters, resulting in $5$-$24$ ray-tracing realizations for each cluster. Each ray-traced simulated realization includes one of the 74 cluster halos and a single background source at a unique redshift. In addition, in some of the realizations multiple distinct structures (clumps) were identified and used to measure more than one Einstein radius for that particular realization. For this reason, the ray-traced realizations and Einstein radii for a specific galaxy cluster are not independent of each other. To establish a robust analysis that represents the universe, includes the statistical uncertainty of the fitted Einstein radius, and allows for the application to observational data, we assign each galaxy cluster a total weight of one. The ray-traced realizations are then evenly weighted by a factor of one over the total number of realizations for the specific cluster, and the Einstein radii are similarly weighted per ray-traced image. For each galaxy cluster, we select, at random, one ray-traced image from that cluster and one Einstein radius measurement for that realization. We then sample the selected Einstein radius using a normal distribution with the mean as the best-fit Einstein radius and standard deviation equal to the uncertainty of the fitted Einstein radius. We repeat this process $1,000$ times per cluster and use this sample of $74,000$ points for our statistical analysis. \section{Analysis of Results} \label{sec:analysis} In this section, we compare the mass inferred from the Einstein radius ($M(<\theta_E)$) to the true mass ($M_{sim}(<\theta_E)$), measured from the surface density maps within the same aperture (\autoref{fig:er_explanation}); measure the scatter of this mass estimate; and explore any dependence on the galaxy cluster properties, as well as observational information available from the ray-traced images. In \autoref{fig:MvM}, we show a direct comparison between $M(<\theta_E)$ and $M_{sim}(<\theta_E)$ for the fixed center (left panel), fixed center with BCG offset (middle panel), and free center (right panel) cases. We find that $M(<\theta_E)$ overestimates $M_{sim}(<\theta_E)$ in all cases, especially at large masses. \begin{figure*} \center \includegraphics[width=1\textwidth]{ML_V_MT_2DHist.pdf} \caption{\textsc{\textbf{Mass Comparison Between the $M(<\theta_E)$ and $M_{sim}(<\theta_E)$.}} The mass comparisons for the fixed center (\textit{left}), fixed center with BCG offset (\textit{middle}), and free center (\textit{right}) are shown. $M_{sim}(<\theta_E)$ and $M(<\theta_E)$ are given in units of $\,M_{\odot} h^{-1}$ and the solid black line is where $M_{sim}(<\theta_E) = M(<\theta_E)$. The bottom plots show the ratio of the masses, $M(<\theta_E)\ /\ M_{sim}(<\theta_E)$.
The total number of counts is the $74,000$ sampled data points (\S\ref{subsec:stats}) used in the analysis of the scatter and bias of $M(<\theta_E)$ compared to $M_{sim}(<\theta_E)$. We find that $M(<\theta_E)$ overestimates $M_{sim}(<\theta_E)$ in all cases, especially at large masses, and the scatter is smallest for the fixed center method and highest for the free center method.} \label{fig:MvM} \end{figure*} We measure an overall scatter of $13.9$\%\ and bias of $8.8$\%\ for the fixed center, scatter of $14.8$\%\ and bias of $10.2$\%\ for the fixed center with BCG offset, and scatter of $27.4$\%\ and bias of $20.2$\%\ for the free center. The scatter is defined as half the difference between the 84th percentile and the 16th percentile of the distribution, and the bias is determined using the median of the distribution. We note that previous estimates of the uncertainty associated with this measurement state $\sim 30 \%$ \citep{Bartelmann:96, Schneider:06b}; however, it is unclear how that uncertainty is defined. Comparing the results of the three methods, we find that the free center method is the least reliable in recovering the true mass. The statistical uncertainty of its measured $\theta_E$ is 20 times higher than that of the fixed center (\autoref{fig:ang_r_unc_dist}), and the scatter and bias in $M(<\theta_E)\ /\ M_{sim}(<\theta_E)$ are significantly higher (\autoref{fig:MvM}). In addition, the free center method is limited to cases where $3$ or more tangential arcs are identified. For these reasons, we do not recommend that the free center method be utilized to measure the Einstein radius and the mass enclosed by the Einstein radius. The fixed center with BCG offset shows that the additional scatter due to the offset between the BCG and dark matter center is small, justifying the use of the observed BCG as the fixed center of the Einstein radius. For the remainder of the paper, we consider only the fixed center and the fixed center with BCG offset. To explore the dependence of this mass estimate on lens properties, we consider the ratio of inferred to true mass, $M(<\theta_E) / M_{sim}(<\theta_E)$, and group the measurements into bins with an equal number of points. We plot $M(<\theta_E)/ M_{sim}(<\theta_E)$ with respect to the Einstein radius in \autoref{fig:er_binned_analysis}. This figure shows clearly that the $M(<\theta_E)$ mass estimate is not randomly scattered about the true mass, and that it overestimates the true mass at all radii. In \S \ref{sec:emp_cor}, we describe an empirical correction to de-bias the measurement of the mass enclosed by the Einstein radius. \begin{figure} \includegraphics[width=0.5\textwidth]{Ang_R_Sample_Binned_Analysis.pdf} \caption{\textsc{\textbf{Ratio of inferred to ``true'' mass, $M(<\theta_E) / M_{sim}(<\theta_E)$, with respect to $\theta_E$}}. The fixed center (blue square) and fixed center with BCG offset center (orange diamond) cases are shown. The symbol marks the median of the distribution of the mass ratio, the horizontal error bars indicate the bin size, and the vertical error bars represent the $16$th and $84$th percentiles. We find a positive bias in all of the bins and that both the fixed center and the fixed center with BCG offset yield a similar $\theta_E$.} \label{fig:er_binned_analysis} \end{figure} In the following sections, we explore possible causes, and identify observable indicators, of the scatter and bias of $M(<\theta_E)$.
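For reference, the scatter and bias statistics defined above amount to the following minimal sketch:
\begin{verbatim}
import numpy as np

def scatter_and_bias(mass_ratio):
    """Scatter: half the 16th-to-84th percentile range of
    M(<theta_E)/M_sim(<theta_E); bias: offset of the median from unity."""
    p16, p50, p84 = np.percentile(mass_ratio, [16, 50, 84])
    return 0.5 * (p84 - p16), p50 - 1.0
\end{verbatim}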
\subsection{Possible causes and indicators of the scatter in the $M(<\theta_E)$ mass estimate} \label{subsec:analysis_systematics} We explore the possible dependence of the scatter and bias of $M(<\theta_E)$ on the galaxy cluster properties, the background source, and the lensing geometry. The galaxy cluster properties used in our analysis include: galaxy cluster redshift ($z_\mathrm{L}$), total mass ($M_{200}$), concentration ($c_{200}$), dynamical state, and the shape of the tangential critical curve. The total mass, concentration, and dynamical state information for the simulated cluster sample are adopted from \citet{Child:18}. From the background source, we have the redshift information ($z_\mathrm{S}$), and from the lensing geometry, we measure how much of the Einstein circle is covered by the arcs ($\phi$), as we explain below. \paragraph{Lens and Source Redshifts} The redshifts of the lens and the source determine the lensing geometry of the system through the angular diameter distances (\autoref{eq:lenseq}). Redshifts can be determined observationally when spectroscopic or extensive photometric information is available. The redshift distribution of the simulated clusters ($z_\mathrm{L}$) from the Outer Rim and the background source redshifts ($z_\mathrm{S}$) are shown in \autoref{fig:cluster_sample} and \autoref{fig:zs_sample}, respectively. \paragraph{Total Mass and Concentration} $M_{200}$ and $c_{200}$ are adopted from \citet{Child:18}. The distributions of the simulated galaxy cluster total mass and concentration are shown in the left panels of \autoref{fig:cluster_sample}. We note that $M_{200}$ and $c_{200}$ are not directly available from the imaging data at the core of the cluster where the strong lensing evidence is present. However, since our aim is to use the core mass to inform the mass-concentration relation, it is important to test whether this mass estimator introduces correlated bias. \paragraph{Cluster Deviation from Spherical Symmetry} Since galaxy clusters do not have a circular projected mass distribution, we expect differences between $M_{sim}(<\theta_E)$ and $M(<\theta_E)$ due to deviations from the assumed circular symmetry. To assess the deviation of the lens from spherical symmetry, we use the tangential critical curves derived from the simulation as a proxy for the shape of the projected mass distribution at the core of the cluster. We sample each tangential critical curve with a few hundred to a few thousand points using the Python library matplotlib.contour,\footnote{Python matplotlib.contour \url{https://matplotlib.org/3.1.0/api/contour_api.html}} setting a contour level at $0$ for the inverse magnification associated with the tangential critical curve. Using the technique described in \citet{Fitzgibbon:96}, we fit an ellipse to each tangential critical curve corresponding to every background source redshift. We then use the resultant ellipticity, defined as $\epsilon = (a^2-b^2)/(a^2+b^2)$, where $a$ is the semi-major axis and $b$ is the semi-minor axis. In \autoref{fig:ell_example}, we show three examples of the ellipse fits to the tangential critical curve, over-plotted on the projected mass density distribution. We plot the distribution of the ellipticity of the tangential critical curve in \autoref{fig:ell_rel}. This characterization of the projected shape of the galaxy cluster is not accessible directly from the observational data prior to a detailed SL model, which this method aims to avoid.
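As a sketch of this measurement (substituting the algebraic ellipse fit implemented in \texttt{scikit-image} for the \citet{Fitzgibbon:96} routine used here):
\begin{verbatim}
import numpy as np
from skimage.measure import EllipseModel

def tcc_ellipticity(curve_xy):
    """Ellipticity eps = (a^2 - b^2) / (a^2 + b^2) of an ellipse fitted
    to points (N x 2) sampled along the tangential critical curve."""
    model = EllipseModel()
    if not model.estimate(np.asarray(curve_xy, dtype=float)):
        raise RuntimeError("ellipse fit did not converge")
    _, _, a, b, _ = model.params  # (xc, yc, a, b, theta)
    a, b = max(a, b), min(a, b)   # enforce a >= b
    return (a ** 2 - b ** 2) / (a ** 2 + b ** 2)
\end{verbatim}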
\begin{figure*} \center \includegraphics[width=1\textwidth]{Ellipticity_Examples.pdf} \caption{\textsc{\textbf{Examples of the ellipticity ($\epsilon$) of the tangential critical curve (TCC) as a proxy for the cluster deviation from spherical symmetry.}} We show as examples three simulated clusters with different projected ellipticities. The red line is the tangential critical curve for a particular background source redshift $z_\mathrm{S}$. The dashed black line indicates the ellipse fitted to the tangential critical curve, from which we compute the ellipticity, $\epsilon$. The lines are plotted over the projected mass distribution of the corresponding simulated galaxy clusters. The x- and y-axes are in units of arcseconds. The color bar indicates the surface density value in units of $\mathrm{M}_\odot \mathrm{h} / \mathrm{Mpc}^{2}$.} \label{fig:ell_example} \end{figure*} \paragraph{Galaxy Cluster Relaxation State} We tested whether the relaxation state of the galaxy clusters (see \S \ref{subsec:sl_sample} for the simulated sample dynamical state description) can be used as a proxy for the deviation from spherical symmetry. Observationally, this can be determined from X-ray imaging (e.g., \citealt{Mantz:15}). In \autoref{fig:ell_rel}, we plot $\epsilon$ separated by the relaxation state of the galaxy cluster. We perform a two-sample Kolmogorov-Smirnov test to quantify the difference between the two samples at a confidence level of $99.7\%$. The KS-statistic is $0.0896$ with a p-value of $0.0402$. With this test, we cannot reject the null hypothesis that the two samples are drawn from the same continuous distribution. From our KS test and \autoref{fig:ell_rel}, we find no correlation between the dynamical state and $\epsilon$. \begin{figure} \includegraphics[width=0.5\textwidth]{Dynamical_State_Dist_VS_Projection.pdf} \caption{\textsc{\textbf{Dynamical State and Deviation from Circular Symmetry.}} Distribution of the tangential critical curve (TCC) ellipticity, $\epsilon$. The overall distribution is indicated by the black line, and the contributions from the dynamical (relaxed or un-relaxed) state of the simulated galaxy clusters (from \citet{Child:18}) are indicated by the shaded bars. We observe that the dynamical state information is not an indicator of deviations from spherical symmetry of the simulated galaxy cluster.} \label{fig:ell_rel} \end{figure} \paragraph{The fraction of the Einstein circle covered by arcs of an individual lensed source} \ $\phi$ represents the fraction of the Einstein circle that is covered by arcs of a given source. This property is easily accessible from the imaging data. In \autoref{fig:frac_arc_example}, we show three examples of lensed images plotted with their corresponding Einstein circles fitted using the identified tangential arcs, for both the fixed center (blue) and an example of one of the realizations of a fixed center with BCG offset (orange). We plot in \autoref{fig:frac_arc_dist} the distribution of $\phi$ for both the fixed center (blue) and the fixed center with BCG offset (orange). \begin{figure*} \center \includegraphics[width=1\textwidth]{Frac_Arc_Examples.pdf} \caption{\textsc{\textbf{The fraction of the circle covered by the arcs ($\phi$) for three example cases.}} The Einstein radius fitted to the identified tangential arcs for both the fixed center (blue) and one example of the fixed center with BCG offset (orange) are plotted; the corresponding centers of the circles are indicated by the crosses.
The BCG offset was determined by drawing a radial offset between the BCG and the dark matter halo from a log-normal distribution with $\mu = 6.1 \pm 0.7$ kpc \citep{Harvey:19} and an angle from a uniform distribution from $0$ to $359$ degrees. The fraction of the circle covered by the arcs for the fixed center and the fixed center with BCG offset is shown in the legend. The x- and y-axes are in units of arcseconds.} \label{fig:frac_arc_example} \end{figure*} \begin{figure} \includegraphics[width=0.5\textwidth]{Frac_Arc_Dist.pdf} \caption{\textsc{\textbf{Distribution of the fraction of the circle covered by arcs ($\phi$) for a given source.}} } \label{fig:frac_arc_dist} \end{figure} \subsection{Results of the Analysis of Systematics} \label{subsec:systematic_results} We split the measurements of $M(<\theta_E)$ into equal bins of $M_{200}$, $c_{200}$, $\epsilon$, $z_\mathrm{L}$, $z_\mathrm{S}$, and $\phi$ and check whether the bias and scatter in the $M(<\theta_E)$ mass estimate depend on these properties. We find that the scatter and bias of $M(<\theta_E)/M_{sim}(<\theta_E)$ do not depend on four of these properties: the total mass, concentration, cluster redshift, and source redshift, showing a flat and uniform progression in panels A--D of \autoref{fig:binned_analysis}. We also note that we find no difference in the bias and scatter of $M(<\theta_E)/M_{sim}(<\theta_E)$ between the relaxed and un-relaxed clusters, nor a correlation between the relaxation state and the bias and scatter of $M(<\theta_E)/M_{sim}(<\theta_E)$. \begin{figure*} \center \includegraphics[width=1\textwidth]{Binned_Analysis_Systematics_Cirrus.pdf} \caption{\textsc{\textbf{Ratio of Inferred to ``True'' Mass ($M(<\theta_E) / M_{sim}(<\theta_E)$) Binned by Galaxy Cluster Properties, Background Source, and Lensing Geometry.}} Mass ratio binned by total mass ($M_{200}$, panel A), concentration ($c_{200}$, panel B), cluster redshift ($z_\mathrm{L}$, panel C), background source redshift ($z_\mathrm{S}$, panel D), tangential critical curve ellipticity ($\epsilon$, panel E), and fraction of the circle covered by arcs ($\phi$, panel F). We show results for both the fixed center (blue square) and the fixed center with a BCG offset (orange diamond). The symbol marks the median of the distribution, and the horizontal and vertical error bars indicate the bin size and scatter (the 16th and 84th percentiles of the distribution), respectively. We find that there is a positive bias in all of the bins. We observe a clear trend with $\epsilon$, where both the scatter and bias increase with increasing $\epsilon$, and with $\phi$, where both the scatter and bias decrease as $\phi$ increases.} \label{fig:binned_analysis} \end{figure*} Conversely, there are strong correlations between the scatter and bias and the ellipticity of the tangential critical curve ($\epsilon$) and the fraction of the circle covered by arcs ($\phi$). As can be seen in panel E of \autoref{fig:binned_analysis}, as $\epsilon$ increases both the scatter and bias increase. The dependence on the ellipticity is expected, since one of the main assumptions in the $M(<\theta_E)$ formalism is circular symmetry ($\epsilon = 0.0$). Unfortunately, the ellipticity of the tangential critical curve cannot be determined until after a lens model has been computed. The scatter and bias of $M(<\theta_E)$ decrease with increasing $\phi$ (\autoref{fig:binned_analysis}, panel F). This trend matches our expectation; lenses with $\phi$ closer to $1.0$ are typically more circular.
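A possible implementation of the $\phi$ measurement is sketched below (a hypothetical helper; each arc's subtended angle is approximated from the position angles of its pixels about the fitted center):
\begin{verbatim}
import numpy as np

def arc_coverage_fraction(arcs, center):
    """Fraction of the Einstein circle covered by arcs of one source.

    arcs: list of (N_i, 2) arrays of positions along each arc."""
    covered = 0.0
    for arc in arcs:
        dx, dy = (np.asarray(arc, dtype=float) - center).T
        ang = np.sort(np.degrees(np.arctan2(dy, dx)) % 360.0)
        # subtended angle = 360 deg minus the largest angular gap
        gaps = np.diff(np.concatenate([ang, [ang[0] + 360.0]]))
        covered += 360.0 - gaps.max()
    return min(covered / 360.0, 1.0)
\end{verbatim}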
Unlike the ellipticity, the fraction of the fitted circle covered by arcs is readily available from the same data used for the analysis of observed clusters. It is therefore a useful estimator of the lens-dependent uncertainty. For convenience, we tabulate the information displayed in Panel F of \autoref{fig:binned_analysis} in \autoref{table:frac_arc_binned} in the Appendix. \section{The Effect of Background Source Redshift} \label{sec:no_zs} The source redshift is a piece of information that would ideally be available to the lensing analysis, coming from spectroscopic follow-up (e.g., \citealt{Sharon:20}) or from photometric redshifts (e.g., \citealt{Molino:17, Cerny:18}) based on extensive multi-band photometry. However, this may not always be the case, especially considering future large surveys where follow-up may be incomplete. We therefore investigate the additional scatter in the mass estimate due to an unknown source redshift. In this analysis, we assume that we know the underlying distribution of the background source redshifts \citep{Bayliss:11}. To evaluate this case, we use the Einstein radius from \S\ref{subsec:get_radii} and the lens redshift from \S\ref{subsec:sl_sample}, but instead of using the actual source redshifts, we draw $10,000$ source redshifts from a normal distribution with $\mu = 2.00$ and $\sigma = 0.2$. We repeat the analysis in \S \ref{sec:analysis} with this set of drawn background source redshifts. In \autoref{fig:nozs_binned_analysis}, we plot the ratio of the inferred to ``true'' mass in bins of Einstein radius (left panels) and true background source redshift (right panels). We plot the results for both the fixed center (top panels) and the fixed center with BCG offset (bottom panels). For comparison, we over-plot the results from \S~\ref{subsec:systematic_results}. We compute a scatter of $13.8$\%\ ($18.2$\%) and bias of $9.0$\%\ ($8.5$\%) for the fixed center (fixed center with BCG offset). \begin{figure*} \center \includegraphics[width=1\textwidth]{NOZS_Binned_Analysis.pdf} \caption{\textsc{\textbf{The Effect of Source Redshift Uncertainty on the Results.}} The blue square symbols and orange diamonds represent the fixed center and fixed center with BCG offset, and are the same as in \autoref{fig:er_binned_analysis} and \autoref{fig:binned_analysis}, Panel D, respectively. The ratios of the inferred to ``true'' mass for the unknown source redshift are indicated with up-pointing green triangles and down-pointing magenta triangles. We find that the uncertainty in the source redshift has a small effect on the results. As expected, when binned by source redshift (\textit{right}), we find that the inferred mass is low at $z_\mathrm{S} <2.0$ and high at $z_\mathrm{S}>2.0$.} \label{fig:nozs_binned_analysis} \end{figure*} As can be seen in the left panels of \autoref{fig:nozs_binned_analysis}, and from the scatter and bias of the fixed center case, not knowing the exact background redshift and assuming a normal distribution with $\mu = 2.00$ and $\sigma = 0.2$ for typical giant arcs introduces a negligible uncertainty, particularly when compared to the magnitude of the systematics presented in \S \ref{sec:analysis}. Split into bins of background source redshift, the scatter remains the same; however, the inferred mass is higher if $z_\mathrm{S}>2$ and lower if $z_\mathrm{S} <2$. It is important to note that precise source redshifts are critical for most applications of strong lensing (e.g., magnification, time delay, and detailed mass maps).
They become negligible in this case because the total enclosed mass is a particularly robust measurement and the goal is to determine the mass of a statistical sample. For mass estimates of individual systems, since the dependence on redshift is straightforward (see \autoref{eq:m_er}), the uncertainties can be easily determined. \section{Empirical Corrections} \label{sec:emp_cor} As can be seen in Figures \ref{fig:er_binned_analysis} and \ref{fig:nozs_binned_analysis}, the scatter and bias of this estimator show a dependence on $\theta_E$. We explore the use of an empirical correction to un-bias the mass estimate and reduce the scatter obtained from the Einstein radius method. We bin the $74,000$ data points into $25$ bins with an equal number of data points per bin, using Doane's formula \citep{Doane:76} to determine the number of bins for a non-normal distribution. We fit linear, quadratic, and cubic models to the median of the mass ratio ($M(<\theta_E) / M_{sim}(<\theta_E)$) in each bin as a function of the bin center, using the Levenberg-Marquardt algorithm \citep{Levenberg:44, Marquardt:63}. We compute the Bayesian Information Criterion (BIC) for each model \citep{Schwarz:78, Liddle:07}. The results of the fits can be found in \autoref{table:emp_corr_models}, including the scatter and bias of the resulting empirically corrected data. The BIC results for the fixed center (fixed center with BCG offset) are $-125.7\ (-126.5)$ for the linear, $-152.1\ (-157.2)$ for the quadratic, and $-150.7\ (-156.9)$ for the cubic model. Based on this criterion, the quadratic fit, which has the lowest BIC, is clearly preferred over the linear fit and marginally over the cubic fit. We therefore use the quadratic fit to determine an empirical correction: \begin{equation} \frac{M(<\theta_E)}{M_{sim}(<\theta_E)} = \mathrm{B} \theta_E^2 + \mathrm{C} \theta_E + \mathrm{D} \equiv {f(\theta_E)}, \label{eq:quad_eq} \end{equation} \noindent where B, C, and D are the fit parameters. \capstartfalse \begin{deluxetable*}{lccccccc} \tablecolumns{8} \tablewidth{2\columnwidth} \tablecaption{Empirical Correction Models.} \tablehead{Model & A[arcsec$^{-3}$] & B[arcsec$^{-2}$] & C[arcsec$^{-1}$] & D & BIC & Scatter & Bias} \startdata \input{Empirical_Correciton_Models.tex} \enddata \tablenotetext{}{Model fit results of an empirical correction to un-bias and decrease the scatter of the mass enclosed by the Einstein radius. The last two columns are the scatter and bias of the empirically corrected data. The ``fixed center with BCG offset'' analysis accounts for the uncertainty added by using the BCG as a proxy for the cluster center.} \label{table:emp_corr_models} \end{deluxetable*} \capstarttrue We choose not to include $\phi$ in our empirical correction because this parameter depends on the resolution of the telescope, the depth of the observations, and the observing conditions. The value of $\phi$ varies from observation to observation, and therefore a coarser estimate, using the binned values in \autoref{table:frac_arc_binned}, is more appropriate. We correct the measured $M(<\theta_E)$ by dividing it by the corresponding value computed from the quadratic equation evaluated at $\theta_E$: \begin{equation} \mathrm{Corrected}\ M(<\theta_E) = \mathrm{Measured}\ M(<\theta_E) / f(\theta_E). \label{eq:un_bias_ml} \end{equation} We plot in \autoref{fig:ec_er_binned_analysis} the empirically corrected values of $M(<\theta_E)$ and show the results from \autoref{fig:er_binned_analysis} for reference.
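A minimal sketch of this procedure (using \texttt{np.polyfit}, i.e., ordinary least squares, in place of the Levenberg-Marquardt minimizer, and a Gaussian-likelihood BIC):
\begin{verbatim}
import numpy as np

def fit_correction(bin_centers, ratio_medians, degree=2):
    """Fit f(theta_E) to the binned mass ratio; return (coeffs, BIC)."""
    coeffs = np.polyfit(bin_centers, ratio_medians, degree)
    resid = ratio_medians - np.polyval(coeffs, bin_centers)
    n, k = len(ratio_medians), degree + 1
    bic = n * np.log(np.sum(resid ** 2) / n) + k * np.log(n)
    return coeffs, bic

def corrected_mass(m_measured, theta_e, coeffs):
    """Apply Eq. (6): corrected M(<theta_E) = measured M / f(theta_E)."""
    return m_measured / np.polyval(coeffs, theta_e)
\end{verbatim}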
With the mass enclosed by the Einstein radius corrected using the empirical correction, the overall scatter (half of the difference between the 84th and the 16th percentile of the distribution) reduces to $10.1$\%\ ($10.9$\%) and the bias to $-0.4$\%\ ($-0.3$\%) for the fixed center (fixed center with BCG offset). \begin{figure} \includegraphics[width=0.5\textwidth]{EC_Ang_R_Sample_Binned_Analysis.pdf} \caption{\textsc{\textbf{Empirically Corrected Mass Ratio $M(<\theta_E) / M_{sim}(<\theta_E)$ Binned by $\theta_E$.}} The blue and orange points are from the analysis in \autoref{fig:er_binned_analysis}, while the green and magenta points represent the empirically corrected values, using \autoref{eq:un_bias_ml}. The symbols and error bars are the same as in \autoref{fig:er_binned_analysis}. We find that using the empirical correction un-biases and reduces the scatter of $M(<\theta_E)$.} \label{fig:ec_er_binned_analysis} \end{figure} We then perform analyses similar to those in \S \ref{sec:analysis}. We explore the systematics in the mass enclosed by the Einstein radius when the empirical correction is applied, and plot the results in \autoref{fig:ec_binned_analysis}. The blue and orange points are the same as in \autoref{fig:binned_analysis} and are plotted for reference, while the green and purple points indicate the empirically corrected values. \begin{figure*} \center \includegraphics[width=1.0\textwidth]{EC_Binned_Analysis_Cirrus.pdf} \caption{\textsc{\textbf{Empirically-Corrected Inferred Mass Binned by Galaxy Cluster Properties, Background Source, and Lensing Geometry.}} Same as \autoref{fig:binned_analysis}, but using \autoref{eq:un_bias_ml} to empirically correct the mass estimates. The blue and orange points are from the analysis in \autoref{fig:binned_analysis}, while the green and purple points represent the empirically corrected values. We find overall that using the empirical correction un-biases the results and reduces the scatter of $M(<\theta_E)$. The empirical correction does not introduce significant correlation with total cluster mass, concentration, or redshifts. It does not eliminate the trend due to deviation from circular symmetry, as can be seen in Panel E.} \label{fig:ec_binned_analysis} \end{figure*} We observe in \autoref{fig:ec_binned_analysis} that, overall, the measurement of the mass enclosed by the Einstein radius becomes un-biased. Compared to the analysis without the empirical correction, the scatter of $M(<\theta_E)$ is reduced in all bins of total mass, concentration, lens redshift, and background source redshift. Using the empirical correction reduces the scatter in the highest-scatter bins, i.e., at high and low Einstein radius, small arc fraction, and large ellipticity of the tangential critical curve. \section{Conclusions} \label{sec:conclusion} With current and future large surveys discovering tens of thousands of clusters and groups, of which thousands are expected to show strong lensing features, an efficient method to estimate the masses at the cores of these systems is necessary. The mass enclosed by the Einstein radius is a quick zeroth-order estimate. Studies that use this method quote an uncertainty of $\sim 30\%$ (e.g., \citealt{Bartelmann:96, Schneider:06b}), although this uncertainty has not been thoroughly quantified. In this work, we conduct a detailed analysis of the efficacy of the mass enclosed by the Einstein radius as a core mass estimator, using the Outer Rim cosmological simulation.
When measuring the Einstein radius, we explore three centering assumptions: fixed center, free center, and an observationally motivated centering that mimics fixing the center to the BCG. We measure the scatter and bias of $M(<\theta_E)$, identify sources of systematic errors, and explore possible indicators available from imaging data at the cores of galaxy clusters. The results of our work are summarized below: \begin{itemize} \item In the fixed center approach, the center of the circle is fixed to the highest surface density point and a circle is fitted to the tangential arcs. The statistical uncertainty in the measured Einstein radius is small (see \autoref{fig:ang_r_unc_dist}). We measure an overall scatter of $13.9$\%\ with a bias of $8.8$\%\ in the mass enclosed by the Einstein radius with no correction applied. \item In the free center approach, the center of the circle is a free parameter in the fit. The statistical uncertainty of the Einstein radii fitted with this method is $20$ times higher than that of the fixed center and the fixed center with BCG offset (see \autoref{fig:ang_r_unc_dist}). With this method, the overall scatter is $27.4$\%\ with a bias of $20.2$\%\ in the mass enclosed by the Einstein radius with no correction applied. We do not recommend the use of the free center method to measure the mass enclosed by the Einstein radius, due to the large scatter in the mass measurement, the high uncertainty in the Einstein radius, and its restriction to systems with 3 or more identified tangential arcs. \item With the intention to apply this to observational data, we investigate the effect of using the BCG as the fixed center. We move the fixed center from the point of highest density by a random offset, following the log-normal distribution ($\mu = 6.1 \pm 0.7$ kpc) of BCG offsets found by \citet{Harvey:19}. This offset increases the scatter to $14.8$\%\ and the bias to $10.2$\%\ in the mass enclosed by the Einstein radius when compared to the fixed center method. \item We find that the scatter and bias of $M(<\theta_E)$ with respect to $M_{sim}(<\theta_E)$ do not depend on the total cluster mass, concentration, lens redshift, or source redshift (\autoref{fig:binned_analysis}). \item We explore how the deviation from circular symmetry affects the measurement of $M(<\theta_E)$. The tangential critical curve ellipticity ($\epsilon$) stems from the deviation from spherical symmetry of the projected mass distribution at the core of the cluster. We find that the bias and scatter correlate with $\epsilon$ (\autoref{fig:binned_analysis}): larger deviations from circular symmetry lead to a larger bias and scatter of $M(<\theta_E)$ when compared to $M_{sim}(<\theta_E)$. \item The fraction of the circle covered by arcs of a single lensed source ($\phi$) can be directly accessed from the imaging data. This observable correlates strongly with the scatter and bias, with both decreasing with increasing fractional coverage by the arcs (\autoref{fig:binned_analysis}). $\phi$ can be used as an observational indicator to estimate the field-specific scatter and bias of $M(<\theta_E)$ (\autoref{table:frac_arc_binned}). \item Other possible sources of systematic errors exist. While the Outer Rim simulation has the large volume and high mass resolution needed for this work, we are limited by the lack of baryonic information in the simulation and by the absence of structure along the line-of-sight in the simulated ray-traced images.
For example, the structure along the line-of-sight, particularly in the case of low mass systems, will have an effect on this measurement \citep{Bayliss:14,Li:19}. We leave this investigation for future work. \item We evaluated the case in which the background source redshift measurement is not available, using instead the distribution of background source redshifts. While an accurate source redshift is critical for several lensing applications (e.g., magnifications, time delays, mass distributions), for the relatively well-constrained enclosed core mass the scatter introduced by the uncertainty in the background source redshift is negligible compared to that of other systematics (\autoref{fig:nozs_binned_analysis}), provided the underlying source redshift distribution can be accurately estimated. In addition, the dependence on $z_\mathrm{S}$ is predictable and matches our expectations (\S \ref{sec:no_zs} and \autoref{fig:nozs_binned_analysis}). \item We derive an empirical correction to un-bias and reduce the scatter of the measurement of $M(<\theta_E)$, using a quadratic equation fitted to the mass ratio ($M(<\theta_E) / M_{sim}(<\theta_E)$) with respect to the Einstein radius. The scatter of the empirically corrected masses enclosed by the Einstein radius reduces to $10.1$\%\ and $10.9$\%, respectively, for the fixed center and the fixed center with a BCG offset. The empirical correction does not introduce correlation between the inferred mass and other cluster or background source properties, which is important for the application of this method to measuring cluster properties such as the concentration-mass relation as a function of redshift. \end{itemize} \subsection{Application} In this section we provide a recipe for applying the results of this work to observational data, to statistically correct the bias in $M(<\theta_E)$ and estimate its uncertainty. We note that a more accurate estimate of the field-specific uncertainty can be achieved by using the fraction of the Einstein circle covered by arcs as an indicator of deviation from circular symmetry. We provide instructions for both choices. 1) Starting with a cluster lens field in which lensing evidence has been detected, identify all the secure multiple images (arcs) of the lensed source. Each lensed image should be classified as either tangential or radial. Only the tangential arcs are used to estimate $M(<\theta_E)$. 2) Measure the exact coordinates of a morphological feature (e.g., a bright emission clump) that repeats in each of the arcs. 3) Fit a circle to the list of coordinates. If the cluster has a distinct BCG, we recommend fixing the center of the fitted circle to the position of the BCG. The radius of the fitted circle defines $\theta_E$. 4) Measure $\phi$, the fraction of the circle covered by the arcs of a single lensed source, by summing the angles subtended by the extent of the arcs that overlap with the Einstein circle, and dividing the sum by $360^{\circ}$. An example of three cases of different $\phi$ values is shown in \autoref{fig:frac_arc_example}. 5) Calculate $M(<\theta_E)$, the projected mass enclosed within $\theta_E$, by evaluating \autoref{eq:s_crit} and \autoref{eq:m_er} for the cluster and source redshifts and the measured $\theta_E$. If the spectroscopic redshift of the source is unknown, it can be approximated from photometric redshifts or a probability distribution function of source redshifts.
We find that, for the purpose of a statistical measurement of the enclosed mass, the increase in uncertainty due to a small error in the source redshift is negligible compared to other sources of uncertainty. 6) Evaluate whether an empirical correction is beneficial: if $\phi \gtrsim 0.5$ (i.e., the arcs of an individual lensed source cover at least half of the Einstein circle), the measured $M(<\theta_E)$ is fairly unbiased and an empirical correction is not necessary. In all other cases, or if the choice is to not use $\phi$ as an indicator, proceed to apply the empirical correction as follows. 7) Calculate $f(\theta_E)$, the empirical correction factor, by evaluating \autoref{eq:quad_eq} for $\theta_E$ (see \autoref{table:emp_corr_models} for coefficient values). We recommend using the fixed center with BCG offset method. For Einstein radii in the range $\theta_E < 30\farcs0$, we recommend using the quadratic fit. Apply the correction to the measured $M(<\theta_E)$ using \autoref{eq:un_bias_ml}. 8) Determine the uncertainty. The field-specific uncertainty decreases as the fraction of the Einstein circle covered by arcs ($\phi$) increases. The numerical values of the scatter, as well as the 16th and 84th percentiles (lower and upper limits), for five $\phi$ bins are tabulated in \autoref{table:frac_arc_binned} in \autoref{appsec:frac_arc_analysis}. If the $\phi$ estimator is not used, one can assume an overall uncertainty in the corrected $M(<\theta_E)$ of $10.1$\%\ ($10.9$\%) for the fixed center (fixed center with BCG offset). \\ With the characterization of the mass enclosed by the Einstein radius presented in this work --- including the application of indicators of the scatter and bias --- measuring the mass at the cores of strong lensing galaxy clusters can be performed in large samples in a very efficient manner. The estimation of the mass at the core can be used to determine the mass distribution profile of the galaxy cluster, the concentration parameter (when combined with a mass estimate at larger radius), and provide information about the baryonic and dark matter properties at the core of galaxy clusters. \section*{Acknowledgements} The authors would like to thank the anonymous referee for insightful suggestions that improved this manuscript. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1256260. Work at Argonne National Lab is supported by UChicago Argonne LLC, Operator of Argonne National Laboratory. Argonne National Lab, a U.S. Department of Energy Office of Science Laboratory, is operated by UChicago Argonne LLC under contract no. DE-AC02-06CH11357. This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. \clearpage
\section{Introduction} Semiconductor nanowires fabricated by a bottom-up approach\cite{Thelander06,Lu06,Ikejiri07} have emerged as very interesting systems, not only for the design of future nanoscale device structures\cite{Bjoerk02b,Bryllert06,Li06} but also for addressing fundamental questions connected to strongly confined systems. Regarding the latter, quantum dot structures,\cite{DeFranceschi03,Fasth05a,Pfund06} single electron pumps,\cite{Fuhrer07} or superconducting interference devices\cite{VanDam06} have been realized. Many of the structures cited above were fabricated by employing III-V semiconductors, e.g. InAs or InP.\cite{Thelander06} Apart from these more established materials, InN is particularly interesting for nanowire growth because of its small band gap and its high surface conductivity.\cite{Liang02,Chang05,Calarco07} At low temperatures the transport properties of nanostructures are affected by electron interference effects, i.e. weak localization, the Aharonov--Bohm effect, or universal conductance fluctuations.\cite{Beenakker91c,Lin02} The relevant length parameter in this transport regime is the phase coherence length $l_\phi$, that is, the length over which phase-coherent transport is maintained. In order to obtain information on $l_\phi$, the analysis of conductance fluctuations is a very powerful method.\cite{Umbach84,Stone85,Lee85,Altshuler85b,Lee87,Thornton87,Beenakker88a} In fact, pronounced fluctuations in the conductance have recently been observed and analyzed in InAs nanowires.\cite{Hansen05} Here, we report on a detailed study of the conductance fluctuations $\delta G$ measured in InN nanowires of various sizes. Information on the phase-coherent transport is gained by analyzing the average fluctuation amplitude and the correlation field $B_c$. Special attention is paid to the magnetic field orientation with respect to the wire axis, since this allowed us to change the relevant probe area for the detection of phase-coherent transport. The InN nanowires investigated here were grown without a catalyst on a Si (111) substrate by plasma-assisted molecular beam epitaxy.\cite{Calarco07,Stoica06a} The measured wires had a diameter $d$ ranging from 42~nm to 130~nm. The typical wire length was 1~$\mu$m. From photoluminescence measurements an overall electron concentration of about $5 \times 10^{18}$~cm$^{-3}$ was determined.\cite{Stoica06a} For the samples used in the transport measurements, first, contact pads and adjustment markers were defined on a SiO$_2$-covered Si (100) wafer. Subsequently, the InN nanowires were placed on the patterned substrate and contacted individually by Ti/Au electrodes. Four wires, labeled A, B, C, and D, will be discussed in detail below. Their parameters are summarized in Table~\ref{Table1}. In order to improve the statistics, additional wires, which are not specifically labeled, were included in part of the following analysis. A micrograph of a typical contacted wire is depicted in Fig.~\ref{Fig-Bc-vs-Angle} (inset). \begin{table} \caption{Dimensions and characteristic parameters of the different wires: length $L$ (separation between the contacts), wire diameter $d$, root-mean-square of the conductance fluctuations $\mathrm{rms}(G)$, and correlation field $B_c$. The latter two parameters were determined for $B$ parallel to the wire axis.
\label{Table1}} \begin{ruledtabular} \begin{tabular}{ccccc} Wire & $L$ & $d$ & rms(G) & $B_c$\\ & (nm) & (nm) & ($e^2/h$) & (T) \\ \colrule A& 205 & 58 &1.35& 0.38\\ B& 580 & 66 &0.58& 0.22\\ C& 640 & 75 &0.52& 0.21\\ D& 530 & 130 &0.81& 0.15\\ \end{tabular} \end{ruledtabular} \end{table} The transport measurements were performed in a magnetic field range from 0 to 10~T at a temperature of 0.6~K. In order to vary the angle between the wire axis and the magnetic field $B$, the samples were mounted in a rotating sample holder. The rotation axis was oriented perpendicularly to the magnetic field and to the wire axis. The magnetoresistance was measured by using a lock-in technique with an ac bias current of 30~nA. The fluctuation patterns for nanowires with different dimensions are depicted in Fig. \ref{Fig-compare-ucf}(a). Here, the normalized conductance fluctuations $\delta G$ for wires A to C, which have successively increasing diameters, are plotted as a function of the magnetic field $B$. The field was oriented parallel to the wire axis. The measurements were performed up to a relatively large field of 10~T. This is justified, since even at 10~T the estimated cyclotron diameter of 70~nm just begins to become comparable to the wire diameter. The conductance variations were determined by first subtracting the typical contact resistance of $(330 \pm 50)\,\Omega$ and then converting the resistance variations to conductance variations. It can clearly be seen in Fig.~\ref{Fig-compare-ucf}(a) that for the narrowest and shortest wire, i.e. wire~A, the conductance fluctuates with a considerably larger amplitude than for the other two wires with larger diameters and lengths. The parameter quantifying this feature is the root-mean-square of the fluctuation amplitude, $\mathrm{rms}(G)$, defined by $\sqrt{\langle \delta G ^2\rangle}$. Here, $\langle ... \rangle$ represents the average over the magnetic field. For quasi-one-dimensional systems where phase coherence is maintained over the complete wire length, it is expected that $\mathrm{rms}(G)$ is of the order of $e^2/h$.\cite{Lee85,Altshuler85b,Lee87} As one can infer from Table~\ref{Table1}, for the shortest nanowire, i.e. wire A, $\mathrm{rms}(G)$ falls within this limit. For the other two wires the $\mathrm{rms}(G)$ values are smaller than $e^2/h$ (cf. Table~\ref{Table1}). Thus, for these wires it can be concluded that the phase coherence length $l_\phi$ is smaller than the wire length $L$. \begin{figure} \includegraphics[width=1.0\columnwidth]{Bloemers-Fig1.jpg} \caption{(a) Conductance fluctuations normalized to $e^2/h$ for wires with different length and diameter. The curves are offset for clarity. As illustrated by the sketch, the magnetic field is axially oriented. (b) Conductance fluctuations of wire~C with a magnetic field oriented perpendicularly to the wire axis.\label{Fig-compare-ucf}} \end{figure} Besides $\mathrm{rms}(G)$, another important parameter is the correlation field $B_c$, quantifying the field scale on which the conductance fluctuations take place. The correlation field is extracted from the autocorrelation function of $\delta G$, defined by $F(\Delta B)=\langle \delta G(B+\Delta B)\delta G (B)\rangle $.\cite{Lee87} The magnetic field corresponding to the half maximum of the autocorrelation function, $F(B_c)=\frac{1}{2} F(0)$, defines $B_c$. The $B_c$ values of the measurements shown in Fig.~\ref{Fig-compare-ucf} are listed in Table~\ref{Table1}.
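Both statistics can be computed directly from a measured trace; a minimal sketch in Python (assuming equally spaced field values and mean-subtracted data):
\begin{verbatim}
import numpy as np

def rms_and_correlation_field(b, dg):
    """rms(G) and correlation field B_c from fluctuations dG(B).

    b:  equally spaced magnetic field values (T)
    dg: conductance fluctuations in units of e^2/h, mean subtracted"""
    rms = np.sqrt(np.mean(dg ** 2))
    n = len(dg)
    # autocorrelation F(dB) = <dG(B + dB) dG(B)>, lags 0 .. n-1
    f = np.correlate(dg, dg, mode='full')[n - 1:] / np.arange(n, 0, -1)
    # B_c: first field lag at which F drops to half its zero-lag value
    bc = np.argmax(f <= 0.5 * f[0]) * (b[1] - b[0])
    return rms, bc
\end{verbatim}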
Obviously, for wire~A, which has the smallest diameter, one finds the largest value of $B_c$. In a semiclassical approach it is expected that $B_c$ is inversely proportional to the maximum area $A_\phi$ perpendicular to $B$ which is enclosed phase-coherently:\cite{Lee85,Lee87,Beenakker88a} \begin{equation} B_c=\alpha \frac{\Phi_0}{A_\phi} \; . \label{Eq1} \end{equation} Here, $\alpha$ is a constant of order one and $\Phi_0=h/e$ the magnetic flux quantum. As long as phase coherence is maintained along the complete circumference, $A_\phi$ is equal to the wire cross section $\pi d^2/4$ and thus one expects $B_c \propto 1/d^2$. The $B_c$ values given in Table~\ref{Table1} follow this trend, i.e. they become smaller with increasing diameter $d$. As can be recognized in Fig.~\ref{Fig-BcvsDF1} (inset), $F(\Delta B)$ also shows negative values at larger $\Delta B$. This behavior can be attributed to the limited number of modes in the wires, as observed previously for small semiconductor structures.\cite{Jalabert90,Bird96} However, as discussed by Jalabert \emph{et al.},\cite{Jalabert90} at small fields $F(\Delta B)$ and thus $B_c$ calculated fully quantum mechanically correspond well to the semiclassical approximation. In order to elucidate the dependence of $B_c$ on the wire diameter in more detail, a larger number of wires was measured. As can be seen in Fig.~\ref{Fig-BcvsDF1}(a), $B_c$ systematically decreases with $d$. Leaving out wire~D, which has the largest diameter, the decrease of $B_c$ is well described by a $1/d^2$-dependence. As mentioned above, for short wires ($L\approx 200$~nm) we found that phase coherence is maintained over the complete length. This length corresponds to the circumference of a wire with a diameter of about 64~nm. Except for wire~D, $d$ is of the order of that value, so that one can expect that phase coherence is maintained within the complete cross section. For the parameter $\alpha$ we found a value of 0.24, which is a factor of 4 smaller than the theoretically expected value of 0.95.\cite{Beenakker88a} Choosing $\alpha=0.95$ would result in lower bound values of $B_c$ being larger than all corresponding experimental values, which is physically unreasonable. We attribute the discrepancy to the different geometrical situation, i.e. for the latter a confined two-dimensional electron gas with a perpendicularly oriented magnetic field was considered,\cite{Beenakker88a} while in our case the field is oriented parallel to the wire axis. In addition, an inhomogeneous carrier distribution within the cross section, e.g. due to a carrier accumulation at the surface,\cite{Mahboob04} can also result in a disagreement between experiment and theoretical model. As can be seen in Fig.~\ref{Fig-BcvsDF1}(a) (inset), the data point of the wire with the largest diameter of 130~nm, i.e. wire~D, is found above the calculated curve. This indicates that presumably for this sample, $A_\phi$ is slightly smaller than the wire cross section.

\begin{figure} \includegraphics[width=1.0\columnwidth]{Bloemers-Fig2.jpg} \caption{(a) Correlation field $B_c$ as a function of the wire diameter $d$. As illustrated in the schematic, the magnetic field $B$ was oriented axially. The solid line corresponds to the calculated correlation field. The inset shows $F(\Delta B)/F(0)$ for wire~C. (b) $B_c$ as a function of the maximum area $A=Ld$ (see schematic) of the wire. The magnetic field is oriented perpendicular to the wire axis.
The solid lines represent the calculated lower-boundary correlation fields assuming $\alpha=0.95$ and 0.24, respectively. \label{Fig-BcvsDF1}} \end{figure}

Next, we will focus on measurements of $\delta G$ with a magnetic field oriented perpendicular to the wire axis. As a typical example, $\delta G$ of wire~C is shown in Fig.~\ref{Fig-compare-ucf}(b). Here, a correlation field of $0.17$~T was extracted, which is smaller than the value from the corresponding measurement with $B$ parallel to the wire axis [cf. Fig.~\ref{Fig-compare-ucf}(a) and Table~\ref{Table1}]. The smaller value of $B_c$ can be attributed to the effect that now the relevant area for magnetic flux-induced interference effects is no longer limited by the relatively small circular cross section but rather by a larger area within the rectangle defined by $L$ and $d$, as illustrated by the schematic in Fig.~\ref{Fig-BcvsDF1}(b). In Fig.~\ref{Fig-BcvsDF1}(b) the $B_c$ values of various wires are plotted as a function of the maximum area $A_{max}=Ld$ penetrated by the magnetic field. As a reference, the calculated curve using Eq.~(\ref{Eq1}) and assuming $A_\phi=A_{max}$ is also plotted. It can be seen that the $B_c$ values of two wires with small areas, including wire~A, match the theoretically expected ones if one takes $\alpha=0.95$, as given by Beenakker and van Houten.\cite{Beenakker88a} This corresponds to the case of phase-coherent transport across the complete wire, as was, in the case of wire~A, already concluded from the $\mathrm{rms}(G)$ analysis. For all other wires the $B_c$ values are above the theoretically expected curve, corresponding to the case $A_\phi<A_{max}$. At this point, one might argue that for $B$ oriented along the wire axis a better agreement is found for $\alpha=0.24$. However, as can be seen in Fig.~\ref{Fig-BcvsDF1}(b), if one assumes $\alpha=0.24$ all experimental values are above the calculated curve, i.e. $A_\phi<A_{max}$. This does not agree with the observation that for short wires $\mathrm{rms}(G)$ is of the order of $e^2/h$. We attribute the difference between the appropriate $\alpha$ values for the different field orientations to the different character of the relevant area penetrated by the magnetic flux, e.g. due to carrier accumulation at the surface. Besides $B_c$, we also analyzed the fluctuation amplitude for five different wires with $B$ oriented perpendicular to the wire axis. Only wires with comparable diameters of $(75 \pm 5)$~nm were chosen here. It can be seen in Fig.~\ref{Fig-rms-75nm} that $\mathrm{rms}(G)$ tends to decrease with increasing wire length $L$.

\begin{figure} \includegraphics[width=1.0\columnwidth]{Bloemers-Fig3.jpg} \caption{$\mathrm{rms}(G)$ for wires with a diameter of $(75 \pm 5)$~nm as a function of wire length $L$ (squares). The magnetic field is oriented perpendicular to the wire axis. The calculated decrease of $\mathrm{rms}(G)$ proportional to $L^{-3/2}$ is plotted as a solid line. The inset shows $B_c$ vs. $L$ for wires with $d \approx 75$~nm. The dashed line corresponds to the calculated value of $B_c$ assuming $l_\phi=430$~nm.\label{Fig-rms-75nm}} \end{figure}

From the previous discussion of $B_c$ it was concluded that for long wires, as is the case here, $l_\phi<L$. In this regime $\mathrm{rms}(G)$ is expected to depend on $L$ as\cite{Lee87,Beenakker88a} \begin{equation} \mathrm{rms}(G)=\beta \frac{e^2}{h} \left(\frac{l_\phi}{L} \right)^{3/2} \; , \label{Eq2} \end{equation} with $\beta$ of order one.
The above expression is valid as long as the thermal diffusion length $l_T=\sqrt{\hbar \mathcal{D}/k_B T}$ is larger than $l_\phi$. Here, $\mathcal{D}$ is the diffusion constant. From our transport data we estimated $l_T \approx 600$~nm at $T=0.6$~K. As can be seen in Fig.~\ref{Fig-rms-75nm}, the available experimental data points roughly follow the trend of the calculated curve using Eq.~(\ref{Eq2}) and assuming $l_\phi=430$~nm and $\beta=1$. For the limit $l_\phi < L$, a correlation field according to $B_c=0.95\,\Phi_0/(d\, l_\phi)$ is expected.\cite{Lee87} As confirmed in Fig.~\ref{Fig-BcvsDF1}(b), most experimental values of $B_c$ are close to the calculated one. If one compares the $\mathrm{rms}(G)$ values for wires with $d \approx 75$~nm and $B$ oriented axially (not shown here) with the corresponding values for $B$ oriented perpendicularly, one finds that both are in the same range. Thus it can be concluded that the fluctuation amplitude does not significantly depend on the magnetic field orientation. This is in contrast to the correlation field, where one finds a systematic dependence on the orientation of $B$. In order to discuss the latter aspect in more detail, the correlation field was studied for various tilt angles $\theta$ of the magnetic field. Figure~\ref{Fig-Bc-vs-Angle} shows $B_c$ of sample~D as $\theta$ is increased from $0^\circ$ to $90^\circ$. The inset in Fig.~\ref{Fig-Bc-vs-Angle} illustrates how $\theta$ is defined.

\begin{figure} \includegraphics[width=1.0\columnwidth]{Bloemers-Fig4.jpg} \caption{Correlation field $B_c$ of wire~D as a function of the angle $\theta$ between the wire axis and $B$. The solid line represents a linear fit. The broken line corresponds to the theoretically expected $B_c$ if phase-coherent transport is assumed in the complete wire. The left-hand-side inset shows a schematic of the geometrical situation. The right-hand-side inset shows a micrograph of a 580-nm-long wire with a diameter of 66~nm.\label{Fig-Bc-vs-Angle}} \end{figure}

Obviously, $B_c$ decreases with increasing tilt angle $\theta$. As explained above, the value of $B_c$ is a measure of the maximum area normal to $B$ which is enclosed phase-coherently by the electron waves in the wire [see Fig.~\ref{Fig-Bc-vs-Angle} (schematic)]. As long as $\theta \leq \arctan (L/d)$, this maximum area is given by $A(\theta)=\pi d^2/(4\cos\theta)$. The expected $\theta$-dependence of the correlation field is then given by $B_c(\theta)=B_c(0) \cos\theta$, with $B_c(0)$ the correlation field at $\theta=0$. As can be seen in Fig.~\ref{Fig-Bc-vs-Angle}, the calculated correlation field $B_c$, corresponding to fully phase-coherent transport, decreases much faster with increasing $\theta$ than the experimentally determined values. The experimental situation is better described by a linear decrease. As discussed above, at $\theta=0$ one can assume that the area enclosed phase-coherently is equal to $A(0)$. However, if the tilt angle is increased, the maximum wire cross section $A(\theta)$ presumably becomes larger than $A_\phi$, resulting in a much smaller decrease of $B_c$ than theoretically expected for fully phase-coherent transport. In addition, as pointed out above, the different tilt angles result in an angle-dependent parameter $\alpha$. This is supported by the measurements of $B_c$ for $B$ parallel and perpendicular to the wire axis, where different values for $\alpha$ were determined, respectively.
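The geometric expectation invoked above is easy to tabulate. The following minimal Python sketch evaluates $A(\theta)$ and the correspondingly expected $B_c(\theta)$ for fully phase-coherent transport; the numbers are the nominal parameters of wire~D from Table~\ref{Table1} together with the axial $\alpha=0.24$, used purely for illustration:

\begin{verbatim}
import numpy as np

PHI0 = 4.135667e-15        # flux quantum h/e in Wb
d, L = 130e-9, 530e-9      # nominal diameter and length of wire D (m)
alpha = 0.24               # prefactor found for the axial orientation

# geometric expression valid for theta <= arctan(L/d) ~ 76 deg
theta = np.deg2rad(np.linspace(0.0, 75.0, 6))
A = (np.pi * d**2 / 4.0) / np.cos(theta)   # max area normal to B
Bc = alpha * PHI0 / A                      # Eq. (1), fully coherent case
for t, b in zip(np.rad2deg(theta), Bc):
    print(f"theta = {t:4.1f} deg  ->  expected B_c = {b * 1e3:5.1f} mT")
\end{verbatim}

Comparing such a $\cos\theta$ table with the measured, nearly linear decrease in Fig.~\ref{Fig-Bc-vs-Angle} makes the deviation from the fully phase-coherent expectation immediately visible.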
In conclusion, the conductance fluctuations of InN nanowires with various lengths and diameters were investigated. We found that for an axially oriented magnetic field the correlation field $B_c$, and thus the area over which phase-coherent transport is maintained, is limited by the wire cross section perpendicular to $B$. In contrast, $\mathrm{rms}(G)$ decreases with the wire length, since this quantity also depends on the propagation of the electron waves along the wire axis. If the magnetic field is oriented perpendicularly, we found that for long wires $B_c$ is limited by $l_\phi$ rather than by the length $L$. Our investigations demonstrate that phase-coherent transport can be maintained in InN nanowires, which is an important prerequisite for the design of quantum device structures based on this material system.
\section{Introduction} For a permutation $w = w_1 \cdots w_n \in S_n$, the \textit{inversion} and \textit{major index} statistics are given by \[ \inv(w) := \#\{i < j : w_i > w_j\} \qquad\text{and}\qquad \maj(w) := \sum_{\substack{1 \leq i \leq n-1\\w_i > w_{i+1}}} i. \] It is well-known that $\inv$ and $\maj$ are equidistributed on $S_n$ with common mean $\mu_n$ and variance $\sigma_n^2$ given by \[ \mu_n = \frac{n(n-1)}{4} \qquad\text{and}\qquad \sigma_n^2 = \frac{2n^3 + 3n^2 - 5n}{72}. \] (These results also follow easily from our arguments.) In \cite{1004.1160}, Baxter and Zeilberger proved that $\inv$ and $\maj$ are jointly independently asymptotically normally distributed as $n \to \infty$. More precisely, define normalized random variables on $S_n$ \begin{equation}\label{eq:XnYn} X_n := \frac{\inv - \mu_n}{\sigma_n}, \qquad Y_n := \frac{\maj - \mu_n}{\sigma_n}. \end{equation} \begin{Theorem}[Baxter--Zeilberger, \cite{1004.1160}]\label{thm:BZ} For each $u, v \in \bR$, we have \[ \lim_{n \to \infty} \bP[X_n \leq u, Y_n \leq v] = \frac{1}{2\pi} \int_{-\infty}^u \int_{-\infty}^v e^{-x^2/2} e^{-y^2/2}\,dy\,dx. \] \end{Theorem} See \cite{1004.1160} for further historical background. Baxter and Zeilberger's argument involves mixed moments and recurrences based on combinatorial manipulations with permutations. Romik suggested that a generating function due to Roselle, quoted as \Cref{thm:Roselle} below, should provide another approach. Zeilberger subsequently offered a \$300 reward for such an argument. The aim of this note is to give such a proof. Our overarching motivation is to give a \textit{local limit theorem}, i.e.~a formula for the counts $\#\{w \in S_n : \inv(w) = u, \maj(w) = v\}$, with an explicit error term, which will be the subject of a future article. For further context, see \cite{dz} and \cite{MR3482667}.

\section{Consequences of Roselle's Formula} Here we recall Roselle's formula, originally stated in different but equivalent terms, and derive a generating function expression which quickly motivates \Cref{thm:BZ}. \begin{Definition} Let $H_n$ be the bivariate $\inv, \maj$ generating function on $S_n$, i.e. \[ H_n(p, q) := \sum_{w \in S_n} p^{\inv(w)} q^{\maj(w)}. \] \end{Definition} \begin{Theorem}[Roselle, \cite{MR0342406}]\label{thm:Roselle} We have \begin{equation}\label{eq:roselle} \sum_{n \geq 0} \frac{H_n(p, q) z^n}{(p)_n (q)_n} = \prod_{a, b \geq 0} \frac{1}{1 - p^a q^b z} \end{equation} where $(p)_n := (1-p)(1-p^2) \cdots (1-p^n)$. \end{Theorem} The following is the main result of this section. \begin{Theorem}\label{thm:Hncla} There are constants $c_\mu \in \bZ$ indexed by integer partitions $\mu$ such that \begin{equation}\label{eq:HnFn} \frac{H_n(p, q)}{n!} = \frac{[n]_p! [n]_q!}{n!^2} F_n(p, q) \end{equation} where \begin{equation}\label{eq:Fncmu} F_n(p, q) = \sum_{d=0}^n [(1-p)(1-q)]^d \sum_{\substack{\mu \vdash n\\\ell(\mu) = n-d}} \frac{c_\mu}{\prod_i [\mu_i]_p [\mu_i]_q} \end{equation} and $[n]_p! := [n]_p [n-1]_p \cdots [1]_p$, $[c]_p := 1 + p + \cdots + p^{c-1} = (1-p^c)/(1-p)$. \end{Theorem} An explicit expression for $c_\mu$ is given below in \eqref{eq:cmu}. The rest of this section is devoted to proving \Cref{thm:Hncla}. Straightforward manipulations with \eqref{eq:roselle} immediately yield \eqref{eq:HnFn} where \begin{equation}\label{eq:FnRatio} F_n(p, q) := (1-p)^n (1-q)^n n! \cdot \{z^n\} \left(\prod_{a, b \geq 0} \frac{1}{1 - p^a q^b z}\right) \end{equation} and $\{z^n\}$ here refers to extracting the coefficient of $z^n$.
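As an aside, both the equidistribution of $\inv$ and $\maj$ and the mean and variance stated above are easily verified by brute force for small $n$; a minimal Python sketch (a sanity check only, not part of the proof):

\begin{verbatim}
from itertools import permutations
from collections import Counter
from fractions import Fraction

def inv(w):
    return sum(1 for i in range(len(w))
                 for j in range(i + 1, len(w)) if w[i] > w[j])

def maj(w):
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

n = 5
H = Counter((inv(w), maj(w)) for w in permutations(range(n)))
# equidistribution: the marginals of inv and maj agree
assert Counter(a for a, _ in H.elements()) == \
       Counter(b for _, b in H.elements())
# mean n(n-1)/4 and variance (2n^3 + 3n^2 - 5n)/72 for inv
N = sum(H.values())
mean = Fraction(sum(a * c for (a, _), c in H.items()), N)
ex2 = Fraction(sum(a * a * c for (a, _), c in H.items()), N)
assert mean == Fraction(n * (n - 1), 4)
assert ex2 - mean ** 2 == Fraction(2 * n**3 + 3 * n**2 - 5 * n, 72)
\end{verbatim}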
Returning to \Cref{thm:Hncla}, it thus suffices to show that \eqref{eq:FnRatio} implies \eqref{eq:Fncmu}. By standard arguments, the $z^n$ coefficient of the product over $a, b$ in \eqref{eq:FnRatio} is the bivariate generating function of size-$n$ multisets of pairs $(a, b) \in \bZ_{\geq 0}^2$, where the weight of such a multiset is its sum. \begin{Definition} For $\lambda \vdash n$, let $M_\lambda$ be the bivariate generating function for size-$n$ multisets of pairs $(a, b) \in \bZ_{\geq 0}^2$ of type $\lambda$, i.e.~some element has multiplicity $\lambda_1$, another element has multiplicity $\lambda_2$, etc. \end{Definition} We clearly have \begin{equation}\label{eq:FnMla} \{z^n\}\left(\prod_{a, b \geq 0} \frac{1}{1 - p^a q^b z}\right) = \sum_{\lambda \vdash n} M_\lambda(p, q), \end{equation} though the $M_\lambda$ are inconvenient to work with, so we perform a change of basis. \begin{Definition} Let $P[n]$ denote the lattice of set partitions of $[n] := \{1, 2, \ldots, n\}$ with minimum $\widehat{0} = \{\{1\}, \{2\}, \ldots, \{n\}\}$ and maximum $\widehat{1} = \{\{1, 2, \ldots, n\}\}$. Here $\Lambda \leq \Pi$ means that $\Pi$ can be obtained from $\Lambda$ by merging blocks of $\Lambda$. The \textit{type} of a set partition $\Lambda$ is the integer partition obtained by rearranging the list of the block sizes of $\Lambda$ in weakly decreasing order. For $\lambda \vdash n$, set \[ \Pi(\lambda) := \{\{1, 2, \ldots, \lambda_1\}, \{\lambda_1+1, \lambda_1+2, \ldots, \lambda_1+\lambda_2\}, \ldots\}, \] which has type $\lambda$. \end{Definition} \begin{Definition} For $\Pi \in P[n]$, let $R_\Pi$ denote the bivariate generating function for lists $L \in (\bZ_{\geq 0}^2)^n$ where for each block of $\Pi$ the entries in $L$ from that block are all equal. Similarly, let $S_\Pi$ denote the bivariate generating function of lists $L$ where, in addition to entries from the same block being equal, entries from two different blocks are not equal. \end{Definition} We easily see that \begin{equation}\label{eq:Rla} R_\Lambda(p, q) = \prod_{A \in \Lambda} \frac{1}{(1 - p^{\#A})(1 - q^{\#A})} \end{equation} and that \begin{equation}\label{eq:RlaSpi} R_\Lambda(p, q) = \sum_{\Pi : \Lambda \leq \Pi} S_\Pi, \end{equation} so that, by M\"obius inversion on $P[n]$, \begin{equation}\label{eq:SpiRla} S_\Pi = \sum_{\Lambda : \Pi \leq \Lambda} \mu(\Pi, \Lambda) R_\Lambda. \end{equation} Under the ``forgetful'' map from lists to multisets, a multiset of type $\lambda \vdash n$ has fiber of size $\binom{n}{\lambda}$. It follows that \begin{equation}\label{eq:SpiMla} S_{\Pi(\lambda)} = \frac{n!}{\lambda!} M_\lambda \end{equation} where $\lambda! := \lambda_1! \lambda_2! \cdots$. Combining in order \eqref{eq:FnRatio}, \eqref{eq:FnMla}, \eqref{eq:SpiMla}, \eqref{eq:SpiRla}, and \eqref{eq:Rla} gives \begin{equation}\label{eq:FnRla} F_n(p, q) = \sum_{d=0}^n [(1-p)(1-q)]^d \sum_{\lambda \vdash n} \lambda! \sum_{\substack{\Lambda : \Pi(\lambda) \leq \Lambda\\\#\Lambda = n-d}} \frac{\mu(\Pi(\lambda), \Lambda)}{\prod_{A \in \Lambda} [\#A]_p [\#A]_q}. \end{equation} Now \eqref{eq:Fncmu} follows from \eqref{eq:FnRla} where \begin{equation}\label{eq:cmu} c_\mu = \sum_{\lambda \vdash n} \lambda! \sum_{\substack{\Lambda : \Pi(\lambda) \leq \Lambda\\\type(\Lambda)=\mu}} \mu(\Pi(\lambda), \Lambda). \end{equation} This completes the proof of \Cref{thm:Hncla}. \begin{Remark}\label{rem:macmahon} From \eqref{eq:cmu}, $c_{(1^n)} = 1$ since the sum only involves $\Lambda = \widehat{0}$. Letting $p \to 1$ in \eqref{eq:Fncmu}, the only surviving term is $d=0$ and $\mu = (1^n)$.
Consequently, $H_n(1, q) = [n]_q!$, recovering a classic result of MacMahon \cite[\S1]{MR1576566}. \end{Remark} \begin{Remark} Using \eqref{eq:HnFn}, we see that the probability generating function (discussed below in \Cref{ex:pgfchar}) $H_n(p, q)/n!$ differs from $[n]_p! [n]_q!/n!^2$ by precisely the correction factor $F_n(p, q)$. Using \eqref{eq:FnRatio}, this factor has the following combinatorial interpretation: \[ F_n = \frac{n! \cdot \text{g.f. of size-$n$ multisets from $\bZ_{\geq 0}^2$}} {\text{g.f. of size-$n$ lists from $\bZ_{\geq 0}^2$}}. \] Intuitively, the numerator and denominator should be the same ``up to first order.'' \Cref{thm:FnBound} will give one precise sense in which they are asymptotically equal. \end{Remark} \section{Estimating the Correction Factor} This section is devoted to showing that the correction factor $F_n(p, q)$ from \Cref{thm:Hncla} is negligible in an appropriate sense, \Cref{thm:FnBound}. Recall that $\sigma_n$ denotes the standard deviation of $\inv$ or $\maj$ on $S_n$. \begin{Theorem}\label{thm:FnBound} Uniformly on compact subsets of $\bR^2$, we have \[ F_n(e^{is/\sigma_n}, e^{it/\sigma_n}) \to 1 \qquad \text{as} \qquad n \to \infty \] \end{Theorem} We begin with some simple estimates starting from \eqref{eq:FnRla} which motivate the rest of the inequalities in this section. We may assume $|s|, |t| \leq M$ for some fixed $M$. Setting $p=e^{is/\sigma_n}, q=e^{it/\sigma_n}$, we have $|1-p| = |1-\exp(is/\sigma_n)| \leq |s|/\sigma_n$. For $n$ sufficiently large compared to $M$, we also have $|s/\sigma_n| \ll 1$ and so, for all $c \in \bZ_{\geq 1}$, $|[c]_p| = |[c]_{\exp(is/\sigma_n)}| \geq 1$. Thus for $n$ sufficiently large, \eqref{eq:FnRla} gives \begin{equation}\label{eq:ModFnRla} |F_n(e^{is/\sigma_n}, e^{it/\sigma_n}) - 1| \leq \sum_{d=1}^n \frac{|st|^d}{\sigma_n^{2d}} \sum_{\lambda \vdash n} \lambda! \sum_{\substack{\Lambda : \Pi(\lambda) \leq \Lambda\\\#\Lambda = n-d}} |\mu(\Pi(\lambda), \Lambda)| \end{equation} \begin{Lemma} Suppose $\lambda \vdash n$ with $\ell(\lambda) = n-k$, and fix $d$. Then \begin{equation}\label{eq:dksum} \sum_{\substack{\Lambda : \Pi(\lambda) \leq \Lambda\\\#\Lambda = n-d}} \mu(\Pi(\lambda), \Lambda) = (-1)^{d-k} \sum_{\substack{\Lambda \in P[n-k]\\\#\Lambda = n-d}} \prod_{A \in \Lambda} (\#A-1)! \end{equation} and the terms on the left all have the same sign $(-1)^{d-k}$. The sums are empty unless $n \geq d \geq k \geq 0$. \begin{proof} The upper order ideal $\{\Lambda \in P[n] : \Pi(\lambda) \leq \Lambda\}$ is isomorphic to $P[n-k]$ by collapsing the $n-k$ blocks of $\Pi(\lambda)$ to singletons. This isomorphism preserves the number of blocks. Furthermore, recall that in $P[n]$ we have \[ \mu(\widehat{0}, \widehat{1}) = (-1)^{n-1} (n-1)!, \] from which it follows easily that \begin{equation}\label{eq:mu0La} \mu(\widehat{0}, \Lambda) = \prod_{A \in \Lambda} (-1)^{\#A - 1} (\#A - 1)!. \end{equation} The result follows immediately upon combining these observations. \end{proof} \end{Lemma} \begin{Lemma} Let $\lambda \vdash n$ with $\ell(\lambda) = n-k$ and $n \geq d \geq k \geq 0$. Then \begin{equation}\label{eq:ladk_bound} \sum_{\substack{\Lambda : \Pi(\lambda) \leq \Lambda\\\#\Lambda = n-d}} |\mu(\Pi(\lambda), \Lambda)| \leq (n-k)^{2(d-k)}. \end{equation} \begin{proof} Using \eqref{eq:dksum}, we can interpret the sum as the number of permutations of $[n-k]$ with $n-d$ cycles, which is a Stirling number of the first kind. 
There are well-known asymptotics for these numbers, though the stated elementary bound suffices for our purposes. We induct on $d$. At $d=k$, the result is trivial. Given a permutation of $[n-k]$ with $n-d$ cycles, choose $i, j \in [n-k]$ from different cycles. Suppose the cycles are of the form $(i'\ \cdots\ i)$ and $(j\ \cdots\ j')$. Splice the two cycles together to obtain \[ (i'\ \cdots\ i\ j\ \cdots\ j'). \] This procedure constructs every permutation of $[n-k]$ with $n-(d+1)$ cycles and requires no more than $(n-k)^2$ choices. The result follows. \end{proof} \end{Lemma} \begin{Lemma} For $n \geq d \geq k \geq 0$, we have \begin{equation}\label{eq:dksum_bound} \sum_{\substack{\lambda \vdash n\\\ell(\lambda) = n-k}} \lambda! \sum_{\substack{\Lambda : \Pi(\lambda) \leq \Lambda\\\#\Lambda = n-d}} |\mu(\Pi(\lambda), \Lambda)| \leq (n-k)^{2d-k} (k+1)!. \end{equation} \begin{proof} For $\lambda \vdash n$ with $\ell(\lambda) = n-k$, $\lambda!$ can be thought of as the product of terms obtained from filling the $i$th row of $\lambda$ with $1, 2, \ldots, \lambda_i$. Alternatively, we may fill the cells of $\lambda$ as follows: put $n-k$ ones in the first column, and fill the remaining cells with the numbers $2, 3, \ldots, k+1$ starting at the largest row and proceeding left to right. It is easy to see that the labels of the first filling are bounded above by the labels of the second filling, so that $\lambda! \leq (k+1)!$. Furthermore, each $\lambda \vdash n$ with $\ell(\lambda) = n-k$ can be constructed by first placing $n-k$ cells in the first column and then deciding on which of the $n-k$ rows to place each of the remaining $k$ cells, so there are no more than $(n-k)^k$ such $\lambda$. The result follows from combining these bounds with \eqref{eq:ladk_bound}. \end{proof} \end{Lemma} \begin{Lemma}\label{lem:d_bound} For $n$ sufficiently large, for all $0 \leq d \leq n$ we have \begin{align*} \sum_{\lambda \vdash n} \lambda! \sum_{\substack{\Lambda : \Pi(\lambda) \leq \Lambda\\\#\Lambda = n-d}} |\mu(\Pi(\lambda), \Lambda)| \leq 3n^{2d}. \end{align*} \begin{proof} For $n \geq 2$ large enough, for all $n \geq k \geq 2$ we see that $(k+1)! < n^{k-1}$. Using \eqref{eq:dksum_bound} gives \begin{align*} \sum_{\lambda \vdash n} \lambda! \sum_{\substack{\Lambda : \Pi(\lambda) \leq \Lambda\\\#\Lambda = n-d}} |\mu(\Pi(\lambda), \Lambda)| &\leq \sum_{k=0}^d (n-k)^{2d-k} (k+1)! \\ &\leq n^{2d} + 2(n-1)^{2d-1} + \sum_{k=2}^d (n-k)^{2d-k} n^{k-1} \\ &\leq n^{2d} + 2n^{2d-1} + \sum_{k=2}^d n^{2d-1} \\ &= n^{2d} + 2n^{2d-1} + (d-1) n^{2d-1} \leq 3n^{2d}. \end{align*} \end{proof} \end{Lemma} We may now complete the proof of \Cref{thm:FnBound}. Combining \Cref{lem:d_bound} and \eqref{eq:ModFnRla} gives \[ |F_n(e^{is/\sigma_n}, e^{it/\sigma_n}) - 1| \leq 3\sum_{d=1}^n \frac{(Mn)^{2d}}{\sigma_n^{2d}}. \] Since $\sigma_n^2 \sim n^3/36$ and $M$ is constant, $(Mn)^{2d}/\sigma_n^{2d} \sim (36M^2/n)^d$. Using a geometric series, it follows that \[ \lim_{n \to \infty} \sum_{d=1}^n \frac{(Mn)^{2d}}{\sigma_n^{2d}} = 0, \] completing the proof of \Cref{thm:FnBound}. \begin{Remark} Indeed, the argument shows that $|F_n(e^{is/\sigma_n}, e^{it/\sigma_n}) - 1| = O(1/n)$. The above estimates are particularly far from sharp for large $d$, though for small $d$ they are quite accurate. Working directly with \eqref{eq:FnRla}, one finds the $d=1$ contribution to be \begin{align*} (1-p)(1-q) \frac{2 - \binom{n}{2}}{[2]_p [2]_q}.
\end{align*} Letting $p = e^{is/\sigma_n}, q = e^{it/\sigma_n}$, straightforward estimates show that this is $\Omega(1/n)$. Consequently, the preceding arguments are strong enough to identify the leading term, and in particular \[ |F_n(e^{is/\sigma_n}, e^{it/\sigma_n}) - 1| = \Theta(1/n). \] \end{Remark}

\section{Deducing Baxter and Zeilberger's Result}\label{sec:cfs} We next summarize enough of the standard theory of characteristic functions to prove \Cref{thm:BZ} using \eqref{eq:HnFn} and \Cref{thm:FnBound}. \begin{Definition} The \textit{characteristic function} of an $\bR^k$-valued random variable $X = (X_1, \ldots, X_k)$ is the function $\phi_X \colon \bR^k \to \bC$ given by \[ \phi_X(s_1, \ldots, s_k) := \bE[\exp(i(s_1 X_1 + \cdots + s_k X_k))]. \] \end{Definition} \begin{Example}\label{ex:normchar} It is well-known that the characteristic function of the standard normal random variable with density $\frac{1}{\sqrt{2\pi}} e^{-x^2/2}$ is $e^{-s^2/2}$. Similarly, the characteristic function of a bivariate jointly independent standard normal random variable with density $\frac{1}{2\pi} e^{-x^2/2 - y^2/2}$ is $e^{-s^2/2 - t^2/2}$. \end{Example} \begin{Example}\label{ex:pgfchar} If $W$ is a finite set and $\stat = (\stat_1, \ldots, \stat_k) \colon W \to \bZ_{\geq 0}^k$ is some statistic, the \textit{probability generating function} of $\stat$ on $W$ is \[ P(x_1, \ldots, x_k) := \frac{1}{\#W} \sum_{w \in W} x_1^{\stat_1(w)} \cdots x_k^{\stat_k(w)}. \] The characteristic function of the corresponding random variable $X$ where the $w$ are chosen uniformly from $W$ is \[ \phi_X(s_1, \ldots, s_k) = P(e^{is_1}, \ldots, e^{is_k}). \] \end{Example} From \Cref{ex:pgfchar}, \Cref{rem:macmahon}, and an easy calculation, it follows that the characteristic functions of the random variables $X_n$ and $Y_n$ from \eqref{eq:XnYn} are \begin{equation}\label{eq:XnYnChar} \phi_{X_n}(s) = e^{-i\mu_n s/\sigma_n} \frac{[n]_{e^{is/\sigma_n}}!}{n!} \qquad\text{and}\qquad \phi_{Y_n}(t) = e^{-i\mu_n t/\sigma_n} \frac{[n]_{e^{it/\sigma_n}}!}{n!} \end{equation} An analogous calculation for the random variable $(X_n, Y_n)$ together with \eqref{eq:XnYnChar} and \eqref{eq:HnFn} gives \begin{equation}\label{eq:XnYnChar2} \begin{split} \phi_{(X_n, Y_n)}(s, t) &= e^{-i(\mu_n s/\sigma_n + \mu_n t/\sigma_n)} \frac{H_n(e^{is/\sigma_n}, e^{it/\sigma_n})}{n!} \\ &= \phi_{X_n}(s) \phi_{Y_n}(t) F_n(e^{is/\sigma_n}, e^{it/\sigma_n}). \end{split} \end{equation} \begin{Theorem}[Multivariate L\'evy Continuity, {\cite[p.~383]{MR1324786}}]\label{thm:levy} Suppose that $X^{(1)}$, $X^{(2)}$, $\ldots$ is a sequence of $\bR^k$-valued random variables and $X$ is an $\bR^k$-valued random variable. Then $X^{(1)}, X^{(2)}, \ldots$ converges in distribution to $X$ if and only if $\phi_{X^{(n)}}$ converges pointwise to $\phi_X$. \end{Theorem} If the distribution function of $X$ is continuous everywhere, convergence in distribution means that for all $u_1, \ldots, u_k$ we have \[ \lim_{n \to \infty} \bP[X^{(n)}_i \leq u_i, 1 \leq i \leq k] = \bP[X_i \leq u_i, 1 \leq i \leq k]. \] Many techniques are available for proving that $\inv$ and $\maj$ on $S_n$ are asymptotically normal. The result is typically attributed to Feller. \begin{Theorem}{\cite[p.~257]{MR0228020}}\label{thm:feller} The sequences of random variables $X_n$ and $Y_n$ from \eqref{eq:XnYn} each converge in distribution to the standard normal random variable. \end{Theorem} We may now complete the proof of \Cref{thm:BZ}.
From \Cref{thm:feller} and \Cref{ex:normchar}, we have for all $s, t \in \bR$ \begin{equation}\label{eq:XnYnLim} \lim_{n \to \infty} \phi_{X_n}(s) = e^{-s^2/2} \qquad\text{and}\qquad \lim_{n \to \infty} \phi_{Y_n}(t) = e^{-t^2/2}. \end{equation} Combining in order \eqref{eq:XnYnLim}, \eqref{eq:XnYnChar2}, and \Cref{thm:FnBound} gives \[ \lim_{n \to \infty} \phi_{(X_n, Y_n)}(s, t) = e^{-s^2/2 - t^2/2}. \] \Cref{thm:BZ} now follows from \Cref{ex:normchar} and \Cref{thm:levy}.

\section{Acknowledgments} The author would like to thank Dan Romik and Doron Zeilberger for providing the impetus for the present work and feedback on the manuscript. He would also like to thank Sara Billey and Matja\v{z} Konvalinka for valuable discussion on related work, and he gratefully acknowledges Sara Billey for her very careful reading of the manuscript and many helpful suggestions.
\section{Introduction} \label{sec:intro} \begin{figure*}[t] \centering \includegraphics[width=13.25cm]{figures_ieee-01.png} \caption{Learning coded-illumination designs for quantitative phase imaging: (a) The LED-array microscope captures multiple intensity measurements with different coded-illumination source patterns. (b) The measurements are used to computationally reconstruct the sample's complex-field using an iterative phase recovery algorithm. (c) An optimization procedure for learning optimal coded-illumination patterns updates the illumination design.} \label{fig:fig1} \end{figure*} Quantitative Phase Imaging (QPI) enables stain-free and label-free microscopy of transparent biological samples \textit{in vitro}~\cite{popescu2011quantitative,Mir:2012jp}. When compared with coherent methods~\cite{Rappaz.etal2014,cuche1999simultaneous}, QPI methods that use partially coherent light achieve higher spatial resolution, more light throughput, and reduced speckle artifacts. Phase contrast may be generated using interference~\cite{Bhaduri:2012vu,Wang.etal2011} or defocus~\cite{Gureyev:95,Streibl:1984we,Waller.etal2010}. More recently, \textit{coded-illumination microscopy}~\cite{Tian:2015ut,Zheng.etal2013,zheng2011microscopy,Tian:2014wv,Tian.Waller2015,Ling:2018vf} has been demonstrated as an accurate and inexpensive QPI scheme. To realize coded-illumination, we replace a commercial microscope's illumination unit with a light-emitting diode (LED) domed array (see Fig.~\ref{fig:fig1})~\cite{Phillips:17}. This provides a flexible hardware platform for various QPI applications including super-resolution~\cite{Tian:2015ut,Zheng.etal2013,Tian:2014wv}, multi-contrast~\cite{zheng2011microscopy,liu2014real}, and 3D imaging~\cite{Tian.Waller2015,Ling:2018vf}. Coded-illumination microscopy uses asymmetric source patterns~\cite{Kachar766} and multiple measurements to retrieve 2D phase information. Quantitative Differential Phase Contrast~\cite{Hamilton_Sheppard_1984,Mehta:09,Tian:2015fs,Claus:15} (qDPC), for example, captures four measurements with rotated half-circle source patterns, from which the phase is computationally recovered using a partially coherent linearized model. The practical performance of qDPC is predominantly determined by how the phase information is encoded in (via coded-illumination patterns) and decoded from (via phase recovery) the intensity measurements. The half-circle illumination designs of qDPC were derived analytically based on a Weak Object Approximation~\cite{Hamilton:1984,Streibl:85,Mehta:09,Tian:2015fs} which linearizes the physics in order to make the inverse problem mathematically convenient. This linearized model enables one to derive a phase transfer function and analyze the spatial frequency coverage of any given source pattern~\cite{Tian:2015fs,Claus:15,Li:17,Lin:2018ks}; however, the non-linearity of the exact model makes it impossible to predict an optimal source design without knowing the sample's phase \textit{a priori}. In addition, these types of analysis are inherently restricted to linear reconstruction algorithms and will not necessarily result in improved accuracy when the phase is retrieved via non-linear iterative methods. Motivated by the success of deep learning \cite{LeCun:2015dt} for image reconstruction problems \cite{Jin:2016up,Wangetal.2016,Rivenson:ksa,Sinha:17,Rivenson:17,DBLP:journals/corr/abs-1805-00334}, data-driven approaches have been adopted for learning coded-illumination patterns. 
For instance, researchers have used machine learning to maximize the phase contrast of each coded-illumination measurement \cite{Diederich:2018ht}, to improve accuracy on classification tasks \cite{Horstmeyer:2017tg}, and to reconstruct phase \cite{Robeyetal.2018}. All of these techniques learn the input-output relationship with a deep convolutional neural network (CNN) using training data. It is not straightforward to include the well-characterized system physics; hence, the CNN is required to learn both the physical measurement formation and the phase reconstruction process. This task requires training tens to hundreds of thousands of parameters and an immense number of training examples. Here, we introduce a new data-driven approach to optimizing the source pattern design for coded-illumination phase retrieval by directly including both the system physics and the non-linear nature of a reconstruction algorithm in the learning process. Our approach \textit{unrolls} the iterations of a generic non-linear reconstruction algorithm to construct an \textit{unrolled network}~\cite{gregor2010learning,Hammernik:2017ku,Diamond:2017wa,sun2016deep,Anonymous:XgcAjXu7,Bostan:2018cr}. Similar to CNNs, our \textit{unrolled network} consists of several layers (one for each iteration); however, in our case each layer consists of well-specified operations to incorporate measurement formation and sparse regularization, instead of standard operations such as generic convolutions. The key aspects of our approach are: \begin{itemize} \item incorporation of the system physics and reconstruction non-linearities in the illumination design process. \item efficient parameterization of the unrolled network. \item incorporation of practical constraints. \item reduced number of training examples required. \end{itemize} We deploy our data-driven approach to learn improved coded-illumination patterns for phase reconstruction. Each layer of the unrolled network is parameterized by only a few variables (LED brightness values), enabling efficient use of training data ($<100$ simulated training examples). We compare the QPI performance of our learned designs to previous work and demonstrate that our designs generalize well to the experimental setting with biological samples.

\section{Quantitative Phase Imaging} \label{sec:qpi} qDPC recovers a sample's complex transmittance function from several coded-illumination measurements. The phase recovery optimization algorithm aims to minimize the Euclidean norm of the error between the measurements and the expected measurements formed with the current phase estimate. Using a gradient-based procedure, the phase estimate is iteratively updated until convergence. For a partially coherent source, the phase can be recovered with resolution up to twice the coherent diffraction limit. In this section, we describe the measurement formation process and phase recovery optimization. \subsection{System Modelling} \label{ssec:model} A thin sample's transmission function can be approximated as a 2D complex function, $o(\mathbf{r})=e^{j\phi(\mathbf{r}) - \mu(\mathbf{r})}$, characterized by its absorption, $\mu(\mathbf{r})$, and phase, $\phi(\mathbf{r}) = \frac{2\pi}{\lambda}\Delta n(\mathbf{r}) d(\mathbf{r})$, where $\mathbf{r}$ are 2D spatial coordinates, $\lambda$ is the wavelength of the illumination, $d(\mathbf{r})$ is the physical thickness of the sample, and $\Delta n(\mathbf{r})$ is the change in refractive index from the background.
Intensity measurements, $y(\mathbf{r})$, of the sample are a non-linear function of $o(\mathbf{r})$, mathematically described by \begin{align} y(\mathbf{r}) = |p(\mathbf{r})*(s(\mathbf{r}) \odot o(\mathbf{r}))|^2, \label{eq:eq01} \end{align} \noindent where $|\cdot|^2$ denotes squared absolute value, $*$ denotes convolution, $\odot$ denotes elementwise multiplication, $s(\mathbf{r})$ is the illumination's complex field at the sample plane and $p(\mathbf{r})$ is the point spread function (PSF) of the microscope. The illumination from each LED is approximated as a tilted plane wave, $s(\mathbf{r}) = e^{\frac{j}{\lambda}\mathbf{u}_{pos}^T\mathbf{r}}$, with tilt angle $\mathbf{u}_{pos}$ determined by the physical position of the LED relative to the microscope~\cite{Zheng:2013gq}. Because the measured image in Eq.~\ref{eq:eq01} is non-linear with respect to the sample's transmission function, recovering phase generally requires non-convex optimization. However, biological samples in closely index-matched fluid have a small \textit{scatter-scatter} term. This means that a \textit{weak object approximation} can be made, linearizing the measurement formation model so that phase recovery requires only a linear deconvolution of the measurements with their respective weak object transfer functions (WOTFs)~\cite{Hamilton:1984,Mehta:09,Streibl:85,Claus:15,Tian:2015fs}. Further, unstained biological samples are predominantly phase objects since they are only weakly absorbing (\textit{i.e.} $\mu(\mathbf{r})$ is small). With these approximations, we can express each intensity measurement as a linear system with contributions from the background and phase contrast. In Fourier space, \begin{align}\label{eq:weakObjectApprox} \widehat{y}(\mathbf{u}) \approx B\delta(\mathbf{u}) + i h(\mathbf{u})\widehat\phi(\mathbf{u}), \end{align} \noindent where $\widehat{\cdot}$ denotes Fourier transform, $\mathbf{u}$ are 2D spatial-frequency coordinates, $B$ is the measurement's background energy concentrated at DC, and $h(\mathbf{u})$ is the phase WOTF. The phase WOTFs are a function of the illumination source and the pupil distribution of the microscope~\cite{Tian:2015fs}. For a single LED the WOTF is: \begin{align} h^{(single)}(\mathbf{u}) &= i(\widehat{p}(\mathbf{u}) \star \widehat{s}(\mathbf{u}) - \widehat{s}(\mathbf{u}) \star \widehat{p}(\mathbf{u})), \end{align} \noindent where $\star$ is the correlation operator, defined as $(x_1 \star x_2)(\mathbf{r}) = \int x_1(\tilde{\mathbf{r}}) x_2^{*}(\tilde{\mathbf{r}}-\mathbf{r}) d\tilde{\mathbf{r}}$ for $\mathbf{r}$ in the domain of $\widehat{p}$ and $\widehat{s}$. In~\cite{Tian:2015fs}, multiple LEDs are turned on simultaneously to increase signal-to-noise (SNR) and improve phase contrast. Because the fields generated by each LED's illumination are spatially incoherent with each other, the measurement from multiple LEDs will simply be the weighted sum of each LED's individual measurement, where the weights correspond to the LEDs' brightness values. The phase WOTF for illumination by multiple LEDs will also be the weighted sum of the single-LED phase WOTFs. Mathematically, \begin{align} \widehat{y}^{(multi)}(\mathbf{u}) &= \sum_{w \in \mathcal{W}} c_w \widehat{y}_w^{(single)}(\mathbf{u}) \\ h^{(multi)}(\mathbf{u}) &= \sum_{w \in \mathcal{W}} c_w h_w^{(single)}(\mathbf{u}), \end{align} \noindent where $\mathcal{W}$ is the set of LEDs turned on and $c_w \geq 0$ are the LEDs' brightness values.
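To make the model concrete, the single-LED WOTF and the incoherent multi-LED superposition can be sketched numerically. In the following minimal Python/NumPy illustration, the wavelength and numerical aperture match the values quoted later for our system, while the grid size, pixel pitch, LED spatial frequencies, and brightness weights are placeholder assumptions:

\begin{verbatim}
import numpy as np

n, dx = 256, 0.1                 # grid size and pixel pitch (um), placeholders
lam, NA = 0.532, 0.25            # wavelength (um) and objective NA
fx = np.fft.fftfreq(n, d=dx)     # spatial-frequency axis (1/um)
FX, FY = np.meshgrid(fx, fx)
pupil = (np.sqrt(FX**2 + FY**2) <= NA / lam).astype(complex)  # circular pupil

def corr(a, b):
    # circular cross-correlation (a star b), one FFT-based discretization
    return np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))

def wotf_single(u_led):
    """Phase WOTF of one LED, modeled as a tilted plane wave whose
    source distribution is a delta at spatial frequency u_led."""
    S = np.zeros((n, n), dtype=complex)
    iy = np.argmin(np.abs(fx - u_led[1]))
    ix = np.argmin(np.abs(fx - u_led[0]))
    S[iy, ix] = 1.0
    return 1j * (corr(pupil, S) - corr(S, pupil))

# incoherent multi-LED WOTF: brightness-weighted sum over the "on" LEDs
leds = [(0.10, 0.0), (0.20, 0.05)]   # hypothetical LED spatial frequencies
c = [0.5, 0.5]                       # non-negative brightness weights c_w
h_multi = sum(cw * wotf_single(u) for cw, u in zip(c, leds))
\end{verbatim}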
Following common practice~\cite{Bostan.etal2013}, we discretize the 2D spatial distributions and format them as vectors (bold lower case) (\textit{e.g.} $\widehat{\mathbf{h}}$ represents the transfer function's 2D spatial-frequency distribution and $\boldsymbol\phi$ represents the 2D spatial phase distribution). The measurements\footnote{In practice, $\mathbf{y}$ typically refers to the so-called flattened image, where the background energy in~\eqref{eq:weakObjectApprox} is removed via background subtraction.} are described in Fourier space as $\widehat{\mathbf{y}} = \mathbf{A}\widehat{\boldsymbol{\phi}}$ with system function $\mathbf{A} = diag(\widehat{\mathbf{h}})$. Based on this model, we define $\mathbf{Y} \in \mathds{R}^{M \times S}$ as the matrix whose columns are the Fourier transforms, $\widehat{\mathbf{y}}$, of the $S$ single-LED measurements. Then, $\mathbf{C} \in \mathds{R}^{S \times K}$ is defined as the $S$ single-LED weights for each of $K$ measurements, and $\mathbf{c}_k \in \mathds{R}^{S}$ is the $k^{th}$ column of $\mathbf{C}$. The product $\widehat{\mathbf{y}}_k = \mathbf{Y}\mathbf{c}_k$ simulates the $k^{th}$ multiple-LED measurement. Similarly, we define $\mathbf{H} \in \mathds{R}^{N \times S}$ as the matrix whose columns are the $S$ single-LED phase WOTFs, $\widehat{\mathbf{h}}$, such that the product $\mathbf{A}_k = diag(\mathbf{H}\mathbf{c}_k)$ gives the corresponding multiple-LED phase WOTF for the $k^{th}$ measurement. \subsection{Phase Recovery} \label{ssec:inverseProblem} Phase recovery using the forward model in Sec.~\ref{ssec:model} can be formulated as a regularized linear inverse problem, \begin{align} \widehat{\boldsymbol\phi}^{\star} &= \mathcal{R}((\widehat{\mathbf{y}}_k)_{k=1}^{K}, \mathcal{P}(\cdot)) \\ &= \arg \underset{\widehat{\boldsymbol\phi}}{ \min} \,\, \frac{1}{2K}\sum_{k=1}^{K} \|\widehat{\mathbf{y}}_k - \mathbf{A}_k\widehat{\boldsymbol{\phi}}\|_2^2 + \mathcal{P}(\widehat{\boldsymbol{\phi}}), \label{eq:invproblem} \end{align} \noindent where $\boldsymbol{\phi}^{\star}$ is the recovered phase, $K$ is the number of measurements acquired, $\mathbf{\widehat{y}}_k$ is the Fourier transform of the $k^{th}$ measurement and $\mathcal{P}(\cdot)$ is a user-chosen regularizer. We solve this optimization problem efficiently using the accelerated proximal gradient descent (APGD) algorithm by iteratively applying an acceleration update, a gradient update, and a proximal update~\cite{Parikh:2013vb,Beck:2009gh}. The algorithm is detailed in Alg.~\ref{alg:APGD}, where $\alpha$ is the gradient step size, $N$ is the number of iterations, $\mathbf{s}$ and $\mathbf{z}$ are intermediate variables, $\mu^{(n)}$ is the acceleration parameter derived by the recursion $\mu^{(n)}=\frac{1 + \sqrt{1 + 4(\mu^{(n-1)})^2}}{2}$~\cite{Beck:2009gh}, and $\text{prox}_{\mathcal{P}}(\cdot)$ is the proximal operator corresponding to the user-chosen regularizer $\mathcal{P}(\cdot)$~\cite{Parikh:2013vb}. \begin{figure*}[tbh] \centering \includegraphics[width=12cm]{figures_ieee-02.png} \caption{Unrolled physics-based network: Feed-forward schematic for the unrolled accelerated proximal gradient descent (APGD) network for $N$ iterations (dark blue box). The network takes intensity measurements, $\mathbf{y}_k$, parameterized by the coded-illumination design, $\mathbf{c}_k$, as input and outputs the reconstructed phase, $\phi^{\star}$. Finally, the output is compared with the ground truth phase, $\phi'$, using a user-chosen loss function, $\mathcal{F}_l$ (pink box).
The inset into a single ($n^{th}$) iteration (light blue box) shows each iteration's three steps: acceleration update, gradient update, and proximal update.} \label{fig:fig3} \end{figure*} \begin{algorithm}[tbh] \caption{Accelerated Proximal Gradient Descent (APGD) for Phase Recovery} \label{alg:APGD} \begin{algorithmic}[1] \Procedure{APGD}{$(\mathbf{\widehat{y}}_k)^{K}_{k=1},N,\alpha, \mathcal{P}(\cdot)$} \State $\widehat{\boldsymbol{\phi}}^{(0)} = \mathbf{0}, \widehat{\boldsymbol{\phi}}^{(-1)} = \mathbf{0}$ \For{$n \in \{1 ... N\}$} \State $\mathbf{s}^{(n)} \gets \mu^{(n)}\widehat{\boldsymbol{\phi}}^{(n-1)} + (1-\mu^{(n)})\widehat{\boldsymbol{\phi}}^{(n-2)}$ \State $\mathbf{z}^{(n)} \gets \mathbf{s}^{(n)} - \frac{\alpha}{K}\sum_{k=1}^K(-\mathbf{A}_k^{H})(\mathbf{\widehat{y}}_k - \mathbf{A}_k\mathbf{s}^{(n)})$ \State $\widehat{\boldsymbol{\phi}}^{(n)} \gets \text{prox}_{\alpha \mathcal{P}}(\mathbf{z}^{(n)})$ \EndFor \State \textbf{return} $\widehat{\boldsymbol{\phi}}^{(N)}$ \EndProcedure \end{algorithmic} \end{algorithm} \section{Physics-Based Learned Design} \label{sec:learning} Given the phase recovery algorithm in Sec.~\ref{ssec:inverseProblem}, we now describe our main contribution of learning the coded-illumination designs for a given reconstruction algorithm and training set. \subsection{Unrolled Physics-based Network} \label{ssec:unrolledNetwork} Traditionally, DNNs contain many layers of weighted linear mixtures and non-linear activation functions~\cite{LeCun:2015dt}. Here, we consider specific linear functions which capture the system physics of measurement formation and specific non-linear activation functions which promote sparsity. Starting from Alg.~\ref{alg:APGD}, we treat each iteration as a layer such that when unrolled they form a network of $N$ layers, denoted $\mathcal{R}$ (Fig.~\ref{fig:fig3}). Each layer of $\mathcal{R}$ contains a module for each of the iterative algorithm's updates (\textit{i.e.} an acceleration module, a gradient module (incorporates system physics), and a proximal module (incorporates sparsity)). The regularization and step size parameters specified for Alg.~\ref{alg:APGD} are fixed. The network's inputs comprise $(\widehat{\mathbf{y}}_k)_{k=1}^{K}$ and the network's output is $\widehat{\boldsymbol{\phi}}^{(N)}$. The design parameters of the network, which will be learned, govern the relative brightness of the LEDs and are incorporated in the measurement formation and the system WOTFs. \subsection{Learning Objective} \label{ssec:learn_objective} Our learning objective is to minimize the phase reconstruction error of the training data over the space of possible LED configurations, subject to constraints that enforce physical feasibility and eliminate degenerate and trivial solutions: \begin{align} \mathbf{C}^{\star} = & \arg \underset{\mathbf{C}}{ \min} \ \mathcal{F}(\mathbf{C}) \label{eq:cost} \\ \text{s.t.} \ \ & \mathbf{c}_k \geq 0 \,\, & \text{(non-negativity)} \label{eq:positive} \\ & \|\mathbf{c}_k\|_1 = 1 \,\, & \text{(scale)}\label{eq:power} \\ & \mathbf{m}_k \odot \mathbf{c}_k = \mathbf{0} \,\, & \text{(geometric)} \label{eq:phasecon} \\ & \forall k \in \{1 \hdots K\} \nonumber, \end{align} \noindent where, \begin{align} \mathcal{F}(C) &= \frac{1}{L} \sum_{l=1}^{L} \mathcal{F}_l(\mathbf{C}) \\ &= \frac{1}{2L} \sum_{l=1}^{L} \|\mathcal{R}((\mathbf{Y}_l\mathbf{c}_k)_{k=1}^{K}) - \widehat{\boldsymbol{\phi}}'_l\|_2^2. 
\end{align} \noindent Here, $(\mathbf{Y}_l, \boldsymbol{\phi}'_l)_{l=1}^{L}$ are $L$ training pairs for which $\mathbf{Y}_l$ is a matrix of the Fourier transform of single-LED measurements for the $l^{th}$ sample with optical phase, $\boldsymbol{\phi}'_l$. $\odot$ is the elementwise product operator, $\mathbf{m}_k$ is a geometric constraint mask for the $k^{th}$ measurement, and $\mathbf{0}$ is the null vector. The non-negativity constraint (Eq.~\ref{eq:positive}) prevents non-physical solutions by enforcing the brightness of each LED to be greater than or equal to zero. This is enforced by projecting the parameters onto the set of non-negative real numbers. The scale constraint (Eq.~\ref{eq:power}) enforces that each coded-illumination design must have weights with sum equal to 1, in order to eliminate arbitrary scalings of the same design. This is enforced by scaling the parameters for each measurement such that their sum is one. The geometric constraint (Eq.~\ref{eq:phasecon}) enforces that the coded-illumination designs do not use conjugate-symmetric LED pairs to illuminate the sample within the same measurement, since these would also result in degenerate solutions (\textit{e.g.} two symmetric LEDs produce opposite phase contrast measurements that would cancel each other out). To prevent this, we force the source patterns for each measurement to reside within only one of the major semi-circle sets (\textit{e.g.} top, bottom, left, right). This constraint is enforced by setting the LED brightnesses outside the allowed semi-circle to zero. We solve Eq.~\ref{eq:cost} iteratively via accelerated projected gradient descent (Alg.~\ref{alg:CLA}). At each iteration, the coded-illumination design for each measurement is updated with the analytical gradient, projected onto the constraints (denoted by $\mathcal{B}(\cdot)$) and updated again with a contribution from the previous iteration (weighted by $\beta^{(t)}$). $\mathcal{B}(\cdot)$ enforces the constraints in the following order: non-negativity, geometric, and scale. \begin{algorithm}[H] \caption{Physics-based Learned Design Algorithm }\label{alg:CLA} \begin{algorithmic}[1] \Procedure{PBLD}{$(\mathbf{Y}_l,\boldsymbol{\phi}'_l)^{L}_{l=0},\mathbf{C},\gamma, T$} \For{$t \in \{0 ... T\}$} \Comment{Gradient descent loop} \For{$l \in \{1 ... L\}$} \Comment{Training data loop} \State $r_l \gets \mathcal{R}((\mathbf{Y}_l\mathbf{c}_k)_{k=1}^{K}) - \widehat{\boldsymbol{\phi}}'_l$ \State $\mathbf{G}_l \gets \textit{BackPropagation}(r_l)$ \EndFor \State $\mathbf{C}^{(t+1)} \gets \mathcal{B}(\mathbf{C}^{(t)} - \frac{\gamma}{L} \sum_{l=1}^{L} \mathbf{G}_l)$ \State $\mathbf{C}^{(t+1)} \gets \beta^{(t)}\mathbf{C}^{(t+1)} + (1-\beta^{(t)})\mathbf{C}^{(t)}$ \EndFor \State \textbf{return} $\mathbf{C}^{(T)}$ \EndProcedure \end{algorithmic} \end{algorithm} \subsection{Gradient Update} \label{ssec:gradient} The gradient of the loss function (Eq.~\ref{eq:cost}) with respect to the design parameters has contributions at every layer of the unrolled network through both the measurement terms, $\widehat{\mathbf{y}}_k$, and the phase WOTF terms, $\mathbf{A}_k$, for each measurement $k \in \{1...K\}$. Here, we outline our algorithm for updating the coded-illumination design weights via a two-step procedure: backpropagating the error from layer-to-layer and computing each layer's gradient contribution. 
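Before doing so, we note that the projection $\mathcal{B}(\cdot)$ used in Alg.~\ref{alg:CLA} admits a compact implementation. A minimal Python/NumPy sketch, where the array layout (rows index the $S$ LEDs, columns the $K$ measurements) and the mask convention ($M_{sk}=1$ for disallowed LEDs) are illustrative assumptions:

\begin{verbatim}
import numpy as np

def project_design(C, M):
    """Project design weights C (S LEDs x K measurements) onto the
    constraint set, in the stated order: non-negativity, geometric,
    then scale."""
    C = np.maximum(C, 0.0)          # non-negativity: c_k >= 0
    C = np.where(M > 0, 0.0, C)     # geometric: zero out masked LEDs
    norm = np.maximum(C.sum(axis=0, keepdims=True), 1e-12)
    return C / norm                 # scale: ||c_k||_1 = 1 (non-negative entries)
\end{verbatim}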
For simplicity, we outline the gradient update for only a single training example, $l$, as the gradient for all the training examples is the sum of their individual gradients. Unlike pure gradient descent, where each iteration's estimate depends only on the previous one, accelerated methods like Alg.~\ref{alg:APGD} linearly combine the previous two iterations' estimates to improve convergence. As a consequence, backpropagating error from layer-to-layer requires contributions from two successive layers. Specifically, we compute the error at all $N$ layers with the recursive relation, \begin{align} \frac{\partial \mathcal{F}_l}{\partial \widehat{\boldsymbol{\phi}}^{(n-2)}} &= \frac{\partial \mathbf{s}^{(n)}}{\partial \widehat{\boldsymbol{\phi}}^{(n-2)}}\frac{\partial \mathbf{z}^{(n)}}{\partial \mathbf{s}^{(n)}}\frac{\partial \widehat{\boldsymbol{\phi}}^{(n)}}{\partial \mathbf{z}^{(n)}}\frac{\partial \mathcal{F}_l}{\partial \widehat{\boldsymbol{\phi}}^{(n)}} \nonumber \\ &+ \frac{\partial \mathbf{s}^{(n-1)}}{\partial \widehat{\boldsymbol{\phi}}^{(n-2)}}\frac{\partial \mathbf{z}^{(n-1)}}{\partial \mathbf{s}^{(n-1)}}\frac{\partial \widehat{\boldsymbol{\phi}}^{(n-1)}}{\partial \mathbf{z}^{(n-1)}}\frac{\partial \mathcal{F}_l}{\partial \widehat{\boldsymbol{\phi}}^{(n-1)}}, \end{align} \noindent where each partial gradient constitutes a single step in Alg.~\ref{alg:APGD} (fully derived in the supplement). With the backpropagated error at each layer, we compute the gradient of the loss function with respect to $\mathbf{C}$ as \begin{align} \nabla_{\mathbf{C}} \mathcal{F}_l(\mathbf{C}) &= \sum_{n=0}^{N} \mathbf{Q}^{(n)}, \end{align} \noindent for which \begin{align} \mathbf{Q}^{(n)} = \frac{\alpha}{K}\sum_{k=1}^{K} (\frac{\partial \mathbf{A}^H_{k} \widehat{\mathbf{y}}_k}{\partial \mathbf{C}} - \frac{\partial \mathbf{A}^H_{k}\mathbf{A}_{k}}{\partial \mathbf{C}}\mathbf{s}^{(n-1)})\frac{\partial \widehat{\boldsymbol{\phi}}^{(n)}}{\partial \mathbf{z}^{(n)}}\frac{\partial \mathcal{F}_l}{\partial \widehat{\boldsymbol{\phi}}^{(n)}}. \end{align} \noindent Here, $ \left( \partial \widehat{\boldsymbol{\phi}}^{(n)} \middle/ \partial \mathbf{z}^{(n)} \right)$ backpropagates the error through the proximal operator, and the other partials with respect to $\mathbf{C}$ relate the backpropagated error at each layer to the changes in $\mathbf{C}$. Derivations of these partial gradients are included in the supplementary material. In Alg.~\ref{alg:BP}, we unite these two steps to form a recursive algorithm which efficiently computes the analytic gradient for a single training example. Alternatively, general purpose auto-differentiation included in learning libraries (\textit{e.g.} PyTorch, TensorFlow) can be used to perform the gradient updates. \begin{algorithm}[H] \caption{Gradient Update for Single Training Example}\label{alg:BP} \begin{algorithmic}[1] \Procedure{Backpropagation(BP)}{$\mathbf{r}^{(N)}$} \For{$n \in \{N \ldots 0\}$}
\State $\mathbf{b}^{(n)} \gets \frac{\partial \widehat{\boldsymbol{\phi}}}{\partial \mathbf{z}} \mathbf{r}^{(n)}$ \State $\mathbf{v}^{(n)} \gets (I-\frac{\alpha}{K}\sum_{k=1}^{K}\mathbf{A}^{H}_{k} \mathbf{A}_{k})\mathbf{b}^{(n)}$ \State $\mathbf{r}^{(n-1)} \gets \mu^{(n)} \mathbf{v}^{(n)} + (1-\mu^{(n+1)}) \mathbf{v}^{(n+1)}$ \State $\mathbf{Q}^{(n)} \gets \frac{\alpha}{K}\sum_{k=1}^{K} (\frac{\partial \mathbf{A}^H_{k} \mathbf{\widehat{y}}_k}{\partial \mathbf{C}} - \frac{\partial \mathbf{A}^H_{k}\mathbf{A}_{k}}{\partial \mathbf{C}}\mathbf{s}^{(n-1)})\mathbf{b}^{(n)}$ \EndFor \State \textbf{return} $\sum_{n=0}^{N} \mathbf{Q}^{(n)}$ \EndProcedure \end{algorithmic} \end{algorithm}

\begin{figure}[h!] \centering \includegraphics[width=8.89cm]{figures_ieee-03.png} \caption{Coded-illumination designs and their corresponding phase weak object transfer functions (WOTFs) for: (a) traditional qDPC and (b) learned designs for the case where 4, 3, or 2 measurements are allowed for each phase reconstruction. The illumination source patterns are in the upper left corners, with gray semi-circles denoting where the LEDs are constrained to be ``off''.} \label{fig:fig4} \end{figure}

\section{Results} \label{sec:results} Our proposed method learns the coded-illumination design for a given reconstruction and training set (Fig.~\ref{fig:fig4}b), yet up to this point we have not detailed specific parameters of our phase reconstruction. In our results, we set the parameters of our reconstruction algorithm (Alg.~\ref{alg:APGD}) to have a fixed CPU time by fixing the number of iterations at $N = 40$ and the step size to $\alpha = 0.2$ (see supplement for parameter analysis). In addition, the regularization term, $\mathcal{P}(\boldsymbol{\phi})$, has been defined generally (\textit{e.g.} $\ell_1$ penalty, total variation (TV) penalty~\cite{osher2005iterative}, BM3D~\cite{dabov2007image}). Here, we choose to enforce TV-based sparsity: \begin{align} \mathcal{P}(\boldsymbol{\phi}) &= \tau \sum_{i}\|D_i \boldsymbol{\phi}\|_1, \label{eq:reg} \end{align} \noindent where $\tau = 10^{-3}$ is set to trade off the TV cost with the data consistency cost and $D_i$ is the first-order difference operator along the $i^{th}$ image dimension. We efficiently implement the proximal operator of Eq.~\ref{eq:reg} in closed form via the parallel proximal method~\cite{Bostan:2018cr,combettes2011proximal,Kamilov:2016gc} (details in supplement). \subsection{Learning} \begin{figure*}[t] \centering \includegraphics[width=18cm]{figures_ieee-05.png} \caption{Phase reconstruction results using simulated measurements with different coded-illumination designs. We compare results from: traditional qDPC (half-circles), annular illumination, condition number optimization, A-optimal design, and our proposed physics-based learned designs. We show results for the cases of (a) four, (b) three, and (c) two measurements allowed for each phase reconstruction. Absolute error maps are shown below each reconstruction.} \label{fig:fig5} \end{figure*} To train our coded-illumination design parameters using Alg.~\ref{alg:CLA}, we generate a dataset of 100 examples (90 for training, 10 for testing). Each example contains ground truth phase from a small region ($95 \times 95$ pixels) of a larger image and $69$ simulated single-LED measurements (using Eq.~\ref{eq:eq01}). The LEDs are uniformly spaced within a circle such that each single-LED intensity measurement is a brightfield measurement.
The physical system parameters used to generate the phase WOTFs and simulate the training data measurements are $\lambda = 0.532\mu m$, $\text{pixel pitch} = 6.5\mu m$, $\text{magnification} = 20\times$, and $NA_{obj} = 0.25$. To train, we use the $\ell_2$ cost between the reconstructed phase and the ground truth phase as our loss function and approximate the full gradient of Eq.~\ref{eq:cost} with a batch gradient from random batches of $10\%$ of the training pairs at each iteration. We use a learning rate of $\gamma = 1\text{e}^{-2}$ (training and testing convergence curves are provided in the supplement). The training is performed on a multi-core CPU (Dual-socket Intel Xeon\textregistered$\ $ E5 Processor @ 2.1GHz with $64$ cores and $504$GB of RAM) and batch updates are computed in parallel, with each training example on a single core. Each batch update takes $\sim$6 seconds. 200 updates are performed, resulting in a total training time of 20 minutes. \begin{table*}[t!] \centering \caption{PSNR Results: Average and standard deviation PSNR (dB) of phase reconstructions from the simulated testing examples using different illumination schemes and different numbers of measurements. Format: Mean $\pm$ Std.} \begin{tabular}{ |c||c|c|c|c|c|c| } \hline \# Meas. & Random & Traditional & Annular & Cond. Number & A-optimal & Physics-based \\ & Illumination & qDPC & Illumination & Optimization & Design & Learned Design \\ \hline 4 & 12.30 $\pm$ 2.12 & 15.67 $\pm$ 2.19 & 20.40 $\pm$ 2.09 & 20.37 $\pm$ 2.41 & 17.94 $\pm$ 2.54 & \bf{28.46} $\pm$ 2.50 \\ 3 & 12.33 $\pm$ 2.12 & 15.28 $\pm$ 2.18 & 20.44 $\pm$ 2.26 & 19.33 $\pm$ 2.03 & 18.05 $\pm$ 2.59 & \bf{28.04} $\pm$ 2.59 \\ 2 & 12.25 $\pm$ 2.12 & 14.87 $\pm$ 2.23 & 20.21 $\pm$ 2.24 & 17.19 $\pm$ 2.28 & 18.08 $\pm$ 2.64 & \bf{23.73} $\pm$ 2.18 \\ \hline \end{tabular} \label{table:dBimprovement} \end{table*} \begin{figure*}[tbh] \centering \includegraphics[width=18.19cm]{figures_ieee-07.png} \caption{USAF phase target reconstructions: Experimental comparison between phase results with (a) Fourier Ptychography (FP) using 69 images, (b) traditional qDPC and (c) learned designs, for the case of 4, 3, and 2 measurements. Error maps show the difference from the FP reconstruction. (d) Cross-sections show that phase from our learned designs (long-dashed red) is closer to that of FP (solid blue) than traditional qDPC (short-dashed green).} \label{fig:fig6} \end{figure*} \begin{figure*}[t!] \centering \includegraphics[width=18.19cm]{figures_ieee-08.png} \caption{3T3 mouse fibroblast cells reconstructions: Experimental comparison between phase results with (a) Fourier Ptychography (FP) using 69 measurements, (b) traditional qDPC and (c) learned designs, for the case of 4, 3, and 2 measurements. Error maps show the difference from the FP reconstruction. (d) Cross-sections show that phase from our learned designs (long-dashed red) is closer to that of FP (solid blue) than traditional qDPC (short-dashed green).} \label{fig:fig7} \end{figure*} Traditional qDPC uses 4 measurements to adequately cover frequency space. Our learned designs are more efficient and may require fewer measurements; hence, we show learned designs for the cases of 4, 3 and 2 measurements. The designs and their corresponding phase WOTFs are shown in Fig.~\ref{fig:fig4}. Comparing our learned designs with previous work, Fig.~\ref{fig:fig5} shows the phase reconstruction for a single simulated test example using 4, 3 and 2 measurements.
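Before examining these comparisons, we note that the training schedule described above amounts to a short projected batch-gradient loop. In the sketch below, \texttt{example\_grad} and \texttt{project} are hypothetical stand-ins for the per-example gradient (via Alg.~\ref{alg:BP} or autograd) and for a projection enforcing the physical constraints on the illumination weights; only the batch fraction, learning rate, and update count are taken from the text.

\begin{verbatim}
import random
import torch

def example_grad(C, example):
    # Hypothetical stand-in: per-example gradient of the loss w.r.t. C
    # (a dummy quadratic loss here; Alg. BP or autograd in practice).
    return 2.0 * (C - example)

def project(C):
    # Hypothetical constraint projection, e.g. clip weights to [0, 1].
    return C.clamp(0.0, 1.0)

def train_designs(C, train_set, n_updates=200, gamma=1e-2, frac=0.1):
    # Batch SGD: random batches of 10% of the training pairs,
    # learning rate 1e-2, 200 updates (as in the text).
    batch_size = max(1, int(frac * len(train_set)))
    for _ in range(n_updates):
        batch = random.sample(train_set, batch_size)
        grad = sum(example_grad(C, ex) for ex in batch) / batch_size
        C = project(C - gamma * grad)
    return C

C0 = torch.rand(2, 6)                      # 2 coded measurements, 6 LEDs
train_set = [torch.rand(2, 6) for _ in range(90)]
C_learned = train_designs(C0, train_set)
\end{verbatim}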
The ground truth phase is compared with the phase reconstructed using traditional qDPC designs~\cite{Tian:2015fs}, annular illumination designs~\cite{Tian:2015fs}, condition number optimized designs~\cite{marechal2009}, A-optimal designs~\cite{WONG1984295}, and our physics-based learned designs. Table~\ref{table:dBimprovement} reports the peak SNR (PSNR) statistics (mean and standard deviation) for the phase reconstructions from $\mathcal{R}$ evaluated on our set of testing examples. Our learned designs give significant improvement, recovering both the high and low frequencies more accurately. \subsection{Experimental Validation} \label{ssec:expvalid} To demonstrate that our learned designs generalize well in the experimental setting, we implement our method on an LED array microscope. A commercial Nikon TE300 microscope is equipped with a custom quasi-Dome~\cite{Phillips:17} illumination system (581 programmable RGB LEDs: $\lambda_R = 625$ nm, $\lambda_G = 532$ nm, $\lambda_B = 450$ nm) and a PCO.edge 5.5 monochrome camera ($2560\times2160$, $6.5\mu m$ pixel pitch, 16 bit). We image two samples: a USAF phase target (Benchmark Technologies) and fixed 3T3 mouse fibroblast cells (prepared as detailed in the supplement). In order to validate our method, we compare results against phase experimentally estimated via pupil-corrected Fourier Ptychography (FP)~\cite{Zheng:2013gq,Ou:2014ea,Tian:2014wv} with equivalent resolution. FP is expected to have good accuracy, since it uses significantly more measurements (69 single-LED measurements) and a non-linear reconstruction process. Using the USAF target, we compare phase reconstructions from FP with traditional qDPC and our learned design measurements (Fig.~\ref{fig:fig6}). Traditional qDPC reconstructions consistently under-estimate the phase values. However, phase reconstructions using our learned design measurements are similar to phase estimated with FP. As the number of measurements is reduced, the quality of the reconstruction using traditional qDPC degrades, while the reconstruction using the learned design remains accurate. To demonstrate our method with biological samples, we repeated the experiments with the fixed 3T3 mouse fibroblast cells. Figure~\ref{fig:fig7} shows that phase reconstructions from traditional qDPC again consistently under-estimate phase values, while phase reconstructions using learned design measurements match the phase estimated with FP well. \section{Discussion} \label{sec:discussion} Our proposed experimental design method efficiently learns the coded-illumination designs by incorporating both the system physics and the non-linear nature of iterative phase recovery. Learned designs with only 2 measurements can efficiently reconstruct phase with quality similar to Fourier Ptychography ($69$ measurements) and better than qDPC ($4$ measurements), giving a 2$\times$ improvement in temporal resolution over traditional qDPC while requiring far fewer measurements than FP. Additionally, we demonstrate (Table~\ref{table:dBimprovement}) that the performance of our designs on a set of testing examples is superior to previously-proposed coded-illumination designs. Visually, our learned design reconstructions closely resemble the ground truth phase, with both low-frequency and high-frequency information accurately recovered. By parameterizing our learning problem with only a few weights per measurement, our method can efficiently learn an experimental design with a small simulated dataset.
This enables fast training and reduces computing requirements significantly. Obtaining large experimental datasets for training may be difficult in microscopy, so it is important that our method can be trained on simulated data only. Experimental results in Sec.~\ref{ssec:expvalid} show similar quality to simulated results, with both using the designs learned from simulated data only. Finally, phase recovery with the learned designs' measurements is trained with a given number of reconstruction iterations (\textit{e.g.} determined by a CPU budget). This makes our method particularly well-suited for real-time processing. qDPC can also be implemented in real-time, but limiting the compute time for the inverse problem (by restricting the number of iterations) limits convergence and causes low-frequency artifacts. Our learned designs incorporate the number of iterations (and hence processing time) into the design process, producing high-quality phase reconstructions within a reasonable compute time. \section{Outlook} \label{sec:outlook} Our method is general to the problem of experimental design. Similar to QPI, many fields (\textit{e.g.} magnetic resonance imaging (MRI), fluorescence microscopy) use physics-based non-linear iterative reconstruction techniques to achieve state-of-the-art performance. With the correct model parameterization and physically-relevant constraints, our method could be applied to learn optimal designs for these applications (\textit{e.g.} undersampling patterns for compressed sensing MRI~\cite{lustig2007sparse}, PSFs for fluorescence microscopy~\cite{Pavani2995}). Requirements for applying our method are simple: the reconstruction algorithm's updates must be differentiable (\textit{e.g.} gradient update and proximal update) so that analytic gradients of the learning loss can be computed with respect to the design parameters. Of practical importance, the proximal operator of the regularizer should be chosen so that it has a closed form. While this is not a strict requirement, if the operator itself requires an additional iterative optimization, error will have to be backpropagated through an excessive number of iterations. Here, we choose to penalize anisotropic TV, whose proximal operator can be approximated in closed form~\cite{Kamilov:2016gc}. Further, including an acceleration update improves the convergence of gradient-based reconstructions. As a result, the \textit{unrolled network} can be constructed using fewer layers than its unaccelerated counterpart. This will reduce both computation time and training requirements.
Kellman is additionally supported by the National Science Foundation's Graduate Research Fellowship under Grant No. DGE 1106400. Emrah Bostan's research is supported by the Swiss National Science Foundation (SNSF) under grant P2ELP2 172278. \section*{Acknowledgment} The authors would like to thank Professor Michael Lustig for his guidance and advice. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \setcounter{equation}{0} \label{sec:1} A part of the mathematical modelling of alpine glaciers and polar ice sheets and ice caps is the description of their longitudinal profiles, which is based on non-linear differential equations. The microphysics and the rheology of ice play a crucial role in determining the shape of glaciers. A good model for the response of glacier ice to stress is Glen's law relating the strain rate tensor $\dot{\epsilon}_{ij}$ to stresses in the ice (Glen~1955) \begin{equation}\label{Glen} {\dot{\epsilon}}_{ij} = {\cal A} \, \sigma_\text{eff}^{n-1} s_{ij} \ee where $s_{ij}$ is the deviatoric stress tensor, \begin{equation} \sigma_\text{eff} =\sqrt{ \frac{1}{2} \mbox{Tr}\left( \hat{s}^2 \right)} \ee is the effective stress, and ${\cal A}$ is a (temperature-dependent) constant (Paterson 1994; Cuffey and Paterson 2010; Hooke 2005; Greve and Blatter 2009). The value $n=3$ is adopted for glacier flow in most theoretical and modelling work. Let $x$ be a coordinate along the glacier bed in the direction of the ice flow. Assuming incompressible and isotropic ice, steady state, a flat bed (this means that the bed is a plane which, in general, has non-zero slope), and Glen's law, the longitudinal glacier profile (or local ice thickness) $h(x)$ obeys the Vialov ordinary differential equation (Vialov 1958; Paterson 1994; Cuffey and Paterson 2010; Hooke 2005; Greve and Blatter 2009) \begin{equation}\label{Vialov} x \, c(x) = \frac{2{\cal A} }{n+2} \left( \rho g h \left| \frac{dh}{dx} \right| \right)^{n} h^2 \,, \ee where $c(x)$ is the accumulation rate of ice, that is, the flux density of ice volume in the $z$-direction perpendicular to $x$, with the dimensions of a velocity. The absolute value in Eq.~(\ref{Vialov}) is introduced when one looks for solutions in the finite interval $x \in \left[0,L \right] $. If $x=0$ and $x=L $ denote the glacier summit and terminus, respectively, then the local surface slope $dh/dx$ is negative and its absolute value must be taken. If instead $x=0$ denotes the glacier terminus while $x=L$ is the summit, it is $dh/dx>0$. For ice caps and ice sheets, once a solution for the longitudinal profile of half of a glacier is found in $\left[ 0, L \right]$, it is extended to the interval $\left[ -L, L \right]$ (or to $\left[0, 2L \right]$, respectively) by reflection about the vertical line $x=0$ (or $x=L$, respectively) passing through the summit. A consequence of this procedure is that the surface profile $h(x)$ of an ice cap or ice sheet is not differentiable at the summit, where the left and right derivatives of $h$ are finite and opposite and, usually, also at the terminus where the slope $dh/dx$ and the basal stress $\tau_b= -\rho g h \, dh/dx$ diverge (here $\rho$ is the ice density and $g$ is the acceleration of gravity). This is, however, common procedure in the literature (Paterson 1994; Cuffey and Paterson 2010; Hooke 2005; Greve and Blatter 2009). The non-linearity of the Vialov equation~(\ref{Vialov}) is a direct consequence of the non-linearity of Glen's law~(\ref{Glen}). 
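To see how the quadrature below arises, note that Eq.~(\ref{Vialov}) is separable: taking the $n$-th root of both sides and isolating $h$ gives \begin{equation} h^{\frac{n+2}{n}} \left| \frac{dh}{dx} \right| = \frac{1}{\rho g} \left[ \frac{(n+2) \, x \, c(x)}{2 {\cal A}} \right]^{1/n} \,, \ee and an integration in $x$, using $\int h^{(n+2)/n} \, dh = \frac{n}{2(n+1)} \, h^{2(n+1)/n}$, leads directly to the formal solution that follows.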
The formal solution of the Vialov equation~(\ref{Vialov}) can be expressed as the integral \begin{equation} \label{h(x)} h(x)=\left\{ \mp \frac{ 2\left(n+1\right)}{ n\rho g} \left( \frac{n+2}{2{\cal A}} \right)^{1/n} \int dx \left[ x \, c(x) \right]^{1/n} \right\}^{\frac{n}{2(n+1)} } \equiv A \left[ V(x) \right]^{\frac{n}{2(n+1)} } \,, \ee where the upper sign applies if the summit is at $x=0$ and $dh/dx<0$, while the lower sign applies if $x=L$ is the summit, \begin{equation} A \equiv \left[ \frac{ 2\left(n+1\right)}{n\rho g} \left(\frac{n+2}{2{\cal A}} \right)^{1/n}\right]^{\frac{n}{2(n+1)}} \,, \ee and the integral \begin{equation} V(x) \equiv \int dx \left[ x \, c(x) \right]^{1/n} \, \label{V} \ee is determined up to an arbitrary integration constant $D$. A function $c(x)$ modelling the accumulation rate of ice must be prescribed. Even for simple choices of $c(x)$, the integral~(\ref{V}) can rarely be computed in terms of elementary functions, which has led to stagnation in the literature on this subject, but a few analytic solutions of the Vialov equation are known\footnote{Other analytic profiles (Nye 1951a; Nye 1951b; Faraoni and Vokey 2015) follow from the rather unrealistic assumption of perfectly plastic ice used in the early days of theoretical modelling and when the deformation of the ice is irrelevant.} (B\"o{\dh}vardsson 1955; Vialov 1958; Weertman 1961; Paterson 1972; Bueler 2003; Bueler et al. 2005). Solutions in $\left[ 0,L \right]$ with $x=0$ and $x=L$ denoting the position of the glacier summit and terminus, respectively, include: \begin{itemize} \item $c=$~const., which yields the {\em Vialov profile} ((Vialov 1958), see also (Paterson~1994; Cuffey and Paterson 2010; Hooke 2005; Greve and Blatter 2009)) \begin{eqnarray} h(x) &=& H \left[ 1- \left( \frac{x}{L} \right)^{\frac{n+1}{n} } \right]^{\frac{n}{2(n+1)}} \,,\label{Vialovprofile1}\\ &&\nonumber\\ H &=& \left[\left( \frac{2}{\rho g} \right)^n \frac{c(n+2)}{2{\cal A}} \right]^{\frac{1}{2(n+1)} } \sqrt{L} \,.\label{Vialovprofile2} \end{eqnarray} \item If $c(x)$ is chosen as a step function, the {\em Weertman-Paterson profile} is obtained by matching two Vialov solutions (Weertman 1961; Paterson 1972). \end{itemize} Assuming instead $x=0$ at the glacier terminus and $x=L$ at the summit, one obtains the following solutions. \begin{itemize} \item The model $c(x)=c_m x^m$ is used in the literature, with the value $m=0$ believed to be appropriate for ice caps and $m=2$ for alpine glaciers. In scaling theory, according to the Buckingham Pi theorem (Buckingham 1914), the mass balance rate is supposed to scale as $l^m$, while the characteristic thickness $h$ of a glacier or ice cap is assumed to scale with its characteristic length $l$ as $h\sim l^s$. The exponents $s=\frac{m+n+1}{2(n+1)} $ and $s=\frac{m+1}{n+2} $ are predicted by scaling theory for ice caps and for alpine glaciers, respectively (Bahr et al. 2015). The power law \begin{equation}\label{powerlaw} h(x) =h_0 \, x^{\frac{n+m+1}{2(n+1)}} \ee (with $h_0$ a constant) solves the Vialov equation~(\ref{Vialov}) with $c(x)=c_m x^m$. The exponent $\frac{n+m+1}{2(n+1)}$ was deduced in scaling theory (Bahr et al. 2015; Faraoni 2016). This solution includes the case $c=$~const.
and also the profile $h(x)=h_0 \, \sqrt{x}$, which reproduces the parabolic profile first obtained by Nye under the simplifying assumption of perfectly plastic ice (Nye 1951a; Nye 1951b), an assumption very different from the more realistic Glen law~(\ref{Glen}); this profile is obtained as the limit $n\rightarrow +\infty$ of Eq.~(\ref{powerlaw}). As shown in Sec.~\ref{sec:2.1}, the profile $h(x)=h_0 \, \sqrt{x}$ is not restricted to the unrealistic assumption of perfectly plastic ice but is also a solution of the Vialov equation following from the realistic Glen law. This fact is significant because this profile is currently used in a number of applications ({\em e.g.}, (Benn and Hulton 2010; Ng et al. 2010)) and is appropriate when the internal deformation of the ice is irrelevant. \end{itemize} In Sec.~\ref{sec:2} a simple, yet broad, model of the function $c(x)$ describing the ice accumulation rate is postulated and the Chebysev theorem on the integration of differential binomials is applied to the search for exact solutions of the Vialov equation in terms of elementary functions, in the form~(\ref{h(x)}). Infinitely many new solutions in terms of elementary functions can be obtained, some of which are reported in appendix~\ref{appendix:A}, while known solutions are re-derived. Sec.~\ref{sec:3} contains a discussion of these solutions and of the method employed. \section{Chebysev theorem and Vialov equation} \label{sec:2} Let us return to Eq.~(\ref{Vialov}) and let us search for solutions of the form (\ref{h(x)}) when the integral~(\ref{V}) can be expressed in terms of elementary functions. A wide class of reasonable models for the accumulation rate function is the choice \begin{equation}\label{c(x)} c(x)= a+b \, x^r \,, \ee where $a,b$, and $r$ are constants and where $r$ is chosen to be rational for reasons explained below. Special cases include: \begin{enumerate} \item $a=0$, $b>0$, $r>0$. In this case $x=0$ is the location of the glacier terminus corresponding to zero accumulation rate, while the glacier summit is at $x=L$, where the accumulation rate of ice assumes its largest value $c_\text{max}$. Then it follows that $ b=c_\text{max}/L^r $. The choice $r=2$ is appropriate to describe alpine glaciers (Bahr et al. 2015). Although it is not done in the literature, a better model would assume $a<0$ to describe ablation at the glacier terminus. \item $c(x)=a-|b|x^r$ with $a>0, b<0$, and $r>0$. In this case it is appropriate to locate the summit at $x=0$ and the terminus at $x=L$, with $c(x)$ a decreasing function of $x$ in $\left[0,L \right]$ vanishing at $x=L$ and with the constants assuming the values $a=c_\text{max}$, $b=-c_\text{max}/L^r$. An alternative choice consists of having $c(L)<0$ in order to describe ablation at the terminus. \end{enumerate} \noindent With the choice $r\in \mathbb{Q}$, the integral $V(x)$ falls into the category \begin{equation} \label{I} I\left(x; p, q, r \right)= \int dx \, x^p \left( a+b \, x^r \right)^q \,, \;\;\;\;\;\;\;\; p,q,r \in \mathbb{Q} \,, r\neq 0 \ee (if $r=0$ the integral is trivial). In practice, for glacier flow it is $p=q=1/n=1/3 \in \mathbb{Q}$.
The integral~(\ref{I}) can be expressed in terms of a hypergeometric function, \begin{eqnarray} &&\int dx \, x^{1/3} \left( a+b \,x^r\right)^{1/3} =\frac{ 3x^{4/3} }{ 4\left( 4+r\right) \left( a+b \, x^r\right)^{2/3} } \nonumber\\ &&\nonumber\\ & & \cdot \left[ ar\left( \frac{bx^r}{a} +1 \right)^{2/3} {}_2F_1\left( \frac{2}{3}, \frac{4}{3r}; 1+\frac{4}{3r}; \frac{-bx^r}{a} \right)+ 4\left( a+ b \, x^r\right) \right] \,, \end{eqnarray} but this representation is of little use for practical purposes, for example when, in statistics, one needs a simple model of the longitudinal glacier profile $h(x)$ to fit a large number of glaciers. For numerical studies of a single glacier, it is convenient to numerically integrate Eq.~(\ref{Vialov}), but for other problems a simple analytic formula for $h(x)$ is required. A necessary and sufficient condition for the integral~(\ref{I}) to be expressed in terms of elementary functions is the\\\\ \noindent {\bf Chebysev theorem} (Chebysev 1853; Marchisotto and Zakeri 1994):\\\\ {\em the integral~(\ref{I}) admits a representation in terms of elementary functions if and only if at least one of} $$ \frac{p+1}{r} \,, \;\;\;\; q \,, \;\;\;\; \frac{p+1}{r} + q $$ {\em is an integer.} Since $n=3$, it is $p=q=1/3 \in \mathbb{Q} $, and $r$ in Eq.~(\ref{c(x)}) is chosen to be rational (in the glaciological literature $r$ is usually the integer~0 or~2). Atmospheric models which could provide hints to fix the function $c(x)$ are not currently coded to have the ability to discriminate between a real number $r$ and a rational approximation of it. One then has \begin{eqnarray} \frac{p+1}{r} &=& \frac{n+1}{nr}=\frac{4}{3r} \,,\\ &&\nonumber\\ \frac{p+1}{r} + q &=& \frac{n+1+r}{nr}=\frac{4+r}{3r}\,. \end{eqnarray} Given the freedom in the choice of the parameters $a,b$, and $r$ of the model~(\ref{c(x)}), one requires that $r\in \mathbb{Q}$ and searches for values of $r$ such that $(p+1)/r$ or $q+(p+1)/r$ are integers. \begin{itemize} \item By imposing that $\frac{4}{3r} \equiv m_0 \in \mathbb{Z}$, one obtains $r \equiv \frac{4}{3m_0}$, ~$m_0=1,2,3, \, ... \,, +\infty $. This choice produces the sequence of values of $r$ \begin{eqnarray} && \frac{4}{3} \simeq 1.33, \;\; \frac{2}{3} \simeq 0.667,\;\; \frac{4}{9} \simeq 0.444,\;\; \frac{1}{3} \simeq 0.333,\;\; \frac{4}{15} \simeq 0.267,\;\; \frac{2}{9} \simeq 0.222 , \nonumber\\ &&\nonumber\\ && \frac{4}{21} \simeq 0.190, \;\; \frac{1}{6} \simeq 0.167,\;\; \frac{4}{27} \simeq 0.148,\;\; \frac{2}{15} \simeq 0.133,\;\; ... \;\; \,, 0 \,. \label{m1} \end{eqnarray} \item Imposing $ \frac{4+r}{3r} \equiv m_0 \in \mathbb{Z}$ gives $r=\frac{4}{3m_0-1}$, ~$m_0=1,2,3, \, ... \, , +\infty$ and the sequence of values of $r$ \begin{eqnarray} && 2 ,\;\; \frac{4}{5} = 0.8,\;\; \frac{1}{2} = 0.5,\;\; \frac{4}{11} \simeq 0.364,\;\; \frac{2}{7} \simeq 0.286,\;\; \frac{4}{17} \simeq 0.235,\;\; \frac{1}{5} = 0.2,\nonumber\\ &&\nonumber\\ && \frac{4}{23} \simeq 0.174,\;\; \frac{2}{13} \simeq 0.154,\;\; ... \;\; \,, 0 \,.\label{m2} \end{eqnarray} \end{itemize} Not all these values of $r$ are appropriate from the glaciological point of view to describe the ice accumulation rate~(\ref{c(x)}). However, the values~0 and~2 universally used in the literature, and many values of potential interest lying between these two extremes, are reproduced.
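These admissibility checks are easily automated; the following minimal sympy sketch (our own illustrative code, with $p=q=1/3$ as above) reproduces the criterion and the first few values of each sequence.

\begin{verbatim}
import sympy as sp

n = 3                                  # Glen exponent, so p = q = 1/3
p = q = sp.Rational(1, n)

def elementary(r):
    # Chebysev criterion: int x^p (a + b x^r)^q dx is elementary
    # iff (p+1)/r, q, or (p+1)/r + q is an integer.
    r = sp.nsimplify(r)
    return any(bool(sp.sympify(v).is_integer)
               for v in ((p + 1) / r, q, (p + 1) / r + q))

# The two families of admissible exponents quoted above:
print([sp.Rational(4, 3 * m0) for m0 in range(1, 5)])      # 4/3, 2/3, 4/9, 1/3
print([sp.Rational(4, 3 * m0 - 1) for m0 in range(1, 5)])  # 2, 4/5, 1/2, 4/11
print(elementary(2), elementary(sp.Rational(1, 5)))        # True True
print(elementary(sp.Rational(3, 5)))                       # False
\end{verbatim}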
The value $r=0$ is usually suggested for ice caps and ice sheets while the value $r=2$ is suggested for alpine glaciers (B\"o{\dh}vardsson 1955; Vialov 1958; Weertman 1961; Paterson 1972; Bueler 2003; Bueler et al. 2005). The representations of the integral $V(x)$ in terms of elementary functions covered by the Chebysev theorem include the following special cases. \subsection{Choice $c(x)=$~constant} \label{sec:2.1} The choice $c(x)=$~constant can be obtained by setting $b=0$ (in which case $r$ drops out of the discussion) or when $b\neq 0$ with $r=0$ (in which case the Chebysev theorem as stated does not apply). In both cases the integration is trivial and, in the first case, one obtains \begin{equation} V(x)=\frac{na^{1/n}}{n+1} \, x^{1+1/n}+ D \,, \ee where $D$ is an integration constant, and the longitudinal glacier profile \begin{equation} h(x)=A \left[ V(x) \right]^{ \frac{n}{2(n+1)} } = A \left[ \frac{na^{1/n} }{n+1} \, x^{1+1/n}+D \right]^{ \frac{n}{2(n+1)} } \,, \ee which reproduces the Vialov profile~(\ref{Vialovprofile1}), (\ref{Vialovprofile2}) always associated with the choice $c=$~const. in the glaciological literature (Paterson~1994; Cuffey and Paterson 2010; Hooke 2005; Greve and Blatter 2009). Setting $D=0$ yields the parabolic profile $h(x)=h_0 \, \sqrt{x}$ irrespective of the value of $n$. \subsection{Choice $c(x)=b \, x^r$} In this case, with $a=0$ and $b>0$, and without choosing a specific value of $r$, one obtains the integral\footnote{Strictly speaking, in this degenerate case there is no need to assume that $r\in \mathbb{Q}$ and use the Chebysev theorem. In fact, these ingredients were not assumed in the recent work (Bahr et al. 2015) deriving this power law solution.} \begin{equation} V(x)=\frac{3b^{1/3}}{4+r} \, x^{(4+r)/3} +D \,. \ee The corresponding longitudinal glacier profile is \begin{equation} h(x) = A \left[ \frac{3b^{1/3}}{4+r} \, x^{(4+r)/3} +D \right]^{3/8} \,. \ee Setting $r=0, D\neq 0$ reproduces $c=$~const. and gives the Vialov profile~(\ref{Vialovprofile1}) and~(\ref{Vialovprofile2}). Setting instead the integration constant $D$ to zero yields $h(x)=h_0 \, x^{(4+r)/8} $. As already noted, the value $r=2$ is appropriate to describe alpine glaciers (Bahr et al. 2015; Faraoni 2016) and gives $h(x) \propto x^{3/4}$. Setting instead $r=0$, which is appropriate for ice caps, yields the well known profile $h(x) \propto \sqrt{x}$ (Paterson~1994; Bahr et al. 2015). The choice $c(x)=b\, x^r$, usually written as $c(x)=c_m x^m$, reproduces the power law solution $h(x) \propto x^{ \frac{n+m+1}{2(n+1)}} $ of (Bahr et al. 2015; Faraoni 2016). In fact, setting $r=m$ and $n=3$ yields $h \sim x^{(4+r)/8}$. As $x$ becomes large the highest order term in $V(x)$ dominates and the profile approaches the pure power law $h(x) \sim x^{(4+r)/8}$, which reduces to $h(x) \sim \sqrt{x}$ for $r=0$. Other examples of plausible models of the accumulation rate $c(x)=a+b \, x^r$ leading to representations of the integral~(\ref{V}) in terms of elementary functions and to relatively simple exact profiles are reported in appendix~\ref{appendix:A}. \section{Discussion} \label{sec:3} Analytic expressions describing longitudinal glacier profiles are needed in several problems of glaciology ({\em e.g.}, (Thorp 1991; Ng et al. 2010; Benn and Hulton 2010)).
However, under the realistic and well tested assumption that glacier ice deforms according to Glen's constitutive relationship~(\ref{Glen}), the Vialov ordinary differential equation~(\ref{Vialov}) ruling these longitudinal glacier profiles is non-linear and obtaining analytic solutions in closed form in terms of elementary functions is difficult. Only a few exact solutions are known in the literature (B\"o{\dh}vardsson 1955; Vialov 1958; Weertman 1961; Paterson 1972; Bueler 2003; Bueler et al. 2005). By assuming a simple, yet general, model for the accumulation rate of ice appearing in Eq.~(\ref{Vialov}), the Chebysev theorem provides a necessary and sufficient condition for the integral~(\ref{V}) expressing a formal solution of the Vialov equation to be represented in terms of elementary functions. The solutions provided by the Chebysev theorem include the known solutions, with the exception of the B\"o{\dh}vardsson, Vialov, and Bueler profiles (B\"o{\dh}vardsson 1955; Vialov 1958; Bueler 2003; Bueler et al. 2005). The initial condition $h=0$ of the Vialov equation~(\ref{Vialov}) is imposed at the glacier terminus ($x=0$ or $x=L$, depending on the geometry adopted, which also determines the sign of $dh/dx$), which is a singular point of the equation corresponding to divergent surface slope $dh/dx$. In this situation, the usual uniqueness theorems for ordinary differential equations ({\em e.g.}, (Brauer and Nohel 1986)) do not hold and this is the reason why one can find multiple solutions of the Vialov equation, and why the solutions obtained by using the Chebysev theorem do not always generate the well known Vialov (1958) profile, and do not reproduce other profiles (B\"o{\dh}vardsson 1955; Bueler 2003; Bueler et al. 2005). An infinite number of solutions in terms of elementary functions is guaranteed by the Chebysev theorem, corresponding to rational values of the constant $r$, and they can be found easily with computer algebra. The current models for the ice accumulation rate $c(x)$ are very unsophisticated ($c=$~const. being perhaps the most popular choice) and the 3-parameter choice $c(x)=a+b \, x^r$, $r\in \mathbb{Q}$, allows freedom to extend these models. Of course, other functional choices may be appropriate to model the accumulation rate $c(x)$ and, at the same time, provide analytic profiles $h(x)$. However, exact solutions of the Vialov equation~(\ref{Vialov}) in simple form have been hard to find and sometimes they correspond to unintuitive choices of $c(x)$ which make the corresponding analytic profile $h(x)$ more of a toy model achieving one desired physical property than a realistic description of the shape of alpine glaciers and ice caps. This is the case of the Bueler profile (Bueler 2003; Bueler et al. 2005; Greve and Blatter 2009), which exhibits a finite basal stress $\tau_b=-\rho g h \, dh/dx$ at the glacier terminus, contrary to the Vialov and other profiles. The old Chebysev (1853) theorem extends the scope of existing analyses. The values of the parameters $a,b$, and $r$ in Eq.~(\ref{c(x)}) appropriate to particular geographic locations have to be determined by data-fitting and are expected to be different for different situations (alpine glaciers, polar ice caps, cirque glaciers, {\em etc.}). \section*{Acknowledgments} This work is supported by Bishop's University.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Transport properties of hot nuclear matter at various densities, such as the shear viscosity, can be extracted from model analyses of heavy-ion collisions. In relativistic heavy-ion collisions, detailed studies have shown that the produced Quark-Gluon-Plasma (QGP) has a very small shear viscosity and behaves almost like an ideal fluid~\cite{Gyu05,Shu05}. Specifically, it has been found~\cite{Son11,Sch11} that the specific shear viscosity, i.e., the ratio of the shear viscosity to the entropy density, of the QGP is only a few times the KSS lower bound of $\hbar/4\pi$ derived from the AdS/CFT correspondence~\cite{Kov05}. Also, the specific shear viscosity shows a minimum value around the critical temperature of the hadron-quark phase transition~\cite{Cse06,Lac07}. It is argued in Ref.~\cite{Cse06} that the existence of a minimum in the specific shear viscosity is due to the difficulty of momentum transport in the QGP as its temperature approaches the critical temperature. The shear viscosity of nucleonic matter is important for understanding various phenomena, such as signatures of the possible liquid-gas phase transition, in heavy-ion collisions at intermediate energies~\cite{Dan84,Li11}. Because of the short-range repulsive and intermediate-range attractive nature of the nucleon-nucleon interaction, hot nucleonic matter is expected to undergo a liquid-gas phase transition, see, e.g., Refs.~\cite{Fis67,Sie83}. Imprints of such a phase transition on experimental observables, such as the rank distribution of fragments~\cite{Ma}, are expected in the multifragmentation process of heavy-ion collisions at intermediate energies~\cite{Ber83}. However, while extensive studies have been made to investigate both experimentally and theoretically the signatures and nature of the liquid-gas phase transition using various approaches and observables over the last thirty years, see, e.g., Refs.~\cite{Gross01,Das05,Bor08} for recent reviews, many interesting issues remain to be addressed. In fact, over the last decade much work has been done to better understand the mechanism and nature of the liquid-gas phase transition in isospin asymmetric nucleonic matter, see, e.g., Refs.~\cite{Ibook,WCI}. In particular, what is the role of the isospin degree of freedom in nuclear thermodynamics? What is the order of the liquid-gas phase transition in neutron-rich nucleonic matter? What are the effects of the density dependence of nuclear symmetry energy on the boundaries of mechanical and chemical instabilities as well as the liquid-gas coexistence line in neutron-rich matter? Answers to these questions are important for understanding both astrophysical observations of supernova explosions and terrestrial experiments done at rare isotope beam facilities. However, many current answers are still under debate. For instance, most models predict that while the liquid-gas phase transition is of first order in isospin symmetric matter, it becomes a continuous transition in isospin asymmetric matter examined at a constant proton fraction. On the other hand, it has been shown that the liquid-gas phase transition is actually still of first order even in isospin asymmetric matter except at the two ending points because of the existence of a spinodal region~\cite{Duc06}. Similar to its behavior at the hadron-quark phase transition, the specific shear viscosity of nucleonic matter also shows a minimum value in the vicinity of its liquid-gas phase transition~\cite{Pal10,Zho12a,Zho12b}.
Also, it was speculated that the behavior of the specific shear viscosity at the phase transition may depend on the order of the transition~\cite{Che07}. Thus, further studies on the specific shear viscosity near the liquid-gas phase transition may help shed new light on the nature of this transition in neutron-rich matter. Indeed, it has been shown that the boundaries of both mechanical and chemical instabilities responsible for the phase separation~\cite{LiKo,Li01} and the phase coexistence line~\cite{Xu07b,Xu08} in asymmetric nucleonic matter depend on the value of the nuclear symmetry energy $E_{\rm sym}(\rho)$ at subsaturation densities. It is, however, not known how the $E_{\rm sym}(\rho)$ affects the specific shear viscosity of nucleonic matter at the liquid-gas phase transition. Since the nuclear matter can undergo the liquid-gas phase transition at different temperatures and densities in intermediate-energy heavy-ion collisions, it is of interest to know how the specific shear viscosity would behave under these various conditions. For example, is the valley shape structure in the temperature and density dependence of the specific shear viscosity of nucleonic matter the result of the liquid-gas phase transition? In the present study, we use a relaxation time approach to study the specific shear viscosity of neutron-rich nucleonic matter near the liquid-gas phase transition based on a consistent Gibbs construction. We find that the behavior of the specific shear viscosity at the liquid-gas phase transition depends on its order, and that the phase transition can cause a valley structure in the temperature or density dependence of the specific shear viscosity, although it does not necessarily require the existence of a phase transition. \begin{figure}[h] \centerline{\includegraphics[scale=0.8]{Esym.EPS}} \caption{(Color online) Density dependence of a stiffer ($x=0$) and a softer ($x=1$) symmetry energy from the MDI interaction.} \label{Esym} \end{figure} For the nucleon-nucleon interaction, we use the isospin- and momentum-dependent interaction proposed in Refs.~\cite{Das03,Che05} (hereafter 'MDI') with its parameters fitted to the binding energy $-16$ MeV and incompressibility $212$ MeV of normal nuclear matter at the saturation density $\rho_0=0.16$ fm$^{-3}$. For the density dependence of the symmetry energy, the parameter $x$ is used to change its slope parameter $L=3\rho_0 \left(dE_{\rm sym}/d\rho\right)_{\rho=\rho_0}$ but keeping its value at saturation density fixed to $E_{\rm sym}(\rho_0)=31.6$ MeV. In particular, a stiffer and a softer symmetry energy with $L\approx60$ MeV and $L\approx15$ MeV are obtained with $x=0$ and $x=1$, respectively, as shown in Fig.~\ref{Esym}, corresponding to current uncertainties in the density dependence of the symmetry energy at subsaturation densities~\cite{Che12}. \begin{figure}[h] \centerline{\includegraphics[scale=0.8]{LGPT.EPS}} \caption{(Color online) Chemical potential isobar as a function of isospin asymmetry for the stiffer ($x=0$) (a) and the softer ($x=1$) symmetry energies (b) and binodal surface (c) for both values of $x$ at temperature $T=10$ MeV.} \label{LGPT} \end{figure} To construct the liquid-gas phase transition region in the nuclear phase diagram, we use the Gibbs conditions, i.e., the liquid and gas phases can coexist when they have the same chemical potential ($\mu_l^{(n,p)}=\mu_g^{(n,p)}$), pressure ($P_l=P_g$), and temperature ($T_l=T_g$). 
Specifically, we plot the chemical potential isobar as a function of the isospin asymmetry $\delta$, defined as $\delta=(\rho_n-\rho_p)/(\rho_n+\rho_p)$, for neutrons as well as protons at a certain temperature, and draw a rectangle within the proton and neutron chemical potential isobars. The two ends of the rectangle then correspond to the two coexisting phases, as shown in panels (a) and (b) of Fig.~\ref{LGPT} for the stiffer ($x=0$) and the softer ($x=1$) symmetry energies, respectively, with the left end point having a smaller isospin asymmetry and a larger density corresponding to a liquid phase (L) and the right end point having a larger isospin asymmetry and a smaller density corresponding to a gas phase (G). This procedure is repeated until the pressure is too low to allow a rectangle to be drawn or too high for the hot nucleonic matter to remain in the chemical instability region, i.e., the chemical potential of neutrons (protons) increases (decreases) monotonically with increasing isospin asymmetry. The coexisting phases at different values of pressure form the binodal surface shown in panel (c) of Fig.~\ref{LGPT}. The right and the left side of the binodal surface correspond to the gas and the liquid phase, respectively, with the mixed phase inside the binodal surface. The binodal surface thus provides all the information needed to study the properties of the mixed phase, i.e., the densities and isospin asymmetries of the two coexisting phases as well as their volume fractions. For more details on the liquid-gas phase transition in nucleonic matter, we refer the readers to Refs.~\cite{Mul95,Xu07b,Xu08}. In the phase coexistence region with the liquid phase occupying a volume fraction $\lambda$, the average number and entropy densities can be expressed as \begin{eqnarray} \rho &=& \lambda\rho_l + (1-\lambda)\rho_g,\\ s &=& \lambda s_l + (1-\lambda)s_g, \end{eqnarray} where $\rho_{l(g)}$ and $s_{l(g)}$ are the number and entropy densities of the liquid (gas) phase, respectively. For the calculation of the shear viscosity, we consider a stationary flow field in the z direction, i.e., $u_z=f(x)$ in the nucleonic matter where $f(x)$ is an arbitrary function of the coordinate $x$, and use a similar framework as in Ref.~\cite{Xu11}. For a single phase of gas or liquid, the shear force on the particles in a flow layer of a unit area in the $y-z$ plane is equal to the net $z-$component of momentum transported per sec in the $x$ direction, i.e., the thermal average of the product of the flux $\rho_\tau v_x$ in the $x$ direction and the momentum transfer $p_z-mu_z$ in the $z$ direction~\cite{Xu11,Hua87} \begin{equation} F_i = \sum_\tau \langle(p_z-mu_z)\rho_\tau v_x\rangle_i, \end{equation} with $\tau=n$ for neutrons and $p$ for protons, $i=l$ for the liquid phase and $g$ for the gas phase, and $m$ being the nucleon mass. The shear viscosity $\eta_{l(g)}$ is then determined by \begin{equation} F_{l(g)} = - \eta_{l(g)} \partial u_z/\partial x \end{equation} for either the liquid phase or the gas phase. We note that the shear viscosity is independent of the flow gradient if $\partial u_z/\partial x$ is sufficiently small. For a mixed phase of liquid and gas, the matter can be viewed either as gas bubbles in a liquid or liquid droplets in a gas. The matter above and below any flow layer are then either both liquids or both gas unless the flow layer is tangent to the surface of a gas bubble or a liquid droplet, which would have the liquid and the gas on the opposite sides of the flow layer. 
Since the chance for the latter to happen is infinitesimally small for an infinitely large system with liquid droplets and gas bubbles randomly distributed as assumed in the present work, the fraction of the area for particle transport across a flow layer in the liquid is thus $\lambda$ and that in the gas is $1-\lambda$, leading to an average shear force on a unit area of flow layer in the mixed phase given by the sum of the contributions from individual phases, i.e., \begin{equation} F = \lambda F_l + (1-\lambda) F_g = -\eta \partial u_z/\partial x. \end{equation} The average shear viscosity of the mixed phase can then be expressed in terms of those in the liquid or the gas phase as \begin{equation} \eta = \lambda \eta_l + (1-\lambda) \eta_g. \end{equation} Because the density is uniform in each phase, $\eta_l$ and $\eta_g$ can be separately calculated using the relaxation time approach as in Ref.~\cite{Xu11} based on free nucleon-nucleon cross sections~\cite{Cha90} modified by the in-medium nucleon masses~\cite{Li05}. \begin{figure}[h] \centerline{\includegraphics[scale=0.9]{etasT.EPS}} \caption{(Color online) Temperature dependence of the average reduced number density (first row), the shear viscosity (second row), and the specific shear viscosity (third row) at the fixed pressure of $P=0.1$ MeV/fm$^3$ for isospin symmetric matter ($\delta=0$) (left column), neutron-rich matter ($\delta=0.5$) (middle column), and pure neutron matter ($\delta=1$) (right column) with the stiffer symmetry energy $x=0$. Solid lines are results including the liquid-gas phase transition with 'L', 'M', and 'G' representing the liquid phase, the mixed phase, and the gas phase, respectively. Dashed lines are results obtained by assuming the liquid-gas phase transition does not happen inside the binodal surface.} \label{etasT} \end{figure} Figure~\ref{etasT} displays the temperature dependence of the average reduced number density, the shear viscosity, and the specific shear viscosity, obtained with the stiffer symmetry energy $x=0$, when the nucleonic matter is heated at the fixed pressure of $P=0.1$ MeV/fm$^3$. As the temperature increases, the hot nucleonic matter undergoes a phase transition from the liquid phase at lower temperatures to the gas phase at higher temperatures if it has an isospin asymmetry $\delta=0$ or $\delta=0.5$ but has no phase transition if the isospin asymmetry is $\delta=1$. The liquid-gas phase transition is of first order in symmetric nucleonic matter ($\delta=0$) as shown in Fig.~26 of Ref.~\cite{Xu08} by the sudden jump in the entropy per nucleon from the liquid phase to the gas phase as well as the discontinuity of the specific heat at the critical temperature. This leads to sudden changes in all the thermodynamical quantities and in the specific shear viscosity, while the latter evolves smoothly during the phase transition when it changes to a second-order one in neutron-rich matter ($\delta=0.5$), confirming the expectation of Ref.~\cite{Che07}. Also, the liquid phase has a higher density and a lower temperature than the gas phase as shown in the first row of Fig.~\ref{etasT}, leading to a stronger Pauli blocking effect in the liquid phase than in the gas phase. As a result, the liquid phase generally has a larger shear viscosity than the gas phase. For each phase, there are competing density and temperature effects on the evolution of the shear viscosity.
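Before turning to these competing effects, we note that the mixed-phase averaging above reduces to the lever rule plus linear mixing; the short sketch below uses purely illustrative input values, whereas in our calculation $\rho_{l,g}$, $\eta_{l,g}$, and $s_{l,g}$ come from the MDI binodal construction and the relaxation time approach.

\begin{verbatim}
def mixed_phase(rho, rho_l, rho_g, eta_l, eta_g, s_l, s_g):
    # Lever rule for the liquid volume fraction, then linear mixing
    # of the shear viscosity and entropy density.
    lam = (rho - rho_g) / (rho_l - rho_g)
    assert 0.0 <= lam <= 1.0, "rho must lie inside the coexistence region"
    eta = lam * eta_l + (1.0 - lam) * eta_g
    s = lam * s_l + (1.0 - lam) * s_g
    return lam, eta, eta / s      # eta/s is the specific shear viscosity

# Illustrative numbers only (densities in fm^-3, others arbitrary):
print(mixed_phase(rho=0.05, rho_l=0.09, rho_g=0.01,
                  eta_l=8.0, eta_g=2.0, s_l=0.3, s_g=0.6))
\end{verbatim}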
As discussed in Ref.~\cite{Xu11}, an increase in temperature results in more frequent nucleon-nucleon scatterings and weaker Pauli blocking effects, thus reducing the shear viscosity. On the other hand, the nucleon-nucleon scattering cross section decreases with increasing center-of-mass energy of two colliding nucleons as shown in Fig.~2 of Ref.~\cite{Xu11}, which makes the shear viscosity increase with increasing temperature, especially at very low densities. At higher densities, although the stronger Pauli blocking effect increases the shear viscosity, the smaller in-medium nucleon mass leads to a larger flux between flow layers and a larger relative velocity between two colliding nucleons, thus reducing the shear viscosity. Due to the combination of these effects together with the behavior of the entropy density with respect to temperature and density, the specific shear viscosity decreases in the liquid phase but increases in the gas phase with increasing temperature. The minimum of the specific shear viscosity is exactly located at the critical temperature when a first-order phase transition happens, while it is located at the boundary of the gas phase if the phase transition is of second order. Even for a pure neutron matter without a liquid-gas phase transition, the specific shear viscosity still shows a valley shape in its temperature dependence as a result of the competing effects discussed above. \begin{figure}[h] \centerline{\includegraphics[scale=0.9]{etasP.EPS}} \caption{(Color online) Density dependence of the pressure (first row), the shear viscosity (second row), and the specific shear viscosity (third row) at temperature $T=10$ MeV for isospin symmetric matter ($\delta=0$) (left column), neutron-rich matter ($\delta=0.5$) (middle column), and pure neutron matter ($\delta=1$) (right column) with the stiffer symmetry energy $x=0$. Solid lines are results including the liquid-gas phase transition with 'L', 'M', and 'G' representing the liquid phase, the mixed phase, and the gas phase, respectively. Dashed lines are results obtained by assuming the liquid-gas phase transition does not happen inside the binodal surface.} \label{etasP} \end{figure} The liquid-gas phase transition can also happen if the hot nucleonic matter is compressed at a fixed temperature. The density dependence of the pressure, the shear viscosity, and the specific shear viscosity in this case is shown in Fig.~\ref{etasP}, again using the stiffer symmetry energy $x=0$. For the symmetric nuclear matter ($\delta=0$) that has a first-order liquid-gas phase transition, the pressure remains constant when it is compressed from the low-density gas phase to the high-density liquid phase. As the nucleonic matter becomes neutron-rich ($\delta=0.5$) with the phase transition changing to a second-order one, the pressure continues to increase with increasing density in the mixed phase. For the pure neutron matter, it again does not show a liquid-gas phase transition when it is compressed at a fixed temperature. It is shown in the second row of Fig.~\ref{etasP} that the occurrence of the mixed phase in the hot nucleonic matter when it is compressed at a constant temperature generally increases the value of the shear viscosity compared with the case in which the liquid-gas phase transition is assumed not to happen.
Also, the specific shear viscosity always has a minimum value, and we found that this is because the shear viscosity and the entropy density increase at different rates with increasing density, even for the case of pure neutron matter without a liquid-gas phase transition. Interestingly, the density at which the specific shear viscosity has a minimum value is again located at the boundary of the gas phase for $\delta=0$ and $\delta=0.5$, independent of the phase transition order. \begin{figure}[h] \centerline{\includegraphics[scale=0.9]{etasPT.EPS}} \caption{(Color online) Temperature (upper panels) and density (lower panels) dependence of the specific shear viscosity at different fixed pressures and temperatures, respectively, in isospin symmetric matter ($\delta=0$), neutron-rich matter ($\delta=0.5$), and pure neutron matter ($\delta=1$) for both symmetry energies $x=0$ and $x=1$. } \label{etasPT} \end{figure} Since different values of pressure and temperature are reached in intermediate-energy heavy-ion collisions, it is of interest to study the specific shear viscosity of nucleonic matter at the liquid-gas phase transition under different conditions. In the upper panels of Fig.~\ref{etasPT}, we compare the temperature dependence of the specific shear viscosity at different pressures for isospin symmetric ($\delta=0$) and neutron-rich ($\delta=0.5$) nucleonic matter as well as pure neutron matter ($\delta=1$) with both the stiffer ($x=0$) and the softer ($x=1$) symmetry energies. It is seen that the temperature at which the specific shear viscosity has a minimum increases with increasing value of the fixed pressure, similar to the results in Refs.~\cite{Cse06,Che07}. Also, for larger fixed pressures the minimum value of the specific shear viscosity is smaller for $\delta=0$ and $\delta=0.5$ but seems to be independent of the pressure for $\delta=1$. In the lower panels of Fig.~\ref{etasPT}, we display the density dependence of the specific shear viscosity for different temperatures. Similarly, the density at which the specific shear viscosity has a minimum value increases with increasing value of the fixed temperature, and the minimum value is smaller at higher fixed temperatures for $\delta=0$ and $\delta=0.5$ but is insensitive to the temperature for $\delta=1$. It is worthwhile to note that with further increase in pressure or temperature, the minimum value of the specific shear viscosity decreases and then levels off until the pressure or the temperature is too high for the nucleonic matter to have a liquid-gas phase transition. The resulting lower limit of the specific shear viscosity of nucleonic matter is about $4$--$5$ $\hbar/4\pi$ for isospin symmetric nucleonic matter and is generally smaller than that in neutron-rich nucleonic matter as discussed in Ref.~\cite{Xu11}. As seen in panel (c) of Fig.~\ref{LGPT}, the stiffness of the symmetry energy only slightly affects the gas side of the phase boundary, thus having only negligible effects on the location of the minimum value of the specific shear viscosity. However, due to the difference in the phase coexistence region for the stiffer ($x=0$) and the softer ($x=1$) symmetry energy, different specific shear viscosities are obtained in the mixed-phase region, with the softer symmetry energy ($x=1$) giving a larger value compared with the stiffer symmetry energy ($x=0$) as shown in panels (b) and (e) of Fig.~\ref{etasPT}.
For the pure neutron matter without the liquid-gas phase transition under a fixed pressure, the specific shear viscosity for $x=1$ resembles that for $x=0$ obtained under a slightly smaller fixed pressure. On the other hand, the specific shear viscosities from different symmetry energies are the same for pure neutron matter compressed at a fixed temperature as indicated in panel (f) of Fig.~\ref{etasPT}. To summarize, using the relaxation time approach, we have studied the specific shear viscosity of neutron-rich nucleonic matter near its liquid-gas phase transition boundary constructed from the Gibbs conditions. A valley shape is observed in the temperature or density dependence of the specific shear viscosity even in the absence of the phase transition. The value of the specific shear viscosity suddenly drops at the first-order liquid-gas phase transition temperature, while it varies smoothly for the second-order phase transition. Moreover, the density dependence of the symmetry energy is found to affect the value of the specific shear viscosity of nucleonic matter in the mixed-phase region, although it has little effect on the location of its minimum. Our results are expected to be useful for investigating the nature and signatures of the liquid-gas phase transition in neutron-rich matter using intermediate-energy heavy-ion collisions induced by rare isotopes. \begin{acknowledgments} We thank Yunpeng Liu for helpful discussions. This work was supported by the "100-talent plan" of Shanghai Institute of Applied Physics under grant Y290061011 from the Chinese Academy of Sciences, the Major State Basic Research Development Program in China under Contract No. 2014CB845401, the NNSF of China (11135011, 11275125, 11035009, and 11220101005), the Shanghai Rising-Star Program (11QH1401100), Shanghai "Shu Guang" Project, the Eastern Scholar Program, and the STC of Shanghai Municipality (11DZ2260700), the CUSTIPEN (China-U.S. Theory Institute for Physics with Exotic Nuclei) under DOE grant number DE-FG02-13ER42025, the US National Science Foundation under Grant No. PHY-1068572 and PHY-1068022, the Welch Foundation under Grant No. A-1358, and the National Aeronautics and Space Administration under grant NNX11AC41G issued through the Science Mission Directorate. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The detection of a single electron or nuclear spin is perhaps the ultimate goal in the development and refinement of sensitive measurement techniques in solid state nanostructure devices. While of interest in their own right, single spin measurements are particularly important in the context of recently proposed solid state quantum computers, where electron \cite{Loss98} \cite{Burkard99} \cite{Vrijen99} and nuclear \cite{Kane98} spins are qubits which must be initialized and measured in order to perform computation. Methods proposed for measuring single electron spins include using a sensitive magnetic resonance atomic force microscope \cite{Sidles95} \cite{Wago97} and detecting charge transfer across magnetic tunnel barriers \cite{Divincenzo98}. Sensitive optical techniques may also be promising \cite{Bonadeo98}. Even if these techniques cannot readily be integrated into a quantum computer architecture, single spin measurements will be invaluable for measuring the electromagnetic environment of the spin, which will determine the decoherence mechanisms ultimately limiting a quantum computer's capability. Here we discuss a method for probing the spin quantum numbers of a two electron system using a single electron transistor (SET). Because of the Pauli Exclusion Principle, spin quantum numbers of such systems profoundly affect the orbital states (positions of the two electrons) of the system. Recently developed SET devices are extraordinarily sensitive to charge configuration in the vicinity of the SET island electrode, and they can consequently be used to measure the spin state of two electron systems in appropriate circumstances. In the scheme previously proposed \cite{Kane98}, electron transfer into and out of bound states on donors in Si is measured to determine whether the electrons are in a relative singlet or triplet configuration. SET's have already been proposed for performing quantum measurement on qubits in a Josephson Junction-based quantum computer \cite{Shnirman98}. SET's, operating at temperature $T \cong$100 mK, have the recently demonstrated capacity to measure charge to better than $10^{-4} e\mathrm{/\sqrt{Hz}}$ at frequencies over 200 MHz \cite{Schoelkopf98}. Several material parameters make Si a good choice in which to fabricate single spin measuring devices: spin orbit coupling is small in Si, so the phonon-induced spin lattice relaxation rate is almost seven orders of magnitude smaller in Si \cite{Feher59b} than it is in GaAs \cite{Frenkel91}. Also, nuclear isotopes with nonzero spin can in principle be eliminated in Si by isotope purification. The bound states on Si donors have been thoroughly characterized and studied. A complication of Si arises from its sixfold degenerate band structure. We will focus on Si devices in this paper, but the ideas presented here can be readily generalized to other material systems. The configuration we will study is extremely simple (Fig. 1): a SET lies directly above two electrons bound to a single donor impurity in an otherwise undoped layer of a Si crystal. Such a two-electron system, which can be thought of as a solid state analog of a He atom, can be created in Si by doping with S, Se, Te, or Mg \cite{Grimmeiss86}\cite{Grossmann87}. A $\mathrm{SiO_2}$ barrier layer isolates the SET from the Si, and the substrate is heavily doped, and hence conducting, beginning a few hundred \AA~ below the donor. As drawn in Fig.
1, the device requires careful alignment of the SET to the donor; however, the ideas in this paper could be verified using a scanned probe SET \cite{Yoo97}, and the Te donor could be deposited by ion implantation, so no nanofabrication on the Si would be required. The ground state of the electrons on the donor is a spin singlet. The experiment proceeds by applying a voltage between the SET and the substrate just sufficient to ionize the donor and draw one electron towards the interface. In this situation, small changes in the applied voltage cause the electron to move between the donor and the interface, and this electron motion will change the SET conductance. If the electrons are in a spin triplet state, however, no bound state of appropriate energy exists on the donor, and no charge motion will be observed. All the donors listed above have stable isotopes with both zero and nonzero nuclear spin. If the donor is a nucleus with nonzero spin, strong hyperfine interactions couple the nuclear spin to the electrons, and the nuclear spin can be inferred from measurements of the motion of the electrons. The measurement of both electron and nuclear spin will require that the electron Zeeman energy exceed $kT$ so that the electron spin states are well resolved, a condition which is readily met in Si at a temperature T$\approx$100 mK and magnetic field $\mathbf{B}\approx$1 T. \section{Experimental Configuration} Of the several possible two-electron donors in Si, we will focus on Te for two reasons: firstly, its energy levels are the shallowest of the Group VI donors \cite{Grimmeiss86}, enabling it to be ionized by a relatively modest applied electric field. Secondly, it is a reasonably slow diffuser in Si \cite{Stumpel88}, and thus should be compatible with most Si processing techniques. The bound state energies of Te donor states are shown in Fig. 2a. $\mathrm{Te^0}$ and $\mathrm{Te^+}$ ground states are respectively 200 and 400 meV below the conduction band. Electron orbital states in the Si conduction band have a six-fold valley degeneracy, with valley minima located along the [100] directions 85\% of the distance to the Brillouin zone boundary. This degeneracy is broken in states at a donor by the central cell potential into a singly degenerate $\mathrm{A_1}$ state, a triply degenerate $\mathrm{T_2}$ state, and a doubly degenerate E state. The $\mathrm{A_1}$ state, which is a linear combination of each of the six valleys, is the only state which has a finite probability density at the donor site, and consequently has the lowest energy, owing to the central cell attractive potential. In $\mathrm{Te^0}$ two electrons lie in the $\mathrm{A_1}$ state in a nondegenerate spin-singlet configuration. This state is over 150 meV below the excited states, including the lowest lying triplet configuration of the two electron spins \cite{Grossmann87} \cite{Peale88}. In the proposed measurement configuration, an electric field $F$ is applied so that an electron on the Te donor is weakly coupled to a state at a [100] oriented Si/$\mathrm{SiO_2}$ interface (Fig. 2b). The condition that the donor and interface states be weakly coupled requires that the distance between the donor and the $\mathrm{SiO_2}$ interface be 100-200 \AA. Pulling the electron to the interface will thus require $F$=1-2 mV/\AA=0.1-0.2 MV/cm. $F$ in the $\mathrm{SiO_2}$ layer will be approximately three times bigger owing to the smaller dielectric constant in $\mathrm{SiO_2}$.
($\epsilon_{\mathrm{Si}} \cong 12$; $\epsilon_{\mathrm{SiO_2}} \cong 4$.) At these fields Fowler-Nordheim tunneling across a 100 \AA~ $\mathrm{SiO_2}$ barrier or between the Si valence and conduction band is negligible \cite{Nagano94}, so charge will not leak into or out of the donor or interface states. The substrate must be $p$ doped, however, so that the carriers in the substrate will be repelled from the interface by $F$. When $F$=0, both electrons are bound to the Te donor (Fig. 3a); however, one electron will occupy an interface state when $F$ is sufficiently large (Fig. 3b). In Si, the electron mass in each valley is anisotropic with $m_\parallel=0.92~ m_0$ and $m_\perp=0.19~ m_0$ \cite{Ando82}, masses corresponding to motion parallel and perpendicular to the valley axis respectively. At a [100] oriented Si/$\mathrm{SiO_2}$ interface, the sixfold valley degeneracy of electron states is broken, and the lowest energy states correspond to the two valleys along the axis perpendicular to the interface. When it is not located at the Te donor, the electron is still attracted to the donor by its net positive charge. While this attraction is counteracted by $F$ in the $z$ direction, perpendicular to the interface, the electron is drawn toward the donor in the $x-y$ plane, resulting in the potential drawn in Fig. 3c. Thus, the electron at the interface is still weakly bound to the donor. The energies of the electron interface states will be the sum of the binding energies in the $z$ and in the $x-y$ directions. We assume that the $z$ confinement can be approximated by a triangular potential. The energies of the states are \cite{Ando82}: \begin{equation} E_z (i)\cong \left\{ \frac{9 \pi^2}{8} \times \frac{\hbar^2}{m_z} \times e^2 F^2 \times \left[ i- \frac{1}{4} \right]^2 \right\}^{\frac{1}{3}}, \end{equation} for $i \ge$1. For $m_z =m_\parallel$ and $F$=2 mV/\AA, $E_z (1)$=59 meV and $E_z (2)$=104 meV. The ground state electron probability density function is peaked at a distance $2 E_z (1)/ 3 eF \approx$ 20 \AA~ from the interface and falls off rapidly at large distances. The effect of the donor, a distance $z_0$=100-200 \AA~ from the interface, on the interface energy levels is minimal, but weak tunneling between the donor and the interface is still possible. For modeling of the system, we will assume $z_0$=125 \AA. The potential in the $x-y$ plane is: \begin{equation} U(r)=-\frac{e^2}{\epsilon_{eff.}} \times \left( r^2 + z^2_0 \right)^ {-\frac{1}{2}}. \end{equation} Here, $r$ is the distance in the plane from the point nearest the donor. Because the electron sees an attractive image charge associated with the Si/$\mathrm{SiO_2}$ dielectric boundary, $\epsilon_{eff.}=(\epsilon_{\mathrm{SiO_2}}+ \epsilon_{\mathrm{Si}})/2=8$. This potential is easily approximated by a parabolic potential, leading to the following energies: \begin{equation} E_{xy} (j,k)= \frac{1}{2} \left( \frac{\hbar^2 e^2} {\epsilon_{eff.} ~ m_{xy} ~ z^3_0 } \right)^{\frac{1}{2}} \times (j+k), \end{equation} for $j,k \ge 1$. For $m_x=m_y=m_\perp$, $E_{xy} (1,1)$=6 meV and $E_{xy} (1,2)$=9 meV. The probability density for the parabolic approximation wave function, plotted in Fig. 3c, is only large in the region where the potential is well approximated by a parabola, indicating that the parabolic approximation is justified. An applied $\mathbf{B}\parallel z$ will modify these energies significantly if the cyclotron energy, $\hbar \omega_c$, becomes comparable to the state energy differences \cite{Ashoori96}.
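These estimates are easily reproduced numerically; a minimal Python sketch (standard SI constants; the Gaussian-units in-plane potential above is rewritten with a $4\pi\epsilon_0$) recovers the quoted level energies: \begin{verbatim}
import math

hbar = 1.0546e-34   # J s
e    = 1.6022e-19   # C
m0   = 9.1094e-31   # kg
m_par, m_perp = 0.92 * m0, 0.19 * m0

F  = 2e-3 / 1e-10   # 2 mV/Angstrom in V/m
z0 = 125e-10        # donor-interface distance in m

def E_z(i, F, m_z=m_par):
    # triangular-well level of the first equation above, in meV
    E3 = (9 * math.pi**2 / 8) * (hbar**2 / m_z) * (e * F)**2 * (i - 0.25)**2
    return E3 ** (1.0 / 3.0) / e * 1e3

# parabolic in-plane level: E_xy(1,1) = hbar*omega with
# omega = sqrt(e^2 / (4 pi eps0 eps_eff m_perp z0^3))
eps0, eps_eff = 8.854e-12, 8.0
omega = math.sqrt(e**2 / (4 * math.pi * eps0 * eps_eff * m_perp * z0**3))

print(E_z(1, F), E_z(2, F))       # ~59 and ~104 meV
print(hbar * omega / e * 1e3)     # ~6 meV
\end{verbatim}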
However, at $\mathbf{B}$=1 T in Si, $\hbar \omega_c \approx$ 0.6 meV, so magnetic modification of the orbital states should be minimal. These results show that the lowest lying interface state is about 65 meV above the conduction band, separated from the first excited state by $\approx$ 3 meV. These states are in the valleys along the $z$-axis. States in the valleys along the $x$ and $y$ axes are $\sim$40 meV higher in energy. Because there are two valleys along the $z$ axis, the electron interface states are still two-fold degenerate. Sham and Nakayama \cite{Sham79} have shown that this degeneracy is lifted by the sharp Si/$\mathrm{SiO_2}$ interface potential in the presence of an applied electric field. They estimate $\Delta E_V \approx eF \times$ 0.5 \AA, corresponding to a splitting of 1 meV for the proposed measurement configuration. Although small, this splitting is sufficient to ensure that the interface electron occupies a single valley state at $T<$1 K. \section{Simplified Model Hamiltonian} We model the system using a simple Hamiltonian for the two electrons: each can be in only two spatial states, located either at the donor $|\rightarrow\rangle$ or at the interface $|\leftarrow\rangle$. Additionally, the two electrons can be in one of two spin states $|\uparrow\rangle$ or $|\downarrow\rangle$. Of the sixteen possible configuration states of two electrons in the model, only six are antisymmetric with respect to particle interchange, and are appropriate for electrons. Measurements will be made in the regime where the energy of the state in which both electrons lie on the donor, $|{\rightarrow \atop \rightarrow} \rangle$, is nearly degenerate with the states in which one electron is at the donor and one is at the interface, $|{\rightarrow \atop \leftarrow} \rangle$ and $|{\leftarrow \atop \rightarrow} \rangle$. The removal of both electrons from the donor requires an additional 400 meV of energy (the binding energy of the $\mathrm{Te^+}$ ground state). Consequently, we neglect the state $|{\leftarrow \atop \leftarrow} \rangle$ in which both electrons are at the interface, since it is of much higher energy than the others. The five remaining antisymmetric basis states, eigenstates of both the particle and spin exchange operator, are: \begin{equation} \begin{array}{rcl} |1 \rangle & = & | ({\rightarrow \atop \rightarrow}) ({\uparrow \atop \downarrow} - {\downarrow \atop \uparrow}) \rangle \\ |2 \rangle & = & | ({\leftarrow \atop \rightarrow} + {\rightarrow \atop \leftarrow}) ({\uparrow \atop \downarrow} - {\downarrow \atop \uparrow}) \rangle \\ |3 \rangle & = & | ({\leftarrow \atop \rightarrow} - {\rightarrow \atop \leftarrow}) ({\downarrow \atop \downarrow}) \rangle \\ |4 \rangle & = & | ({\leftarrow \atop \rightarrow} - {\rightarrow \atop \leftarrow}) ({\uparrow \atop \downarrow} + {\downarrow \atop \uparrow}) \rangle \\ |5 \rangle & = & | ({\leftarrow \atop \rightarrow} - {\rightarrow \atop \leftarrow}) ({\uparrow \atop \uparrow}) \rangle, \end{array} \end{equation} where the first parenthesis in each ket is the two-electron spatial state, the second is the spin state, and we have neglected normalization factors. In the simplest approximation, there are three terms in the Hamiltonian: $\Delta$, the energy difference between the $|{\rightarrow \atop \rightarrow} \rangle$ and the $|({\leftarrow \atop \rightarrow} \pm {\rightarrow \atop \leftarrow})\rangle$ states, can be varied by the bias applied between the substrate and the SET island electrode. The energy difference between $|\uparrow\rangle$ and $|\downarrow\rangle$ states is the Zeeman energy, $g \mu_B \mathbf{B}$, where $\mu_B$ is the Bohr magneton and $g$ is the Land{\'e} $g$ factor. $t$ is the amplitude for the electron to tunnel from the donor state to the interface state. The Hamiltonian matrix of the system is: \begin{equation} \label{H1} \mathcal{H}_0= \left( \begin{array}{ccccc} \Delta & t & 0 & 0 & 0\\ t & 0 & 0 & 0 & 0\\ 0 & 0 & -g \mu_B B & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & g \mu_B B \end{array} \right). \end{equation} The energy levels of this system, plotted as a function of $\Delta$, are shown in Fig. 4.
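The level structure is simple enough to verify directly; a short Python sketch diagonalizing $\mathcal{H}_0$ (energies expressed as frequencies in GHz, with illustrative values $t=1$ GHz and $g\mu_B B=10$ GHz):
\begin{verbatim}
import numpy as np

def H0(Delta, t, gmuB):
    # the 5x5 model Hamiltonian above, in the basis |1>..|5>;
    # all entries are energies divided by h, in GHz
    H = np.diag([Delta, 0.0, -gmuB, 0.0, gmuB])
    H[0, 1] = H[1, 0] = t
    return H

for Delta in (-100.0, 0.0, 100.0):
    print(Delta, np.linalg.eigvalsh(H0(Delta, t=1.0, gmuB=10.0)))
# The two singlet levels are (Delta +/- sqrt(Delta^2 + 4 t^2))/2 and
# anticross near Delta = 0; the three triplet levels (-gmuB, 0, +gmuB)
# do not depend on Delta, i.e. they are not polarizable by the bias.
\end{verbatim}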
Because of the overall antisymmetry of the electron wave function, the $|{\rightarrow \atop \rightarrow} \rangle$ state must be a spin singlet: $|({\uparrow \atop \downarrow} - {\downarrow \atop \uparrow}) \rangle$. A spin singlet state is also possible with a symmetric spatial state of one electron on the donor and one at the interface $|({\leftarrow \atop \rightarrow} + {\rightarrow \atop \leftarrow}) \rangle$. Hybridization of these two levels results in the anticrossing behavior seen in Fig. 4. In this system the only possible spin triplet levels are associated with the spatially antisymmetric state $|({\leftarrow \atop \rightarrow} - {\rightarrow \atop \leftarrow}) \rangle$. The energies of these three states, although split by the magnetic field, are unaffected by the applied electric field. Consequently, the spin singlet states are polarizable by an applied electric field, while the spin triplet states are not. This fact illustrates how an electrical measurement can in principle determine a spin quantum number. \section{Measurement Procedure} The difference in electric polarizability of singlet and triplet spin states discussed above can be detected by a SET. SET's are typically fabricated from Al, with a small island electrode weakly coupled to two leads (the source and drain) through thin $\mathrm{Al_2O_3}$ tunnel barrier layers (Fig. 1). For sufficiently small islands and at low temperatures the Coulomb blockade prevents electron transport across the island unless a discrete energy level of the island is resonant with the Fermi level in the source and drain. A SET can function as a sensitive electrometer because this resonance condition is sensitive to any potentials coupling to the island - for example, coming from the substrate in Fig. 1. The SET shown will exhibit periodic conductance peaks with magnitude of order $e^2/h$ as a function of substrate bias, each corresponding to the addition of one electron to the island. Charge motion in the vicinity of the SET changes the island potential and results in shifts in the substrate bias voltage at which the peaks occur. Figure 5 depicts both the energy levels of the two electron system as a function of $\Delta$ and the conductance of the SET as a function of substrate bias. For simplicity we assume that the SET conductance peaks are spaced symmetrically away from the point where the electron levels cross ($\Delta$ = 0). (The conductance peaks can be moved to any position relative to the level crossing by applying a voltage to an additional remote electrode, weakly coupled capacitively to the SET.) The measurement proceeds by measuring the SET conductance on both sides of the level crossing (at voltages $V_1$ and $V_2$) by applying a voltage waveform to the substrate similar to that shown in the inset to Fig. 5a. The measurement must distinguish whether the electrons are in the lowest energy spin singlet or the lowest energy spin triplet state. At $V_2$ one electron is on the donor and one electron is at the interface for both singlet and triplet states, so the SET island potential - and hence the SET conductance - is the same for both triplet and singlet states. At $V_1$, however, the singlet state is in a configuration where both electrons are on the donor, while in the triplet state the electron positions are the same as they were at $V_2$. This difference in the electron positions results in a difference in the potential at the island, and hence a difference in the voltage at which the SET conductance maximum occurs. This conductance change can thus be used to infer the spin state of the two electrons.
The size of the offset between triplet and singlet conductance peak positions is determined by how well the electrons are coupled to the SET island and how far the electron moves. If the electron moved all the way from the conducting substrate to the island, the conductance peaks would be offset by one electron. The approximate peak position change for smaller electron movement is given by the ratio $r$: \begin{equation} \label{r} r=\left(\frac{z_0}{\epsilon_{\mathrm{Si}}} \right) \times \left( \frac{w_{\mathrm{Si}}}{\epsilon_{\mathrm{Si}}}+ \frac{w_{\mathrm{SiO_2}}}{\epsilon_{\mathrm{SiO_2}}} \right)^{-1}, \end{equation} where $w_{\mathrm{SiO_2}}$ and $w_{\mathrm{Si}}$ are the thicknesses of the $\mathrm{SiO_2}$ and undoped Si layers, respectively, and $z_0$ is the distance that the electron moves. For the layer thicknesses shown in Fig. 1 and $z_0$= 125 \AA, $r$=0.12. Thus, the conductance peaks of the SET can be offset approximately 10\% by the motion of the electrons between the donor and interface states. A charge sensitivity of 0.1 $e$ is readily achievable with SET's and has been demonstrated with the recently developed RF-SET's \cite{Schoelkopf98}, which are capable of fast ($>$100 MHz) measurements. These RF-SET's have a demonstrated charge noise of $<5 \times 10^{-5} e\mathrm{/\sqrt{Hz}}$, implying that the SET can measure 0.1$e$ in 0.25 $\mu$sec. High speed operation of the SET's may be necessary because the measurement must occur on a timescale short compared to the time over which the electron scatters between spin states. Spin scattering and fluctuations are not included in the simplified Hamiltonian of Eq. \ref{H1} but will be present in real systems, and will be discussed below. In principle, a single conductance measurement at $V_1$ would be sufficient to determine the spin state of the electrons, making repeated measurements at $V_1$ and at $V_2$ unnecessary. However, motion of remote charges will also couple to the SET \cite{Zimmerman97}, leading to drifting of the conductance peak positions (1/$f$ noise). AC modulation of the substrate bias can be used to measure the separation between adjacent conductance peaks, rather than their absolute position, and so can eliminate this drift from the measurement. \section{Effect of Fluctuations} If the terms in Eq. \ref{H1} fluctuate, the energy levels shown in Fig. 4 will not necessarily be eigenstates of the system, and transitions between states will be possible. Fluctuations will arise due to lattice vibrations, and also will inevitably emanate from the SET, since tunneling of electrons on and off of the SET island is a random process. A rigorous approach to the effect of SET fluctuations must treat the SET and the electrons being probed as a coupled quantum system. Master equation techniques can be applied to this problem, and have been used to analyze the system of a Josephson Junction qubit coupled to a SET \cite{Shnirman98} and tunneling devices \cite{Sun98}. While a similar analysis of a two electron system coupled to a SET is in preparation \cite{Milburn99}, we will proceed by assuming that scattering of the electrons is driven by external classical fluctuating fields, the magnitudes of which we estimate from experimental conditions. The scattering times so derived will then be compared to the measurement time, derived above. Fluctuations in the occupancy of the SET island will couple into the electron system via $\Delta$, the potential difference between the donor and interface states.
Phonon-induced fluctuations are, however, the dominant mechanism of electron spin relaxation in lightly doped Si, measured in electron spin resonance (ESR) experiments \cite{Wilson61}. The degeneracy of the six conduction band valleys is broken by uniaxial stress directed along the [100] directions, with compression lowering the energy of the two valleys along the strain axis with respect to the other four valleys. To first order strain does not affect the energy of the donor ground state, which is composed of equal amounts of each of the six valleys, but the interface state energy level will shift with respect to the donor state level with the application of strain. Thus, phonons will also lead to fluctuations in $\Delta$. Additionally, both bias and phonon fluctuations couple to the $t$ term in the Hamiltonian, a mechanism of relaxation which we will consider separately below. \section{Scattering between Spin Singlet States} We treat the simplest case first, the effect of fluctuations in $\Delta$ on the two spin singlet states of Eq. \ref{H1}: \begin{equation} \label{H3} \mathcal{H}= \left( \begin{array}{cc} \Delta & t \\ t & 0 \end{array} \right). \end{equation} The Hamiltonian is exactly diagonalized by rotating the basis states through an angle $\chi=\tan^{-1}(2t/\Delta)$. For fluctuations in $\Delta$, the relaxation rate between the eigenstates of Eq. \ref{H3} is given by: \begin{equation} \label{g} \Gamma = \frac{M^2}{4 \hbar^2} S_\Delta, \end{equation} where $M=\sin \chi$ and $S_\Delta$ is the spectral density of fluctuations of $\Delta$ evaluated at the transition frequency between eigenstates. The magnitude of $M$ determines the degree to which the fluctuations couple between the eigenstates, and scattering is reduced when $M \ll 1$. Larger values of $|\Delta/t|$, far away from the anticrossing region, will lead to smaller scattering rates between the coupled singlet states if $S_\Delta$ is constant. To determine an explicit value for $\Gamma$, we need to know $S_\Delta$. For voltage noise emanating from the SET, $S_\Delta$ can be determined from the time dependence of the charge on the SET island electrode. The high frequency dynamics of SET's is still a topic of research, and will depend sensitively on capacitances and inductances of the SET and in the external circuit. To obtain crude estimates of relaxation times, we will simply assume that SET noise is frequency independent shot noise determined entirely by the SET current and the SET resistance: \begin{equation} S_V= S_I \times R^2=2 e I R^2 = 2 e V R, \end{equation} where $V$, $I$, and $R$ are the voltage, current and small signal resistance of the SET. This leads to: \begin{equation} S_\Delta= 2 r^2 e^3 V R, \end{equation} where $r$, defined in Eq. \ref{r}, determines the proportion of voltage that drops between the donor and interface states. A quiescent SET, in which $V$=0, will generate a much smaller amount of noise, especially if the island is biased so that $R \rightarrow \infty$. Again, for the purpose of generating crude estimates, we assume that quiescent SET noise is given by Johnson noise ($S_V=4 k T R$) when the SET is at a conductance peak. To determine the magnitudes of shot noise and Johnson noise, we use parameters tabulated by Schoelkopf for an optimized RF-SET \cite{Schoelkopf98} biased to maximum sensitivity ($V \cong$ 1 mV, $R$= 50 k$\Omega$, $T$=100 mK) in a configuration in which $r$=0.1. Using these numbers, maximal scattering rates (using Eq. \ref{g} with $M$=1) are plotted in Fig. 6.
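At this bias point the relative size of the two noise estimates is easily checked (Python, SI units):
\begin{verbatim}
e, kB = 1.6022e-19, 1.3807e-23           # C, J/K
V, R, T = 1e-3, 50e3, 0.1                # RF-SET bias point from the text

S_V_shot    = 2 * e * V * R              # ~1.6e-17 V^2/Hz
S_V_johnson = 4 * kB * T * R             # ~2.8e-19 V^2/Hz
print(S_V_shot, S_V_johnson, S_V_shot / S_V_johnson)
# Shot noise at the operating point exceeds the quiescent Johnson
# noise by a factor of ~60 for these parameters.
\end{verbatim}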
Realistic values of the capacitance of the SET, which has been entirely neglected in the foregoing, will tend to roll off the spectra at frequencies $>$10 GHz. Thus, the data constitute an upper bound on the scattering rates to be expected. The magnitude of fluctuations in $\Delta$ induced by phonons is determined by $\Xi$, the deformation potential, which is the rate at which the valley energy varies as strain is applied, and by the density of states of phonons at a given frequency. A straightforward calculation leads to: \begin{equation} \label{phonon} \Gamma_{phon.} = M^2 \times \nu^3 \coth \left( \frac{h \nu}{2 k T} \right) \times \frac{8 \pi^3 \Xi^2}{h \rho} \left\{\frac{1}{v^5_l}+\frac{1}{v^5_t} \right\}, \end{equation} where $\rho$ is the density of the Si crystal and $v_l$ and $v_t$ are the velocities of longitudinal and transverse acoustic phonons respectively. Angular dependences of the phonon couplings have been neglected in Eq. \ref{phonon}, as has the presence of the nearby surface, which will modify the phonon spectrum at low frequencies. Thus, Eq. \ref{phonon} only provides an approximate relaxation rate, which is plotted in Fig. 6 for $T$=100 mK and $M$=1. This expression includes vacuum fluctuations, and is thus only appropriate for transitions from higher to lower energy states when $h \nu > k T$. While the phonon contribution to $\Delta$ rises rapidly as a function of frequency, it only exceeds the shot noise contribution at frequencies approaching 100 GHz. To obtain approximate transition rates between the singlet states using the shot noise expression, we assume $\Delta/h$=100 GHz and $2 t/h$= 1 GHz, so $M^2 =10^{-4}$. With these values, we obtain $\Gamma = 10^{7}$ $\mathrm{sec^{-1}}$~ or a decay time of 0.1 $\mu$sec. This time is almost the same as the time estimated above for RF-SET's to measure the spin state of the two electron system. It is likely that our use of frequency-independent shot noise is an overestimate, and that the relaxation time exceeds the measurement time. Also, the measurement time can possibly be reduced by a factor of 10-100 in optimized SET devices \cite{Schoelkopf98}. \section{Scattering between Different Spin States} At first glance, it would appear that the measurement time $must$ be less than the singlet-singlet scattering time in order for spin detection to be viable. However, the point of the measurement procedure is to distinguish between the lowest lying singlet and triplet states. Scattering between these states (labeled ``3'' in Fig. 5) must not occur. However, scattering between the other states can occur so long as the average electron position difference between the singlet and triplet states is resolvable. Type 3 scattering must occur through spin flips and in general will be much weaker than scattering between the electric dipole coupled singlet states. This is a crucial distinction between the measurement of spin using SET's and the measurement of charge quantum states, such as those in Josephson Junction qubits \cite{Shnirman98}, where the states $to~be~distinguished$ are electric dipole coupled. The Hamiltonian of Eq. \ref{H1} is obviously oversimplified, since no terms couple different spin states, and no spin relaxation is possible. The dependence of the electron $g$ factor on external conditions, and in particular on band structure parameters, is the major source of spin relaxation in Si \cite{Wilson61} and consequently must be included in a more accurate model of a two electron system in Si.
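The singlet-singlet estimate, and its comparison with the measurement time, can be reproduced in a few lines (Python, SI units; all input numbers are those quoted above):
\begin{verbatim}
e, hbar = 1.6022e-19, 1.0546e-34         # C, J s

r, V, R = 0.1, 1e-3, 50e3                # coupling ratio and RF-SET bias point
S_Delta = 2 * r**2 * e**3 * V * R        # shot-noise estimate, J^2/Hz
M2 = (1.0 / 100.0)**2                    # M^2 = (2t/Delta)^2 = 1e-4
Gamma = M2 * S_Delta / (4 * hbar**2)     # Gamma = M^2 S_Delta / (4 hbar^2)

t_meas = (5e-5 / 0.1)**2                 # s to resolve 0.1 e at 5e-5 e/sqrt(Hz)
print(Gamma, 1.0 / Gamma, t_meas)
# ~1e7 /s, i.e. a ~0.1 us decay time, versus a ~0.25 us measurement time.
\end{verbatim}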
The extremely long relaxation times measured in Si at low temperatures ($>$1000 sec.) \cite{Feher59b} are a consequence of the fact that these parameters are small in Si. Additionally, if the electrons can exchange spin with other particles, in particular with nuclear spins, then scattering between different electron spin states will occur. The $g$ factor of an electron in a conduction band valley in Si is not exactly equal to the free electron value and is slightly anisotropic, a consequence of spin-orbit coupling. The $g$ anisotropy leads to a modified one electron spin Hamiltonian: \begin{equation} H = \frac{1}{2} \mu_B B \left\{ g_{\parallel} \cos \theta ~ \mathbf{\sigma}_z + g_{\perp} \sin \theta ~ \mathbf{\sigma}_x \right\}, \end{equation} where $\mathbf{\sigma}$ are the Pauli spin matrices, and $\theta$ is the angle of $\mathbf{B}$ with respect to the valley ($z$) axis. If the $z$ axis is redefined to be along $\mathbf{B}$, the spin Hamiltonian becomes: \begin{equation} H = \frac{1}{2} \mu_B B \left\{ g_z \mathbf{\sigma}_z + \beta \mathbf{\sigma}_x \right\}, \end{equation} where: \begin{equation} g_z \equiv g_{\parallel} \cos^2 \theta + g_\perp \sin^2 \theta, \end{equation} and: \begin{equation} \beta \equiv (g_\perp - g_\parallel ) \sin \theta \cos \theta. \end{equation} The $g$ anisotropy will be the same for each of the two valleys comprising the interface states; however, since the donor state is an equal admixture of all six valleys, its $g$ factor will be isotropic and equal to $g_{0}$. For two electron systems, the spin dependent corrections to the Hamiltonian in Eq. \ref{H1} are: \begin{equation} \label{H2} {\mathcal{H}}'=\frac{1}{2}\mu_B B \left( \begin{array}{ccccc} 0 & 0 & 0 & 0 & 0\\ 0 & 0 & -\frac{\beta}{\sqrt{2}} & (g_{0}- g_z) & \frac{\beta}{\sqrt{2}}\\ 0 & -\frac{\beta}{\sqrt{2}} & -(g_{0}+g_z) & \frac{\beta}{\sqrt{2}} & 0\\ 0 & (g_{0} - g_z) & \frac{\beta}{\sqrt{2}} & 0 & \frac{\beta}{\sqrt{2}}\\ 0 & \frac{\beta}{\sqrt{2}} & 0 & \frac{\beta}{\sqrt{2}} & (g_{0}+g_z) \end{array} \right). \end{equation} At the conduction band in Si, $g=(1/3)g_\parallel+(2/3)g_\perp=1.99875$ \cite{Feher59a}. $g_\parallel-g_\perp$, measured by applying strain to shallow donors \cite{Wilson61}, is 1.0$\times10^{-3}$. Finally, $g_{0}$ for $\mathrm{Te}^+$ is 2.0023 \cite{Grimmeiss81}. The off-diagonal terms, which will lead to scattering between spin states if fluctuations are present, are each $\cong10^{-3}$ and are small perturbations on the original Hamiltonian. The $\beta$ term will vanish, in principle, if $\mathbf{B}$ is precisely aligned along a [100] axis of the crystal, perpendicular to the interface, and this orientation will presumably be the optimal experimental configuration. To obtain an estimate for scattering rates between spin states, an approximate solution to the full Hamiltonian (Eqs. \ref{H1} and \ref{H2}) must be calculated. The solution is complicated by the fact that the states $|2 \rangle$ and $|4 \rangle$ (the states with energy $\approx 0$ in Fig. 4) are nearly degenerate when $t/\Delta \ll$1, a degeneracy also weakly broken by the difference $(g_0-g_z)$ in $g$ factors at the donor and interface states. To obtain an approximate solution, valid in the measurement regime when $t/\Delta \ll$1, we first diagonalize the Hamiltonian matrix to second order in $t$, which lowers the $|2 \rangle$ state energy with respect to $|4 \rangle$ by $t^2/\Delta$.
The $|2 \rangle$ and $|4 \rangle$ submatrix is then diagonalized exactly by rotating the basis states through an angle $\xi$, where: \begin{equation} \tan \xi = (g_0-g_z)\mu_B B \times \frac{\Delta}{t^2}. \end{equation} Finally, corrections to the resultant wave functions are determined to first order in the remaining $\beta$ terms of Eq. \ref{H2}. Once the perturbed wave functions are known, the matrix elements coupling states generated by fluctuating terms may be easily determined. For a fluctuation of the form $\Delta+\delta$, the perturbation Hamiltonian matrix $\delta M_\Delta$ is given by: \begin{equation} \label{MD} M_\Delta = \left( \begin{array}{ccccc} 1 & -(\frac{t}{\Delta}) \cos (\frac{\xi}{2}) & -(\frac{\beta}{4\sqrt{2}})(\frac{t}{\Delta}) & -(\frac{t}{\Delta}) \sin (\frac{\xi}{2}) & -(\frac{\beta}{4\sqrt{2}})(\frac{t}{\Delta})\\ -(\frac{t}{\Delta}) \cos (\frac{\xi}{2})& (\frac{t}{\Delta})^2 \cos^2 (\frac{\xi}{2}) & (\frac{\beta}{4 \sqrt{2}})(\frac{t}{\Delta})^2 \cos (\frac{\xi}{2}) & \frac{1}{2}(\frac{t}{\Delta})^2 \sin ( \xi ) & (\frac{\beta}{4 \sqrt{2}})(\frac{t}{\Delta})^2 \cos (\frac{\xi}{2})\\ -(\frac{\beta}{4 \sqrt{2}}) (\frac{t}{\Delta}) & (\frac{\beta}{4 \sqrt{2}})(\frac{t}{\Delta})^2 \cos (\frac{\xi}{2}) & (\frac{\beta}{4 \sqrt{2}})^2 (\frac{t}{\Delta})^2 & (\frac{\beta}{4 \sqrt{2}})(\frac{t}{\Delta})^2 \sin (\frac{\xi}{2}) & (\frac{\beta}{4 \sqrt{2}})^2 (\frac{t}{\Delta})^2\\ -(\frac{t}{\Delta}) \sin (\frac{\xi}{2}) & \frac{1}{2}(\frac{t}{\Delta})^2 \sin ( \xi ) & (\frac{\beta}{4 \sqrt{2}})(\frac{t}{\Delta})^2 \sin (\frac{\xi}{2}) & (\frac{t}{\Delta})^2 \sin^2 (\frac{\xi}{2}) & (\frac{\beta}{4 \sqrt{2}})(\frac{t}{\Delta})^2 \sin (\frac{\xi}{2})\\ -(\frac{\beta}{4 \sqrt{2}}) (\frac{t}{\Delta}) & (\frac{\beta}{4 \sqrt{2}})(\frac{t}{\Delta})^2 \cos (\frac{\xi}{2}) & (\frac{\beta}{4 \sqrt{2}})^2 (\frac{t}{\Delta})^2 & (\frac{\beta}{4 \sqrt{2}})(\frac{t}{\Delta})^2 \sin (\frac{\xi}{2}) & (\frac{\beta}{4 \sqrt{2}})^2 (\frac{t}{\Delta})^2 \end{array} \right). \end{equation} Only lowest order terms have been retained, and we have simplified the expression by writing $(g_0+g_z)=4$. These matrix elements may be inserted directly into Eq. \ref{g} to determine scattering rates between states induced by fluctuations in $\Delta$. The matrix elements for scattering into and out of $| \downarrow \downarrow \rangle$ (state $| 3 \rangle$) contain a $\beta/(4 \sqrt{2})$ term in addition to the $t/\Delta$ present in the terms scattering between the singlet states discussed above. Neglecting entirely the angular dependence of $\beta$ and using $g_\parallel-g_\perp =10^{-3}$, $(\beta/(4 \sqrt{2}))^2= 3 \times 10^{-8}$, resulting in a total scattering rate out of the $| \downarrow \downarrow \rangle$ state of about 0.3 $\mathrm{sec^{-1}}$~ for the same conditions used to calculate the scattering rate between the singlet states above. This result suggests that very long averaging times of the SET measurement will be possible before spin relaxation occurs, and that single spin measurement in Si will be possible in appropriately designed devices. In experimental conditions $\mathbf{B}$ will be sufficient to effectively polarize the electrons, i.e. $g \mu_B B / k T \ge 10$. At $T$=100 mK, this requires that $\mathbf{B}\cong$ 0.7 T and $\mu_B B \cong$10 GHz. For $t/h$=1 GHz and $\Delta/h$=100 GHz, this implies that $\tan \xi \cong$1 and that scattering to the states $|2 \rangle$ and $|4 \rangle$ (labeled respectively `1' and `2' in Fig. 5) will be comparable.
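The corresponding numerical check (Python, SI units; the same shot-noise $S_\Delta$ as above):
\begin{verbatim}
hbar, e = 1.0546e-34, 1.6022e-19            # J s, C

S_Delta = 2 * 0.1**2 * e**3 * 1e-3 * 50e3   # shot-noise S_Delta from above
beta, t_over_Delta = 1e-3, 1.0 / 100.0      # angular factor in beta neglected
M2 = (beta / (4 * 2**0.5))**2 * t_over_Delta**2
print(M2 * S_Delta / (4 * hbar**2))         # ~0.3 per second
\end{verbatim}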
As mentioned above, scattering into the states $|2 \rangle$ and $|4 \rangle$ will not harm the measurement as long as the average positions of the electrons in the states being distinguished differ. \section{Scattering Induced by Fluctuations of $t$} The calculation leading to Eq. \ref{MD} may be repeated to determine the effect of fluctuations in $t$ on scattering between states. The result is: \begin{equation} M_t = \left( \begin{array}{ccccc} (\frac{2 t}{\Delta}) & \cos (\frac{\xi}{2}) & -(\frac{\beta}{4\sqrt{2}}) & \sin (\frac{\xi}{2}) & -(\frac{\beta}{4\sqrt{2}})\\ \cos (\frac{\xi}{2})& -(\frac{2 t}{\Delta})\cos^2 (\frac{\xi}{2}) & -(\frac{\beta}{4 \sqrt{2}})(\frac{2 t}{\Delta}) \cos (\frac{\xi}{2}) & -(\frac{t}{\Delta}) \sin ( \xi ) & -(\frac{\beta}{4 \sqrt{2}})(\frac{2 t}{\Delta}) \cos (\frac{\xi}{2})\\ (\frac{\beta}{4 \sqrt{2}}) & -(\frac{\beta}{4 \sqrt{2}})(\frac{2 t}{\Delta})\cos (\frac{\xi}{2}) & -(\frac{\beta}{4 \sqrt{2}})^2 (\frac{2 t}{\Delta})& -(\frac{\beta}{4 \sqrt{2}})(\frac{2 t}{\Delta})\sin (\frac{\xi}{2}) & -(\frac{\beta}{4 \sqrt{2}})^2 (\frac{2 t}{\Delta})\\ \sin (\frac{\xi}{2}) & -(\frac{t}{\Delta}) \sin ( \xi ) & -(\frac{\beta}{4 \sqrt{2}})(\frac{2 t}{\Delta}) \sin (\frac{\xi}{2}) & (\frac{2 t}{\Delta})\sin^2 (\frac{\xi}{2}) & -(\frac{\beta}{4 \sqrt{2}})(\frac{2 t}{\Delta}) \sin (\frac{\xi}{2}) \\ (\frac{\beta}{4 \sqrt{2}}) & -(\frac{\beta}{4 \sqrt{2}})(\frac{2 t}{\Delta})\cos (\frac{\xi}{2}) & -(\frac{\beta}{4 \sqrt{2}})^2 (\frac{2 t}{\Delta})& -(\frac{\beta}{4 \sqrt{2}})(\frac{2 t}{\Delta})\sin (\frac{\xi}{2}) & -(\frac{\beta}{4 \sqrt{2}})^2 (\frac{2 t}{\Delta}) \end{array} \right). \end{equation} Because of the absence of the $t/\Delta$ term present in most of the matrix elements of Eq. \ref{MD}, fluctuations in $t$ will have a greater effect on scattering than fluctuations in $\Delta$ if the fluctuations are of the same magnitude. Band structure effects in Si can further magnify the importance of $t$ fluctuations. In Si the valley minima are located at $k_0=0.85\times2 \pi/a$, where $a$=5.43 \AA~ is the lattice constant. If valleys on opposite sides of the Brillouin zone are coupled to each other at two points in real space (at two donors or at a donor and an interface), standing waves with node spacing $\pi/k_0$ appear in the coupling between the two sites. These rapid oscillations have been previously analyzed in the context of the exchange interaction between donors in doped Si \cite{Andres81}. For a donor located near an interface which breaks the valley degeneracy, the coupling between the donor states and the two valley states at the interface is a rapidly oscillating function of the separation between the donor and the interface (Fig. 7). If $t$ is a rapidly oscillating function of external parameters, fluctuations in the external parameters will be strongly amplified. The magnitude of this effect may be most readily estimated when the fluctuations arise from strain. As mentioned above, strain shifts the energies of the valleys along the strain axis relative to the valleys on axes perpendicular to the strain axis. Strain, $s$, will also change the value of $k_0$, the location of the valley minima, and hence the wavelength of the standing waves.
We are unaware of measurements of $dk_0/ds$ but estimate its order of magnitude by assuming that the effect of strain on electron energy levels is linear in $k_z$ in the neighborhood of the valley minimum on the $z$ axis: \begin{equation} E(k_z)=\frac{\hbar^2}{2m_l}(k_z-k_0)^2+\frac{k_z}{k_0}\Xi s, \end{equation} where $z$ is the direction along the valley axis. We entirely neglect the effect of the orientation of the applied strain. Here, $\Xi$ is the deformation potential introduced above, equal to 9 eV in Si. From this equation, we obtain: \begin{equation} \frac{dk_0}{ds}= - \frac{m_l \Xi}{\hbar^2 k_0}. \end{equation} Assuming $t=t_0\sin ( 2 k_0 z_0 )$, we obtain the maximum effect of the strain as: \begin{equation} \left( \frac{dt}{ds} \right)_{max} = \left[ \frac{2 t_0 z_0 m_l}{\hbar^2 k_0} \right] \Xi. \end{equation} For $t_0/h$=1 GHz, and $z_0$=125 \AA, the term in brackets is $\cong 10^{-4}$. This is the magnitude of phonon-induced $t$ fluctuations relative to $\Delta$ fluctuations. For $t/\Delta = 10^{-2}$, the conditions considered above, scattering rates attributable to fluctuations in $\Delta$ will be four orders of magnitude larger than those from $t$ fluctuations. While the derivation leading to this result is highly approximate, it does suggest that $t$ fluctuations may be neglected, despite the amplifying effect of oscillations induced by band structure. Fluctuations in the voltage bias, or in the electric field in the vicinity of the electrons, will also lead to fluctuations in $t$. It would seem that the effect of an electric field, highly uniform on the scale of the lattice, on intervalley coupling would be small. However, the applied bias does change the area on the interface where the electron wave function is sizable, and the valley splitting induced by the interface will be highly sensitive to the morphology of the interface, and hence is very difficult to estimate. While we do not have a numerical estimate for bias-induced $t$ fluctuations, it seems unlikely that they will be an important source of scattering between states. \section{Additional Sources of Noise in the Electromagnetic Environment} The major source of both electric dipole and spin flip scattering in the electromagnetic environment arises from the fluctuating electric field generated by the SET. Because the SET is a high impedance device, with resistance of order $h/e^2$, the ratio of the magnetic to the electric field generated by the SET is $\sim e^2/\hbar c$=1/137 in cgs units. $\mu_B B/(e z_0 F)$, the ratio of magnetic to electric interaction energies of the SET with the electrons, is $\sim 10^{-7}$. This leads to a spin relaxation rate induced by the $magnetic$ field emanating from the SET of $\sim 10^{-3}$ $\mathrm{sec^{-1}}$, which can be neglected. A more relevant source of fluctuations arises because RF-SET's are AC devices, biased by a tuned circuit oscillating at $\nu \sim 1$ GHz. GHz frequencies are employed to minimize the contribution of noise from GaAs field effect transistors (FET's) amplifying the SET output. Because the tuned circuit must be near the SET, the device will be exposed to both electric and magnetic fields at the SET bias frequency. Consequently, the electron energy state differences will need to be away from the SET bias frequency and its harmonics during measurement. Performing measurements at magnetic fields for which $g \mu_B B/h$=10-20 GHz should fulfill this requirement.
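Evaluating the bracketed prefactor explicitly (Python, SI units):
\begin{verbatim}
import math
hbar, h, m0 = 1.0546e-34, 6.6261e-34, 9.1094e-31   # J s, J s, kg

t0  = h * 1e9                        # t0/h = 1 GHz
z0  = 125e-10                        # m
m_l = 0.92 * m0                      # longitudinal mass
k0  = 0.85 * 2 * math.pi / 5.43e-10  # valley minimum, 1/m
print(2 * t0 * z0 * m_l / (hbar**2 * k0))   # ~1.3e-4, dimensionless
\end{verbatim}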
\section{Scattering between States at Level Crossings} To perform repeated measurements on the system, it will be necessary to traverse the region where the two spin levels cross one another (Figs. 4 and 5). If there is a small coupling between the two states, an anticrossing will occur, and scattering will occur between the levels if the crossing region is not traversed sufficiently rapidly. Ideally, however, the passage should be ``adiabatic'' with regard to the strongly coupled singlet states, so that these states simply follow the levels plotted in Figs. 4 and 5 as $\Delta$ is varied. These two requirements imply that there is an optimal value for the traversal rate, $\dot{\nu}$, where undesired scattering is minimized. As mentioned above, however, scattering between the states being distinguished is much more harmful than scattering between the singlet states, implying that $\dot{\nu}$ should be as large as possible. Additionally, even though the SET can be turned off during traversal, some noise in the environment will be present during the passage (the Johnson noise and the phonons, plotted in Fig. 6), and a rapid traversal rate will minimize the contribution of this noise to scattering at the level crossing. A simple Golden Rule calculation determines the scattering probability $P$ between two crossing levels as a function of $\dot{\nu}$. The result is: \begin{equation} P=2\pi^2 \frac{\nu^2_{int}}{\dot{\nu}}, \end{equation} where $h\nu_{int}$ is the energy difference between the levels at the anticrossing point. A likely upper limit to the traversal rate is of order 100 GHz/nsec $=10^{20} \mathrm{~Hz}^2$. We first estimate the scattering resulting from the $\beta$ term in Eq. \ref{H2}, again neglecting its angular dependence: $h\nu_{int} \cong \beta \mu_B B$ = 10 MHz. These values result in $P=2 \times 10^{-5}$. Spin scattering can also occur near the crossing point through the exchange of electron spin with nuclear spins in the lattice, since natural Si contains 5\% $^{29}$Si with $I$=1/2. The small value of the nuclear Zeeman energy compared to the electron Zeeman energy means that such scattering can only occur near the level crossing point. The electron interaction with $^{29}$Si will be dominated by the contact hyperfine interaction \cite{Slichter}: \begin{equation} \label{nu} h\nu_A=\frac{8\pi}{3} \mu_B \mu_N g_N |\Psi(0)|^2, \end{equation} where $|\Psi(0)|^2$ is the electron probability density at the nuclear site. Evaluation of $P$ for the hyperfine interaction entails an appropriate average over all lattice sites, assuming that the total polarization of the nuclei is zero: \begin{equation} P=2\pi^2 \frac{\overline{\nu^2_{A}}}{\dot{\nu}}. \end{equation} The numerator in this expression is exactly the same average as that used to determine the mean square line width of donor ESR lines, a parameter which has been measured for Si:Te \cite{Grimmeiss81}. In $\mathrm{Te}^+$ using Si of natural isotopic composition, the ESR line width is $\sim$ 30 MHz, leading to an estimate of $P\cong 10^{-4}$. Interaction with lattice nuclei could be further reduced if necessary by using Si depleted of $^{29}$Si. These results imply that perhaps thousands of passes across the level crossing can be made before a spin scattering event occurs. \section{Extension to Nuclear Spin Measurement} In the foregoing discussion we have implicitly assumed that the nuclear spin on the Te donor is zero. While Te is composed of 92\% stable $I=0$ isotopes, 7\% of natural Te is $^{125}\mathrm{Te}$, with $I$=1/2.
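Both crossing probabilities above follow from the same Golden Rule expression; a short check (Python):
\begin{verbatim}
import math
nu_dot = 1e20                        # Hz^2, traversal rate
for nu_int in (10e6, 30e6):          # Hz: g-anisotropy gap; Te ESR linewidth
    print(2 * math.pi**2 * nu_int**2 / nu_dot)
# ~2e-5 and ~2e-4, matching the order of the estimates above.
\end{verbatim}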
For the $\mathrm{Te}^+$ donor level, the electron spends approximately 10\% of its time on the donor site, and consequently $|\Psi(0)|^2$ in Eq. \ref{nu} can be large \cite{Grimmeiss81} \cite{Niklas83}. For Si:$\mathrm{Te}^+$ the zero $\mathbf{B}$ level splitting induced by hyperfine interactions is 3.6 GHz, which is comparable to the electron Zeeman splitting for $\mathbf{B}$=0.1 T. The levels for a coupled two electron- and one nuclear-spin system are plotted in Fig. 8, with the small nuclear Zeeman energy splitting greatly exaggerated so that the levels may be distinguished. The electron Hamiltonian is that of Eq. \ref{H1}, while the nucleus couples only to electrons at the donor site by the contact hyperfine interaction. The Hamiltonian again does not contain any terms that change the total $z$ component of angular momentum of the system, and the state with all spins pointing in the same direction (designated $|(\downarrow \downarrow)(0) \rangle$ in Fig. 8) does not hybridize with other states. The nuclear spin state in which the nuclear spin points opposite to the electrons $|(\downarrow \downarrow)(1) \rangle$ does hybridize with the electron spin singlets that couple to the applied bias $\Delta$, leading to the separation of the nuclear spin states shown in Fig. 8. Measurement of the nuclear spin state proceeds in a manner entirely analogous to the spin measurement of the electron discussed above. SET conductance peak positions are measured at two fixed points on opposite sides of the level crossing. As in the case with electrons, scattering between the electric dipole coupled states can occur during the measurement, so long as scattering does not take place between the states being distinguished. Again as with electrons, these latter types of scattering processes will occur as a result of electron $g$ fluctuations, impurity nuclear spins, and nuclear and electron dipole interactions. Since the magnitudes of these effects are similar for the electron and nuclear spin measurement problem, it does not appear that measurement of nuclear spins will be intrinsically more difficult than that of electron spins. \section{Experimental and Materials Issues} We have focused on the Si/$\mathrm{SiO_2}$ material system for single spin measurement devices, primarily because of the wealth of data in Si on ESR of donors. These ideas may be viable in other systems, and possibly in GaAs/Al$_x$Ga$_{1-x}$As heterostructures, if the greater spin-orbit and hyperfine interactions in these materials do not pose insurmountable problems. The lesser quality of the Si/$\mathrm{SiO_2}$ interface compared to GaAs/Al$_x$Ga$_{1-x}$As should not affect the proposed devices: a mobility of $10^4 \mathrm{~cm^2/Vsec}$ implies energy fluctuations at the Si/$\mathrm{SiO_2}$ interface of order 0.5 meV, less than the lateral binding energy of the interface electrons to the donor calculated above. We have neglected entirely the effects of the $\mathrm{SiO_2}$ layer on the resonance and relaxation of the electrons. ESR of conduction electrons at the Si/$\mathrm{SiO_2}$ interface is very difficult to measure \cite{Stesmans93} \cite{Wallace91}, so experimental data on the effect of the $\mathrm{SiO_2}$ interface is lacking. Initial experiments will most simply be carried out on samples randomly doped with Te by ion implantation or diffusion, and the measurements made with a scanned SET so that many donors can be tested for possible single spin sensitivity.
Even if a scanned probe SET is used, the material will have to be extraordinarily free ($\le 10^{10}/\mathrm{cm}^2$) of bulk and interface spin and charge impurities in order to have a reasonable probability of success in measuring a single spin, a requirement that may prove very difficult to meet using conventional Si processing. SiGe heterostructures may be an attractive alternative system \cite{Vrijen99} if problems associated with interface states and dangling bonds in Si/$\mathrm{SiO_2}$ structures prove to be insurmountable. Finally, in order to demonstrate the measurement of a single spin, the spin must first be prepared by placing it in a known initial state. For electrons, this can be accomplished by simply waiting for a time long compared to the spin relaxation time, so that the system will be in its lowest energy state with high probability at low temperatures. As shown in Fig. 5, the spin singlet is the ground state at $V_1$ while the triplet is the ground state at $V_2$, so the system can be prepared in either of the two states by appropriately biasing the system and waiting a sufficiently long time. For nuclear spins, the relaxation times may be unreasonably long, and the nuclear spin is best prepared by exposing the system to an externally applied AC magnetic field $B_{AC}$ resonant with the nuclear spin. The action of $B_{AC}$ can be used to flip the nuclear spin from one state to another by appropriate pulses or adiabatic passes across the resonance line prior to the measurement process. At higher frequencies an applied $B_{AC}$ can also be used on the electrons, and the small difference in the $g$ factor of the donor and interface states allows particular electron spins to be selectively flipped. \section{Conclusion} We have outlined a method for measuring single spin quantum numbers using single electron transistors in a Si solid state device that can be fabricated with currently emerging technology. While the impetus for realizing these devices is the eventual development of a viable solid state quantum computer technology, these devices will only be capable of very rudimentary (single qubit, and perhaps two qubit) quantum logic. They should more appropriately be considered as solid state analogs of single ion traps, which have successfully demonstrated simple quantum logic on single quantum states \cite{Monroe95}. The analogy between these devices and the single ion trap goes further, in that measurements are made in ion traps by exciting transitions between the first of two states being distinguished and a third state that is not coupled to the second state. If, and only if, the system is in the first state, many ``cycling transitions'' are excited to the third state, allowing the states to be distinguished with relative ease. In the devices discussed above, only one of the two states being distinguished is electric dipole coupled to the measuring SET, and the measurement process can continue until a forbidden spin flip process occurs. Also in analogy to the single ion trap, these devices can be used to measure the relaxation and decoherence processes operative on single spins in solid state systems. These measurements can be made by using an applied $B_{AC}$ to perform $\pi$ and $\pi/2$ rotations on a single spin. Such measurements will be critical to determine whether quantum computation in a solid state environment will be viable.
Aside from quantum computation, precise measurement of single spins will be an extremely sensitive probe of the electromagnetic environment of the spin, and may have important heretofore unforeseen applications.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \subsection{The Polish topology on the hyperbolic isometries} Like Euclidean spaces, hyperbolic spaces have natural infinite dimensional analogs. Here we consider the separable one. Let $\mathcal{H}$ be some separable Hilbert space with some Hilbert base $(e_i)_{i\in\mathbf{N}}$. Let $Q$ be the quadratic form with signature $(1,\infty)$ defined by $$Q(x)=x_0^2-\sum_{i\geq1}x_i^2$$ where $(x_i)$ are the coordinates of $x$ in the above base. We denote by $\OO(1,\infty)$ the orthogonal group of $Q$. The infinite dimensional (separable) hyperbolic space is then $$\H=\left\{x\in\mathcal{H},\ Q(x)=1,\ x_0>0\right\}.$$ The hyperbolic metric can be defined via the formula $$\cosh(d(x,y))=\left( x,y\right)=\langle x,Jy\rangle$$ where $\left(\cdot,\cdot\right)$ is the symmetric bilinear form obtained by polarization from $Q$, $J$ is the linear operator such that $Je_0=e_0$ and $Je_i=-e_i$ for $i>0$ and $\langle\cdot,\cdot\rangle$ is the scalar product on $\mathcal{H}$. This space appears in several previous works, for example \cite{MR2152540, MR2811600, MR2881312, MR3263898, AHL_2019__2__259_0, monod2018notes}. The group of isometries of $\H$ is $$\Isom(\H)=\PO(1,\infty)=\OO(1,\infty)/\{\pm\Id\}.$$ A proof of this identification can be obtained by mimicking the classical proof in finite dimension \cite[Theorem I.2.24]{MR1744486}. It is also a particular case of the identification of isometry groups of infinite dimensional Riemannian symmetric spaces. See \cite[Theorem 1.5]{MR3044451} and \cite[Theorem 3.3]{duchesne2019representations}. This infinite dimensional group has a natural group topology which makes the action $\Isom(\H)\times \H\to \H$ continuous. This is the \emph{pointwise convergence} topology, that is, the coarsest group topology on $\Isom(\H)$ such that the orbit maps $g\mapsto gx$ are continuous. Actually, this topology is merely the quotient topology of the strong operator topology on $\OO(1,\infty)$ (Proposition~\ref{strong}). Since $\H$ is separable, this topology is known to be Polish \cite[\S9.B]{MR1321597}, that is separable and completely metrizable. The aim of this paper is to study this Polish group. It lies at the crossroads of two worlds: classical Lie groups and non-locally compact Polish groups with surprising properties for group theorists used to the locally compact ones. Actually, $\OO(1,\infty)$ \emph{is} a Lie group --more precisely a Banach-Lie group-- but for the stronger topology given by the operator norm. There is another Polish group which really looks like $\Isom(\H)$. This is the isometry group of the Hilbert space, $\Isom(\mathcal{H})$. Actually, these two groups are homeomorphic but the homeomorphism (provided by the Cartan decomposition proved in Proposition~\ref{Cartan}) is not a group homomorphism. So, as a leitmotif, we will compare $\Isom(\H)$ and $\Isom(\mathcal{H})$ throughout the paper. \subsection{Flows and amenability} Actions on compact spaces are useful tools to study topological groups. For example, they play a crucial role in Furstenberg boundary theory and in rigidity results for lattices of Lie groups. A \emph{flow} or a $G$-\emph{flow} is a continuous action $G\times X\to X$ of a topological group $G$ on a compact Hausdorff space $X$. A flow is \emph{minimal} if every orbit is dense; equivalently, if there is no non-trivial closed invariant subspace. One can reduce to the study of minimal flows since any flow contains a minimal one.
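As a purely illustrative aside (not used anywhere in the sequel), the hyperboloid model above lends itself to quick numerical experiments; the following Python sketch, in a truncation of $\H$ to signature $(1,n)$, checks the distance formula and that a $Q$-preserving map acts by isometries:
\begin{verbatim}
import numpy as np

n = 4                                    # truncate to signature (1, n)
J = np.diag([1.0] + [-1.0] * n)

def point(w):
    # lift w in R^n to the hyperboloid Q(x) = 1, x_0 > 0
    return np.concatenate(([np.sqrt(1 + w @ w)], w))

def dist(x, y):
    return np.arccosh(x @ J @ y)         # cosh d(x, y) = (x, y)

x = point(np.array([0.3, 0.0, 0.0, 0.0]))
y = point(np.array([0.0, 1.2, -0.5, 2.0]))

s = 0.7                                  # hyperbolic rotation in the (x_0, x_1) plane
B = np.eye(n + 1)
B[0, 0] = B[1, 1] = np.cosh(s)
B[0, 1] = B[1, 0] = np.sinh(s)
assert np.allclose(B.T @ J @ B, J)       # B preserves Q, i.e. B lies in O(1, n)
print(dist(x, y), dist(B @ x, B @ y))    # the two distances agree
\end{verbatim}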
Ellis proved that each topological group $G$ has a universal minimal flow $M(G)$ in the sense that for any other flow $X$, there is a continuous $G$-equivariant map $M(G)\to X$. In some sense, the study of all minimal flows is contained in the study of the universal one. See for example \cite{MR0474243} for a general reference. For infinite locally compact groups $G$, this universal minimal flow $M(G)$ is very large (it is non-metrizable for example) and we lack a handy description. On the other hand, some infinite dimensional\footnote{We use the terms "infinite dimensional groups" as a synonym of non-locally compact groups, as it is customary.} Polish groups have an easily describable universal minimal flow: it is reduced to a point. Such groups are called \emph{extremely amenable} and, equivalently, any continuous action on a compact space has a fixed point. One of the first groups known to be extremely amenable is the orthogonal group $O$ of a separable Hilbert space. It appears as a consequence of the concentration of measure phenomenon for Euclidean spheres in high dimensions \cite{MR708367,MR1900705}. For simple non-compact Lie groups $G$, a flow is provided by the homogeneous space $G/P$ where $P$ is a minimal parabolic subgroup. This flow is often called the \emph{Furstenberg boundary} of $G$. It is, moreover, strongly proximal: any probability measure on $G/P$ has a Dirac mass in the closure of its orbit in the compact space of probability measures on $G/P$ with its weak-* topology. This flow is actually the universal strongly proximal minimal flow of $G$: any other strongly proximal minimal flow is an equivariant continuous image of $G/P$ \cite[Chapter 6]{MR0474243}. For a finite dimensional hyperbolic space $\H^n$ and its isometry group $\PO(1,n)$, the parabolic subgroup $P$ is the stabilizer of a point at infinity and $G/P$ is the sphere at infinity $\partial \H^n$. In our infinite dimensional setting, the sphere at infinity $\partial \H$ is no longer compact and thus does not provide a flow. To any metric space $(X,d)$, one can associate a compactification of $X$ which is the completion of the uniform structure induced by the functions $x\mapsto d(x,y)-d(x,z)$ from $X$ to $[-d(y,z),d(y,z)]$. This is the \emph{horicompactification} of $X$. For both $\H$ and $\mathcal{H}$, the points of this horicompactification have been described explicitly in \cite{gutierrez2019metric,claassens2018horofunction}. Here we give a slightly different presentation emphasizing the topological structure and we make explicit how it is an $\Isom(X)$-flow. It is actually a closed subspace of a product space on which $\Isom(X)$ acts by permutation of the factors. A bit surprisingly, the horicompactifications of $\H$ and $\mathcal{H}$ are the same as topological spaces. Let $\mathbf{D}$ be the unit ball in $\mathcal{H}$ and $\overline{\mathbf{D}}$ its closure endowed with the weak topology. The \emph{frustum} (or merely the truncated cone) over $\overline{\mathbf{D}}$ is $$\mathbf{F}=\{(x,r)\in\overline{\mathbf D}\times[0,1], \ ||x||\leq r\}.$$ Observe that $\mathbf{F}$ is homeomorphic to the Hilbert cube thanks to Keller's theorem (see e.g. \cite[Theorem 12.37]{MR2766381}). \begin{thm} The horicompactifications of $\H$ and $\mathcal{H}$ are homeomorphic to the frustum $\mathbf{F}$. \end{thm} So $\mathbf{F}$ is a flow for both $\Isom(\H)$ and $\Isom(\mathcal{H})$ but there is an important difference between these two flows.
The projection map $\mathbf{F}\to\overline{\mathbf{D}}$ is a factor map between $G$-flows only in the case $G=\Isom(\H)$. The fact that the closed unit ball with the weak topology $\overline{\mathbf{D}}$ is a flow for $\Isom(\H)$ can be seen more geometrically using the Klein model. See Proposition~\ref{Gflow}. Pestov proved that $\Isom(\mathcal{H})$ is extremely amenable \cite[Theorem 6.15]{MR1900705} and since the action $\Isom(\mathcal{H})\curvearrowright\overline{\mathbf{D}}$ has no fixed point, it cannot be continuous. The action $\Isom(\H)\curvearrowright\overline{\mathbf{D}}$ is, moreover, strongly proximal and minimal, and it is indeed the universal strongly proximal minimal flow (or Furstenberg boundary) of $\Isom(\H)$. \begin{thm}\label{usp} The universal strongly proximal minimal flow of $\Isom(\H)$ is $\overline{\mathbf{D}}$.\end{thm} Let us emphasize that this flow has two orbits (the open ball and the unit sphere) and the sphere is comeager. Moreover, this is the universal proximal flow of $\Isom(\H)$ as well (Theorem~\ref{prox}). Once the action $\Isom(\H)\curvearrowright\overline{\mathbf{D}}$ has been proved to be continuous, the key point of Theorem~\ref{usp} is the fact that stabilizers of points at infinity of $\H$ are amenable subgroups. The simplicity of this flow allows us to investigate the structure of the universal minimal flow $M(G)$ of $G=\Isom(\H)$. By universality, there is a continuous $G$-map $\pi\colon M(G)\to\overline{\mathbf{D}}$ and for $x\in\overline{\mathbf{D}}$, $M_x=\pi^{-1}(\{x\})$ is a $G_x$-flow where $G_x$ is the stabilizer of $x$. So, the preimage by $\pi$ of the orbit of $x$ is a suspension of the action $G_x\curvearrowright M_x$. Since the sphere (identified with $\partial \H$ in the Klein model) is a comeager orbit, it is natural to try to understand what the suspension over this sphere $\partial \H$ is. Let $\xi\in \partial \H$ and let $G_\xi$ be its stabilizer. One has the following short exact sequence $$0\to H_\xi\to G_\xi\to\mathbf{R}\to0$$ where $H_\xi$ is the kernel of the Busemann homomorphism $\beta_\xi\colon G_\xi\to\mathbf{R}$. Actually $H_\xi$ is isomorphic to $\Isom(\mathcal{H})$ and so is extremely amenable as well (Lemma~\ref{isomext}). In particular, for any minimal $G_\xi$-flow $M$, $H_\xi$ acts trivially and thus $M$ is an $\mathbf{R}$-flow. Let $M(\mathbf{R})$ be the universal minimal flow of the reals. Let us denote by $S(M(\mathbf{R}))$ the completion of the suspension of the action $G_\xi\curvearrowright M(\mathbf{R})$ with respect to some natural uniform structure. This construction is detailed in Section \ref{umf}. \begin{thm} The universal minimal flow of $\Isom(\H)$ is the completed suspension $S(M(\mathbf{R}))$.\end{thm} As a corollary, we get that this flow is not metrizable, as is the case in finite dimension. For infinite dimensional Polish groups, universal minimal flows are often obtained as Samuel compactifications of homogeneous spaces $G/H$ where $H$ is an extremely amenable closed subgroup. For $\Isom(\H)$, maximal extremely amenable subgroups are stabilizers of points $G_x$ with $x\in\H$ and horospherical subgroups $H_\xi$ with $\xi\in\partial \H$. In both cases, the Samuel compactification $\Sa(G/H)$ is not minimal (Lemma \ref{not_min}) and thus the universal minimal flow of $\Isom(\H)$ is not of the form $\Sa(G/H)$.
So, whereas the Furstenberg boundary is covered by the quite common situation described in \cite[Theorem 7.5]{zucker2018maximally} for universal flows with a comeager orbit, this is not the case for the universal minimal flow. \subsection{Automatic continuity}Another surprising property that some Polish groups may have is automatic continuity. A topological group $G$ has \emph{automatic continuity} if any homomorphism $G\to H$, where $H$ is a separable Hausdorff topological group, is continuous. In \cite{AHL_2019__2__259_0}, Monod and Py proved that irreducible self-representations of $\Isom(\H)$ are automatically continuous and asked more generally whether automatic continuity holds for $\Isom(\H)$ in \cite[\S1.3]{AHL_2019__2__259_0}. We answer this question positively for the groups $\Isom(\H)$ and $\Isom(\mathcal{H})$. As is well known, automatic continuity implies uniqueness of the Polish topology and thus we can speak about the Polish topology of either group. \begin{thm}\label{autcont} The groups $\Isom(\H)$ and $\Isom(\mathcal{H})$ have automatic continuity.\end{thm} A common strategy to prove automatic continuity is to prove the existence of ample generics \cite{MR2535429}. Here, this is not possible since $\Isom(\H)$ has no dense conjugacy class (Theorem~\ref{nodenseconj}). With respect to this property, the group $\Isom(\H)$ looks like its finite dimensional siblings. We prove that both $\Isom(\H)$ and $\Isom(\mathcal{H})$ have the Steinhaus property and this property is well known to imply automatic continuity. Let $G$ be a topological group. A subset $W\subset G$ is $\sigma$-syndetic if $G$ is the union of countably many left translates of $W$. It is symmetric if $W=W^{-1}=\{w^{-1},\ w\in W\}$. The group $G$ has the \emph{Steinhaus property} if there is some natural integer $k$ such that for any $\sigma$-syndetic symmetric subset $W\subset G$, $W^k$ contains an open neighborhood of the identity. To prove this property for $\Isom(\H)$ and $\Isom(\mathcal{H})$, we rely on the same property for the orthogonal group, which was proved by Tsankov \cite{MR3080189}. The orthogonal group appears as a point stabilizer in both groups. While the groups $\Isom(\H)$ and $\Isom(\mathcal{H})$ exhibit strong geometric differences, the proof of the Steinhaus property is the same for both groups and relies on the use of the stabilizers of three non-aligned points. \subsection{Minimality of the topology} Automatic continuity means that the Polish topology is maximal (i.e., the finest) among separable group topologies on $G$. In the other direction, one can look for minimality properties of $G$. A Hausdorff topological group $(G,\tau)$ is said to be \emph{minimal} if there is no Hausdorff group topology on $G$ coarser than $\tau$. Since the 1970s, minimal topological groups have been an active subject and we refer to the survey \cite{MR3205486} for background and history. For example, connected semisimple Lie groups with finite center as well as unitary or orthogonal groups of separable Hilbert spaces are minimal groups. Stojanov \cite{zbMATH03838344} gave the first proof that the orthogonal group $O$ of the separable Hilbert space $\mathcal{H}$ is minimal with respect to the strong operator topology. This group can be thought of as the isometry group of the unit sphere (or the projective space) with the angular metric. The group $\Isom(\H)$ can be identified with the group of Möbius transformations of the unit sphere \cite{AHL_2019__2__259_0}, that is, the transformations that preserve angles infinitesimally.
Using some ideas borrowed from Stojanov, we prove the minimality of $\Isom(\H)$. \begin{thm}\label{topmin} The Polish group $\Isom(\H)$ is minimal.\end{thm} Since $\Isom(\H)$ is topologically simple, we get immediately that $\Isom(\H)$ is \emph{totally minimal}, that is, any continuous surjective homomorphism to a Hausdorff topological group is open. Combining minimality and automatic continuity, we get, moreover, the following characterization of the Polish topology. \begin{cor} The Polish topology on $\Isom(\H)$ is the unique separable Hausdorff group topology on $\Isom(\H)$.\end{cor} For the isometry group of the Hilbert space, the group structure is different (the orthogonal group $O$ is a quotient of $\Isom(\mathcal{H})$) and we prove minimality of the topology in a different way. \begin{thm}\label{topmineuc} The Polish group $\Isom(\mathcal{H})$ is minimal.\end{thm} As above, we get the uniqueness of the separable Hausdorff group topology on $\Isom(\mathcal{H})$. Let us observe that this answers the question whether the isometry group of a separable homogeneous complete metric space $X$ is minimal \cite[Question 4.33]{MR3205486} in the cases $X=\mathcal{H},\H$. \subsection{Dense conjugacy classes} For matrix groups, the spectrum is a continuous conjugacy invariant and this fact essentially proves that there is no dense conjugacy class. Actually, Wesolek proved that no locally compact second countable group has a dense conjugacy class \cite{MR3415606}. In infinite dimension, some Polish groups like $\mathcal{S}_\infty$, $\Aut(\mathbf{Q},<)$, $\Aut(\mathcal{R})$ where $\mathcal{R}$ is the random graph, or $\Aut(D_\infty)$ where $D_\infty$ is the universal Wa\.zewski dendrite \cite{JEP_2020__7__431_0}, have dense conjugacy classes and even a comeager one. The groups $\Isom(\H)$ and $\Isom(\mathcal{H})$ are in opposite positions with respect to the existence of dense conjugacy classes. \begin{thm} The Polish group $\Isom(\H)$ has no dense conjugacy class. \end{thm} \begin{thm} The Polish group $\Isom(\mathcal{H})$ has dense conjugacy classes. \end{thm} The explanation of this difference has a purely geometric origin. In a Euclidean space, bounded parts of hyperplanes can be approximated uniformly by subsets of spheres with very large radii, and this is not possible in hyperbolic spaces. Another way to express this approximation property is the fact that Euclidean hyperplanes coincide with horospheres whereas hyperplanes and horospheres are different objects in hyperbolic spaces. \subsection{Coarse geometry} This work should be followed by a second one about the \emph{coarse geometry} of $\Isom(\H)$. The study of the coarse geometry of topological groups was initiated by Rosendal \cite{rosendal2017coarse} after the initial work of Roe about the coarse geometry of metric spaces \cite{MR2007488}. One specific question that will be addressed is the existence of lattices. In this infinite dimensional setting, the usual definition for locally compact groups does not make sense and we generalize the notion of uniform lattice. \subsection{Acknowledgements} This paper greatly benefited from discussions with Lionel Nguyen Van Thé. Nicolas Monod shared his ideas about the shape of the universal minimal flow and this was a great help for the description obtained here. Todor Tsankov made comments about a previous version of this paper and asked whether the topology of $\Isom(\mathcal{H})$ is minimal. This led to Theorem~\ref{topmineuc}. I would like to thank them warmly.
\section{The infinite dimensional hyperbolic space and its boundary at infinity} \subsection{The hyperboloid and projective models} There is another model, closely related to the hyperboloid model described in the introduction. This is the projective model where $\H$ coincides with its image in the projective space $\PP(\mathcal{H})$. This way, one can identify $\H$ with the set of lines of $\mathcal{H}$ that are positive for $Q$, viewed as points of $\PP(\mathcal{H})$. In this model, the visual boundary $\partial \H$ is given by the $Q$-isotropic lines in $\PP(\mathcal{H})$. We denote by $\overline{\H}$ the union $\H\cup\partial \H$. The \emph{cone topology} on $\overline{\H}$ is the one obtained from the inverse system of bounded convex subsets \cite[Chapter II.8]{MR1744486}. In finite dimension, it provides a compactification of $\H$ but this is no longer the case in our infinite dimensional setting. In opposition to the weak topology described below, it can be thought of as \emph{the strong topology}, but it will play almost no role in this paper. \subsection{The ball model or Klein model}\label{Klein} Let $\mathcal{H}_-$ be the closed subspace $\{x\in\mathcal{H},\ x_0=0\}$ and let $\mathbf D$ be the unit ball of $\mathcal{H}_-$. To an element $x\in \mathbf D$, we associate the point $f(x)=\frac{e_0+x}{\sqrt{1-||x||^2}}\in \H$. This is a bijection between $\mathbf D$ and $\H$. It can be understood geometrically. The point $f(x)$ is the intersection of $\H$ with the line through the origin and the point $e_0+x$. The inverse map of $f$ can also be understood geometrically. If $y\in \H$, $f^{-1}(y)$ is given by the orthogonal projection on $\mathcal{H}_-$ of the intersection of the line through the origin and $y$ with the affine hyperplane $\{x\in\mathcal{H},\ x_0=1\}$. The metric induced by the norm on $\mathbf D$ and the pullback of the hyperbolic metric via $f$ induce the same Polish topology but not the same uniform structure (the hyperbolic metric is complete on $\mathbf{D}$ whereas the norm metric is not). When $\mathbf D$ is endowed with the pullback of the hyperbolic metric, geodesics are straight lines. This is the famous Klein model. In this case, the hyperbolic metric coincides with the Hilbert metric associated to the bounded convex subspace $\mathbf D\subset\mathcal{H}_-$. The identification between $\H$ and $\mathbf D$ induces an action of $\Isom(\H)$ on $\mathbf D$.
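As a quick sanity check, with the normalization of the hyperboloid model in mind (we assume $Q(x)=x_0^2-\|x_-\|^2$ for $x=x_0e_0+x_-$ with $x_-\in\mathcal{H}_-$, so that $Q(e_0+x)=1-\|x\|^2$ for $x\in\mathbf D$), one can verify that $f$ takes its values in $\H$ and make its inverse explicit:
$$Q(f(x))=\frac{Q(e_0+x)}{1-\|x\|^2}=1,\qquad f^{-1}(y)=\frac{y_-}{y_0}\quad\text{for }y=y_0e_0+y_-\in\H.$$
Indeed, $Q(y)=y_0^2-\|y_-\|^2=1$ and $y_0>0$ give $\|f^{-1}(y)\|^2=\frac{y_0^2-1}{y_0^2}<1$, so $f^{-1}(y)\in\mathbf D$.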
\begin{center} \begin{figure} \begin{tikzpicture}[scale=3] \coordinate (O) at (0,0); \draw[fill = black!30] (0,0.725) ellipse ({0.71} and {0.15}); \draw[fill = black!30] (0,0) ellipse ({0.71} and {0.15}); \draw (0.725,0) node[right] {$\mathbf{D}$}; \draw[densely dashed] (O) -- (1.33,1.33); \draw[densely dashed] (O) -- (-1.33,1.33); \draw[->] (O) -- (0,0.725); \draw (0,0.725) node[below right] {$e_0$}; \draw[domain=-1.7:1.7] plot (0.725*\x,{ 0.725*sqrt(1+\x*\x)}); \fill[bottom color= black!20, top color=black!55] (1.24,1.46) -- (-1.24,1.46)-- plot [domain=-1.7:1.7] (0.725*\x,{ 0.725*sqrt(1+\x*\x)}) -- cycle; \shadedraw[bottom color= black!30, top color=black!60,color=black] (0,1.46) ellipse (12.4mm and 2.4mm); \draw (-1.29,1.52) arc [start angle=-200, end angle = 160, x radius = 13.75mm, y radius = 3.15mm]; \filldraw (O) circle (.01)node[below]{$O$}; \draw ( .5,-.7) node[above] {$\mathcal{H}_-$} ; \draw (-1.5,0) -- (.5,-.8)--(2,0); \coordinate (x) at (-.65,1.05); \path[name path = proj] (-.65,1.05)--(O); \path[name path = hori] (-1,0.725)--(1,0.725); \filldraw (x) circle (.01); \draw (-.65,1.1) node[below left] {$f(x)$}; \def\sa{.64} \def\sb{.57} \coordinate (p) at (\sa*-.65,\sa*1.05); \coordinate (m) at (\sb*-.65,\sb*1.05); \filldraw[name path =p] (\sa*-.65,\sa*1.05) circle (.01); \draw (\sa*-.65,\sa); \draw (-.65,1.05)--(\sa*-.65,\sa*1.05); \draw (\sb*-.65,\sb*1.05)--(O); \draw (\sa*-.65,\sa*.94)--(\sa*-.65,0) node[below] {$x$}; \filldraw (\sa*-.65,0) circle (.01); \end{tikzpicture} \caption{The correspondence between the hyperboloid model and the Klein model.} \end{figure} \end{center} \subsection{The weak topology} A \emph{hyperplane} of $\overline{\H}$ is a non-empty intersection of $\overline{\H}$ with a linear hyperplane. A \emph{closed half-space} of $\overline{\H}$ is the intersection of $\overline{\H}$ and a linear closed half-space of the Hilbert space $\mathcal{H}$. An \emph{open half-space} of $\overline{\H}$ is the complement of a closed half-space. Let us endow $\overline{\H}$ with the coarsest topology such that closed half-spaces are closed. This topology $\mathscr{T}_c$ is introduced in \cite[Example 19]{MR2219304}, where it is proved that $\overline{\H}$ is a compact Hausdorff space. In the ball model $\overline{\mathbf D}$, a closed half-space corresponds (via $f^{-1}$) to the intersection of $\overline{\mathbf D}$ with some affine closed half-space of $\mathcal{H}_-$ and thus the topology $\mathscr{T}_c$ coincides with the weak topology of the closed unit ball of the Hilbert space $\mathcal{H}_-$. In particular, it is metrizable. Thus we will call $\mathscr{T}_c$ the \emph{weak topology} on $\overline{\H}$. \begin{rem} Since $\H$ is a subspace of $\mathcal{H}$, one can also endow it with the restriction of the weak topology on $\mathcal{H}$. Let us denote by $\mathscr{T}'$ this topology. Since $\cosh(d(x,y))=(x,y)=\langle x,Jy\rangle$, a sequence of $\H$ which is $\mathscr{T}'$-convergent in $\H$ is actually strongly convergent. \end{rem} \begin{lem} The collection of open half-spaces is a base of the weak topology on $\overline{\H}$. \end{lem} \begin{proof} By definition of the $\mathscr{T}_c$-topology, it suffices to see that any finite intersection of open half-spaces contains some open half-space. Let us use the ball model. Let $U_1,\dots,U_n$ be open half-spaces (which are the intersections of open affine half-spaces of $\mathcal{H}_-$ with $\overline{\mathbf{D}}$) with non-empty intersection in $\overline{\mathbf D}$.
Let $\xi$ be on the sphere $\partial \mathbf D$ and in the intersection $U_1\cap\dots\cap U_n$. Each $U_i$ has a boundary included in some closed affine hyperplane $H_i$. Let us denote $S_i=\partial \mathbf D\cap H_i$, which is a closed (for the strong topology) subspace of $\partial \mathbf D$ that does not contain $\xi$. In particular, there is $\alpha_i>0$ such that the ball around $\xi$ of radius $\alpha_i$ for the angular metric on $\partial \mathbf D$ is included in $U_i$. Thus for $\alpha=\inf_i \alpha_i$, the ball of radius $\alpha$ around $\xi$ is included in $U_1\cap\dots\cap U_n$. This spherical ball is exactly the intersection of some open half-space $U$ and $\partial \mathbf D$. In particular, $U$ is included in $U_1\cap\dots\cap U_n$. \end{proof} A bit counterintuitively, the substantial part of the closed unit ball, for the weak topology, is not the open unit ball but the unit sphere. So, stabilizers of points at infinity in $\Isom(\H)$ will play a more important role in the sequel than stabilizers of points in $\H$. \begin{lem}\label{lem_com} The sphere at infinity $\partial \H$ is comeager in $\overline{\H}$ for the weak topology.\end{lem} \begin{proof} One can write $\H$ as the countable union of closed balls around $e_0$ with integer radii. None of these balls contains an open half-space and thus they have empty interior. So $\H$ is meager. \end{proof} \begin{lem} The restriction of the weak topology to $\partial \H$ coincides with the cone topology. \end{lem} \begin{proof} A sequence of unit vectors that converges weakly to a unit vector actually converges strongly. \end{proof} \subsection{Horicompactifications} In this subsection, we recall a construction, originally due to Gromov in a slightly different form, of a compact space associated to any metric space. Let $(X,d)$ be some metric space with isometry group $G=\Isom(X)$ endowed with the pointwise convergence topology (recalled in Section \ref{secpolish}). For $x,y,z\in X$, let us define $$\varphi_{y,z}(x)=d(x,y)-d(x,z)\in[-d(y,z),d(y,z)]$$ and let $$X_2=\Pi_{y\neq z\in X}[-d(y,z),d(y,z)]$$ be the product space with the product topology. It is thus compact. The \emph{horicompactification} $\widehat{X}$ of $X$ is the closure of the image of $X$ in $X_2$ via the continuous map $$\varphi\colon x\mapsto\{\varphi_{y,z}(x)\}_{y\neq z}.$$ Let us observe that for any $g\in G$, $y\neq z$ and $x\in X$, $\varphi_{y,z}(gx)=\varphi_{g^{-1}y,g^{-1}z}(x)$. In particular, the map $\varphi$ is equivariant with respect to the action of $G$ by permutations of the indices in $X_2$. Let $f\colon Y\to Z$ be some continuous map between topological spaces. The map $f$ is said to be an \emph{embedding} if it is injective and induces a homeomorphism onto its image (with the induced topology from $Z$). Let us emphasize that we do not require the image to be open. If, moreover, the image of $f$ is dense and $Z$ is compact, we say that $f$ or $Z$ is a \emph{compactification} of $Y$. The following lemma justifies the name \emph{horicompactification}: it is an equivariant compactification of the space $X$ endowed with a weaker topology (namely the topology denoted by $\mathscr{T}_w$ in \cite[\S3.7]{MR2219304}). \begin{lem}\label{horoflow} The map $\varphi\colon X\to\widehat{X}$ is an injective continuous map.
The horicompactification $\widehat{X}$ is a $\Isom(X)$-flow which is metrizable as soon as $X$ is separable.\end{lem} \begin{proof} The continuity of $\varphi$ follows from the inequality $$|\varphi_{y,z}(x)-\varphi_{y,z}(x')|\leq2 d(x,x')$$ which actually shows that $\varphi$ is uniformly continuous. Injectivity of $\varphi$ follows from the fact that $|\varphi_{x,x'}(x)-\varphi_{x,x'}(x')|=2d(x,x')$. The triangle inequality implies that for any $x,y,z\in X$, \begin{equation}\label{lip}|\varphi_{y,z}(x)-\varphi_{y',z'}(x)|\leq d(y,y')+d(z,z').\end{equation} In particular, for any $\psi\in\widehat{X}$, $|\psi_{y,z}-\psi_{y',z'}|\leq d(y,y')+d(z,z')$. Thus, for any $\psi,\psi'\in\widehat{X}$, $g\in\Isom(X)$ and $y,z\in X$, \begin{align*} |(g\psi')_{y,z}-\psi_{y,z}|&\leq|\psi'_{g^{-1}y,g^{-1}z}-\psi'_{y,z}|+|\psi'_{y,z}-\psi_{y,z}|\\ &\leq d(gy,y)+d(gz,z)+|\psi'_{y,z}-\psi_{y,z}|. \end{align*} This shows the continuity of the action since the pointwise convergence topology on $\Isom(X)$ is a group topology. The metrizability follows from Equation \eqref{lip}, which shows that any $\psi\in\widehat{X}$ is completely determined by $(\psi_{y,z})_{y,z\in X_0}$ where $X_0$ is some dense countable subset of the metric space $X$. \end{proof} \begin{rem} This horicompactification is also known as the \emph{metric compactification} of $X$ \cite{MR2015055,gutierrez2019metric} and sometimes the space $\widehat{X}\setminus X$ is called the \emph{horofunction boundary}, for example in \cite{MR2456635}. Let us emphasize that Gromov originally defined a similar space in \cite{MR624814} but with the uniform convergence on bounded subsets. For proper metric spaces, this is equivalent to pointwise convergence but in our infinite dimensional context, the two notions of convergence are different. \end{rem} \begin{rem}\label{X_1} Often the horicompactification of a metric space is defined in a slightly different way. One fixes a point $x_0$ and considers the product $$X_1=\Pi_{y\neq x_0\in X}[-d(y,x_0),d(y,x_0)]$$ and the map $$\begin{matrix} X&\to& X_1\\ x&\mapsto&( \varphi_{y,x_0}(x))_{y\neq x_0} \end{matrix}$$ where $\varphi_{y,x_0}(x)=d(x,y)-d(x,x_0)$ as above. The fact that $\varphi_{y,z}=\varphi_{y,x_0}-\varphi_{z,x_0}$ shows that the closures of the images of $X$ in $X_1$ and $X_2$ are homeomorphic. A difference is the fact that $\varphi\colon X\to X_2$ is equivariant without the need to consider a quotient of the image. Moreover, the action on the image $\widehat{X}$ is maybe more explicit since it appears as a subshift of a generalized Bernoulli shift. \end{rem} For the separable Hilbert and hyperbolic spaces, explicit descriptions of the horofunction boundary are given in \cite{gutierrez2019metric} and \cite{claassens2018horofunction}. In the following two subsections we reformulate these descriptions to fit our objectives. We implicitly consider $\overline{\mathbf D}$ with its weak topology. \subsection{Horicompactification of the hyperbolic space} Let $\mathbf F$ be the frustum $$\{(x,r)\in\overline{\mathbf D}\times[0,1], \ ||x||\leq r\}$$ considered as a subset of $\overline{\mathbf D}\times[0,1]$ with the product topology. This is a compact metrizable space with a continuous projection $\pi\colon \mathbf F\to \overline{\mathbf D}$.
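Concretely, the fibers of $\pi$ can be read off directly from the definition of $\mathbf F$:
$$\pi^{-1}(\{x\})=\{x\}\times[\|x\|,1],\qquad x\in\overline{\mathbf D},$$
so the fiber over a point of the open ball $\mathbf D$ is a non-degenerate segment whereas the fiber over a point of the unit sphere is a single point. This is consistent with the description below: the points of $\widehat{\H}$ that do not come from $\overline{\mathbf D}$ are exactly those with $x\in\mathbf D$ and $r>\|x\|$.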
\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=3] \coordinate (O) at (0,0); \def\rx{0.71} \def\ry{0.15} \def\z{0.725} \draw[fill = black!50] (O) ellipse ({\rx} and {\ry}); \begin{scope} \path [name path = ellipse] (0,\z) ellipse ({\rx} and {\ry}); \path [name path = horizontal] (-\rx,\z-\ry*\ry/\z) -- (\rx,\z-\ry*\ry/\z); \path [name intersections = {of = ellipse and horizontal}]; \draw[bottom color= black!20, top color=black!70] (intersection-1) -- (0,0) -- (intersection-2) -- cycle; \draw[fill = black!60] (0,\z) ellipse ({\rx} and {\ry}); \end{scope} \filldraw (O) circle (.01) node[below] {$O$}; \draw (O) to (-0.71,0.71) ; \draw (-0.71,0.5) node[left] {$\mathbf{F}$}; \draw (O) -- (0.71,0.71); \draw[->] (0,\z)--(0,1) node[right] {$r$} ; \draw[dashed] (O)--(0,\z); \draw (\rx,-.1) node[right] {$\overline{\mathbf{D}}$}; \draw ( .5,-.7) node[above] {$\mathcal{H}$} ; \draw (-1.5,0) -- (.5,-.8)--(2,0); \draw[->] (.8,\z)--(.8,0) node[midway,right] {$\pi$}; \end{tikzpicture} \caption{The frustum $\mathbf{F}$.} \end{center} \end{figure} Following notations in \cite{claassens2018horofunction}, we denote by $i\colon \mathbf D\to\mathbf D_1$ the map given by $i(x)(y)=d(x,y)-d(x,0)$ where $d$ is the hyperbolic metric on $\mathbf{D}$. This is $(\varphi_{y,0}(x))_{y\neq0}$ in the above notations, i.e., this corresponds to the choice $x_0=0$ in Remark \ref{X_1}. It is proved in \cite[Theorem 3.3]{claassens2018horofunction} that any function in $\overline{i(\mathbf D)}$ is given by the formula, for $y\in\mathbf D$: \begin{equation}\label{hori}\xi_{x,r}(y)=\log\left(\frac{1-\langle x,y\rangle+ \sqrt{\left(1-\langle x,y\rangle\right)^2-\left(1-||y||^2\right)\left(1-r^2\right)}}{(1+r)\sqrt{1-||y||^2}}\right)\end{equation} where $(x,r)$ is a uniquely determined element of $\overline{\mathbf D}\times[0,1]$ such that $||x||\leq r$. Actually, $\varphi_{y,0}(x)$ coincides with $\xi_{x,r}(y)$ when $r=||x||$ and the Busemann function vanishing at 0 associated to $x\in \partial \mathbf D$ is $\xi_{x,1}$. The points that do not come from $\overline{\mathbf D}$ are thus the ones corresponding to $x\in\mathbf D$ and $r>||x||$. \begin{prop}\label{horihyp}The horicompactification $\widehat{\H}$ is homeomorphic to the frustum $\mathbf F$ (and the projection $\pi\colon \mathbf F\simeq\widehat{\H}\to \overline{\mathbf D}$ is a continuous $G$-equivariant map). \end{prop} \begin{proof} It follows from Formula \eqref{hori} that the map $h\colon (x,r)\mapsto \left(\xi_{x,r}(y)-\xi_{x,r}(z)\right)_{y,z}$ from $\mathbf F$ to $\widehat{\H}$ is a continuous bijection between compact Hausdorff spaces and thus a homeomorphism. The continuity of the projection $\pi\colon \widehat{\H}\to\overline{\mathbf D}$ is thus a direct consequence. The equivariance is a consequence of the construction of the map $f^{-1}\colon\H\to\mathbf D$. We detail the computation below. For $z\in\mathbf D\subset\mathcal{H}_-$, we denote by $\tilde{z}$ the point $e_0+z$. Let us denote by $\alpha(g)$ the action of $g\in\Isom(\H)$ on $\mathbf D$. More precisely, $\alpha(g)(x)=f^{-1}gf(x)$. Let us write $$ \xi_{x,r}(y)=\log\left(\frac{(\tilde x,\tilde y)+ \sqrt{(\tilde x,\tilde y)^2-Q(\tilde y)\left(1-r^2\right)}}{(1+r)\sqrt{Q(\tilde y)}}\right)$$ and observe that if we multiply $\tilde y$ by some positive constant, we get the same value. If we identify $g\in\Isom(\H)\simeq\PO(1,\infty)$ with an element of $\OO(1,\infty)$ which preserves the upper sheet of the hyperboloid, one has $(g\tilde y,e_0)\widetilde{\alpha(g)(y)}=g\tilde y$.
So, if we set $\lambda=(g\tilde y,e_0)^{-1}$, $\mu=(g^{-1}\tilde x,e_0)>1$ and $\rho=\sqrt{1-\frac{1-r^2}{\mu^2}}$, we have \begin{align*} \xi_{x,r}(\alpha(g)(y))&=\log\left(\frac{(\tilde x,\widetilde{\alpha(g)(y)})+ \sqrt{(\tilde x,\widetilde{\alpha(g)(y)})^2-Q\left(\widetilde{\alpha(g)(y)}\right)\left(1-r^2\right)}}{(1+r)\sqrt{Q\left(\widetilde{\alpha(g)(y)}\right)}}\right)\\ &=\log\left(\frac{(\tilde x,\lambda g\tilde y)+ \sqrt{(\tilde x,\lambda g\tilde y)^2-Q(\lambda g\tilde y)\left(1-r^2\right)}}{(1+r)\sqrt{Q(\lambda g\tilde y)}}\right)\\ &=\log\left(\frac{(\tilde x, g\tilde y)+ \sqrt{(\tilde x,g\tilde y)^2-Q( g\tilde y)\left(1-r^2\right)}}{(1+r)\sqrt{Q( g\tilde y)}}\right)\\ &=\log\left(\frac{(g^{-1}\tilde x,\tilde y)+ \sqrt{(g^{-1}\tilde x,\tilde y)^2-Q(\tilde y)\left(1-r^2\right)}}{(1+r)\sqrt{Q(\tilde y)}}\right)\\ &=\log\left(\frac{\left(\mu\widetilde{\alpha(g^{-1})(x)},\tilde y\right)+ \sqrt{\left(\mu\widetilde{\alpha(g^{-1})(x)},\tilde y\right)^2-Q(\tilde y)\left(1-r^2\right)}}{(1+r)\sqrt{Q(\tilde y)}}\right)\\ &=\log\left(\frac{\left(\widetilde{\alpha(g^{-1})(x)},\tilde y\right)+ \sqrt{\left(\widetilde{\alpha(g^{-1})(x)},\tilde y\right)^2-Q(\tilde y)\left(1-\rho^2\right)}}{(1+\rho)\sqrt{Q(\tilde y)}}\right)-\log\left(\frac{1+r}{\mu(1+\rho)}\right)\\ &=\xi_{\alpha(g^{-1})(x),\rho}(y)-\log\left(\frac{1+r}{\mu(1+\rho)}\right).\end{align*} The last term does not depend on $y$ and thus cancels in the differences $\xi_{x,r}(y)-\xi_{x,r}(z)$. This computation shows that $g^{-1}\cdot h(x,r)=h(\alpha(g^{-1})(x),\rho)$ and we have the following commutative diagram giving the equivariance. \begin{center} \begin{tikzcd} \widehat{\H} \arrow[r, "g^{-1}"] \arrow[d, "\pi\circ h^{-1}"] &\widehat{\H} \arrow[d, "\pi\circ h^{-1}"] \\ \overline{\mathbf D} \arrow[r,"\alpha(g^{-1})" ] &\overline{\mathbf D} \end{tikzcd} \end{center}\end{proof} \begin{rem} This horicompactification is a $G$-flow but not a minimal one since the sheet $r=1$ is $G$-invariant and homeomorphic to $\overline{\mathbf D}$ via $\pi$.\end{rem} \subsection{Horicompactification of the Hilbert space} Let us denote by $\sigma_\mathcal{H}$ the map $$\begin{matrix} \mathcal{H}&\to&\mathbf D\\ x&\mapsto&\frac{x}{\sqrt{1+||x||^2}} \end{matrix}$$ which can be understood geometrically in the following way. Let $\mathcal{H}'$ be the Hilbert space $\mathcal{H}\oplus\mathbf{R}$. We identify $x\in\mathcal{H}$ with $(x,1)\in\mathcal{H}'$ and $y\in\mathbf D$ with $(y,0)\in\mathcal{H}'$. In this way, $\sigma_\mathcal{H}$ is the composition of the stereographic projection on the unit sphere in $\mathcal{H}'$ centered at the origin and the vertical projection $\mathcal{H}'\to\mathcal{H}$.
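As a quick verification, $\sigma_\mathcal{H}$ indeed takes its values in $\mathbf D$ and its inverse (used in the computation below) is explicit:
$$\|\sigma_\mathcal{H}(x)\|^2=\frac{\|x\|^2}{1+\|x\|^2}<1,\qquad \sigma_\mathcal{H}^{-1}(y)=\frac{y}{\sqrt{1-\|y\|^2}},$$
since $1-\|\sigma_\mathcal{H}(x)\|^2=\frac{1}{1+\|x\|^2}$ and thus $\sigma_\mathcal{H}(x)/\sqrt{1-\|\sigma_\mathcal{H}(x)\|^2}=x$.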
\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=3] \coordinate (O) at (0,0); \def\rx{0.71} \def\ry{0.15} \def\z{0.725} \draw[fill = black!40] (O) ellipse ({\rx} and {\ry}); \filldraw (O) circle (.01) node[below] {$O$}; \draw[->] (0,\z)--(0,1) node[right] {$\mathbf{R}$} ; \draw[dashed] (O)--(0,\z); \draw (\rx,0) node[right] {$\overline{\mathbf{D}}$}; \filldraw[ball color=black!40,opacity=.5] (-\rx,0) arc [start angle=-180, end angle = 0, x radius = {\rx}, y radius = {\ry}]--(\rx,0) arc (0:180:\rx)--cycle ; \draw (\rx,0) arc (0:180:\rx) ; \draw ( .5,-.7) node[above] {$\mathcal{H}$} ; \draw (-1.5,0) -- (.5,-.8)--(2,0); \draw (-1.5,\z) -- (.5,-.8+\z)--(2,\z); \def\s{0.707} \draw (-\rx,\rx)--(-\s*\rx,\s*\rx); \draw[dashed] (O)--(-\s*\rx,\s*\rx); \draw[dashed] (-\s*\rx,0)--(-\s*\rx,\s*\rx); \draw[fill] (-\rx,\rx) circle (.01) node[above] {$(x,1)$}; \draw[fill] (-\s*\rx,0) circle (.01); \draw (-\s*\rx,-.1) node[below] {$\sigma_\mathcal{H}(x)$}; \end{tikzpicture} \caption{The geometric interpretation of the map $\sigma_\mathcal{H}$.} \end{center} \end{figure} \begin{prop}The horicompactification $\widehat{\mathcal{H}}$ is homeomorphic to the frustum $\mathbf F=\{(x,r)\in\overline{\mathbf D}\times[0,1], \ ||x||\leq r\}$. \end{prop} As before, we denote by $i\colon \mathcal{H}\to\mathcal{H}_1$ the map corresponding to the choice $x_0=0$ in Remark \ref{X_1}. More precisely, $i(x)(z)=||x-z||-||x||$. Since the inverse map $\sigma_\mathcal{H}^{-1}\colon \mathbf D\to\mathcal{H}$ is given by $\sigma_\mathcal{H}^{-1}(y)=\frac{y}{\sqrt{1-||y||^2}}$, a computation shows that $$i(\sigma_\mathcal{H}^{-1}(y))(z)=\sqrt{\frac{||y||^2}{1-||y||^2}-2\frac{\langle z,y\rangle}{\sqrt{1-||y||^2}}+||z||^2}-\frac{||y||}{\sqrt{1-||y||^2}}$$ for $y\in\mathbf D$ and $z\in\mathcal{H}$. Following \cite{gutierrez2019metric}, up to taking a subsequence of a sequence $(y_n)$ such that $y_n$ converges weakly to some $y\in\overline{\mathbf D}$ and $||y_n||$ converges to $r\in[0,1]$, one gets that any element of $\widehat{\mathcal{H}}$ is given by $$\xi_{y,r}(z)=\sqrt{\frac{r^2}{1-r^2}-2\frac{\langle z,y\rangle}{\sqrt{1-r^2}}+||z||^2}-\frac{r}{\sqrt{1-r^2}}$$ where $||y||\leq r<1$, and this formula collapses to $$\xi_{y,1}(z)=-\langle y,z\rangle$$ when $r=1$ (indeed, writing $t=(1-r^2)^{-1/2}$, one has $\sqrt{r^2t^2-2t\langle z,y\rangle+||z||^2}-rt=rt\left(\sqrt{1-\frac{2\langle z,y\rangle}{r^2t}+\frac{||z||^2}{r^2t^2}}-1\right)\longrightarrow-\langle z,y\rangle$ as $r\to1$). The formulas above show that the map $(x,r)\mapsto \xi_{x,r}$ is a continuous bijection between the compact Hausdorff spaces $\mathbf F$ and $\widehat{\mathcal{H}}$ and thus is a homeomorphism. \begin{rem} For $x=0$ and $r=1$, one gets the zero function and in particular a global fixed point for the action of $\Isom(\mathcal{H})$ on $\widehat{\mathcal{H}}$. It is tempting to try to understand how the action of $\Isom(\mathcal{H})$ on $\widehat{\mathcal{H}}$ translates to an action of $\Isom(\mathcal{H})$ on $\mathbf F\subset \overline{\mathbf D}\times [0,1]$. The action of the orthogonal group $O$ is simple to describe. For $g\in O$ and $(x,r)\in\mathbf F$, $g(x,r)=(g(x),r)$. For the translation group $(\mathcal{H},+)$, the situation is different. Let $\tau_v$ be the translation with vector $v\in\mathcal{H}$ then $$\tau_v\cdot(x,r)=\left(\frac{1}{\sqrt{1+\lambda^2}}\left(\frac{x}{\sqrt{1-r^2}}+v\right),\frac{\lambda}{\sqrt{1+\lambda^2}}\right)$$ where $\lambda=\sqrt{\frac{r^2}{1-r^2}+\|v\|^2-2\langle v,\frac{x}{\sqrt{1-r^2}}\rangle}$.
Whereas the hyperbolic and Hilbert horicompactifications are homeomorphic, one can observe that, contrary to the hyperbolic case, the projection $\mathbf F\to\overline{\mathbf D}$ does not induce an action of $\Isom(\mathcal{H})$ since the first component of $\tau_v\cdot(x,r)$ depends on the second one. \end{rem} \section{A Polish group}\label{secpolish} Let $\tau$ be the topology of pointwise convergence on $\Isom(\H)$, that is, the one induced by the pseudo-metrics $(g,h)\mapsto d(gx,hx)$ with $x\in \H$. This topology is Polish since $\H$ is separable. We call this topology ``the Polish topology''. This will be justified later, since we will prove there is a unique Polish group topology on $\Isom(\H)$. Let us recall that the strong operator topology on $\GL(\mathcal{H})$ (the set of all bounded invertible operators on $\mathcal{H}$) is given by the family of pseudo-metrics $(g,h)\mapsto ||g(x)-h(x)||$ where $x$ varies in $\mathcal{H}$. \begin{prop}\label{strong} The Polish topology coincides with the strong operator topology coming from the embedding $\Isom(\H)\leq \GL(\mathcal{H})$. \end{prop} \begin{proof} Let us embed $\H$ as $\{x\in \mathcal{H},\ Q(x)=1,\ x_0>0\}$. The hyperbolic metric and the metric induced by the norm of $\mathcal{H}$ give rise to the same topology on $\H$. Thus, the Polish topology is weaker than the strong operator topology. The converse holds because $\H$ is total in $\mathcal{H}$ and there is a bound on the operator norm. \end{proof} It is proved in \cite[Theorem 3.14]{duchesne2019representations} that the group $\Isom(\H)$ is topologically simple but not abstractly simple. Let $\H^n$ be the hyperbolic space of dimension $n>1$. A \emph{standard embedding} of $\Isom(\H^n)$ in $\Isom(\H)$ comes from a totally geodesic embedding $\varphi$ of $\H^n$ in $\H$. The action of $\Isom(\H^n)$ on $\H$ is such that $\varphi$ is equivariant and the action is trivial on the orthogonal of the image of $\varphi$. \begin{lem} The group $\Isom(\H)$ is the completion of the union of all standard embeddings of $\Isom(\H^n)$.\end{lem} \begin{proof}It is shown in \cite[Proposition 3.10]{duchesne2019representations} that for any $g\in\Isom(\H)$ and points $x_1,\dots,x_n\in \H$, there is a standard embedding $\Isom(\H^k)\hookrightarrow\Isom(\H)$ and $h\in\Isom(\H^k)$ such that $h(x_i)=g(x_i)$ for all $i$. \end{proof} \begin{defn} A \emph{symmetry} is a non-trivial involutive isometry of $\H$. \end{defn} Let us observe that a symmetry $\sigma$ has a non-empty fixed-point set $F_\sigma$ (simply because orbits are bounded and $\H$ is {\upshape CAT($0$)}\xspace). This set is a closed totally geodesic subspace. The differential $d_x\sigma$ of $\sigma$ at a fixed point $x\in F_\sigma$ is the orthogonal symmetry of $T_x\H$ (the tangent space at $x$) with $T_xF_\sigma$ as fixed-point set. For any totally geodesic subspace $E\subset \H$ (maybe reduced to a point), we define the symmetry $\sigma_E$ whose fixed-point set is exactly $E$. For a point $x\in \H$, there is a correspondence between orthogonal symmetries of the tangent space $T_x\H$ and symmetries of $\H$ fixing $x$. The Cartan--Dieudonné theorem tells us that any element of $\Isom(\H^n)$ is the product of at most $n+1$ symmetries with respect to hypersurfaces. If we allow more general symmetries and go to infinite dimensions, we get the following. Any two symmetries are conjugate in $\Isom(\H)$ if and only if the $\pm1$-eigenspaces of their differentials at fixed points have the same dimensions.
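To illustrate the conjugacy criterion in the two extreme cases: for $E=\{x\}$ a single point, the differential of $\sigma_E$ at $x$ is $-\Id$ on $T_x\H$, while for $E$ a totally geodesic hyperplane through $x$, it is the reflection fixing $T_xE$:
$$d_x\sigma_{\{x\}}=-\Id_{T_x\H},\qquad d_x\sigma_E=\Id_{T_xE}\oplus\left(-\Id_{(T_xE)^{\perp}}\right).$$
Hence all point symmetries are conjugate in $\Isom(\H)$ and all reflections through hyperplanes are conjugate as well, the $+1$-eigenspaces having respectively dimension $0$ and codimension $1$.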
\begin{lem} An isometry of $\H$ is a product of at most 5 symmetries.\end{lem} \begin{proof} It is well known that any element of the orthogonal group $O$ is a product of at most 4 symmetries. If $g\in\Isom(\H)$, choose $x\in \H$ and let $m$ be the mid-point of $[x,g(x)]$ and $\sigma_m$ be the symmetry at $m$. Then $\sigma_m\circ g$ fixes $x$ and the result follows. \end{proof} \subsection{Cartan decomposition} Let $\mathfrak{o}(1,\infty)=\mathfrak{k}\oplus\mathfrak{p}$ be the Cartan decomposition of the Lie algebra $\mathfrak{o}(1,\infty)\leq\L(\mathcal{H})$ associated to the point $e_0\in\H$ and let us define $O=\Stab(e_0)$ and $P=\exp( \mathfrak{p})$. Let us observe that $O$ is isomorphic to the orthogonal group of the separable Hilbert space $\mathcal{H}_-$. Let $\varphi_{e_0}\colon T_{e_0}\H\to \mathfrak{p}$ be the natural identification. When $\mathfrak{p}$ is endowed with the Hilbert-Schmidt metric and $T_{e_0}\H$ is endowed with its Riemannian metric, $\varphi_{e_0}$ is a linear isometry (up to a scalar multiplication) between separable Hilbert spaces. With this identification, for any $v\in T_{e_0}\H$ one has $\exp_{e_0}(v)=\exp(\varphi_{e_0}(v))e_0$ where the first exponential is the Riemannian one and the second one is the exponential of bounded operators. We refer to \cite{MR3044451} for details. Let us endow $O,P$ with the topology induced from the Polish topology on $G=\Isom(\H)$. With this topology $\exp\colon \mathfrak{p}\to P$ is a homeomorphism. We endow the product $O\times P$ with the product topology. \begin{prop}\label{Cartan} The following map is a homeomorphism. $$\begin{matrix} O\times P&\to& G\\ (k,p)&\mapsto& pk \end{matrix}$$ \end{prop} \begin{proof}Since $\H$ is a simply connected manifold of non-positive curvature, the exponential map $\exp_{e_0}\colon T_{e_0}\H\to \H$ is a homeomorphism. Let $\log_{e_0}\colon \H\to T_{e_0}\H$ be the inverse of $\exp_{e_0}$. For $g\in G$, let $p=\exp(\varphi_{e_0}(\log_{e_0}(ge_0)))$. By construction, $pe_0=ge_0$ and $p^{-1}g\in O$. So, the existence of the decomposition follows. Uniqueness follows from the fact that if $g=pk$ then $pe_0=ge_0$, and thus $p=\exp(\varphi_{e_0}(\log_{e_0}(ge_0)))$. Continuity is automatic since the Polish topology is a group topology. For the inverse, the map $g\mapsto p$ is continuous by composition of $\exp$, $\varphi_{e_0}$ and $\log_{e_0}$ and we conclude that $g\mapsto k$ is continuous because $k=p^{-1}g$. \end{proof} Since $O$ and $P$ are contractible, we obtain the following immediate consequence. \begin{cor}The group $G$ is contractible. \end{cor} As is well known (it is a particular and easy case of the Mazur--Ulam theorem), any isometry of a Hilbert space is affine. It follows that $\Isom(\mathcal{H})$ splits as $O\ltimes \mathcal{H}$ where $\mathcal{H}$ is identified with the group of translations and $O$ is the orthogonal group identified with the stabilizer of the origin. \begin{lem} The group isomorphism $\Isom(\mathcal{H})\simeq O\ltimes \mathcal{H}$ is, moreover, a homeomorphism between the pointwise convergence topology on $\Isom(\mathcal{H})$ and the product of the strong operator topology on $O$ and the strong topology on $\mathcal{H}$. \end{lem} \begin{proof} The isomorphism is $\varphi\colon(\rho,v)\in O\times\mathcal{H}\mapsto \tau_v\circ\rho$ where $\tau_v$ is the translation by $v\in\mathcal{H}$. The inverse is given by $g\mapsto(\tau_{-g(0)}\circ g,\ g(0))$. The continuity of $\varphi$ and its inverse is an easy consequence of the joint continuity of the addition in $\mathcal{H}$.
\end{proof} We also get the fact (as in finite dimension) that $\Isom(\mathcal{H})$ and $\Isom(\H)$ are homeomorphic but, of course, not isomorphic as groups. \section{Automatic continuity} Let $G$ be a topological group. A subset $W\subset G$ is $\sigma$-syndetic if $G$ is the union of countably many left translates of $W$. It is symmetric if $W=W^{-1}=\{w^{-1},\ w\in W\}$. The group $G$ has the \emph{Steinhaus property} if there is some natural integer $k$ such that for any $\sigma$-syndetic symmetric subset $W\subset G$, $W^k$ contains an open neighborhood of the identity. It is proved in \cite[Theorem 3]{MR3080189} that the orthogonal or unitary group of a real or complex Hilbert space of infinite countable dimension has the Steinhaus property (with $k=506$). This is a key fact for the following result. \begin{thm} The Polish groups $\Isom(\H)$ and $\Isom(\mathcal{H})$ have the Steinhaus property. \end{thm} \begin{proof} Let $X=\H$ or $\mathcal{H}$ and $G=\Isom(X)$. For any $x\in X$, $\Stab(x)$ with the induced topology is isomorphic (as a Polish group) to the orthogonal group $O$. Let $W$ be some symmetric $\sigma$-syndetic subset of $G$. By \cite[Lemma 4]{MR3080189}, $W^2\cap\Stab(p)$ is symmetric and $\sigma$-syndetic in $\Stab(p)$ for any $p \in X$. In particular, $W^{1012}$ contains an open neighborhood of the identity in $\Stab(p)$. Let us fix three distinct points $x,y,z\in X$ such that $z$ does not lie on the geodesic line through $x$ and $y$. For a point $p\in X$, we say that some $g\in G$ is a rotation at $p$ if $g$ fixes $p$ and its differential (which coincides with the linear part of $g$ if $X=\mathcal{H}$) at $p$ has a codimension 2 subspace of invariant vectors and acts as a standard rotation on the orthogonal plane. Thus, one can find finitely many points $p_1,\dots,p_n$ and $\varepsilon>0$ such that $$\{g\in\Stab(p),\ d(gp_i,p_i)<\varepsilon,\ \forall i\leq n\}\subset W^{1012}$$ for $p=x,y$ and $z$. In particular, there is $\theta_0>0$ such that for any rotation $\rho$ at $p=x,y,z$ with angle $\theta<\theta_0$, $d(\rho p_i,p_i)<\varepsilon/3$ for any $i\leq n$. In particular, for $\alpha>0$ small enough and any $u,v\in B(x,\alpha)$ at the same distance from $y$, there is $g\in W^{1012}\cap\Stab(y)$ such that $g(u)=v$, displacing the $p_i$'s by at most $\varepsilon/3$. Since $z$ does not belong to the geodesic through $x$ and $y$, the set of distances $d(\rho(x),y)$ contains an interval $(d(x,y)-\lambda,d(x,y)+\lambda)$ with $\lambda\in(0,\alpha)$ where $\rho$ is a rotation centered at $z$ of angle $\theta<\theta_0$ in the totally geodesic plane containing $x,y,z$. Now, let $g\in G$ such that $d(gx,x)<\lambda$ and $d(gp_i,p_i)<\varepsilon/3$ for $i\leq n$. From above, one can find a rotation $\rho_1\in W^{1012}\cap\Stab(z)$ with angle less than $\theta_0$ such that $d(\rho_1(gx),y)=d(x,y)$. Moreover, one can find a rotation $\rho_2\in W^{1012}\cap\Stab(y)$ with angle less than $\theta_0$ such that $\rho_2(\rho_1(gx))=x$. Now, $\rho_2\rho_1g$ moves the $p_i$'s by at most $3\times \varepsilon/3$ and thus belongs to $\Stab(x)\cap W^{1012}$. Finally, $g=\rho_1^{-1}\rho_2^{-1}(\rho_2\rho_1g)\in W^{3036}$ and the Steinhaus property is proved. \end{proof} As is standard for Polish groups, the Steinhaus property implies the automatic continuity property, that is, any homomorphism to a separable Hausdorff group $H$ is continuous \cite[Proposition 2]{MR2535429}. So, this proves Theorem~\ref{autcont} and implies that these groups have a unique Polish group topology.
\section{Amenable and extremely amenable subgroups} The possibilities for amenable groups acting on $\H$ are well understood thanks to \cite[Theorem 1.6]{MR2558883}. For this theorem, the finiteness of the telescopic dimension is required and for $\H$ the telescopic dimension is exactly 1 since $\H$ is Gromov-hyperbolic. \begin{prop}\label{prop:amen} Let $G$ be an amenable topological group acting continuously by isometries on $\H$. Then $G$ has a fixed point in $\overline{\H}$ or stabilizes a geodesic line in $\H$. In particular, there is a subgroup of index at most 2, fixing a point in $\overline{\H}$. \end{prop} Let us first observe that the Polish group $\Isom(\H)$ is not amenable since there is no fixed point in $\H$ nor in $\partial \H$ and no invariant geodesic line. It does have a continuous isometric action on a Hilbert space without fixed point since the distance function on $\H$ induces a kernel of conditionally negative type \cite[\S7.4.2]{MR1852148}. Thus it does not have property FH and a fortiori it does not have property T. For $x\in \H$, we denote by $\Stab(x)$ its stabilizer in $\Isom(\H)$. As observed previously, this group is isomorphic to the orthogonal group of a separable Hilbert space and since the latter is extremely amenable, we get immediately the following. \begin{lem}\label{Lem:amen}For any $x\in \H$, the Polish group $\Stab(x)$ is extremely amenable. \end{lem} Let $\xi\in \partial \H$, $x\mapsto \beta_\xi(x,x_0)$ the associated Busemann function vanishing at $x_0$ and $G_\xi$ its stabilizer in $\Isom(\H)$. The \emph{Busemann homomorphism} at $g\in G_\xi$, $\beta_\xi(g)$ is $\beta_\xi(gx_0,x_0)$, which does not depend on $x_0\in \H$. This defines a continuous surjective homomorphism $$\beta_\xi\colon G_\xi\to \mathbf{R}.$$ Let $H_\xi\leq G_\xi$ be the kernel of the Busemann homomorphism, which is the set-wise stabilizer of horospheres around $\xi$. \begin{lem}\label{isomext} The closed subgroup $H_\xi$ is isomorphic to $\Isom(\mathcal{H})$ as a Polish group. In particular, it is extremely amenable. \end{lem} \begin{proof} In the model of the hyperbolic space $\H$ described in \cite[\S2.2]{AHL_2019__2__259_0}, one sees that the hyperbolic metric on horospheres is a bijective function of the metric of an underlying Hilbert structure. So, any isometry preserving a horosphere induces an isometry of the Hilbert structure. Conversely, any isometry of this Hilbert structure can be extended uniquely as an element of $H_\xi$. See \cite[\S2.4]{AHL_2019__2__259_0} for the description of horospheres in this model. This gives a group isomorphism $H_\xi\to\Isom(\mathcal{H})$. It is continuous since the topology on $\Isom(\mathcal{H})$ comes from the pointwise convergence for points in a fixed horosphere centered at $\xi$. One can prove easily by geometric means that the inverse is continuous as well but the automatic continuity of $\Isom(\mathcal{H})$ is a handy shortcut. \end{proof} \begin{lem} Let $\xi\in\partial \H$. The group $G_\xi$ is a closed amenable subgroup of $\Isom(\H)$.\end{lem} \begin{proof} Let us prove first that $G_\xi$ is closed. Let $x\in \H$ and $y\neq x$ such that $\xi$ is one extremity of the geodesic through these points. Let $(g_n)$ be a sequence converging to $g\in\Isom(\H)$. By definition of the topology, $g_nx\to gx$ and $g_ny\to gy$. In particular, the geodesic $(g_nx,g_ny)$ converges uniformly on bounded subsets to $(gx,gy)$. So, if $g_n\in G_\xi$ then $g\in G_\xi$ as well because $\xi$ is an extremity of the geodesic line $(gx,gy)$. The group $G_\xi$ splits as a semi-direct product $H_\xi\rtimes \mathbf{R}$.
The elements of the group $\mathbf{R}$ can be realized as transvections along a fixed geodesic pointing to $\xi$. Since $\mathbf{R}$ is abelian, the amenability of $G_\xi$ follows from that of $H_\xi$. \end{proof} The following lemma is a standard fact about extremely amenable groups. \begin{lem}\label{ext_amen_loc} Any continuous homomorphism of an extremely amenable group to a locally compact group is trivial.\end{lem} \begin{proof} Let $A$ be some extremely amenable group and $L$ be some locally compact group. If there is a continuous homomorphism $A\to L$ then this induces a continuous action of $A$ on the Stone-\v{C}ech compactification $\beta L$ of $L$. By extreme amenability, $A$ fixes a point in $\beta L$ and since $L$ acts freely on $\beta L$, this means that the image of $A$ in $L$ is trivial. \end{proof} For a topological group $H$, we denote by $M(H)$ its universal minimal flow. \begin{lem}\label{Lem:factor}Let $E$ be a topological extension of a quotient group $Q$ by an extremely amenable normal subgroup $A$. Then any minimal action of $E$ on a compact space factors through a minimal $Q$-flow. In particular $M(E)\simeq M(Q)$. \end{lem} \begin{proof} Let $X$ be some minimal $E$-flow. By extreme amenability of $A$, this subgroup has a fixed point $x$. Since $A$ is normal, any point $y$ in the same orbit as $x$ is an $A$-fixed point. Actually if $y=gx$ with $g\in E$, $ay=g(g^{-1}ag)x=gx=y$ for any $a\in A$. By minimality of the action, the orbit of $x$ is dense and thus $A$ acts trivially on $X$ and the action factors through a $Q$-action. \end{proof} In the particular case where $G_\xi$ is the topological semi-direct product $H_\xi\rtimes\mathbf{R}$, we get the following identification of the universal minimal flow $M(G_\xi)$. \begin{prop} The universal minimal flow $M(G_\xi)$ is homeomorphic to $M(\mathbf{R})$.\end{prop} \begin{rem}\label{minR}The universal minimal flow $M(\mathbf{R})$ can be easily described from the Stone-\v{C}ech compactification of the integers $\beta\mathbf Z$. Actually, it is merely the suspension from $\mathbf Z$ to $\mathbf{R}$ of the extension to $\beta\mathbf Z$ of the shift map $n\mapsto n+1$. One may look at \cite{MR1357536} for details. Let us observe that this universal minimal space is not metrizable since $\beta\mathbf Z$ is not. \end{rem} A topological group $H$ is said to be \emph{strongly amenable} if any proximal minimal $H$-flow is trivial. For example, all abelian groups are strongly amenable \cite[\S II.4]{MR0474243}. \begin{cor}\label{cor:strongamen} The group $G_\xi$ is strongly amenable.\end{cor} \begin{proof} Let $X$ be some minimal proximal $G_\xi$-flow. By Lemma~\ref{Lem:factor}, this is a minimal proximal $\mathbf{R}$-flow as well and thus it is reduced to a point. \end{proof} \section{Universal strongly proximal minimal flow } Since $\mathcal{H}$ is separable, it is well known that the weak topology on the closed unit ball $\overline{\mathbf D}\simeq\overline{\H}$ is compact and metrizable. Let us recall that a flow is \emph{strongly proximal} if the closure (for the weak-* topology on the space of probability measures $\Prob(X)$) of every orbit in $\Prob(X)$ contains a Dirac mass. In the proof of the following proposition, we use angles $\angle_{p}(x,y)$ between points $x,y\in\overline{\H}$ at $p\in \H$. See \cite[I.2]{MR1744486} for a definition and basic properties of these angles. Let us observe that these angles coincide with the Riemannian angles of the tangent vectors of $[p,x]$ and $[p,y]$ in the tangent space $T_p\H$.
One can also define them in the following way: let $u,v$ be the initial vectors of the hyperbolic segments $[p,x]$ and $[p,y]$; then $\cos(\angle_p(x,y))=-(u,v)$ \cite[I.2]{MR1744486}\footnote{They use the opposite of $Q$.}. \begin{lem}\label{continuity_angle} Let $x,y$ be distinct points of $\H$ and $(x_n),(y_n)$ be sequences in $\H$ converging strongly to $x$ and $y$. Let us fix $r>0$. Then for any $\varepsilon>0$, there is $N$ such that for all $n\geq N$ and $z\in\overline{\H}\setminus B(x,r)$, $$|\angle_{x_n}(y_n,z)-\angle_x(y,z)|<\varepsilon.$$\end{lem} \begin{proof}Let $u_n$ and $v_n$ be the initial vectors at $x_n$ of the segments $[x_n,z]$ and $[x_n,y_n]$. So $\cos(\angle_{x_n}(z,y_n))=-(u_n,v_n)$. These initial vectors can be expressed as $u_n=\frac{z-(x_n,z)x_n}{(-Q(z-(x_n,z)x_n))^{1/2}}$ and $v_n=\frac{y_n-(x_n,y_n)x_n}{(-Q(y_n-(x_n,y_n)x_n))^{1/2}}$. Let us emphasize that $\|u_n\|$ is bounded by the supremum of the operator norms of the transvections $\tau_n$ from $e_0$ to $x_n$ because $\tau_n$ maps $e_0^{\bot}$ to $x_n^\bot$ and the preimage of $u_n$ by $\tau_n$ is a unit vector in $e_0^{\bot}$. The operator norm of $\tau_n$ is bounded above by $\cosh(d(e_0,x_n))$ and thus uniformly bounded. By homogeneity of the numerator and denominator, we may replace $z$ by the corresponding point $z'$ in $e_0+\overline{\mathbf D}$ (i.e., the point $z'$ in $\mathcal{H}$ collinear to $z$ such that $(e_0,z')=1$). Similarly, the initial vectors $u$ and $v$ of the segments $[x,z]$ and $[x,y]$ can be expressed as $u=\frac{z'-(x,z')x}{(-Q(z'-(x,z')x))^{1/2}}$ and $v=\frac{y-(x,y)x}{(-Q(y-(x,y)x))^{1/2}}$. By uniform continuity of the arccosine function, it suffices to prove that for any $\alpha>0$, there is $N$ such that for any $n\geq N$, $|(u_n,v_n)-(u,v)|<\alpha$. Let us write $$z'-(x,z')x-(z'-(x_n,z')x_n)=(x_n-x,z')x_n+(x,z')(x_n-x).$$ Now, since $(p,q)=\langle Jp,q\rangle\leq||p||\cdot||q||$ and $||z'||\leq 2$, \begin{equation}\label{ineqtruc}||z'-(x,z')x-(z'-(x_n,z')x_n)||\leq 2\left(||x_n||+||x||\right)||x_n-x||.\end{equation} Let us observe that $|Q(z'-(x_n,z')x_n)|$ and $|Q(z'-(x,z')x)|$ are bounded below by a constant depending only on $x$ and $r$ as soon as $d(x_n,x)<r/2$. Actually, it suffices to consider the case $x=e_0$ and in this case $|Q(z'-(x,z')x)|\geq \tanh(r)^2$. They are also bounded above independently of $z'$. So, together with Inequality \eqref{ineqtruc}, for any $\alpha_0>0$, one can find $N$ (independent of $z'$) such that $||u-u_n||<\alpha_0$ for any $n>N$. From the inequality \begin{align*} |(u_n,v_n)-(u,v)|&\leq |(u_n,v_n)-(u_n,v)|+|(u_n,v)-(u,v)|\\ &\leq |(u_n,v_n-v)|+|(u_n-u,v)|\\ &\leq ||u_n||\cdot||v_n-v||+||u_n-u||\cdot ||v||, \end{align*} the strong convergence $v_n\to v$ and the boundedness of $\|u_n\|$, the desired estimate follows. \end{proof} \begin{prop}\label{Gflow} The space $\overline{\H}$ is a metrizable strongly proximal minimal flow of $\Isom(\H)$ with 2 orbits and one of these orbits is comeager. \end{prop} \begin{proof} The main point is the continuity of the action. Since both $\Isom(\H)$ and $\overline{\H}$ are metrizable, it suffices to prove sequential continuity. Let $(x_n)$ be a sequence of points in $\overline{\H}$ converging to $x$ for the weak topology and $(g_n)$ be a sequence of isometries converging to $g\in \Isom(\H)$ for the Polish topology. Since the topology on $\Isom(\H)$ is a group topology, it suffices to deal with the case $g=\Id$. Let $U$ be some open half-space of $\overline{\H}$ containing $x$.
Let $x_0$ be the projection of $x$ on the closed half-space $C=\overline{\H}\setminus U$. If $x\in\partial \H$ then $x_0$ is the point of $C$ where the Busemann function associated to $x$ attains its minimum. Choose some hyperplane $W$ orthogonal to the geodesic line $L$ through $x$ and $x_0$ that separates $x$ from $x_0$, and let $U'$ be the open half-space of $\overline{\H}$ associated to $W$ that contains $x$. Let $y= L\cap W$. By invariance of $U'$, $x_0$ and $y$ under rotations around the geodesic line through $x_0$ and $y$, there is $\alpha<\pi/2$ such that for all $z\in \overline{U'}$, $\angle_{x_0}(y,z)\leq \alpha$. Actually, this $\alpha$ can be obtained by a compactness argument in a hyperbolic plane containing $x_0,y$ and $z$. For $n$ large enough, $x_n\in U'$ and thus $\angle_{x_0}(x,x_n)=\angle_{x_0}(y,x_n)\leq \alpha$. Since $g_nx_0\to x_0$ and $g_ny\to y$ (for the strong topology), by continuity of the angle (Lemma \ref{continuity_angle} with $r=d(x,y)$), for any $\varepsilon>0$, $n$ large enough and $z\in U'$, $$|\angle_{g_nx_0}(g_ny,z)-\angle_{x_0}(y,z)|<\varepsilon.$$ In particular for $z=g_nx_n$, $\angle_{x_0}(y,g_nx_n)\leq\angle_{g_nx_0}(g_ny,g_nx_n)+\varepsilon=\angle_{x_0}(y,x_n)+\varepsilon\leq\alpha+\varepsilon$. For $\varepsilon<1/2(\pi/2-\alpha)$, one gets that for $n$ large enough, $\angle_{x_0}(y,g_nx_n)\leq \pi/2-1/2(\pi/2-\alpha)$ and thus $g_nx_n\in U$. This proves the continuity of the action. The two orbits are $\H$ and $\partial \H$ which are both dense. The minimality follows and it is proved in Lemma~\ref{lem_com} that $\partial \H$ is comeager. Strong proximality follows from the fact that any proper closed subspace is contained in some closed half-space and any closed half-space can be sent inside any open half-space via some hyperbolic element of $\Isom(\H)$. \end{proof} Let us recall that a closed subgroup $H$ of a topological group $G$ is \emph{coprecompact} if $G/H$ is precompact for the uniformity coming from the right uniform structure on $G$. This means that the completion $\widehat{G/H}$ is compact. This is equivalent to the fact that for any open neighborhood $V$ of the identity, there is a finite subset $F\subset G$ such that $VFH=G$. In the remainder of this section, let us denote $G=\Isom(\H)$ and let $G_\xi$ be the stabilizer of $\xi\in\partial\H$. \begin{prop} The subgroup $G_\xi$ is coprecompact in $G$. \end{prop} \begin{proof} Let $V$ be some open neighborhood of the identity. By definition of the topology, it suffices to consider the case $$V=\{g\in G,\ \forall i=1,\dots,n,\ d(gx_i,x_i)<\varepsilon\}$$ where $x_1,\dots,x_n\in \H$ and $\varepsilon>0$. Let $\H_0$ be the minimal totally geodesic subspace containing the $x_i$'s and let $\H_1$ be some totally geodesic subspace containing $\H_0$ and $\xi$ with $\dim(\H_1)=\dim(\H_0)+1$. Let $c$ be the circumcenter of the $x_i$'s and let $\rho>0$ be the associated circumradius. By compactness of the stabilizer of $c$ in $\Isom(\H_1)$ for the uniform convergence on compact subsets (i.e. the Lie group topology), one can find finitely many elements of $\Stab(c)$ in $\Isom(\H_1)$ such that any other element of $\Stab(c)$ lies at distance at most $\varepsilon$ from one of them for the metric $d(g,h)=\sup\{d(gx,hx),\ x\in B(c,\rho)\}$. Let us denote by $F$ the image of these elements under the standard embedding $\Isom(\H_1)\to G$. Let $g\in G$. The group $G_\xi$ acts transitively on pairs $(x,\H')$ where $x$ is a point in the totally geodesic subspace $\H'$ of dimension $\dim(\H_0)+1$ such that $\xi\in\partial \H'$.
Let $\H_1'$ be some totally geodesic subspace of dimension $\dim(\H_0)+1$ containing $g^{-1}(\H_0)$ and with $\xi$ in its boundary. So one can find $h\in G_\xi$ such that $h(g^{-1}(\H_0))\subset\H_1$ and $h(g^{-1}(c))=c$. Now, the restriction of $h\circ g^{-1}$ to $\H_0$ coincides with some element of $\Isom(\H_1)$ fixing $c$ and thus there is $f\in F$ such that the restrictions of $h\circ g^{-1}$ and $f^{-1}$ coincide up to $\varepsilon$ on $B(c,\rho)\cap \H_0$. In particular, if we set $v^{-1}=f\circ h\circ g^{-1}$ then $v^{-1}\in V$ and thus $v\in V$. So $g=vfh\in VFG_\xi$. \end{proof} \begin{thm}\label{strongprox} The universal strongly proximal minimal flow of $G$ is $\overline{\H}\simeq\widehat{G/G_\xi}$. \end{thm} \begin{proof} Let $X$ be some strongly proximal minimal flow. By amenability, $G_\xi$ fixes a point $x$. The orbit map $g\mapsto gx$ is uniformly continuous and thus induces a continuous $G$-map $\widehat{G/G_\xi}\to X$. It is surjective by minimality of $X$. This proves that $\widehat{G/G_\xi}$ is universal among strongly proximal minimal $G$-flows. Since $G$ acts continuously on $\overline{\H}$, we have a continuous map $\widehat{G/G_\xi}\to \overline{\H}$. Let us prove that the inverse of the bijection $G/G_\xi\to \partial\H$ is uniformly continuous by showing that the image of any Cauchy sequence is a Cauchy sequence (this is equivalent since the spaces are precompact \cite{MR603371}). Let $(g_n)$ be a sequence of elements in $G$ such that $g_n\xi$ converges in $\overline{\H}$ for the weak topology. We aim to show that $(g_nG_\xi)$ is Cauchy in $\widehat{G/G_\xi}$. It suffices to prove that for $x_1,\dots,x_k\in \H$, $\varepsilon>0$ and $V=\{v\in G,\ d(vx_i,x_i)<\varepsilon,\ \forall i=1,\dots,k\}$, there is $N\in\mathbf{N}$ such that for $n,m\geq N$ there is $v_{n,m}\in V$ such that $g_mG_\xi=v_{n,m}g_nG_\xi$, i.e. $g_m\xi=v_{n,m}g_n\xi$. If $g_n\xi$ converges to some point in $\H$, we define $x_0$ to be this limit point. Otherwise, we define $x_0$ to be any point in $\H$. Since $G_\xi$ acts transitively on $\H$, we may and will assume that $g_n$ fixes $x_0$ for any $n\in\mathbf{N}$. Let us define $v_{n,m}$ to be the rotation centered at $x_0$ such that $v_{n,m}g_n\xi=g_m\xi$. If $g_n\xi$ converges in $\partial \H$ then $g_n\xi$ converges in the cone topology and the angle of $v_{n,m}$ goes to 0 when $n,m\to\infty$. So $v_{n,m}$ converges uniformly to the identity on bounded subsets and $v_{n,m}\in V$ for $n,m$ large enough. In the other case, the angle between the geodesic segment $[x_0,x_i]$ and the geodesic ray $[x_0,g_n\xi)$ goes to $\pi/2$ because $x_0$ is the limit point of $g_n\xi$. Thus, the angle between $[x_0,x_i]$ and the totally geodesic plane containing the geodesic rays $[x_0,g_n\xi)$ and $[x_0,g_m\xi)$ goes to $\pi/2$ for $n,m\to\infty$. In particular, $d(v_{n,m}x_i,x_i)\to0$ for $n,m\to\infty$. Thus $v_{n,m}\in V$ for $n,m$ large enough. \end{proof} Theorem~\ref{usp} is then a consequence of the homeomorphism $\overline{\mathbf D}\simeq\overline{\H}$ where the two spaces are endowed with the weak topologies. \begin{thm}\label{prox} The universal proximal minimal flow of $G$ is $\overline{\H}\simeq \widehat{G/G_\xi}$. \end{thm} \begin{proof} The flow $\widehat{G/G_\xi}$ is a minimal proximal $G$-flow. The universal property follows from the fact that $G_\xi$ is strongly amenable (Corollary~\ref{cor:strongamen}).\end{proof} \begin{rem} One can deduce Theorem~\ref{prox} from Theorem~\ref{strongprox} thanks to \cite[Theorem 1.7]{MR3509926} as well.
\end{rem} \section{The universal minimal flow}\label{umf} Let us denote by $M(G)$ the universal minimal flow of the Polish group $G=\Isom(\H)$. The aim of this section is to describe this flow as the completion of some suspension. This suspension will be defined in two ways. The first definition will be more concrete but relies on some choices and a cocycle. The second one will be more pleasant but more abstract. \begin{prop}\label{prop:decomp} There is a continuous $G$-equivariant map $\pi\colon M(G)\to\overline{\H}$ such that for any $\xi\in\partial \H$, $M_\xi=\pi^{-1}(\{\xi\})$ is a minimal $G_\xi$-flow and thus a minimal $\mathbf{R}$-flow. \end{prop} \begin{proof} The existence of the map $\pi$ follows directly from the definition of the universal minimal flow. For $\xi\in \partial \H$, $M_\xi=\pi^{-1}(\{\xi\})$ is a closed $G_\xi$-invariant subspace. Let $N$ be some closed minimal $G_\xi$-invariant subspace of $M_\xi$. Let $m\in N$ and $y\in M_\xi$. By minimality of the action of $G$ on $M(G)$, $y\in\overline{G\cdot m}$, so there is a net $(g_\alpha)$ in $G$ such that $g_\alpha m$ converges to $y$. By continuity of $\pi$, $g_\alpha G_\xi$ converges to $G_\xi$ in $G/G_\xi$. So there is a net $(g'_\alpha)$ with $g'_\alpha\in G_\xi$ such that $g_\alpha g'_\alpha$ converges to the identity in $G$. By compactness of $N$, there is a subnet $(g'_\beta)$ such that $(g'_\beta)^{-1} m$ converges to $m'\in N$. Now $g_\beta m=(g_\beta g'_\beta)(g'_\beta)^{-1} m$ converges to $m'$ and thus $y=m'\in N$. So $M_\xi$ is a minimal $G_\xi$-flow. \end{proof} \begin{defn} Let $(X,\mathcal{U}_X)$ be a uniform space and $G$ a topological group with its right uniformity. An action by uniform isomorphisms is \emph{bounded} if for any $U\in \mathcal{U}_X$ there is an open neighborhood $V$ of the identity in $G$ such that for any $x\in X$, $V\cdot x\subset U(x)$. \end{defn} Let us emphasize that a bounded action is continuous and any continuous action on a compact space is bounded \cite[Remarks 2.17]{MR1900705}. We include a proof since we haven't found one elsewhere. \begin{lem} Let $X$ be a compact space with its unique compatible uniform structure. Any continuous action $G\times X\to X$ is bounded. \end{lem} \begin{proof} Let $U$ be some entourage in $X$. By continuity of the action, for any $x\in X$, there is an identity neighborhood $W_x$ in $G$ and an entourage $V_x\subset U$ such that for any $g\in W_x$ and $y\in V_x(x)$, $g(y)\in U(x)$. By compactness, there are $x_1,\dots,x_n$ such that $V_{x_1}(x_1)\cup\dots\cup V_{x_n}(x_n)=X$. Let us define $W=W_{x_1}\cap\dots\cap W_{x_n}$. For any $x\in X$, there is $x_i$ such that $(x,x_i)\in V_{x_i}$ and thus, for $g\in W$, $(gx,x_i)\in U$. So $(gx,x)\in U^2$. \end{proof} For a continuous action $G\curvearrowright X$ on a uniform space, it is natural to ask when the action extends to the completion $\overline{X}$ of $X$. If $X$ is precompact, boundedness of the action is necessary, since a continuous action on the compact completion is bounded. Boundedness is actually a sufficient condition. \begin{lem}\label{bounded} Let $X$ be a uniform space and let $G$ be a topological group with a bounded action on $X$ by uniform isomorphisms. Then this action extends to a continuous $G$-action on $\overline{X}$. \end{lem} \begin{proof} Each element of $G$ is a uniform isomorphism of $X$ and thus extends uniquely to a uniform isomorphism of $\overline{X}$. So we get an action of $G$ by uniform isomorphisms. By definition of the uniform structure on the completion, the extended action is bounded as well and thus continuous.
\end{proof} Let $M$ be some minimal $\mathbf{R}$-flow with a free orbit (for example, the universal minimal flow $M(\mathbf{R})$). A free orbit can be identified with the group $\mathbf{R}$, and on it the action $\mathbf{R}\curvearrowright M$ extends the action of $\mathbf{R}$ on itself by addition. So we denote this action additively. We aim to describe the universal minimal $G$-flow as some $G$-equivariant compactification of the homogeneous space $G/H_{\xi}$ where $\xi\in\partial \H$. Let us denote by $\pi\colon G/H_\xi\to G/G_\xi$ the quotient map and by $\beta\colon G/H_\xi\to M$ the map $gH_\xi\mapsto\beta_{g\xi}(gx_0,x_0)$ (viewed in $M$ via the identification of the free orbit with $\mathbf{R}$), which is well defined because $H_\xi$ is exactly the stabilizer of the Busemann function associated to $\xi$. Actually, for $h\in H_\xi$, $\beta_{gh\xi}(ghx_0,x_0)=\beta_{gh\xi}(ghx_0,gx_0)+\beta_{gh\xi}(gx_0,x_0)=\beta_\xi(hx_0,x_0)+\beta_{g\xi}(gx_0,x_0)=\beta_{g\xi}(gx_0,x_0)$.\\ We denote by $\mathcal{U}$ the smallest uniform structure on $G/H_\xi$ making $\pi$ and $\beta$ uniformly continuous maps and let $\overline{G/H_\xi}$ be its completion with respect to $\mathcal{U}$. By definition, $\beta$ and $\pi$ extend to uniformly continuous maps on $\overline{G/H_\xi}$. \begin{rem} The map $\beta\times \pi\colon G/H_\xi\to M\times G/G_\xi$ is injective and uniformly continuous with dense image. So, the completion $\overline{G/H_\xi}$ is isomorphic to $M\times \widehat{G/G_\xi}$ as a uniform space ($\widehat{G/G_\xi}$ is the completion of $G/G_\xi$ with respect to the right uniformity). \end{rem} \begin{prop} The uniform space $\overline{G/H_\xi}\simeq M\times \widehat{G/G_\xi}$ is a minimal $G$-flow. \end{prop} \begin{proof} It suffices to prove that $G$ acts continuously and the action is minimal. Compactness is immediate. Let us start by observing that $\pi$ is $G$-equivariant by definition and \begin{equation}\label{cocycle} \beta(hgH_\xi)=\beta(gH_\xi)+c(h,g\xi) \end{equation} where $c$ is the cocycle $c\colon G\times \partial \H\to\mathbf{R}$, $c(h,\eta)=\beta_{h\eta}(hx_0,x_0)=\beta_\eta(x_0,h^{-1}(x_0))$. The cocycle relation is $c(gh,\eta)=c(g,h\eta)+c(h,\eta)$; indeed, $c(gh,\eta)=\beta_{gh\eta}(ghx_0,x_0)=\beta_{gh\eta}(ghx_0,gx_0)+\beta_{gh\eta}(gx_0,x_0)=c(h,\eta)+c(g,h\eta)$. Let us observe that the restriction of $\beta$ to $G_\xi$ coincides with the Busemann homomorphism $\beta_\xi$ defined above. In particular $\beta(H_\xi)=0\in \mathbf{R}$. The action on $M\times G/G_\xi$ is given by the following formula: $$g(m,\eta)=(m+c(g,\eta),g\eta).$$ The Equation~\eqref{cocycle} shows that the map $\beta\times\pi$ is $G$-equivariant. Let us show that the action by left multiplications $G\curvearrowright G/H_{\xi}$ extends to a continuous $G$-action on its completion. Thanks to Lemma~\ref{bounded}, it suffices to prove that $G$ acts boundedly by uniform isomorphisms on $M\times G/G_\xi$. By compactness, we already know that the actions $\mathbf{R}\curvearrowright M$ and $G\curvearrowright \widehat{G/G_\xi}$ are bounded. In particular, for any entourage $V$ in $M$, there is $\varepsilon>0$ such that for any $m\in M$ and $r\in \mathbf{R}$, if $|r|<\varepsilon$ then $m+r\in V(m)$. For any compact interval $I\subset \mathbf{R}$, the restriction of the action $\mathbf{R}\times M\to M$ to $I\times M$ is uniformly continuous and in particular, for any entourage $W$ in $M$, there is an entourage $V$ such that for any $r\in I$ and $(m,n)\in V$, $(m+r,n+r)\in W$. Since $c(g,\eta)=\beta_\eta(x_0,g^{-1}x_0)$, $|c(g,\eta)|\leq d(x_0,gx_0)$. Let us fix $(m,\eta)\in M\times G/G_\xi$.
For any $\varepsilon>0$, if $g$ lies in the neighborhood of the identity $\{g\in G,\ d(gx_0,x_0)<\varepsilon\}$, then $|c(g,\eta)|<\varepsilon$ for every $\eta$, and thus the action $G\curvearrowright M\times G/G_\xi$ is bounded. Let us prove minimality. For $x,y\in\overline{G/H_\xi}$, we aim to show that $y\in\overline{Gx}$. First assume that $x,y\in\pi^{-1}(G/G_\xi)$. Since $G/G_\xi$ is a homogeneous $G$-space and $\pi$ is equivariant, we may assume that $\pi(x)=\pi(y)=G_\xi$. Let $(r_\alpha)$ be a net of real numbers such that $ \beta(x)+r_\alpha$ converges to $\beta(y)$ (such a net exists by minimality of the action $\mathbf{R}\curvearrowright M$). Let $(g_\alpha)$ be a net of transvections along a geodesic line with $\xi$ in its boundary at infinity such that $\beta_\xi(g_\alpha x_0,x_0)=r_\alpha$ for all $\alpha$. So $\beta(g_\alpha x)= \beta(x)+r_\alpha\to\beta(y)$ and thus $g_\alpha x\to y$. Now assume that $y\in\pi^{-1}\left(\widehat{G/G_\xi}\setminus G/G_\xi\right)$ and $x\in\pi^{-1}(G/G_\xi)$. By the above argument, it suffices to deal with the case where $\beta(x)=\beta(y)$. Let $\rho_n$ be a sequence of rotations centered at $x_0$ such that $\rho_n(\pi(x))\to\pi(y)$. Since for any $n$ and any $\eta$, $c(\rho_n,\eta)=0$, one has that $\beta(\rho_n x)=\beta(x)$ for any $n$ and thus $\rho_nx\to y$. Thus, we have shown that any point $x\in \pi^{-1}(G/G_\xi)$ has a dense orbit. So it remains to show that for any $x\in \pi^{-1}\left(\widehat{G/G_\xi}\setminus G/G_\xi\right)$, there is $y\in \overline{Gx}$ with $\pi(y)\in G/G_\xi$. It suffices to consider some sequence $g_n$ such that $g_n\pi(x)$ converges to some point in $G/G_\xi$ and extract a subnet $g_\alpha$ to guarantee that $\beta(g_\alpha x)$ converges as well. \end{proof} \begin{rem} The cocycle $c\colon G\times G/G_\xi\to\mathbf{R}$ extends to a cocycle $\overline{c}\colon G\times \widehat{G/G_\xi}\to\mathbf{R}$ via the formula $\overline{c}(g,x)=\xi_{\hat x,1}(gx_0)-\xi_{\hat x,1}(x_0)$ where $\xi_{\hat x,1}$ is defined in Equation~\eqref{hori} and $\hat x\in\mathbf D$ is the point corresponding to $x\in\overline\H\simeq\widehat{G/G_\xi}$. \end{rem} Now, let us present the suspension as a quotient. On the space $G\times M$, we consider the product uniform structure given by the right uniformity on $G$ and the unique uniform structure compatible with the topology on $M$. The group $G_\xi$ acts on this space by $h\cdot(g,m)=(gh^{-1},hm)$. We denote by $\sim$ the equivalence relation induced by this action. Let $R=\{(x,y),\ x\sim y\}\subset (G\times M)^2$. The equivalence relation is \emph{weakly compatible} with the uniform structure if for any entourage $D$, there is an entourage $D'$ such that $D'\circ R\circ D'\subseteq R\circ D\circ R$ (see \cite[Condition 2.13]{MR1069947}). \begin{lem} The equivalence relation $\sim$ is weakly compatible with the uniform structure on $G\times M$.\end{lem} Thanks to this weak compatibility, the quotient uniform structure on $\left(G\times M\right)/G_\xi$ is well defined. Moreover, if $Z$ is any uniform space, a function $f\colon \left(G\times M\right)/G_\xi\to Z$ is uniformly continuous if and only if $f\circ p\colon G\times M\to Z$ is uniformly continuous, where $p\colon G\times M\to \left(G\times M\right)/G_\xi$ is the projection map. \begin{proof} Let $U$ be some neighborhood of the identity in $G$ (thought of as the entourage $\{(g,h),\ gh^{-1}\in U\}$ in $G$) and $V$ be some entourage of $M$. Let $D$ be the product entourage.
Then $R\circ D\circ R$ is $$\left\{\left((g,m),(ug(hh')^{-1},n)\right),\ g\in G, h,h'\in G_\xi, u\in U, (h'm,h^{-1}n)\in V\right\}.$$ Similarly, for $U', V'$ and $D'$ the associated product entourage, $D'\circ R\circ D'$ is $$\bigcup_{n'\in M}\left\{\left((g,m),(u'ugh^{-1}, n)\right),\ g\in G,\ (m,n')\in V',\ (n',hn)\in V',\ u,u'\in U', h\in G_{\xi}\right\}.$$ So if one chooses $U'$ and $V'$ such that $(U')^2\subseteq U$ and $(V')^2\subseteq V$, one has $D'\circ R\circ D'\subseteq R\circ D\circ R$ (it suffices to take $h'=e$ in $R\circ D\circ R$ and replace $h$ by $h^{-1}$). \end{proof} The left multiplication on the first factor $G\curvearrowright G\times M$ commutes with the action of $G_\xi$ and thus gives a continuous action by uniform isomorphisms on the quotient space $\left(G\times M\right)/G_\xi$. The quotient space $\left(G\times M\right)/G_\xi$ is the classical \emph{suspension} of the action $G_\xi\curvearrowright M$. See \cite[I.1.3.j]{MR1928517} for generalities about suspensions. We call its completion the \emph{completed suspension} and denote it by $S(M)$. \begin{prop} The space $S(M)$ is compact and isomorphic to $\widehat{G/G_\xi}\times M$ as $G$-flow. \end{prop} \begin{proof} Let us show that the suspension is precompact. That is, for any entourage $D$, there are finitely many $x_i\in S(M)$ such that $S(M)=\cup_i D(x_i)$. Basic entourages of $S(M)$ are given by $p\times p(U\times V)$ where $U$ is an open neighborhood of the identity and $V$ is an entourage in $M$. Let us fix $U$ and $V$. By coprecompactness of $G_\xi$ in $G$, there are $g_1,\dots, g_n$ such that $\bigcup_{i=1}^n Ug_iG_\xi=G$ and by compactness of $M$, one can find $m_1,\dots,m_k$ such that $M=\cup_{j=1}^kV(m_j)$. We claim that $\cup_{i,j} p(Ug_i\times V(m_j))=S(M)$, which proves the precompactness of $S(M)$. Any element of $S(M)$ is some $p(g,m)$ for some $(g,m)\in G\times M$. There is $u\in U$, $i\leq n$ and $h\in G_\xi$ such that $g=ug_ih^{-1}$. Since $h\left(\cup_j V(m_j)\right)=M$, there is $j$ such that $m\in h(V(m_j))$ and thus $p(g,m)\in p(Ug_i\times V(m_j))$. Let us prove the boundedness of the action $G\curvearrowright \left(G\times M\right)/G_\xi$ and thus obtain an extended action $G\curvearrowright S(M)$. Let us fix a basic entourage $D=p\times p(U\times V)$. For $u\in U$ and $p(g,m)\in (G\times M)/G_\xi$, $up(g,m)=p(ug, m) \in D(p(g,m))$ and thus the action is bounded. So $S(M)$ is a $G$-flow. The map $$\begin{matrix} \varphi\colon G\times M&\to&\widehat{G/G_\xi}\times M\\ (g,m)&\mapsto& \left(gG_\xi, m+c(g,\xi)\right) \end{matrix}$$ is $G$-equivariant, invariant for the action of $G_\xi$ and uniformly continuous. So it induces a uniformly continuous $G$-equivariant map $\left(G\times M\right)/G_\xi\to \widehat{G/G_\xi}\times M$ (that we denote by $\varphi$ as well) and thus a $G$-equivariant uniformly continuous map $S(M)\to \widehat{G/G_\xi}\times M$. Its image is dense and compact, so it is surjective. Let us prove that $\varphi$ has a uniformly continuous inverse on its image. On $G/G_\xi\times M$, the inverse map $\psi$ of $\varphi$ is given by $(gG_\xi, m)\mapsto p(g,m-c(g,\xi))$. Let $U\times V$ be some basic entourage of $G\times M$. Let $U'$ be an open neighborhood of the identity such that $(U')^2\subseteq U$ and for any $u\in U'$, $\xi\in \partial \H$ and $m\in M$, $m+c(u,\xi)\in V(m)$ (such $U'$ exists by boundedness of the action $\mathbf{R}\curvearrowright M$ together with the bound $|c(u,\xi)|\leq d(ux_0,x_0)$).
By precompactness of $G/G_\xi$, one can find $g_1,\dots,g_n\in G$ such that $\cup_{i=1}^n U''g_iG_\xi=G$ where $U''$ is some open neighborhood of $e$ with $(U'')^2\subset U'$. The maps $m\mapsto m-c(g_i,\xi)$ are uniformly continuous and thus one can find $V'\in \mathcal{U}_M$ such that $(m,n)\in V'$ implies $( m-c(g_i,\xi), n-c(g_i,\xi))\in V$ for all $i$. Now, let us consider $(gG_\xi,m),(hG_\xi,n)\in G/G_\xi\times M$ such that $g\in U''hG_\xi$ and $(m,n)\in V'$. There is $g_i$ such that $g,h\in U'g_iG_\xi$. One has $\psi(gG_\xi,m)=p(ug_i,m-c(ug_i,\xi))$ and $\psi(hG_\xi,n)=p(vg_i,n-c(vg_i,\xi))$ for some $u,v\in U'$. In particular $ug_i(vg_i)^{-1}\in (U')^2$, $(m-c(ug_i,\xi),m-c(g_i,\xi))\in V$, $(m-c(g_i,\xi), n-c(g_i,\xi))\in V$ and $(n-c(g_i,\xi),n-c(vg_i,\xi))\in V$. So $\left(\psi(gG_\xi,m),\psi(hG_\xi,n)\right)\in p\times p(U\times V^3)$. In particular, $\psi$ is uniformly continuous and extends to a continuous inverse $\widehat{G/G_\xi}\times M\to S(M)$ and thus the two $G$-flows are isomorphic.\end{proof} \begin{thm} The universal minimal $G$-flow is $S(M(\mathbf{R}))$. \end{thm} \begin{proof}Let $M(G)$ be the universal minimal $G$-flow. Let $M_\xi$ be given by Proposition~\ref{prop:decomp}. There is a $G_\xi$-equivariant continuous map $f\colon M(\mathbf{R})\to M_\xi$ (the action of $G_\xi$ on $M_\xi$ factors through $\mathbf{R}$, and $M(\mathbf{R})$ is the universal minimal $\mathbf{R}$-flow). Let us define $\varphi\colon G\times M(\mathbf{R})\to M(G)$ given by $\varphi(g,m)=gf(m)$. For $h\in G_\xi$, $\varphi(gh^{-1},hm)=\varphi(g,m)$ and we claim that this map is uniformly continuous. So it defines a uniformly continuous map $\left(G\times M(\mathbf{R})\right)/G_\xi\to M(G)$ and it extends to a map $S(M(\mathbf{R}))\to M(G)$. By construction, it is clearly $G$-equivariant. So, by uniqueness of the universal minimal flow, this map is a homeomorphism between $M(G)$ and $S(M(\mathbf{R}))$. Let us prove the uniform continuity claim. Since $\widehat{G/G_\xi}\times M(\mathbf{R})$ and $S(M(\mathbf{R}))$ are isomorphic, it suffices to prove that $\varphi$ is uniformly continuous on $G/G_\xi\times M(\mathbf{R})$ seen as a subset of $S(M(\mathbf{R}))$. Actually, $\varphi(gG_\xi,m)=gf(m-c(g,\xi))$ for $(gG_\xi,m)\in G/G_\xi\times M(\mathbf{R})$. By precompactness of $G/G_\xi\times M(\mathbf{R})$, it suffices to prove that the map is Cauchy continuous, that is, it maps Cauchy filters to Cauchy filters, or equivalently Cauchy nets to Cauchy nets \cite[Theorem 3]{MR603371}. So, let $(g_\alpha G_\xi,m_\alpha)_\alpha$ be some Cauchy net and $V$ be some symmetric entourage in $M(G)$. By boundedness of the action $G\curvearrowright M(G)$, there is a symmetric open neighborhood of the identity $U$ such that for any $x\in M(G)$, $Ux\subseteq V(x)$. For each $\alpha$, let us choose a representative $g_\alpha\in G$ of $g_\alpha G_\xi$ such that $g_\alpha\in G_{x_0}$, the stabilizer of some point $x_0\in \H$. This way, $c(g_\alpha,\xi)=0$ for all $\alpha$ and thus $\varphi(g_\alpha G_\xi,m_\alpha)=g_\alpha f(m_\alpha)$. Let us denote by $u_{\alpha,\beta}$ the rotation centered at $x_0$ such that $u_{\alpha,\beta}g_\beta\xi=g_\alpha\xi$. So $g_\alpha= u_{\alpha,\beta}g_\beta k_{\alpha,\beta}$ with $k_{\alpha,\beta}\in G_{x_0}\cap G_\xi\subset H_\xi$. Since $H_\xi$ fixes $M_\xi$ pointwise, $\varphi(g_\alpha,m_\alpha)=u_{\alpha,\beta}g_\beta f(m_\alpha)$. Since the net is Cauchy, there is $\alpha_0$ such that for all $\alpha,\beta\geq\alpha_0$, $u_{\alpha,\beta}\in U$, thus, for $\alpha,\beta\geq \alpha_0$, $(g_\alpha f(m_\alpha), g_{\alpha_0}f(m_\alpha))\in V$ and $(g_{\alpha_0}f(m_\beta), g_\beta f(m_\beta))\in V$.
Since $(m_\alpha)$ is Cauchy and $g_{\alpha_0}\circ f$ is uniformly continuous, $(g_{\alpha_0}f(m_\alpha))_\alpha$ is Cauchy as well and thus there is $\alpha_1\geq \alpha_0$ such that for $\alpha,\beta\geq \alpha_1$, $(g_{\alpha_0}f(m_\alpha),g_{\alpha_0}f(m_\beta))\in V$. Thus, for $\alpha,\beta\geq \alpha_1$, $(g_\alpha f(m_\alpha),g_\beta f(m_\beta))\in V^3$ and $\varphi$ is Cauchy continuous. \end{proof} Since $M(\mathbf{R})$ is not metrizable (Remark \ref{minR}), we deduce the following corollary. This non-metrizability can also be deduced from Lemma~\ref{not_min} below and results in \cite{zucker2018maximally}. \begin{cor} The universal minimal space $M(G)$ is not metrizable.\end{cor} We conclude this section by observing that $M(G)$ does not coincide with the Samuel compactification of a homogeneous space $G/H$ with $H$ an extremely amenable subgroup. The maximal extremely amenable subgroups are the stabilizers $G_x$ of a point $x\in\H$ or the horospherical groups $H_\xi$ for $\xi\in\partial \H$. So it suffices to prove the following. \begin{lem}\label{not_min} The Samuel compactifications $\Sa(G/H)$ for a closed subgroup $H\leq G_{x}$ or $H\leq H_\xi$ are not minimal $G$-spaces. \end{lem} \begin{proof}We rely on \cite[Proposition 6.6]{zucker2018maximally} where a characterization of minimality of the action of $G$ on $\Sa(G/H)$ for a closed subgroup $H\leq G$ is given. More precisely, the action is minimal if and only if for any open neighborhood of the identity $U\subset G$, $UH$ is syndetic in $G$, i.e., finitely many left\footnote{We are left-handed where Zucker is right-handed.} translates of $UH$ cover $G$. So we prove that for some $U$, $UG_x$ and $UH_\xi$ are not syndetic; this implies that $UH$ is not syndetic for any closed subgroup $H$ of $G_x$ or $H_\xi$. Let us take $U=\{g\in G,\ d(gx,x)<1\}$. Let $F$ be any finite subset of $G$. Let us start with $G_x$. An element in $FUG_x$ sends $x$ to a point at distance at most 1 from a point $fx$ with $f\in F$. Since $G$ acts transitively on $\H$ and $\H$ is unbounded, $FUG_x\neq G$. Let us continue with $H_\xi$, denote $R=\max_{f\in F}d(fx,x)$ and let $g$ be the transvection with translation length $d$ along the geodesic line $L$ through $x$ with endpoint $\xi$. Assume that $G=FUH_\xi$. Thus, let us write $g=fuh$ and denote $l=fu$ with $f\in F$, $u\in U$ and $h\in H_\xi$. So, $l=gh^{-1}\in G_\xi$ and \begin{align*} d(l(x),x)&\leq d(l(x),f(x))+d(f(x),x)\\ &\leq d(u(x),x)+d(f(x),x)\\ &\leq 1+R \end{align*} Now, $h=l^{-1}g$ and thus $h(L)=l^{-1}(L)$. Let $\beta_\xi$ be the Busemann function with respect to $\xi$. Since $l^{-1}(x)$ and $h(x)$ are on the same geodesic line toward the point $\xi$ and $\beta_\xi(x,h(x))=0$ for $h\in H_\xi$, $d(l^{-1}(x),h(x))=|\beta_\xi(l^{-1}(x),h(x))|=|\beta_\xi(l^{-1}(x),x)|\leq d(l(x),x)$. So, \begin{align*} d(x,h(x))&\leq d(x,l^{-1}(x))+d(l^{-1}(x),h(x))\\ &\leq 2 d(x,l(x))\\ &\leq 2(R+1) \end{align*} The following computation gives a contradiction for $d>3(R+1)$. \begin{align*} d(g(x),x)&= d(lh(x),x)\\ &\leq d(lh(x),l(x))+d(l(x),x)\\ &\leq 2(R+1)+(R+1)= 3(R+1). \end{align*}\end{proof} \section{Minimality of the topology} \subsection{Hyperbolic isometries} In this subsection, we prove Theorem~\ref{topmin}, that is, $\Isom(\H)$ with its Polish topology is minimal: it admits no strictly coarser Hausdorff group topology. Our proof is inspired by the original proof of the minimality of the orthogonal group with its strong operator topology by Stojanov \cite{zbMATH03838344}. But the induction in Stojanov's proof will be short-lived since we will only use the first step.
Essentially, we prove that the stabilizer $G_\xi$ of a point $\xi\in\partial \H$ in $G=\Isom(\H)$ is closed and that $G/G_\xi$ has a unique non-trivial $G$-compactification. We use the same terminology as in \cite{zbMATH03838344}. Let $X$ be a $G$-space. A $G$-\emph{compactification} $Y$ of $X$ is a $G$-flow with a continuous $G$-equivariant map $X\to Y$ with dense image. It is, moreover, a $G$-\emph{extension} if this map is injective. The stabilizer $G_\xi$ can be identified with the group of Möbius transformations of the Hilbert space $\mathcal{H}$ and thus splits as the semi-direct product $(\mathbf{R}\times O)\ltimes \mathcal{H}$, where $(\mathcal{H},+)$ is the group of translations, $O$ is the orthogonal group and $\mathbf{R}$ corresponds to positive homotheties. See \cite[\S2]{AHL_2019__2__259_0}. \begin{lem}\label{Gext} The only non-trivial $G$-compactification of the sphere $\partial \H\simeq G/G_\xi$ is $\overline{\H}$ with its weak topology.\end{lem} \begin{proof} Let $C$ be such a $G$-compactification which is not reduced to a point. The action $G\curvearrowright C$ is bounded by compactness of $C$. Thus the map $\partial \H\to C$ is uniformly continuous and extends to a surjective $G$-map $\overline{\H}\simeq \widehat{\partial \H}\to C$. It suffices to prove that this map is injective to get that $C\simeq \overline{\H}$. It is injective on $\partial \H$ because otherwise the double transitivity of $G\curvearrowright \partial \H$ would imply that the image of $\partial \H$ collapses to a point. Since $G$ acts transitively on pairs $(x,\xi)\in \H\times\partial\H$, the same argument shows that $x$ and $\xi$ cannot have the same image. Assume that two distinct points $x,y$ of $\H$ are sent to the same point. We claim that the image of $\H$ is reduced to this point. Actually, the stabilizer of a point acts transitively on each sphere around this point. So the whole sphere of radius $d(x,y)$ around $x$ is mapped to a single point. For any $r\leq 2d(x,y)$, this sphere contains a point at distance $r$ from $y$. So, by the same argument, all the points in the closed ball of radius $2d(x,y)$ around $y$ are mapped to a single point. An induction shows that for any $n\in \mathbf{N}$, the closed ball of radius $n$ around $x_n$ is mapped to the same point in $C$, where $x_n$ is $x$ if $n$ is even and $y$ otherwise. Since $\H$ is dense in $\overline{\H}$, this implies that the image is a point. \end{proof} \begin{lem}\label{basis} The subsets $VP=\{vp,\ v\in V,\ p\in P\}$, where $P$ is the stabilizer of a point in $\partial \H$ and $V$ is an open neighborhood of the identity, form a sub-basis of identity neighborhoods. \end{lem} \begin{proof}It suffices to prove that for $x\in \H$ and $\varepsilon>0$, there is an identity neighborhood $V$ and point stabilizers $P_1,P_2,P_3$ such that for all $g\in VP_1\cap VP_2\cap VP_3$, $d(gx,x)<\varepsilon$. Let $X$ be a two-dimensional totally geodesic submanifold of $\H$ such that $x\in X$. Let $\xi_1,\xi_2\in \partial X$ be such that $x$ lies on the geodesic line between these two points at infinity. Let $\xi_3 \in \partial X$ be distinct from $\xi_1$ and $\xi_2$, and let $P_i$ be the stabilizer of $\xi_i$. Let $x_1,x_2$ be distinct points on the geodesic $(\xi_1,\xi_2)$ and $x_3$ be a point on $(x,\xi_3)$. Let $\alpha>0$ and $V_\alpha=\{g\in G,\ d(gx_i,x_i)<\alpha,\ \forall i=1,2,3\}$. For any neighborhood $W$ of $\xi_i$, there is $\alpha>0$ such that for any $g\in V_\alpha$, $g\xi_i\in W$ and thus for any $g\in V_\alpha P_i$, $g\xi_i\in W$.
We claim that for any $\varepsilon>0$ one can find $\alpha$ small enough such that $d(gx,x)<\varepsilon$ for any $g\in\cap_i V_\alpha P_i$. Assume this is not the case. Then we can find a sequence $(g_n)$ such that $d(g_nx,x)>\varepsilon$ for all $n\in\mathbf{N}$ and such that, for all $\alpha>0$, eventually $g_n\in \cap_i V_\alpha P_i$. Since the pointwise stabilizer of the totally geodesic hyperbolic plane containing $\xi_1,\xi_2,\xi_3$ acts transitively on totally geodesic subspaces of dimension 5 containing $\xi_1,\xi_2,\xi_3$, we may assume that the $g_n\xi_i$'s belong to the boundary of a fixed totally geodesic subspace $Y$ of dimension 5 and actually that $g_n$ lies in the image of $\Isom(Y)$ in $G$ for all $n\in\mathbf{N}$. Since the action of $\Isom(Y)$ on triples of distinct points of $\partial Y$ is proper (this is one of the first examples of convergence groups) and $g_n\xi_i\to\xi_i$ for each $i$, the sequence $(g_n)$ is bounded in $\Isom(Y)$ and by local compactness of $\Isom(Y)$, one can extract a converging subsequence with some limit $g\in \Isom(Y)$. Since $g$ fixes each $\xi_i$, the restriction of $g$ to $X$ is trivial and thus $gx=x$. So we have a contradiction. \end{proof} Let us recall a standard fact about topological groups. \begin{lem} Let $(G,\sigma)$ be a topological group and $H\leq G$ be a closed subgroup. Then the normalizer of $H$ in $G$ is closed.\end{lem} \begin{proof} For $h\in H$, let $\varphi_h(g)=ghg^{-1}$ and $\psi_h(g)=g^{-1}hg$. The maps $\varphi_h,\psi_h\colon G\to G$ are continuous and the normalizer of $H$ is $\bigcap_{h\in H}\left(\varphi_h^{-1}(H)\cap\psi_h^{-1}(H)\right)$, and thus closed.\end{proof} Let $\sigma$ be any Hausdorff group topology on $G=\Isom(\H)$ and $\xi\in\partial\H$. \begin{lem}The subgroup of translations $T_\xi\simeq\mathcal{H}$ in $G_\xi$ is a closed subgroup of $G$.\end{lem} \begin{proof} Let $t$ be some non-trivial translation of $G_\xi$. If $g\in G$ commutes with $t$, then it fixes the unique fixed point at infinity of $t$, which is $\xi$. So its centralizer is a closed subgroup of $G$ included in $G_\xi$. From the semi-direct structure of $G_\xi$, it is easy to see that the intersection of the centralizers of all $t\in T_\xi$ is exactly $T_\xi$ and thus this group is closed in $G$. \end{proof} \begin{prop}\label{cloclo} The subgroup $G_\xi$ is a closed subgroup of $G$. \end{prop} \begin{proof} The group $G_\xi$ is the normalizer of $T_\xi$. Actually, $T_\xi$ is normal in $G_\xi$ and conversely any element normalizing $T_\xi$ fixes the unique point at infinity fixed by all elements in $T_\xi$.\end{proof} We can now prove that the Polish topology on $G$ is minimal. \begin{proof}[Proof of Theorem~\ref{topmin}] Assume that $\sigma$ is coarser than the Polish topology $\tau$ on $G$. Let $P$ be the stabilizer of a point in $\partial \H$, which is a closed subgroup of $G$ for $\sigma$ by Proposition~\ref{cloclo}. Let us endow $G/P$ with the quotient topology $\sigma_P$ obtained from $\sigma$. Let $C$ be some compact $G$-extension of $G/P$ (which exists by \cite[Lemma 4.4]{zbMATH03838344}). By Lemma~\ref{Gext}, $C=\overline{\H}$ and thus the quotient topologies $\sigma_P$ and $\tau_P$ coincide on $G/P$. So, for any identity neighborhood $V$ for $\tau$, $VP$ is an identity neighborhood for $\sigma$. By Lemma~\ref{basis}, this implies that $\sigma$ is finer than $\tau$ and thus $\sigma=\tau$. \end{proof} \subsection{Euclidean isometries} We now prove that the Polish group $\Isom(\mathcal{H})$ is minimal.
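Before the proof, let us record, as a sketch in notation of our own choosing, the elementary commutator computation that drives the argument below: write an isometry of $\mathcal{H}$ as a pair $(A,b)$ with $A$ orthogonal and $b\in\mathcal{H}$, acting by $x\mapsto Ax+b$, and let $\sigma_u(x)=x-2\langle x,u\rangle u$ be the symmetry associated to a unit vector $u$. Then
\begin{align*}
(A,b)(B,c)&=(AB,\,Ac+b),\\
[t,\sigma_u]&=(I,t)(\sigma_u,0)(I,t)^{-1}(\sigma_u,0)^{-1}=(I,\,t-\sigma_u(t))=(I,\,2\langle t,u\rangle u),
\end{align*}
so the commutator of the translation by $t$ with $\sigma_u$ is the translation by $2\langle t,u\rangle u$, as used in the proof below.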
We use the semi-direct decomposition $\Isom(\mathcal{H})=O\ltimes \mathcal{H}$ where $O$ is identified with the stabilizer of some origin in the Hilbert space $\mathcal{H}$ and $\mathcal{H}$ is identified with the subgroup of translations. In this identification, the action of $O$ on the Hilbert space corresponds to the action by conjugation on the subgroup of translations. The abelian group $(\mathcal{H},+)$ (with the norm topology) is not minimal. For example, the weak topology is a coarser group topology. To get the minimality of $G=\Isom(\mathcal{H})$, the action of the rotations in $O$ on $\mathcal{H}$ plays a key role and we use ideas from \cite{MR551694} where it is proved that the affine group of the real line $\mathbf{R}$ is minimal whereas the group $(\mathbf{R},+)$ with its usual topology is not minimal. \begin{proof}[Proof of Theorem~\ref{topmineuc}] Let $\tau$ be the Polish topology on $G=\Isom(\mathcal{H})$ and $\sigma$ be a coarser Hausdorff group topology. We first observe that the stabilizer of any point is isomorphic to $O$ and by minimality of the strong topology on $O$, $\sigma|_O=\tau|_O$. Let $u$ be a unit vector in $\mathcal{H}$ and $\sigma_u\in O$ the associated symmetry fixing pointwise the orthogonal of $u$. A simple computation shows that for any $t\in\mathcal{H}$, the commutator $[t,\sigma_u]$ is the translation by $2\langle t,u\rangle u$. Let us denote by $\mathbf{R} u$ the line spanned by $u$ in $\mathcal{H}$. It is a closed subgroup of $(G,\sigma)$ since it is the commutator subgroup of the semi-direct product of the stabilizer of $u$ in $O$ with $\mathbf{R} u$. The map $$\begin{matrix} \pi_u\colon \mathcal{H}&\to&\mathbf{R} u\\ t&\mapsto&[t,\sigma_u] \end{matrix}$$ is a continuous map with respect to $\sigma$ on the domain and the target. Let $v$ be a unit vector orthogonal to $u$ and let $\rho_\theta$ be the rotation of angle $\theta$ in the plane spanned by $(u,v)$. The map $$\begin{matrix} \psi\colon\mathbf{R}\times \mathcal{H}&\to&\mathbf{R} u\\ (\theta,t)&\mapsto& \pi_u(\rho_\theta \pi_u(t)\rho_\theta^{-1})\end{matrix}$$ is continuous and $\pi_u(\rho_\theta \pi_u(t)\rho_\theta^{-1})=4\cos(\theta)\langle u,t\rangle u$. Let $U$ be an identity neighborhood of $(\mathbf{R} u,\sigma)$ such that $U-U\neq \mathbf{R} u$ (which exists since $\sigma$ is Hausdorff). By continuity of $\psi$, there are $\theta_0>0$ (smaller than $\pi/2$) and an identity neighborhood $V$ of $\mathcal{H}$ such that for any $(\theta, t)\in ]-\theta_0,\theta_0[\times V$, $\psi(\theta,t)\in U$. Let $\varepsilon=\min\{\cos(\theta_0/2)-\cos(\theta_0), 1-\cos(\theta_0/2)\}$. Since $\psi(\theta,t)-\psi(\theta_0/2,t)\in U-U$ for all $(\theta,t)\in ]-\theta_0,\theta_0[\times V$, for all $t\in V$, $]-4\varepsilon\langle u,t\rangle u, 4\varepsilon\langle u,t\rangle u[\subseteq U-U$. Since $U-U\neq \mathbf{R} u$, $\{\langle u,t\rangle, t\in V\}$ is bounded. So the restriction of $\sigma$ to $\mathbf{R} u$ has bounded identity neighborhoods and since the restriction of $\tau$ to $\mathbf{R} u$ is locally compact, $\sigma$ and $\tau$ coincide on $\mathbf{R} u$. Using this fact for all unit vectors $u$, we get that $\sigma$ is finer than the weak topology on $\mathcal{H}$. We claim there is a bounded neighborhood of the origin in $\mathcal{H}$ for $\sigma$. Otherwise, one can find a net $(t_\alpha)$ in $\mathcal{H}$ converging to $0$ for $\sigma$ and such that $||t_\alpha||\to\infty$. We observe that $t_\alpha$ does not lie in $\mathbf{R} u$ for $\alpha$ larger than some fixed $\alpha_0$ (otherwise, it would contradict the weak convergence).
Let $\rho_\alpha$ be the rotation in the plane spanned by $u$ and $t_\alpha$ with angle $\theta_\alpha$ such that $\langle\rho_\alpha(t_\alpha),u\rangle\to+\infty$ and $\theta_\alpha\to0$. It is possible to find such an angle $\theta_\alpha$ because $$\langle\rho_\alpha(t_\alpha),u\rangle=\sin(\theta_\alpha)\langle t_\alpha,u_\alpha\rangle+\cos(\theta_\alpha)\langle t_\alpha,u\rangle$$ where $u_\alpha$ is the unit vector orthogonal to $u$ in the span of $u$ and $t_\alpha$ such that $\langle u_\alpha,t_\alpha\rangle>0$. By weak convergence, $\langle t_\alpha,u\rangle\to0$ and thus $\langle t_\alpha,u_\alpha\rangle\sim||t_\alpha||\to +\infty$. So one can choose, for example, $\theta_\alpha=\arcsin\left(\log(||t_\alpha||)/\langle t_\alpha,u_\alpha\rangle\right)$. Since $\theta_\alpha\to0$, $\rho_\alpha$ converges to the identity for the strong topology, hence for $\sigma$ (the two topologies coincide on $O$). So $\rho_\alpha t_\alpha \rho_\alpha^{-1}$, which is the translation by $\rho_\alpha(t_\alpha)$, converges to the identity for $\sigma$, and since $\sigma$ is finer than the weak topology on $\mathcal{H}$, we have a contradiction with $\langle\rho_\alpha(t_\alpha),u\rangle\to\infty$. So there is $R>0$ such that the ball $B(0,R)$ is a neighborhood of 0 in $\mathcal{H}$ for $\sigma$. Since this is a group topology, there is an open set $U$ containing 0 such that $U+U\subset B(0,R)$. In particular $U\subset B(0,R/2)$. Repeating this argument, we see that the collection of open balls around the origin is a collection of origin neighborhoods for $\sigma$. So $\sigma$ and $\tau$ coincide on $\mathcal{H}$. Since $\sigma$ and $\tau$ coincide on $O\simeq G/\mathcal{H}$ and on $\mathcal{H}$, they coincide on $G$ thanks to \cite[Lemma 1]{MR551694}. \end{proof} \begin{rem}This yields another proof of Theorem~\ref{topmin}. One can prove first that the Polish topology on the group of Möbius transformations on $\mathcal{H}$, i.e., the group of similarities $\left(\mathbf{R}\times O\right)\ltimes \mathcal{H}$, is minimal (which follows easily from Theorem~\ref{topmineuc}) and combine this fact with Lemma~\ref{Gext} and \cite[Lemma 1]{MR551694} to get the minimality of $\Isom(\H)$. \end{rem} \section{Existence and lack of dense conjugacy classes} A simple idea to separate conjugacy classes is to find a continuous non-constant invariant under conjugation. For finite dimensional linear groups, the spectrum is such an invariant. In our geometric setting, a natural invariant is the translation length. For a metric space $X$ and $g\in\Isom(X)$, the translation length is $$\ell(g)=\inf_{x\in X}d(gx,x).$$ \begin{lem} The translation length is upper semi-continuous on $\Isom(X)$ for the pointwise convergence topology. \end{lem} \begin{proof} This follows from the general fact that an infimum of continuous functions is upper semi-continuous. \end{proof} \begin{cor}\label{neutral} For any separable metric space $X$, if $g\in\Isom(X)$ has a dense conjugacy class then $\ell(g)=0$.\end{cor} \begin{proof} Let $g_n$ be a sequence in the conjugacy class of $g$ converging to the identity. Since the translation length is invariant under conjugation, upper semi-continuity gives $\ell(g)=\limsup_{n\to\infty} \ell(g_n)\leq\ell(\Id)=0$, and we have the result. \end{proof} The following theorem is surely well known but we provide a proof since we use some elements constructed in the proof. \begin{thm}\label{densecc} The orthogonal and the unitary groups of a separable Hilbert space $\mathcal{H}$ have a dense conjugacy class.\end{thm} \begin{lem}\label{finite rank} Let $A\in U(\mathcal{H})$ and $x_1,\dots,x_k\in\mathcal{H}$. Then there is an operator $A'\in U(\mathcal{H})$ which coincides with $A$ on each $x_i$ and which is the identity on a subspace of finite codimension.
\end{lem} \begin{proof}Without loss of generality, we may assume that $(x_1,\dots, x_k)$ is linearly independent. Let $e_1,...,e_k$ be the basis obtained by the Gram–Schmidt process and let $e'_1,\dots,e'_k$ be its image under $A$. We define $\mathcal{F}$ to be the (finite dimensional) span of the union of these two families. By completing these two orthonormal families, one can find an orthogonal or unitary operator $U_1$ of $\mathcal{F}$ mapping $e_i$ to $e'_i$ and thus $x_i$ to $A(x_i)$. Extending this operator by the identity on $\mathcal{F}^\bot$, we get $A'$. \end{proof} \begin{proof}[Proof of Theorem~\ref{densecc}] We prove the theorem in the complex case. In the real case, the proof is the same, using rotations instead of complex homotheties with unitary ratio. Let $(\lambda_n)$ be a sequence of complex numbers that is dense in the unit circle (in the real case, we choose rotations with angles the arguments of the $\lambda_n$). Let us write $\mathcal{H}$ as an infinite orthogonal sum $\oplus_n\mathcal{H}_n$ where $\mathcal{H}_n$ is a closed subspace of infinite dimension. Let us define a unitary operator $U$ that is the multiplication by $\lambda_n$ on each $\mathcal{H}_n$. We claim that the conjugacy class of $U$ is dense in $U(\mathcal{H})$. That is, for any $U_0\in U(\mathcal{H})$, $x_1,\dots,x_k$ in the unit sphere of $\mathcal{H}$ and $\varepsilon>0$, there is $U'$ in the conjugacy class of $U$ such that for any $i=1,\dots,k$, \begin{equation}\label{fr}\|U'(x_i)-U_0(x_i)\|<\varepsilon.\end{equation} Let us apply Lemma~\ref{finite rank} to $U_0$ and $x_1,\dots,x_k$. We get an operator $U'_0$ that coincides with $U_0$ on the span of the $x_i$'s and that is trivial on a finite codimension subspace. Let us denote by $\mathcal{F}$ the orthogonal complement of this finite codimension subspace and by $U_1$ the restriction of $U'_0$ to $\mathcal{F}$. There is an orthonormal basis $f_1,\dots, f_l$ of $\mathcal{F}$ such that $U_1$ acts diagonally in this basis, multiplying each $f_j$ by some $\alpha_j\in S^1$. For each $j$, choose $\lambda_{i(j)}$ such that $|\alpha_j-\lambda_{i(j)}|<\varepsilon/k$. Now, find a unitary operator $T$ mapping some unit vector of $\mathcal{H}_{i(j)}$ to $f_j$ for each $j$ and set $U'$ to be $TUT^{-1}$. This way, each $f_j$ is an eigenvector of $U'$ with eigenvalue $\lambda_{i(j)}$ and thus for each $x$ in the unit sphere of $\mathcal{F}$, $\|U'(x)-U_1(x)\|<\varepsilon$. Applying this inequality to the $x_i$'s, on which $U_1$ coincides with $U_0$, we get Inequality~\eqref{fr}.\end{proof} \begin{thm}\label{densecca} The Polish group $\Isom(\mathcal{H})$ has a dense conjugacy class.\end{thm} \begin{proof} We prove that the element $U$ constructed in the proof of Theorem~\ref{densecc} has a dense conjugacy class in $\Isom(\mathcal{H})$. Thanks to Lemma~\ref{finite rank} and the fact that translations act transitively, it suffices to approximate elements $g$ such that $g$ preserves a finite dimensional linear subspace and acts trivially on its orthogonal. Let us recall that all isometries of finite dimensional Hilbert spaces are semisimple, that is, the infimum in the definition of the translation length is actually a minimum. If $g$ has a fixed point, it is conjugate to an element of $O$, the stabilizer of the origin in $\mathcal{H}$. In this case, it lies in the closure of the conjugacy class of $U$ by Theorem~\ref{densecc}. Now assume that $\ell(g)>0$. Let us write $g(x)=Ax+b$ where $A\in O$ and $b\in\mathcal{H}$. The Hilbert space $\mathcal{H}$ splits orthogonally as $\im(I-A)\oplus\ker(I-A)$.
Let us set $A_0$ to be the restriction of $A$ to $\im(I-A)$ and let $b=b_0+b_1$ be the decomposition of $b$ with respect to this splitting. The isometry $g$ acts diagonally with respect to this splitting as $g_0\times g_1$ where $g_0(x_0)=A_0x_0+b_0$ and $g_1(x_1)=x_1+b_1$. Since $b_0\in\im(I-A_0)$, $g_0$ has a fixed point. Up to conjugating $g$ by a translation along $\im(I-A)$, we may assume that this fixed point is $0$, that is $b_0=0$. Actually, $\|b_1\|=\ell(g)$ and thus $b_1\neq0$. Thanks to Theorem~\ref{densecc}, it suffices to show that for any $\varepsilon>0$ and $x_1,\dots,x_k\in\mathcal{H}$, one can find an elliptic element $h$, that is an element with a fixed point, such that for any $i\in\{1,\dots,k\}$, $\| g(x_i)-h(x_i)\|<\varepsilon$. Up to projecting these vectors on $\im(I-A)$ and $\ker(I-A)$, we may assume they lie in $\im(I-A)$ or in $\ker(I-A)$. Let $g_0'\in\OO(\mathcal{H})$ act like $g_0$ on the $x_i$'s in $\im(I-A)$ and be trivial on a finite codimension subspace $\mathcal{F}$ containing $\ker(I-A)$. Such an element is given by Lemma~\ref{finite rank} for $g_0$. Now choose a unit vector $u\in\mathcal{F}$ orthogonal to all the $x_i$'s and to $b_1$. Fix $r$ such that the projections of the $x_i$'s on $\mathbf{R} b_1$, as well as $b_1$ itself, have norm at most $\frac{\varepsilon r}{\|b_1\|}$. Choose $R$ large enough such that $R\left(1-\frac{1}{\sqrt{1+r^2/R^2}}\right)<\varepsilon$. Let $c$ be the point $Ru$. Let $\rho$ be the rotation with center $c$ in the plane spanned by $(u,b_1)$ and angle $\alpha_1$ such that $\sin(\alpha_1)=\frac{\|b_1\|}{R}$. In the frame centered at $c$ with basis $(u,b_1/\|b_1\|)$, the point $\lambda \frac{b_1}{\|b_1\|}$ has coordinates $(R,\lambda)$. Its image by the translation of vector $b_1$ is $(R,\lambda+\|b_1\|)$ and its image by $\rho$ is $(R\cos(\alpha_1)-\lambda\sin(\alpha_1),R\sin(\alpha_1)+\lambda\cos(\alpha_1))$. With our assumptions, for $\lambda$ corresponding to the projection of one of the $x_i$'s, one has $\lambda\sin(\alpha_1)\leq \frac{\varepsilon r}{\|b_1\|}\times \frac{\|b_1\|}{R}<\varepsilon$. Since $|R-R\cos(\alpha_1)|<\varepsilon$, $|\lambda-\lambda\cos(\alpha_1)|<\varepsilon$ and $R\sin(\alpha_1)=\|b_1\|$, the images of the projections of the $x_i$'s on $\mathbf{R} b_1$ by the translation or the rotation are at distance at most $\sqrt{5}\, \varepsilon$ from each other. Let us define $h$ to coincide with $g_0'$ on $\mathcal{F}^\bot$ and with $\rho$ on $\mathcal{F}$. By construction, for any $x_i$, $\| g(x_i)-h(x_i)\|<\sqrt{5}\, \varepsilon$ and this finishes the proof.\end{proof} \begin{rem} Some Polish groups, like $\mathcal{S}_\infty$, have, moreover, \emph{generic} elements, that is, elements with a comeager conjugacy class. One can find in \cite[Discussion 5.9]{ben2004generic} that neither $\Isom(\mathcal{H})$ nor the unitary group has generic elements. \end{rem} \begin{thm}\label{nodenseconj} The Polish group $\Isom(\H)$ has no dense conjugacy class. \end{thm} \begin{proof} By Corollary \ref{neutral}, if there are elements with a dense conjugacy class then those elements are neutral, that is, they have vanishing translation length. These elements preserve some sphere or horosphere. Let $\varepsilon>0$ and $g$ be a transvection with positive translation length. Let us fix $x_1,x_2,x_3$ on the axis of $g$ such that $x_2=g(x_1)$ and $x_3=g(x_2)$. For a contradiction, assume there is a neutral element $h\in\Isom(\H)$ such that $d(h(x_i),g(x_i))<\varepsilon/2$ for $i=1,2,3$. So $d(h^2(x_1),x_3)<\varepsilon$.
If $c$ (respectively $\xi$) is a fixed point of $h$ in $\H$ (respectively in $\partial \H$), $x_2,x_3$ are at distance at most $\varepsilon$ from the sphere (respectively the horosphere) centered at $c$ (resp. at $\xi$) through $x_1$. Up to using a rotation around the axis of $g$, we may assume $c$ (resp. $\xi$) lies in some fixed two-dimensional totally geodesic subspace $\H^2$. Letting $\varepsilon \to0$ and upon extracting a converging sequence for the centers in $\overline{\H^2}$, we find a sphere or a horosphere containing $x_1,x_2,x_3$ in $\H^2$. But this is impossible because in the Poincaré ball model for $\H^2$, the axis of the transvection may be represented by a straight line, a sphere or a horosphere is a circle, and the intersection of a circle and a line contains at most two points. \end{proof} \bibliographystyle{../../Latex/Biblio/halpha}
{ "redpajama_set_name": "RedPajamaArXiv" }
\chapter{Introduction} \label{sec:intro} \pagenumbering{arabic} \setlength{\footskip}{.5in} Let ${\overline{\mathcal{M}}_{g,n}(X,\beta)}$ be the moduli space of stable maps from $n$-pointed, genus $g$ curves to $X$ of class $\beta$, to be defined in Section \ref{sec:modintro}. If $X={\mathbb P}^r$, we can identify the homology class $\beta=d[\text{line}]$ with the integer $d$, the {\em degree} of the stable map. In this case we write the moduli space of stable maps as ${\overline{\mathcal{M}}_{g,n}(\mathbb{P}^{r},d)}$. In this dissertation we will study the intersection theory of the moduli spaces ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$. Moduli spaces of stable maps have proven useful in studying both superstring theory and enumerative geometry. Here we will only mention briefly the physics side of the story. In the last fifteen years, superstring theory emerged as a serious contender for the ``Theory of Everything," {\em i.e.}, a fundamental physical description of the universe. At the heart of the extremely complex string theory revolution lies a simple concept. Elementary ``particles,'' the basic constituents from which everything in our universe is built, are actually tiny vibrating loops. More recently it has become apparent that string theory is not quite the final answer. Higher dimensional versions of strings, called $D$-branes, are added in the more comprehensive $M$-theory. However, the theory of strings continues to play a key role in this bigger picture. See \cite{G} for a non-technical introduction to string theory and $M$-theory. As a string propagates through time, it traces out a {\em world sheet}, which can be viewed mathematically as a Riemann surface or an algebraic curve. Since the world sheet lives in space-time, it seems reasonable to study algebraic curves living inside manifolds or smooth varieties. However, preserving the full data of the world sheet requires using {\em maps} of curves into space-time instead. Considering the world sheet as a curve already inside the ambient space amounts to looking at the image of the map. Much like representing the path of an object as a zero set of equations rather than with a parametric curve, this viewpoint discards important information about the world sheet, such as how it may cross itself. It turns out that studying maps from algebraic curves into a space is also necessary in order to obtain a sound mathematical theory of enumerative geometry of curves in that space. Physicists want to determine various values associated with the space of all possible world sheets via the use of {\em correlation functions}. They compute these correlation functions by means of Feynman integrals, which do not have a rigorous mathematical definition. Developing a solid mathematical foundation for such computations was the primary motivation behind the introduction of moduli spaces of stable maps in \cite{KM}. Examples include {\em instanton numbers}, which intuitively count the number of holomorphic instantons on $X$ (nonconstant holomorphic maps from Riemann surfaces to $X$). Instanton numbers are calculated using other values called {\em Gromov-Witten invariants}. Naively, Gromov-Witten invariants should count the number of curves of a certain homology class and genus which pass through certain subvarieties of the target space. More specifically, let $X$ be a projective manifold, $\beta\in H_2(X)$, $g$ and $n$ nonnegative integers.
Let $\gamma_1,\ldots,\gamma_n\in H^*(X)$ be cohomology classes such that there exist subvarieties $Z_1,\ldots,Z_n$ with $Z_i$ representing the Poincar\'{e} dual of $\gamma_i$. Naively, the Gromov-Witten invariant $\langle\gamma_1,\ldots,\gamma_n\rangle_{g,\beta}$ should count the number of genus $g$ curves of class $\beta$ that intersect all of the $Z_i$. Also of interest are {\em gravitational correlators}, to be defined in Chapter \ref{sec:app}, which generalize Gromov-Witten invariants and include them as a special case. Gravitational correlators are defined and computed mathematically as intersection numbers on the moduli space of stable maps. For example, the Gromov-Witten invariant $\langle\gamma_1,\ldots,\gamma_n\rangle_{g,\beta}$ described above is given by \[\langle\gamma_1,\ldots,\gamma_n\rangle_{g,\beta}=\int_{[{\overline{\mathcal{M}}_{g,n}(X,\beta)}]^\text{vir}}\prod_{i=1}^n \operatorname{ev}_i^*(\gamma_i)\text{.}\] Of course, it is necessary to understand exactly what this integral means and how to compute it. This brings us to the consideration of the mathematical aspects of the moduli space of stable maps and concludes our excursion into the physical motivation. While physics provided the original impetus for their introduction, moduli spaces of stable maps have also been used to give answers to many problems of enumerative geometry that do not necessarily arise in physics and that were inaccessible by previous methods. Enumerative geometry seeks to determine numbers of geometric objects of a given type that satisfy certain conditions. The most natural method for answering such enumerative questions consists of the following steps. First, construct a moduli space parameterizing the type of objects to be counted. Second, define an intersection theory on the moduli space, including an appropriate fundamental class against which integrals are to be evaluated. Third, identify subspaces of the moduli space (and their associated classes in the intersection theory) corresponding to the various conditions imposed by the enumerative problem. Finally, integrate the product of these classes against the fundamental class. The result can only be nonzero if the sum of the codimensions of these subspaces is equal to the dimension of the fundamental class. In this case the intersection of these subspaces (with respect to the fundamental class) is expected to be a finite number of points. These points correspond to solutions of the enumerative problem. The integral computes the number of such points (with multiplicities), and hence the number of solutions to the enumerative problem. We will particularly direct our attention to the enumerative geometry of curves and to the second step of the above process. This step involves defining an intersection ring for the moduli space. Its algebraic incarnation, which we deal with for the most part, is also called the Chow ring. In the topological category it is usually referred to as the cohomology ring. The Chow ring $A^*(X)$ of a space $X$ gives an algebraic way to compute intersections in it. Intersections correspond to multiplications in the ring; unions correspond to addition. See Section \ref{sec:int} for the definition of the relevant Chow rings. To ensure a satisfactory theory, computations in enumerative geometry should occur on a compact moduli space; this prevents solutions from ``disappearing to infinity." (See \cite{Katz} for some simple examples.)
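As a toy illustration of how a Chow ring encodes intersections (a standard example, included here only for orientation and not used later), consider the projective plane, whose Chow ring is generated by the class $H$ of a line:
\[A^*({\mathbb P}^2)\simeq{\mathbb Z}[H]/(H^3)\text{.}\]
Two general lines intersect in the class $H\cdot H=H^2$ of a point, recovering the fact that two distinct lines in the plane meet in exactly one point; more generally, general curves of degrees $d$ and $e$ have intersection class $(dH)(eH)=de\,H^2$, which is B\'{e}zout's theorem.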
One is usually interested in the enumeration of smooth curves, but the parameter space of smooth curves in an ambient space is not compact in any nontrivial situation. There are various ways to compactify this space, but Kontsevich's compactification by the moduli space of stable maps seems especially well-suited for enumerative purposes. In the ten years since Kontsevich introduced the concept in \cite{KM} and \cite{Ko}, the moduli space of stable maps has been exploited to solve a plethora of enumerative problems for curves via the above process. As a rule, these results were derived without a complete description of the Chow rings involved. Instead, the requisite intersection numbers were calculated somewhat indirectly, most often using the method of {\em localization}, which will be described in Section \ref{sec:eq}. Such a complete description would be the key step in giving another, more direct, computation of these enumerative numbers and possibly many others. Since a presentation for a ring gives an easy way to compute all products in the ring, giving presentations for the Chow rings of moduli spaces of stable maps is the clear path for attaining a full and direct knowledge of their intersection theory. As a consequence, this would also help give a new and more direct way of determining values of instanton numbers, Gromov-Witten invariants, and gravitational correlators. So far, presentations for Chow rings of moduli spaces of stable maps have been given only in a few special cases. Most of these have projective space as the target of the stable maps, and in this case the moduli space ${\overline{\mathcal{M}}_{g,n}(\mathbb{P}^{r},d)}$ depends on four nonnegative integer parameters: the genus $g$ of the curves, the number $n$ of marked points on the curves, the dimension $r$ of the target projective space, and the degree $d$ of the stable maps. Most impressive is Mustata's presentation in \cite{M} for $A^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^r,d))$ for arbitrary $d$ and $r$. (Recently A. Mustata and M. A. Mustata have provided an extended description of this presentation in \cite{MM}.) Behrend and O'Halloran give a presentation for $A^*({\overline{\mathcal{M}}}_{0,0}({\mathbb P}^r,2))$ and conjecture a presentation for $A^*({\overline{\mathcal{M}}}_{0,0}({\mathbb P}^r,3))$ in \cite{BO}. Also of relevance, Dragos Oprea has recently described a system of {\em tautological subrings} of the cohomology (and hence Chow) rings in the genus zero case and shown that, if the target $X$ is an $\operatorname{SL}$ flag variety, then all rational cohomology classes on ${\overline{\mathcal{M}}_{0,n}(X,\beta)}$ are tautological. This gives, at least in principle, a set of generators for any such Chow ring, namely its tautological classes. He furthermore describes an additive basis for the cohomology ring of any genus zero moduli space (with target a projective algebraic variety), which is a substantial step toward giving a presentation. Finally, he speculates that all relations between the tautological generators are consequences of the topological recursion relations. These developments may provide direction for finding presentations for the Chow rings of many more moduli spaces of stable maps in the near future. See \cite{O} and \cite{O2} for more details. More basic examples include $A^*({\overline{\mathcal{M}}}_{0,n}({\mathbb P}^r,0))\simeq A^*({\mathbb P}^r)\times A^*({\overline{M}}_{0,n})$, where ${\overline{M}}_{0,n}$ is the moduli space of stable curves. 
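This isomorphism of Chow rings reflects a standard fact, recorded here for the reader's convenience: for $n\geq3$, a genus zero, degree zero stable map is just a constant map, i.e., a stable $n$-pointed rational curve together with its image point, so that
\[{\overline{\mathcal{M}}}_{0,n}({\mathbb P}^r,0)\simeq{\overline{M}}_{0,n}\times{\mathbb P}^r\text{.}\]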
This case reduces to finding presentations for the rings $A^*({\overline{M}}_{0,n})$, and Keel does so in \cite{K}. Also ${\overline{\mathcal{M}}}_{0,0}({\mathbb P}^r,1)$ is isomorphic to ${\mathbb G}(1,r)$, the Grassmannian of lines in projective space, and ${\overline{\mathcal{M}}}_{0,1}({\mathbb P}^r,1)$ is isomorphic to ${\mathbb F}(0,1;r)$, the flag variety of lines in projective space together with a point on the line. The spaces ${\overline{\mathcal{M}}}_{0,n}({\mathbb P}^1,1)$ are Fulton-MacPherson compactifications of configuration spaces of ${\mathbb P}^1$. Presentations for their Chow rings were given by Fulton and MacPherson in \cite{FM}. Detailed descriptions of Chow rings of spaces ${\overline{\mathcal{M}}_{g,n}(\mathbb{P}^{r},d)}$, with $g>0$, are almost nonexistent. Additional complications arise in this case. This dissertation gives the first known presentation for a Chow ring of a moduli space of stable maps of degree greater than one with more than one marked point. In particular, we give the following presentation for $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$: \[A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))\simeq\frac{{\mathbb Q}[D_0,D_1,D_2,H_1,H_2,\psi_1,\psi_2]} {\left({H_1^2, H_2^2,D_0\psi_1,D_0\psi_2,D_2-\psi_1-\psi_2, \psi_1-\frac{1}{4}D_1-\frac{1}{4}D_2-D_0+H_1, \atop \psi_2-\frac{1}{4}D_1-\frac{1}{4}D_2-D_0+H_2, (D_1+D_2)^3, D_1\psi_1\psi_2}\right)}\text{.}\] \vspace{.1in} \noindent Some steps involved in finding this presentation are extended to the case of the Chow rings $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2))$, with target projective space of arbitrary dimension. Presentations for these Chow rings will be included in a future paper. Chapter \ref{sec:mod} gives the background on moduli spaces of stable maps and their intersection theory. Chapters \ref{sec:ser} through \ref{sec:prez} provide a detailed construction of the presentation for $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$ and prove that this presentation is complete. Knowing the Betti numbers of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2)$ is the important first step in our computation, since it will give us a good idea of how many generators and relations to expect in each degree. We accomplish this in Chapter \ref{sec:ser} by using the equivariant Serre polynomial method of \cite{GP}. In fact, in this chapter we compute the Betti numbers of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$ for arbitrary $r$. In Chapter \ref{sec:gen} we list some natural divisor classes that occur in the Chow rings of all moduli spaces of stable maps. It will become clear later that in the case of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2)$ these classes generate the Chow ring. Relations among these classes are found in Chapter \ref{sec:rel}. In the course of proving these relations, computations on moduli spaces of stable maps with fewer marked points or lower degree naturally arise. Chapter \ref{sec:simpler} describes presentations for the Chow rings of these simpler moduli spaces. All of these pieces are compiled to give the whole presentation in Chapter \ref{sec:prez}. This chapter also explains why the presentation is complete, {\em i.e.}, how we know it includes enough generators and relations to capture the entire Chow ring. Finally, Chapter \ref{sec:app} applies the presentation to give a new computation of the genus zero, degree two, two-pointed gravitational correlators of ${\mathbb P}^1$.
Algorithms for computing these values have previously been developed; see \cite{KM2} and \cite{CK}, for example. \renewcommand{\baselinestretch}{1} \chapter{Preliminaries on moduli spaces of stable maps} \label{sec:mod} \section{Moduli spaces of stable maps} \label{sec:modintro} We will work over the field ${\mathbb C}$ of complex numbers. All schemes will be algebraic schemes over ${\mathbb C}$, and we let $(Sch/{\mathbb C})$ denote the category of such schemes. In this section, the definitions could just as well be given for the category of schemes over any field, and all the results about moduli stacks hold over any field as well. Let $\und{n}={\mathbb N}\cap[1,n]$ be the initial segment consisting of the first $n$ natural numbers. The basic notions of algebraic geometry used below can be found in \cite{H}. For information about stacks, see \cite{LM}. \begin{Def} \label{curve} An {\em $n$-pointed prestable curve $(C,p_1,\ldots,p_n)$ of genus $g$} over ${\mathbb C}$ is a connected, reduced, projective, at worst nodal curve $C$ of arithmetic genus $g$ together with $n$ distinct, nonsingular marked points $p_1,\ldots,p_n$ on $C$. \end{Def} We will often refer to $n$-pointed prestable curves of genus $g$ as $n$-pointed, genus $g$ curves, or simply as {\em curves}. \begin{Def} The {\em special points} of a curve are the marked points and the nodes. \end{Def} \begin{Def}\label{def:sm} Let X be a scheme. Let $\beta\in H_2(X,{\mathbb Z})$. A {\em stable map $(C,x_1,\ldots,x_n,f)$ of class $\beta$} from an $n$-pointed, genus $g$ curve $C$ to $X$ is a morphism $f:C\rightarrow X$ such that the push-forward $f_*([C])$ of the fundamental class is $\beta$ and, moreover, this data satisfies the stability condition: If $E$ is an irreducible component of $C$ on which $f$ is constant, then \begin{enumerate} \item If $g(E)=0$, then $E$ contains at least three special points of $C$. \item If $g(E)=1$, then $E$ contains at least one special point of $C$. \end{enumerate} \end{Def} \begin{Def}\label{def:fsm} A {\em family $(\pi:{\mathcal C}\rightarrow S,s_1,\ldots,s_n,\mu)$ of stable maps from $n$-pointed, genus $g$ curves to $X$ of class $\beta$} over a scheme $S$ consists of a flat, proper morphism $\pi:{\mathcal C}\rightarrow S$, $n$ sections $s_1,...,s_n$ of $\pi$, and a morphism $\mu:{\mathcal C} \rightarrow X$ such that for every geometric point $s\in S$, the fiber $({\mathcal C}_s,s_1(s),...,s_n(s),\mu|_{{\mathcal C}_s})$ is a stable map from an $n$-pointed, genus $g$ curve to $X$ of class $\beta$. \end{Def} \begin{Def}\label{def:mor} A {\em morphism} from a family of stable maps $(\pi:{\mathcal C}\rightarrow S,s_1,...,s_n,\mu)$ to another family $(\pi^{\prime}:\mathcal{C}^{\prime}\rightarrow T,t_1,...,t_n,\nu)$ is a fiber diagram \begin{table*}[h] \begin{center} \begin{equation*} \leavevmode \xymatrix{{{\mathcal C}} \ar[d]^{\pi} \ar[r]^{\phi} & {\mathcal{C}^{\prime}} \ar[d]^{\pi^{\prime}}\\ {S} \ar[r]^{f} & {T}} \end{equation*} \end{center} \end{table*} such that $\nu\circ\phi=\mu$ and $t_i\circ f=\phi\circ s_i$ for every $i\in \und{n}$. \end{Def} The definitions of an isomorphism of stable maps and of an automorphism of a stable map are clear from the definition of a morphism. A stable map as in Definition \ref{def:sm} is a family of stable maps over $\operatorname{Spec}({\mathbb C})$. The above definitions concerning stable maps produce a moduli problem.
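Before formalizing this moduli problem, it may help to have a concrete example in mind (ours, chosen to match the spaces studied in this dissertation). The morphism \[f\colon{\mathbb P}^1\rightarrow{\mathbb P}^1,\qquad (x:y)\mapsto(x^2:y^2),\] together with the marked points $p_1=(1:1)$ and $p_2=(1:-1)$, is a stable map from a $2$-pointed, genus $0$ curve to ${\mathbb P}^1$ with $f_*([{\mathbb P}^1])=2[\text{line}]$, i.e., of degree $2$. The stability condition of Definition \ref{def:sm} is vacuous here since $f$ is nonconstant on the unique irreducible component. This stable map therefore defines a point of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2)$.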
The associated moduli functor \linebreak[3] ${\overline{\mathbf M}_{g,n}(X,\beta)}:(Sch/{\mathbb C})^\circ \rightarrow(Sets)$ to the category of sets is given on objects by \renewcommand{\baselinestretch}{1} \small\normalsize \vspace{.1in} \[ {\overline{\mathbf M}_{g,n}(X,\beta)}(S)=\left\{\begin{array}{c} \text{isomorphism classes of families of} \\ \text{stable maps from $n$-pointed, genus $g$ } \\ \text{curves to $X$ of class $\beta$ over $S$} \end{array}\right\}\text{.} \] \small\normalsize \noindent Given a morphism $g:T\rightarrow S$, ${\overline{\mathbf M}_{g,n}(X,\beta)}(g)$ takes a family $(\pi:C\rightarrow S,s_1,\ldots,s_n,\mu)$ to $(\pi^{\prime}:C\times_S T\rightarrow T,s_1^{\prime}, \ldots,s_n^{\prime}, \mu\circ g^{\prime})$, where $\pi^{\prime}$, $g^{\prime}$, and the $s_i^{\prime}$ are the morphisms naturally induced by the fiber product. Unfortunately, the functor ${\overline{\mathbf M}_{g,n}(X,\beta)}$ is not usually representable by a scheme; there is hardly ever a fine moduli space for this moduli problem. There are two natural routes to pursue in search of some other sort of moduli space, and both prove fruitful. First, ${\overline{\mathbf M}_{g,n}(X,\beta)}$ does have a {\em coarse} moduli scheme ${\overline{M}_{g,n}(X,\beta)}$, at least if $X$ is projective. Second, by enlarging our ambient category from complex schemes to complex stacks $(St/{\mathbb C})$, we can consider the moduli stack ${\overline{\mathcal{M}}_{g,n}(X,\beta)}$ of stable maps. The moduli stack captures all the data of the moduli problem, while the moduli scheme loses some information, including that of automorphisms of families. Since retaining all of this data leads to a more beautiful, powerful, and complete theory, we will work with the stack incarnations of the moduli spaces. Define a category ${\overline{\mathcal{M}}_{g,n}(X,\beta)}$ over $Sch/{\mathbb C}$ whose objects are families of stable maps from $n$-pointed, genus $g$ curves to $X$ of class $\beta$ over complex schemes and whose morphisms are as in Definition \ref{def:mor}. \begin{prop} The category ${\overline{\mathcal{M}}_{g,n}(X,\beta)}$ is a complex stack. \end{prop} \noindent First, ${\overline{\mathcal{M}}_{g,n}(X,\beta)}$ is a groupoid by definition: Given a family over a scheme $S$ and a morphism $T\rightarrow S$, the fiber product as in Definition \ref{def:mor} always exists, and it is unique up to a unique isomorphism. Further, isomorphisms are a sheaf and every descent datum is effective. These properties follow from Grothendieck's descent theory. (See \cite{Gr} and \cite[Chapter V]{Man}.) The following two basic properties of moduli stacks of stable maps were first proven by Kontsevich in \cite{Ko}. More detailed proofs appear in \cite{FP}, although the language of stacks is avoided there. \begin{prop} Let $X$ be a projective scheme of finite type over ${\mathbb C}$. Then ${\overline{\mathcal{M}}_{g,n}(X,\beta)}$ is a proper Deligne-Mumford stack of finite type. \end{prop} For the next property, we need two definitions. \begin{Def} Let $\mathcal{M}_{g,n}(X,\beta)$ be the substack of ${\overline{\mathcal{M}}_{g,n}(X,\beta)}$ corresponding to stable maps from smooth curves. \end{Def} \begin{Def} A smooth, complete variety $X$ is {\em convex} if, for any morphism $f:{\mathbb P}^1\rightarrow X$, $H^1({\mathbb P}^1,f^*(T_X))=0$, where $T_X$ is the tangent bundle of $X$. 
\end{Def} \noindent The most important examples of convex varieties are {\em homogeneous} varieties, that is, varieties which are the quotient of an algebraic group by a parabolic subgroup. In particular, projective spaces are homogeneous, thus convex. Moduli spaces of stable maps to convex varieties have many nice properties, especially when $g=0$. \begin{prop} \label{nicemod} Let $X$ be a smooth, proper, convex scheme. Then the stack \linebreak[4] ${\overline{\mathcal{M}}_{0,n}(X,\beta)}$ is smooth, and the complement of $\mathcal{M}_{0,n}(X,\beta)$ is a divisor with normal crossings. \end{prop} We think of ${\overline{\mathcal{M}}_{0,n}(X,\beta)}$ as a compactification of $\mathcal{M}_{0,n}(X,\beta)$; indeed, stable maps were defined with this in mind. Besides allowing degenerations of the maps themselves, the compactification allows marked points to approach nodes and each other without coinciding. In the limit, new components of the curve appear at the place where such coincidence would otherwise occur. We illustrate the two main types of such ``sprouting'' of new components. \begin{center} \begin{pspicture}(0,0)(9,4) \pnode(.5,1){a} \pnode(3.5,1){b} \dotnode(1.5,1){c} \dotnode(2.5,1){d} \ncline{a}{b} \uput{5pt}[d](1.5,1){$i$} \uput{5pt}[d](2.5,1){$j$} \uput{5pt}[l](.5,1){$d$} \pnode(1.5,1.25){e} \pnode(2.25,1.25){f} \ncline{->}{e}{f} \pnode(4,1.5){g} \pnode(5,1.5){h} \ncline{->}{g}{h} \pnode(5.5,1){i} \pnode(8.5,1){j} \pnode(7.5,.5){k} \pnode(7.5,3.5){l} \dotnode(7.5,1.75){m} \dotnode(7.5,2.75){n} \ncline{i}{j} \ncline{k}{l} \uput{5pt}[r](7.5,2.75){$i$} \uput{5pt}[r](7.5,1.75){$j$} \uput{5pt}[u](7.5,3.5){$0$} \uput{5pt}[l](5.5,1){$d$} \end{pspicture} \vspace{0.1in} \begin{pspicture}(0,0)(9,4) \pnode(.5,1){a} \pnode(3.5,1){b} \pnode(2.5,.5){c} \pnode(2.5,3.5){d} \ncline{a}{b} \ncline{c}{d} \dotnode(1.5,1){z} \uput{5pt}[d](1.5,1){$i$} \uput{5pt}[l](.5,1){$d_1$} \uput{5pt}[u](2.5,3.5){$d_2$} \pnode(1.5,1.25){e} \pnode(2.25,1.25){f} \ncline{->}{e}{f} \pnode(4,1.5){g} \pnode(5,1.5){h} \ncline{->}{g}{h} \pnode(5.5,1){i} \pnode(8.5,1){j} \pnode(7.5,.5){k} \pnode(7.5,3.5){l} \pnode(8.5,3){m} \pnode(5.5,3){n} \ncline{i}{j} \ncline{k}{l} \ncline{m}{n} \dotnode(7.5,2){y} \uput{5pt}[l](7.5,2){$i$} \uput{5pt}[l](5.5,1){$d_1$} \uput{5pt}[l](5.5,3){$d_2$} \uput{5pt}[u](7.5,3.5){$0$} \end{pspicture} \end{center} \noindent The new components arising in this way always have degree zero and contain any marked points involved in the limit. Of course, larger numbers of marked points can simultaneously approach each other as well. The degenerations become more complicated. (See \cite{FM}.) Behrend and Manin prove further basic properties of these moduli stacks in \cite{BM}, including a description of the universal family $(\pi:{\mathcal C}\rightarrow{\overline{\mathcal{M}}_{g,n}(X,\beta)},\sigma_1,\ldots,\sigma_n,\mu:{\mathcal C}\rightarrow X)$ over the moduli stack ${\overline{\mathcal{M}}_{g,n}(X,\beta)}$. This description can be conveniently expressed using contraction (or ``forgetful'') morphisms, which we will now introduce. En route to doing so, we will briefly describe another class of moduli spaces that will play an auxiliary role in this dissertation, the moduli spaces of stable curves. \begin{Def} An $n$-pointed, genus $g$ curve $C$ (Definition \ref{curve}) is {\em stable} if, whenever $E$ is an irreducible component of $C$, then \begin{enumerate} \item If $g(E)=0$, then $E$ contains at least three special points of $C$. \item If $g(E)=1$, then $E$ contains at least one special point of $C$.
\end{enumerate} \end{Def} \noindent Comparing with Definition \ref{def:sm}, we see that stable curves correspond to stable maps to a point. The definitions of families of stable curves and morphisms between these families are analogous to Definitions \ref{def:fsm} and \ref{def:mor}. Families of $n$-pointed, genus $g$ stable curves, together with their morphisms, form a category ${\overline{\mathcal{M}}}_{g,n}$. All of the ${\overline{\mathcal{M}}}_{g,n}$ are Deligne-Mumford stacks except for ${\overline{\mathcal{M}}}_{0,0}$, ${\overline{\mathcal{M}}}_{0,1}$, ${\overline{\mathcal{M}}}_{0,2}$, and ${\overline{\mathcal{M}}}_{1,0}$. Stable curves do not exist in these cases with the definition above, although these moduli spaces do exist as Artin stacks. Coarse moduli schemes ${\overline{M}}_{g,n}$ also exist with the same exceptions. In fact, for $n\geq 3$, Knudsen shows in \cite{Kn} that ${\overline{M}}_{0,n}$ is a fine moduli scheme and a complete, nonsingular variety. The stack ${\overline{\mathcal{M}}}_{0,n}$ itself may be considered as a variety in this case. We will use the notation ${\overline{M}}_{0,n}$ for these moduli spaces in recognition of these nice properties. Knudsen defines and constructs contraction morphisms for families of stable curves. Assume that $2g-2+n>0$, and let $\phi$ be a morphism over $S$ from a family $({\mathcal C}\rightarrow S,s_1,\ldots,s_{n+1})$ of $(n+1)$-pointed, genus $g$ stable curves to a family $({\mathcal C}^{\prime}\rightarrow S,s^{\prime}_1,\ldots,s^{\prime}_{n})$ of $n$-pointed, genus $g$ stable curves. \begin{Def}\label{def:contract} With notation as above, let $s\in S$ be a geometric point, $E_s\subset{\mathcal C}_s$ the component of the fiber over $s$ containing the $(n+1)$'st marked point $s_{n+1}(s)$. The morphism $\phi$ is a {\em contraction} if $s_i^{\prime}=\phi s_i$ for $i\in\und{n}$ and $\phi_s:{\mathcal C}_s\rightarrow{\mathcal C}_s^{\prime}$ is an isomorphism except in the following situation: If $g(E_s)=0$ and $E_s$ has only three special points, then $E_s$ maps to a closed point ({\em i.e.}, is contracted), and $\phi_s$ is an isomorphism between the complements of $E_s$ and its image. \end{Def} Knudsen further shows that these contraction morphisms are unique. These morphisms give rise to contraction morphisms $\pi:{\overline{\mathcal{M}}}_{g,n+1}\rightarrow{\overline{\mathcal{M}}}_{g,n}$ of the moduli stacks via their universal families. By permuting labels on the marked points and applying the above repeatedly, we can extend the definition of contraction to morphisms that forget the marked points labeled by any subset $B\subset\und{n+1}$ of the labeling set (as long as the target is not one of the four exceptions listed earlier). Similarly, given $A\subset\und{n}$, there is a contraction morphism $\pi_A:{\overline{\mathcal{M}}}_{g,n}(X,\beta)\rightarrow{\overline{\mathcal{M}}}_{g,A}(X,\beta)$. (If $\beta=g=0$, we require $|A|\geq 3$, and if $\beta=0$ and $g=1$, we require $A\neq\emptyset$.) Here ${\overline{\mathcal{M}}}_{g,A}(X,\beta)$ parameterizes stable maps whose marked points are indexed by $A$, although contraction morphisms often implicitly include a monotonic relabeling of the marked points of the target by $\und{|A|}$. This map is given pointwise by forgetting all the marked points in the complement of $A$ and contracting components that become unstable. Behrend and Manin show these are morphisms by modifying the argument of \cite{Kn}. Contraction morphisms on the coarse moduli spaces are defined in the same way. 
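The simplest instance of Definition \ref{def:contract} already exhibits the two behaviors a contraction can have; the example below uses only that definition.

\begin{eg} Consider the contraction ${\overline{M}}_{0,4}\rightarrow{\overline{M}}_{0,3}$ which forgets the fourth marked point. On a curve with two components, one carrying $p_1$ and $p_2$ and the other carrying $p_3$ and $p_4$, forgetting $p_4$ leaves the second component with only two special points (the node and $p_3$), so that component is contracted; the image is an irreducible three-pointed curve with $p_3$ at the former point of attachment. On an irreducible four-pointed curve, by contrast, the contraction simply forgets $p_4$ and no component is contracted. \end{eg}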
In case only the $i$'th marked point is forgotten, we write the contraction as $\pi_i$. The family of curves $\pi:{\mathcal C}\rightarrow{\overline{\mathcal{M}}_{g,n}(X,\beta)}$ involved in the universal family is given by $\pi_{n+1}:{\overline{\mathcal{M}}}_{g,n+1}(X,\beta)\rightarrow{\overline{\mathcal{M}}_{g,n}(X,\beta)}$. On the fiber over $(C,x_1,\ldots,x_n,f)$, the universal stable map $\mu:{\mathcal C}\rightarrow X$ is induced by $f$ since there is an inclusion $C\rightarrow{\overline{\mathcal{M}}}_{g,n+1}(X,\beta)$. For $i\in\und{n}$, the image of a stable map $(C,x_1,\ldots,x_n,f)$ under the universal section $\sigma_i$ is the {\em stabilization} of the prestable map $(C,x_1,\ldots,x_n,x_i,f)$ where the $i$'th and $(n+1)$'st marked points agree. Stabilization is achieved by replacing $x_i$ with a rational component containing the $i$'th and $(n+1)$'st marked points and mapping this new component to the image of $x_i$. Stabilization is also a morphism (\cite{Kn}, \cite{BM}). There is another forgetful morphism $\operatorname{st}:{\overline{\mathcal{M}}_{g,n}(X,\beta)}\rightarrow{\overline{\mathcal{M}}}_{g,n}$ which forgets the map data, remembering only the source curves and their marked points, and contracts components which become unstable as a result. Manin calls this the {\em absolute stabilization map} and shows that it is a morphism in \cite[Chapter V]{Man}. More detail is available in \cite{BM}. Composing each universal section with the universal map of the universal family of stable maps over the moduli space, we get evaluation maps $\operatorname{ev}_1,\ldots,\operatorname{ev}_n$; $\operatorname{ev}_i=\mu\circ\sigma_i$. The universal map $\mu$ considered above can be identified with $\operatorname{ev}_{n+1}$. Pointwise we have $\operatorname{ev}_i(C,x_1,\ldots,x_n,f) =f(x_i)$. The last collection of morphisms we need is that of the {\em gluing morphisms}. Fiberwise, these involve gluing two disjoint curves together at a distinguished marked point of each. For stable maps, the images of these marked points must agree. (There is another kind of gluing morphism that identifies two marked points on the same curve. However, since this type of gluing always increases the arithmetic genus of the curve, it will not play a role in our study.) For families, this amounts to gluing along two sections. Proofs that these are morphisms can be found in \cite{Kn} for stable curves and \cite{Man} for stable maps. We end up with gluing morphisms \[{\overline{\mathcal{M}}}_{g_1,n_1+1}\times{\overline{\mathcal{M}}}_{g_2,n_2+1}\rightarrow{\overline{\mathcal{M}}}_{g_1+g_2,n_1+n_2} \] and \[{\overline{\mathcal{M}}}_{g_1,n_1+1}(X,\beta_1)\times_X{\overline{\mathcal{M}}}_{g_2,n_2+1}(X,\beta_2)\rightarrow {\overline{\mathcal{M}}}_{g_1+g_2,n_1+n_2}(X,\beta_1+\beta_2) \] among moduli spaces. For the latter maps, the fiber product is with respect to the evaluation morphisms corresponding to the two markings being glued. The image of a gluing map is a boundary divisor whose generic stable map has domain curve with two irreducible components, as described in Section \ref{sec:gen} for some special cases. The attributes of each component are inherited from the corresponding factor in the domain of the gluing map. \section{Intersection theory} \label{sec:int} We need to define the Chow rings of the moduli stacks ${\overline{\mathcal{M}}_{g,n}(\mathbb{P}^{r},d)}$ and describe their basic properties. Fulton's book \cite{F} presents a comprehensive introduction to intersection theory on algebraic schemes.
The requisite extensions to an analogous theory on Deligne-Mumford stacks were developed by Vistoli in \cite{V}. Manin gives a wonderfully clear exposition of Vistoli's theory in \cite[Chapter V,\S\S 6--8]{Man}. Since Manin records much of the information we need on Chow groups, the first subsection will essentially reproduce parts of his exposition for the reader's convenience. The second subsection is for the most part taken directly from \cite{V}. Below we only review the relevant definitions and properties, referring the reader to \cite{V} and \cite{Man} for proofs and details. All stacks are assumed to be of finite type over a fixed field $k$ unless otherwise specified. \subsection{Chow groups} \begin{Def} Let $F$ be a stack. A {\em cycle of dimension $n$} on $F$ is an element of the free abelian group $Z_n(F)$ generated by the symbols $[G]$ for all $n$-dimensional integral closed substacks $G$ of $F$ . \end{Def} \noindent Here $[G]$ is the cycle associated with the closed substack $G$ via equivalence groupoids. A {\em rational function} on an integral stack $G$ is an equivalence class of morphisms $G^{\prime}\rightarrow{\mathbb A}^1$, where $G^{\prime}$ is an open dense substack of $G$, and morphisms are equivalent if they agree on a dense open substack. The rational functions on $G$ form a field $k(G)$. Let $W_n(F)=\+_Gk(G)$, the sum being taken over all integral substacks $G$ of $F$ of dimension $n+1$. If $X$ is a scheme, there is a homomorphism \[\operatorname{div}_n^X:W_n(X)\rightarrow Z_n(X)\] that takes a rational function to the cycle associated to its Weil divisor. Thus there is a morphism $\operatorname{div}_n$ of presheaves on the \'{e}tale topology of $F$ given on an open set $X\rightarrow F$ by $\operatorname{div}_n^X$. The presheaves $W_n$ and $Z_n$ are sheaves by descent theory, and their groups of global sections are $W_n(F)$ and $Z_n(F)$, respectively. \begin{Def} The {\em group $R_n(F)$ of $n$-dimensional cycles rationally equivalent to zero} on a stack $F$ is the image of $\operatorname{div}_n$ on global sections. \end{Def} \begin{Def} The {\em Chow group} of $F$ is $A_*(F)=\+_{n\geq 0}A_n(F)$, where $A_n(F)$ is the quotient $Z_n(F)/R_n(F)$. The {\em rational Chow group} $A_*(F)_{\mathbb Q}$ of $F$ is $A_*(F)\*{\mathbb Q}$. \end{Def} The basic operations of Vistoli's intersection theory regularly introduce fractional coefficients to cycles and their classes in reaction to nontrivial automorphism groups of the corresponding substacks. Thus it only makes sense to work with the rational Chow groups $A_*(F)_{\mathbb Q}$. We will use rational Chow groups throughout this dissertation, and from now on we write $A_*(F)_{\mathbb Q}$ as simply $A_*(F)$. All cycle groups will be tensored with ${\mathbb Q}$ as well. (Kresch has developed an integer-valued intersection theory on Deligne-Mumford stacks in \cite{Kr}, but one must still tensor with ${\mathbb Q}$ in order to do enumerative geometry.) For an integral stack $G$ of a stack $F$, let $\delta(G)$ be the degree of the automorphism group of a generic point of $G$. Define the {\em normalized fundamental cycle} of $G$ to be $[G]_{\operatorname{nor}}=\delta(G)[G]$, where $[G]$ is the fundamental cycle associated to $G$ as above. We will use the same symbols for the corresponding classes in the Chow group of $F$. Thus there are two notions of the fundamental class of a substack, and it turns out both are important. 
This may have first been pointed out by Mumford in \cite{Mu}, who described the classes $[G]_{\operatorname{nor}}$ as the appropriate ones for determining rational equivalences and the classes $[G]$ as the right ones for computing intersections. Although these classes only differ by a rational number, we must be careful to properly distinguish between them later. Otherwise the results will be confused and useless. Let $f:F\rightarrow G$ be a separated dominant morphism of finite type of integral stacks. It gives an imbedding $k(G)\rightarrow k(F)$. We define the {\em degree} of $f$ by \[\operatorname{deg}(f)=\operatorname{deg}(F/G)=\frac{\delta(G)}{\delta(F)}[k(F):k(G)]\text{.}\] Some basic operations of intersection theory are proper pushforward, flat pullback, and Gysin maps. We now give the definitions of the first two in Vistoli's theory. We will sketch the construction of Gysin maps and list some of their properties in the next subsection. \begin{Def} Let $f:F\rightarrow G$ be a morphism of stacks. \begin{enumerate} \item If $f$ is flat (and of constant relative dimension), we define the flat pullback \[f^*:Z_*(G)\rightarrow Z_*(F)\] by $f^*[G^{\prime}]=[G^{\prime}\times_G F]$ for any closed integral substack $G^{\prime}$ of $G$. \item If $F$ is proper, the proper pushforward \[f_*:Z_*(F)\rightarrow Z_*(G)\] is defined by $f_*[F^{\prime}]=\operatorname{deg}(F^{\prime}/G^{\prime})[G^{\prime}]$, where $F^{\prime}$ is an integral substack of $F$ and $G^{\prime}$ is its image in $G$. \end{enumerate} \end{Def} \begin{prop} The flat pullback and proper pushforward defined above pass to rational equivalence. \end{prop} Thus we get flat pullback $f^*:A_*(G)\rightarrow A_*(F)$ and proper pushforward $f_*:A_*(F)\rightarrow A_*(G)$. \subsection{Cones, local regular embeddings, and Gysin maps} Constructing Gysin maps for Chow groups of stacks requires additional theory, which we will only sketch. The most important players are cones and regular local embeddings. We quote the relevant material from \cite{V}. Cones can be constructed by descent: Let \vspace{0.1in} \renewcommand{\baselinestretch}{1}\small\normalsize \psset{arrows=->} \begin{center} \begin{psmatrix} $U\times_F U$ & $U$ \ncline[offset=-3pt]{1,1}{1,2}\nbput{$p_1$} \ncline[offset=3pt]{1,1}{1,2}\naput{$p_2$} \end{psmatrix} \end{center} \psset{arrows=-} \vspace{0.1in} \noindent be a presentation of $F$. If $C_U$ is a cone on $U$, and there is an isomorphism of cones $p_1^*C_U\simeq p_2^*C_U$ satisfying the cocycle condition, then $C_U$ is the pullback of a canonically defined cone on $F$. \begin{lem} \label{locemb} Let $f:F\rightarrow G$ be a representable morphism of finite type of stacks. Then $f$ is unramified if and only if there are atlases $U\rightarrow F$ and $V\rightarrow G$ together with an embedding $U\rightarrow V$ compatible with $f$. \end{lem} For this reason, a representable, unramified morphism of finite type of stacks is called a {\em local embedding}. \begin{Def} A local embedding of stacks $f:F\rightarrow G$ is {\em regular of codimension $d$} if we can choose $U$, $V$ and the local embedding $f^{\prime}:U\rightarrow V$ as in the previous lemma such that $f^{\prime}$ is a regular embedding of codimension $d$. \end{Def} \begin{eg} If $F$ is a smooth stack over a scheme $S$ of constant relative dimension $d$, then the diagonal $F\rightarrow F\times_S F$ is a regular local embedding of codimension $d$. \end{eg} In the situation of Lemma \ref{locemb}, consider the normal cone $C_{U/V}$ to $U$ in $V$. 
One can show that there is an isomorphism from $p_1^*C_{U/V}$ to $p_2^*C_{U/V}$ that satisfies the cocycle condition. \begin{Def} The cone $C_{F/G}$ obtained by descent from $C_{U/V}$ is called the {\em normal cone} to $F$ in $G$. If $f$ is a regular local embedding, $C_{F/G}$ is a vector bundle called the normal bundle and written $N_{F/G}$. \end{Def} Gysin maps can be defined in a manner quite similar to that used for schemes in \cite{F}. Suppose we have a fiber square of stacks \vspace{0.2in} \renewcommand{\baselinestretch}{1}\small\normalsize \psset{arrows=->} \begin{equation}\label{square} \text{ \begin{psmatrix} $F^{\prime}$ & $G^{\prime}$ \\ $F$ & $G$ \ncline{1,1}{2,1}\naput{$p$} \ncline{1,1}{1,2}\naput{$g$} \ncline{2,1}{2,2}\naput{$f$} \ncline{1,2}{2,2}\naput{$q$} \end{psmatrix} }\end{equation} \psset{arrows=-} \vspace{0.2in} \noindent with $f$ a local regular imbedding of codimension $d$. Vistoli assumes that $G^{\prime}$ is a scheme for his construction. We can reduce to the case where $G^{\prime}$ is a scheme by taking atlases. Thus we can use Vistoli's approach. First assume $G^{\prime}=T$ is a purely $k$-dimensional scheme, so that $F^{\prime}=U$ is as well. Then $g$ is a local imbedding of schemes. There is a natural closed imbedding $C_{U/T}\rightarrow N$, where $N=p^*N_{F/G}$ is the pullback of the normal bundle of $F$ in $G$. Let $s:U\rightarrow N$ be the zero section, and let $s^*:A_*(N)\rightarrow A_*(U)$ be the Gysin homomorphism defined in \cite[\S 3.3]{F} as the inverse to the isomorphism induced by the vector bundle projection. Define $(F.T)\in A_{k-d}(U)$ to be $(F.T)=s^*[C_{U/T}]$. Now suppose $G^{\prime}$ is an arbitrary scheme. Let $y^{\prime}=\sum_im_i[T_i]\in Z_*(G^{\prime})$, where the $T_i$ are subvarieties of $G^{\prime}$. Let $h_i:F\times_G T_i\rightarrow F^{\prime}$ be the natural imbeddings. \begin{Def} The {\em Gysin map} $f^!:Z_*(G^{\prime})\rightarrow A_*(F^{\prime})$ is a homomorphism defined by \[f^!y^{\prime}=\sum_im_i{h_i}_*(F.T_i)\text{.}\] \end{Def} \noindent Gysin maps pass to rational equivalence, and we call the resulting homomorphisms $f^!:A_*(G^{\prime})\rightarrow A_*(F^{\prime})$ Gysin maps as well. These statements are also true if $G^{\prime}$, and thus $F^{\prime}$, are allowed to be stacks, as we assume for the rest of the subsection. \begin{prop}\label{Gys} Gysin maps satisfy the following properties: \begin{enumerate} \item {\bf (Compatibility with proper pushforwards)} If $p$ is proper and $f$ is a local regular imbedding in the fiber diagram (\ref{Gys1}) below, then $f^!p_*=q_*f^!$. \item {\bf (Compatibility with flat pullbacks)} If $p$ is flat and $f$ is a local regular imbedding in the fiber diagram (\ref{Gys1}) below, then $f^!p^*=q^*f^!$. \item {\bf (Commutativity)} If $f$ and $j$ are local regular imbeddings in the fiber diagram (\ref{Gys2}) below, then $f^!j^!=j^!f^!$.
\end{enumerate} \vspace{0.2in} \renewcommand{\baselinestretch}{1}\small\normalsize \psset{arrows=->} \begin{equation}\label{Gys1} \text{ \begin{psmatrix} $F^{\prime\prime}$ & $G^{\prime\prime}$ \\ $F^{\prime}$ & $G^{\prime}$ \\ $F$ & $G$ \ncline{1,1}{2,1}\naput{$q$} \ncline{1,1}{1,2}\naput{$f^{\prime\prime}$} \ncline{2,1}{2,2}\naput{$f^{\prime}$} \ncline{1,2}{2,2}\naput{$p$} \ncline{2,1}{3,1} \ncline{2,2}{3,2}\naput{$g$} \ncline{3,1}{3,2}\naput{$f$} \end{psmatrix} }\end{equation} \psset{arrows=-} \vspace{0.2in} \psset{arrows=->} \begin{equation}\label{Gys2} \text{ \begin{psmatrix} $F^{\prime\prime}$ & $G^{\prime\prime}$ & $X$ \\ $F^{\prime}$ & $G^{\prime}$ & $Y$ \\ $F$ & $G$ \ncline{1,1}{2,1} \ncline{1,1}{1,2} \ncline{2,1}{2,2} \ncline{1,2}{2,2} \ncline{2,1}{3,1} \ncline{2,2}{3,2} \ncline{3,1}{3,2}\naput{$f$} \ncline{1,2}{1,3} \ncline{2,2}{2,3}\naput{$g$} \ncline{1,3}{2,3}\naput{$j$} \end{psmatrix} }\end{equation} \psset{arrows=-} \renewcommand{\baselinestretch}{2}\small\normalsize \begin{flushright} $\Box$ \end{flushright} \end{prop} Gysin maps also satisfy other properties, but they will not be relevant for us. In case $G^{\prime}=G$ and $q=\operatorname{id}_G$ in Diagram (\ref{square}), we use the notation $f^*:A_*(G)\rightarrow A_*(F)$ for the Gysin map. \subsection{Intersection product} If $F$ is a smooth stack of dimension $n$, then the diagonal imbedding $\delta:F\rightarrow F\times F$ is a local regular imbedding of codimension $n$. Define a product morphism $A_k(F)\* A_l(F)\rightarrow A_{k+l-n}(F)$ by \[x\cdot y=\delta^*(x\times y)\text{.}\] Set $A^k(F)=A_{n-k}(F)$ and $A^*(F)=\sum_k A^k(F)$. Then this product makes $A^*(F)$ into a commutative and graded ring with identity element $[F]$, called the {\em Chow ring of $F$}. \subsection{Additional intersection theory facts} In this subsection, we mention some other intersection theoretic objects and ideas that we will need, including the excision sequence, Chern classes, expected dimension and virtual fundamental classes of moduli spaces, and the homology isomorphism. \begin{prop}[Excision sequence] Let $i:G\rightarrow F$ be a closed substack of a stack $F$ with complement $j:U\rightarrow F$. Then the sequence \psset{arrows=->} \begin{equation}\label{eksiz} \text{ \begin{psmatrix} $A_k(G)$ & $A_k(F)$ & $A_k(U)$ & 0 \ncline{1,1}{1,2}\naput{$i_*$} \ncline{1,2}{1,3}\naput{$j^*$} \ncline{1,3}{1,4} \end{psmatrix} } \end{equation} \psset{arrows=-} \noindent of Chow groups is exact for all $k$. \end{prop} The excision sequence is a useful tool for learning about the structure of the Chow group of a space by decomposing it into simpler spaces. Chern classes of vector bundles on stacks can be defined almost exactly as done in \cite{F} for schemes. We will take the definition from \cite{Man}. Let $L$ be an invertible sheaf on a stack $F$ of dimension $n$. Then $c_1(L)\cap [F]\in A_{n-1}(F)$ is defined to be the Cartier divisor associated to $L$. More generally, for a closed substack $f:G\rightarrow F$, set \[c_1(L)\cap [G]=f_*(c_1(f^*(L))\cap [G])\text{.}\] Thus $c_1(L)\cap$ is an operator on $A_*(F)$. If $E\rightarrow F$ is a vector bundle of rank $e+1$, let $p:P\rightarrow F$ be its projectivization with tautological bundle ${\mathcal O}_P(1)$. First, we define Segre classes $s_i(E)\cap$ by \[s_i(E)\cap y=p_*(c_1({\mathcal O}_P(1))^{e+i}\cap p^*(y))\] for $y\in A_*(F)$. The Segre polynomial of $E$ is $s_t(E)=\sum_i s_i(E)t^i$. The Chern polynomial $c_t(E)$ is defined similarly; its coefficients are the Chern classes of $E$.
We define the Chern classes by the formula $c_t(E)=(s_t(E))^{-1}$. These classes satisfy the usual properties of Chern classes. We will often slightly abuse terminology and notation by calling the cycle class $c_i(E)\cap[F]$ the $i$'th Chern class of $E$ and writing it as just $c_i(E)$. The moduli spaces ${\overline{\mathcal{M}}_{g,n}(X,\beta)}$ have an {\em expected dimension} from deformation theory, given by the formula \[(\operatorname{dim}(X)-3)(1-g)-\int_{\beta}K_X + n\text{,}\] where $K_X$ is the canonical class of $X$. If $X$ is convex, then the spaces ${\overline{\mathcal{M}}_{0,n}(X,\beta)}$ always have the expected dimension. (See \cite{FP}.) For ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$ this dimension is $d+r+dr+n-3$. All of our work will occur in this pleasant situation. However, in general the dimension of ${\overline{\mathcal{M}}_{g,n}(X,\beta)}$ can be larger than its expected dimension. This is analogous to the situation where two subvarieties of a variety do not intersect properly. In this case the ordinary fundamental class $[{\overline{\mathcal{M}}_{g,n}(X,\beta)}]$ is not the correct class to integrate against (nor is $[{\overline{\mathcal{M}}_{g,n}(X,\beta)}]_{\operatorname{nor}}$) in order to compute gravitational correlators or do enumerative geometry in general. We must introduce a third type of fundamental class $[{\overline{\mathcal{M}}_{g,n}(X,\beta)}]^{\text{vir}}$ called the {\em virtual fundamental class}, which lives in the Chow group of the expected dimension and gives enumerative geometry and gravitational correlators the desired properties. Construction of virtual fundamental classes is very complicated. See \cite[\S 7.1.4]{CK} for an overview and further references. We mention the virtual fundamental class only because it appears in some of our general definitions and formulas. Let $H^*(F)$ denote the rational de Rham cohomology ring of a Deligne-Mumford stack $F$. \begin{prop}[Homology isomorphism] \label{hi} Let $X$ be a flag variety. Then there is a ring isomorphism \begin{equation}\label{a2h} A^*({\overline{\mathcal{M}}_{0,n}(X,\beta)})\rightarrow H^*({\overline{\mathcal{M}}_{0,n}(X,\beta)}) \end{equation} \end{prop} \noindent See \cite{O} for a proof. In particular, this holds for $X={\mathbb P}^r$. It allows us to switch freely between cohomology and Chow rings. We should note that the isomorphism doubles degrees. The degree of a $k$-cycle in $A^k({\overline{\mathcal{M}}_{0,n}(X,\beta)})$, called the algebraic degree, is half the degree of its image in $H^{2k}({\overline{\mathcal{M}}_{0,n}(X,\beta)})$. \renewcommand{\baselinestretch}{1} \section{Equivariant Cohomology and Localization on ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$} \label{sec:eq} Localization often greatly simplifies gravitational correlator and other enumerative geometry calculations. The calculations we are interested in take place in intersection rings of moduli stacks ${\overline{\mathcal{M}}_{g,n}(X,\beta)}$. We will concentrate on the relatively simple case where computations occur in the intersection rings of spaces ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$. \subsection{Equivariant cohomology and the localization theorem of Atiyah and Bott} Equivariant cohomology was originally constructed in the topological setting by Atiyah and Bott (\cite{AB}). More recently, the corresponding algebraic theory of equivariant Chow rings was developed by Edidin and Graham (\cite{EG}). 
It follows from Proposition \ref{hi} that the cycle map of \cite{EG} gives an isomorphism between the equivariant Chow ring and the equivariant cohomology ring of a moduli space of stable maps, so we can switch freely between these settings. Our description below follows the exposition of \cite{CK}, which takes a topological perspective. Let $X$ be a topological space and $G$ a connected Lie group with classifying bundle $EG\rightarrow BG$. Setting $X_G=X\times_{G}EG$, we define the {\em equivariant cohomology} of $X$ to be $H_G^*(X)=H^*(X_G)$. We now state some basic facts about equivariant cohomology. First, $H_G^*(\operatorname{point})=H^*(BG)$. By pullback via $X\rightarrow\operatorname{point}$, $H_G^*(X)$ is an $H^*(BG)$-module. We can regard $H^*(BG)$ as the coefficient ring for equivariant cohomology. Note that inclusion of a fiber $i_X:X\rightarrow X_G$ induces a ``forgetful map'' $i_X^*:H_G^*(X)\rightarrow H^*(X)$. We will consider the case where $G$ is the torus $T=({\mathbb C}^*)^n$. Let $M(T)$ be the character group of $T$. For each $\rho\in M(T)$, we get a 1-dimensional vector space ${\mathbb C}_\rho$ with a $T$-action given by $\rho$. If $L_\rho=({\mathbb C}_\rho)_T$ is the corresponding line bundle over $BT$, then the assignment $\rho\mapsto -c_1(L_\rho)$ defines an isomorphism $\psi:M(T)\rightarrow H^2(BG)$, which induces a ring isomorphism $\operatorname{Sym}(M(T))\simeq H^*(BG)$. We call $\psi(\rho)$ the {\em weight} of $\rho$. Let $\rho_i\in M(T)$ be the character given by the $i$'th projection, and let $\lambda_i$ be the weight of $\rho_i$. Then \[H^*(BG)\simeq {\mathbb C}[\lambda_1,\ldots,\lambda_n]\text{.}\] The map $i_X^*$ can be thought of as a nonequivariant limit that maps all the $\lambda_i$ to 0. We denote the line bundle $L_{\rho_i}$ by ${\mathcal O}(-\lambda_i)$, so that $\lambda_i=c_1({\mathcal O}(\lambda_i))$. Let $T$ act on a smooth manifold $X$. The fixed point locus $X^T$ is a union of smooth connected components $Z_j$. We have inclusions $i_j:Z_j\rightarrow X$ and normal bundles $N_j=N_{Z_j/X}$ which are equivariant. Inclusion induces $i_j^*:H_T^*(X)\rightarrow H_T^*(Z_j)$. Since $Z_j$ is a submanifold of $X$, we also have a Gysin map ${i_j}_!:H_T^*(Z_j)\rightarrow H_T^*(X)$. If $E$ is a $G$-equivariant vector bundle of rank $r$ on $X$, the top Chern class $\text{Euler}_G(E)=c_r^G(E)$ is called the equivariant Euler class of $E$. Let ${\mathcal R}_T\simeq{\mathbb C}(\lambda_1,\ldots,\lambda_n)$ be the field of fractions of $H^*(BT)$. Atiyah and Bott \cite{AB} have shown that $\operatorname{Euler_T}(N_j)$ is invertible in the localization $H_T^*(Z_j)\*{\mathcal R}_T$ for all $j$. \begin{thm}[Localization Theorem of Atiyah-Bott] There is an isomorphism \[H_T^*(X)\*{\mathcal R}_T\simeq\+_j H_T^*(Z_j)\*{\mathcal R}_T\] induced by the map $\alpha\mapsto(i_j^*(\alpha)/\operatorname{Euler_T}(N_j))_j$. The inverse is induced by $(\alpha_j)_j\mapsto\sum_j{i_j}_!(\alpha_j)$. Thus for any $\alpha\in H_T^*(X)\*{\mathcal R}_T$ we have \begin{equation}\label{locexp} \alpha=\sum_j {i_j}_!\left(\frac{i_j^*(\alpha)}{\operatorname{Euler_T}(N_j)}\right)\text{.} \end{equation} \end{thm} \noindent {\bf Idea of Proof:} The self-intersection formula says $i_j^*\circ{i_j}_!(\gamma)=\gamma\cup\operatorname{Euler_T}(N_j)$ for any $\gamma\in H_T^*(Z_j)$. Let $\gamma=i_j^*(\alpha)$.$\Box$ For any variety $X$ with $T$-action, $X\rightarrow\operatorname{point}$ induces an equivariant projection $\pi_X:X_T\rightarrow BT$. 
The pushforward map ${\pi_X}_!$ will be called the equivariant integral and written \[\int_{X_T}:H_T^*(X)\rightarrow H^*(BT)\text{.}\] \begin{cor} For any $\alpha\in H_T^*(X)\*{\mathcal R}_T$, \[\int_{X_T}\alpha=\sum_j \int_{(Z_j)_T}\frac{i_j^*(\alpha)}{\operatorname{Euler_T}(N_j)}\text{.}\] \end{cor} \noindent {\bf Proof.} Apply ${\pi_X}_!$ to both sides of (\ref{locexp}).$\Box$ A more topological or analytical approach to moduli problems leads to the notion of an {\em orbifold}. Orbifolds correspond to smooth Deligne-Mumford stacks. A variety $X$ is an orbifold if it admits local (analytic) charts $U/H$ with $U$ smooth and $H$ a small subgroup of $\operatorname{GL}(n,{\mathbb C})$ acting on $U$. Deligne-Mumford stacks always admit stratification into quotient substacks. Viewing these quotients as varieties rather than stacks, it is not hard to imagine that one can get an orbifold associated to the stack if the stack is smooth. A $T$-action on a stack gives rise to local $T$-actions on the charts $U$. By working with the charts, the technical issues that would arise in considering localization on smooth stacks directly can be avoided. We will always take this approach, accounting for the local quotients at the end by dividing answers by the order of the quotienting group. The resulting formula is otherwise the same as that for varieties. \begin{cor} \label{stackloc} Let $X$ be an orbifold which is the variety underlying a smooth stack with a $T$-action. If $\alpha\in H_T^*(X)\*{\mathcal R}_T$, then \[\int_{X_T}\alpha=\sum_j \int_{(Z_j)_T}\frac{i_j^*(\alpha)}{a_j\operatorname{Euler_T}(N_j)}\text{,}\] where $a_j$ is the order of the group $H$ occurring in a local chart at the generic point of $Z_j$. \end{cor} \subsection{Localization in ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$} The natural action of $T=({\mathbb C}^*)^{r+1}$ on ${\mathbb P}^r$ induces a $T$-action on ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$ by composition of the action with stable maps. This moduli space is a smooth Deligne-Mumford stack, so we can apply Corollary \ref{stackloc}. The fixed point loci and their equivariant normal bundles were determined for ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$ by Kontsevich \cite{Ko}. A $T$-fixed point of ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$ corresponds to a stable map $(C, p_1,\ldots, p_n, f)$ where each component $C_i$ of $C$ is either mapped to a $T$-fixed point of ${\mathbb P}^r$ or multiply covers a coordinate line. Also, each marked point $p_i$, each node of $C$, and each ramification point of $f$ is mapped to a $T$-fixed point of ${\mathbb P}^r$. This implies that coordinates on $C_i$ and its image can be chosen so the cover is given by $(x_0,x_1)\mapsto(x_0^{d_i},x_1^{d_i})$. As a result, the fixed point components $Z_j$ of the $T$-action can be described by combinatorial data. Let $q_0,\ldots, q_r$ be the fixed points of ${\mathbb P}^r$ under this $T$-action, so that $q_i=(0:\ldots:0:1:0:\ldots:0)$, with the 1 in the $i$'th position. The connected components of ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}^T$ are in 1--1 correspondence with connected trees ${\Gamma}$ of the following type: The vertices $v$ of ${\Gamma}$ are in 1--1 correspondence with the connected components $C_v$ of $f^{-1}(\{q_0,\ldots,q_r\})$, so each $C_v$ is either a point or a connected union of irreducible components of $C$.
The edges $e$ of ${\Gamma}$ correspond to irreducible components $C_e$ of $C$ which are mapped onto some coordinate line $\ell_e$ in ${\mathbb P}^r$. The graph ${\Gamma}$ has the following labels: Associate to each vertex $v$ the number $i_v$ defined by $f(C_v)=q_{i_v}$, as well as the set $S_v$ consisting of those $i$ for which the marked point $p_i$ is in $C_v$. Associate to each edge $e$ the degree $d_e$ of the map $f|_{C_e}$. Finally, we impose the following three conditions: \begin{enumerate} \item If an edge $e$ connects $v$ and $v^{\prime}$, then $i_v\neq i_{v^{\prime}}$, and $\ell_e$ is the coordinate line joining $q_{i_v}$ and $q_{i_{v^{\prime}}}$. \item $\sum_e d_e=d$. \item $\coprod_v S_v=\und{n}$. \end{enumerate} The locus of stable maps with graph ${\Gamma}$ is a fixed point substack ${\overline{\mathcal{M}}}_{\Gamma}$ of ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$. Fix $(C, p_1,\ldots, p_n, f)\in{\overline{\mathcal{M}}}_{\Gamma}$. For each $v$ such that $C_v$ is a curve, $C_v$ has $n(v)=|S_v|+\operatorname{val}(v)$ special points. The data consisting of $C_v$ plus these $n(v)$ points forms a stable curve, giving an element of ${\overline{M}}_{0,n(v)}$. Using the data of ${\Gamma}$, we can construct a morphism \[\psi_{\Gamma}:\prod_{v:\operatorname{dim} C_v=1} {\overline{M}}_{0,n(v)}\rightarrow {\overline{\mathcal{M}}}_{\Gamma}\text{.}\] Define ${\overline{M}}_{\Gamma}$ to be the above product (which is a point if no component of $C$ is contracted). The morphism $\psi_{\Gamma}$ is finite. Indeed, there is a finite group of automorphisms $A_{\Gamma}$ acting on ${\overline{M}}_{\Gamma}$ such that ${\overline{\mathcal{M}}}_{\Gamma}$ is the quotient stack $[{\overline{M}}_{\Gamma}/A_{\Gamma}]$. We have an exact sequence \[0\longrightarrow\prod_e {\mathbb Z}/d_e{\mathbb Z}\longrightarrow A_{\Gamma}\longrightarrow\operatorname{Aut}({\Gamma})\longrightarrow 0\text{,}\] where $\operatorname{Aut}({\Gamma})$ is the group of automorphisms of ${\Gamma}$ which preserve the labels. The left-hand term comes from sheet interchanges of multiply covering components. When Corollary \ref{stackloc} is applied to ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$, the factor $a_{{\Gamma}}$ appearing in the denominator of the term corresponding to ${\overline{\mathcal{M}}}_{\Gamma}$ is the order of $A_{\Gamma}$. The last ingredients needed in order to use localization on ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$ are the Euler classes of the fixed components. Denote the normal bundle of ${\overline{\mathcal{M}}}_{\Gamma}$ by $N_{\Gamma}$. Define a {\em flag} $F$ of a graph to be a pair $(v,e)$ such that $v$ is a vertex of $e$. Put $i(F)=v$ and let $j(F)$ be the other vertex of $e$. Set \[\omega_F=\frac{\lambda_{i_{i(F)}}-\lambda_{i_{j(F)}}}{d_e}\text{.}\] This corresponds to the weight of the $T$-action on the tangent space of the component $C_e$ of $C$ at the point $p_F$ lying over $q_{i_v}$, where $v=i(F)$. Let $e_F$ be the first Chern class of the bundle on ${\overline{\mathcal{M}}}_{\Gamma}$ whose fiber is the cotangent space to the component associated to $v$ at $p_F$. (More information about this type of class is given in Section \ref{sec:psi}.) If $\operatorname{val}(v)=1$, let $F(v)$ denote the unique flag containing $v$. If $\operatorname{val}(v)=2$, let $F_1(v)$ and $F_2(v)$ denote the two flags containing $v$. Similarly, let $v_1(e)$ and $v_2(e)$ be the two vertices of an edge $e$.
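Before stating the formula for the equivariant Euler classes, we illustrate this combinatorial indexing in the smallest case of interest here; the count below follows directly from the three conditions above.

\begin{eg} Let $n=0$, $r=1$, and $d=2$, so the only fixed points of ${\mathbb P}^1$ are $q_0$ and $q_1$. Exactly three graphs occur: a single edge of degree two with vertex labels $0$ and $1$, and two paths consisting of two edges of degree one, with vertex labels $(0,1,0)$ and $(1,0,1)$, respectively. In each case every $C_v$ is a point, so ${\overline{M}}_{\Gamma}$ is a point. For the degree two edge, $A_{\Gamma}\simeq{\mathbb Z}/2{\mathbb Z}$ is generated by the sheet interchange of the double cover; for each path, $A_{\Gamma}\simeq{\mathbb Z}/2{\mathbb Z}$ is generated by the automorphism of ${\Gamma}$ swapping the two edges. Thus $a_{\Gamma}=2$ for every fixed locus of ${\overline{\mathcal{M}}}_{0,0}({\mathbb P}^1,2)$. \end{eg}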
\begin{thm}\label{norm} The equivariant Euler class of the normal bundle $N_{\Gamma}$ is a product of contributions from the flags, vertices and edges: \[\operatorname{Euler_T}(N_{\Gamma})=e_{{\Gamma}}^{\text{F}}e_{{\Gamma}}^{\text{v}}e_{{\Gamma}}^{\text{e}}\text{,}\] where \[e_{{\Gamma}}^{\text{F}}=\frac{\prod_{F:n(i(F))\geq 3}(\omega_F-e_F)} {\prod_{F}\prod_{j\neq i_{i(F)}}(\lambda_{i_{i(F)}}-\lambda_j)}\] \[e_{{\Gamma}}^{\text{v}}=\left(\prod_v \prod_{j\neq i_v}(\lambda_{i_v}-\lambda_j) \right) \left(\prod_{{\operatorname{val}(v)=2\atop S_v=\emptyset}}(\omega_{F_1(v)}+\omega_{F_2(v)})\right) /\prod_{{\operatorname{val}(v)=1\atop S_v=\emptyset}} \omega_{F(v)}\] \[e_{{\Gamma}}^{\text{e}}=\prod_e\left(\frac{(-1)^{d_e}(d_e!)^2 (\lambda_{i_{v_1(e)}}-\lambda_{i_{v_2(e)}})^{2d_e}}{d_e^{2d_e}} \prod_{{a+b=d_e\atop k\neq i_{v_j(e)}}} \left(\frac{a\lambda_{i_{v_1(e)}}+b\lambda_{i_{v_2(e)}}}{d_e}-\lambda_k\right)\right) \text{.} \] \end{thm} This formula won't be very efficient in most particular examples because there will be many cancellations. However, it is sufficient for our purposes. We are now equipped with the theory necessary to compute integrals over moduli spaces of stable maps using localization. This will be undertaken in Chapters \ref{sec:simpler} and \ref{sec:rel}. \chapter{The Betti numbers of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$} \label{sec:ser} \renewcommand{\baselinestretch}{1} \section{Serre polynomials and the Poincar\'{e} polynomial of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$} \label{sec:poincare} This section owes much to Getzler and Pandharipande, who provide the framework for computing the Betti numbers of all the spaces ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$ in their unpublished work \cite{GP}. (They have recently completed these computations in \cite{GP2}.) However, we will take the definitions and basic results from other sources, and prove their theorem in the special case that we need. We will compute a formula for the Poincar\'{e} polynomials of the moduli spaces ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$ using what are called Serre polynomials in \cite{GP} and \cite{Ge}. (These polynomials are also known as virtual Poincar\'{e} polynomials or E-polynomials.) Serre polynomials are defined for quasi-projective varieties over ${\mathbb C}$ via the mixed Hodge theory of Deligne (\cite{D}). Serre conjectured the existence of polynomials satisfying the first two key properties given below. A formula was later given by Danilov and Khovanski\u{\i} in \cite{DK}. If $(V,F,W)$ is a mixed Hodge structure over ${\mathbb C}$, set \[V^{p,q}=F^p\operatorname{gr}_{p+q}^WV\cap \bar{F}^q\operatorname{gr}_{p+q}^WV \] and let ${\mathcal X}(V)$ be the Euler characteristic of $V$ as a graded vector space. Then \[\operatorname{Serre}(X)=\sum_{p,q=0}^\infty u^pv^q{\mathcal X}(H_c^\bullet(X,{\mathbb C})^{p,q})\text{.} \] If $X$ is a smooth projective variety, then the Serre polynomial of $X$ is just its Hodge polynomial: \[\operatorname{Serre}(X)=\sum_{p,q=0}^\infty(-u)^p(-v)^q\operatorname{dim} H^{p,q}(X,{\mathbb C})\text{.}\] If $X$ further satisfies $H^{p,q}(X,{\mathbb C})=0$ for $p\neq q$, then we can substitute a new variable $q=uv$ for $u$ and $v$. In this case, the coefficients of the Serre polynomial of $X$ give its Betti numbers, so that $\operatorname{Serre}(X)$ is the Poincar\'{e} polynomial of $X$. We will use two additional key properties of Serre polynomials from \cite{Ge}.
The first gives a compatibility with decomposition: If $Z$ is a closed subvariety of $X$, then $\operatorname{Serre}(X)=\operatorname{Serre}(X\backslash Z)+\operatorname{Serre}(Z)$. (For example, decomposing ${\mathbb P}^n$ as the disjoint union of ${\mathbb A}^n$ and ${\mathbb P}^{n-1}$ and iterating gives $\operatorname{Serre}({\mathbb P}^n)=q^n+q^{n-1}+\cdots+1$, since $\operatorname{Serre}({\mathbb A}^n)=q^n$.) Second, it respects products: $\operatorname{Serre}(X\times Y)=\operatorname{Serre}(X)\operatorname{Serre}(Y)$. (This is actually a consequence of the previous properties.) It follows from these two properties that the Serre polynomial of a fiber space is the product of the Serre polynomials of the base and the fiber. The definition and properties above come from \cite{Ge}. We also use the following consequence of the Eilenberg-Moore spectral sequence, which is essentially Corollary 4.4 in \cite{Sm}. \begin{prop} Let $Y\rightarrow B$ be a fiber space with $B$ simply connected, and let $X\rightarrow B$ be continuous. If $H^*(Y)$ is a free $H^*(B)$-module, then \[H^*(X\times_B Y)\simeq H^*(X)\*_{H^*(B)}H^*(Y)\] as an algebra. \end{prop} Since we deal exclusively with cases where the isomorphism (\ref{a2h}) holds, there is never any torsion in the cohomology. Thus we have the following. \renewcommand{\baselinestretch}{1} \begin{cor} \label{serfiber} Let $X$ and $Y$ be varieties over a simply connected base $B$, and suppose either $X$ or $Y$ is locally trivial over $B$. Then \[\operatorname{Serre}(X\times_B Y) = \frac{\operatorname{Serre}(X)\operatorname{Serre}(Y)}{\operatorname{Serre}(B)}\text{.}\] \end{cor} \noindent We will sometimes use the notation $Y/B$ for the fiber of a fiber space $Y\rightarrow B$. To extend this setup to Deligne-Mumford stacks, where automorphism groups can be nontrivial (but still finite), {\em equivariant} Serre polynomials are needed. Let $G$ be a finite group acting on a (quasiprojective) variety $X$. The idea is this: The action of $G$ on $X$ induces an action on its cohomology (preserving the mixed Hodge structure), which in turn gives a representation of $G$ on each (bi)graded piece of the cohomology. The cohomology of the quotient variety $X/G$, and hence of the quotient stack $[X/G]$, is the part of the cohomology of $X$ which is fixed by the $G$-action, {\em i.e.}, in each degree the subspace on which the representation is trivial. Our definition comes from \cite{Ge}. The equivariant Serre polynomial $\operatorname{Serre}(X,G)$ of $X$ is given by the formula \[\operatorname{Serre}_g(X)=\sum_{p,q=0}^\infty u^pv^q \sum_i(-1)^i\operatorname{Tr}(g|(H_c^i(X,{\mathbb C}))^{p,q}) \] for each element $g\in G$. We can also describe the equivariant Serre polynomial more compactly with the formula \[\operatorname{Serre}(X,G)=\sum_{p,q=0}^\infty u^pv^q \sum_i(-1)^i[H_c^i(X,{\mathbb C})^{p,q}]\text{,}\] taken from \cite{GP}. In the case $G=S_n$, we write $\operatorname{Serre}_n(X)$ for $\operatorname{Serre}(X,S_n)$. A $G$-equivariant Serre polynomial takes values in $R(G)[u,v]$, where $R(G)$ is the virtual representation ring of $G$. The augmentation morphism $\epsilon:R(G)\rightarrow{\mathbb Z}$, which extracts the coefficient of the trivial representation $\textrm{\makebox[0.02in][l]{1}1}$ from an element of $R(G)$, extends to an augmentation morphism $R(G)[u,v]\rightarrow{\mathbb Z}[u,v]$. If $G$ acts on a quasi-projective variety $X$, the Serre polynomial of the quotient stack $[X/G]$ is the augmentation of the equivariant Serre polynomial of $X$. Every virtual representation ring $R(G)$ has the extra structure of a {\em $\lambda$-ring}. Our definition of a $\lambda$-ring comes from \cite{Knutson}.
First, let $\xi_1,\ldots,\xi_q,{\eta}_1,\ldots,{\eta}_r$ be variables, and let $s_i$ and $\sigma_i$ be the $i$'th elementary symmetric functions in the $\xi_j$'s and the ${\eta}_j$'s, respectively. Define $P_n(s_1,\ldots,s_n,\sigma_1,\ldots,\sigma_n)$ to be the coefficient of $t^n$ in \[\prod_{i,j}(1+\xi_i{\eta}_jt) \] and $P_{n,d}(s_1,\ldots,s_{nd})$ to be the coefficient of $t^n$ in \[\prod_{1\leq i_1<\ldots<i_d\leq q}(1+\xi_{i_1}\cdots\xi_{i_d}t)\text{.} \] \begin{Def}\label{lring} A {\em $\lambda$-ring} is a commutative ring $R$ with a series of operations $\lambda_k:R\rightarrow R$ for $k\in\{0\}\cup{\mathbb N}$ satisfying the following properties. \begin{enumerate} \item For all $x\in R$, $\lambda_0(x)=1$. \item For all $x\in R$, $\lambda_1(x)=x$. \item \label{lamsum} For all $x,y\in R$, $\lambda_n(x+y)=\sum_{k=0}^n \lambda_k(x)\lambda_{n-k}(y)$. \item $\lambda_t(1)=1+t$. \item For all $x,y\in R$ and $n\in\{0\}\cup{\mathbb N}$, $\lambda_n(xy)=P_n(\lambda_1x,\lambda_2x,\ldots,\lambda_nx,\lambda_1y,\ldots,\lambda_ny)$. \item For all $x\in R$ and $n,m\in\{0\}\cup{\mathbb N}$, $\lambda_m(\lambda_n(x))=P_{m,n}(\lambda_1x,\ldots,\lambda_{mn}x)$. \end{enumerate} \end{Def} \noindent Here $\lambda_t(x)$ is the formal power series $\sum_{i=0}^\infty \lambda_i(x)t^i$. If the $\lambda_i$ satisfy the first three properties, $R$ is called a {\em pre-$\lambda$-ring}. Let $V$ be a $G$-module. Then $\lambda_i(V)$ is the {\em $i$'th exterior power} ${\Lambda}^iV$ of $V$, where we define $g\in G$ to act by $g(v_1\wedge\ldots\wedge v_i)=gv_1\wedge\ldots\wedge gv_i$. Define $\lambda_0(V)$ to be the trivial one-dimensional representation. (One can similarly define a $G$-module structure on the $i$'th symmetric power $S^iV$.) Knutson proves in \cite[Chapter II]{Knutson} that these exterior power operations give $R(G)$ the structure of a $\lambda$-ring for any finite group $G$. Addition is given by $[V]+[W]=[V\+W]$, and the product is $[V]\cdot[W]=[V\* W]$, both with the naturally induced actions. Knutson also shows that ${\mathbb Z}$ is a $\lambda$-ring with $\lambda$-operations given via $\lambda_t(m)=(1+t)^m$. For $m,n\geq 0$, this definition gives $\lambda_n(m)=\lchoose{m}{n}$. Finally, he shows that if $R$ is a $\lambda$-ring, then there is a unique structure of $\lambda$-ring on $R[X]$ under which $\lambda_k(rX^n)=\lambda_k(r)X^{nk}$ for $n,k\in{\mathbb N}\cup\{0\}$ and $r\in R$. This gives a $\lambda$-ring structure on ${\mathbb Z}[q]$. The augmentation morphism $\epsilon:R(G)\rightarrow{\mathbb Z}$ is a {\em map of $\lambda$-rings}; it commutes with the $\lambda$-operations. We will use the following facts about Serre polynomials and equivariant Serre polynomials. For $n\in{\mathbb N}$, let $[n]=\frac{q^n-1}{q-1}$. Then $[n+1]$ is the Serre polynomial of ${\mathbb P}^n$, as is clear from the presentation for its Chow ring. Getzler and Pandharipande prove that the Serre polynomial of the Grassmannian $G(k,n)$ of $k$-planes in ${\mathbb C}^n$ is the $q$-binomial coefficient \[\chews{n}{k}=\frac{[n]!}{[k]![n-k]!}\text{,}\] where $[n]!=[n][n-1]\cdots[2][1]$. We will prove this formula in the special case $k=2$. \begin{lem}\label{grasser} The Serre polynomial of $G(2,n)$ is $\chews{n}{2}$. \end{lem} \noindent {\bf Proof.} By projectivizing the ambient affine space, $G(2,n)\simeq{\mathbb G}(1,n-1)$, the Grassmannian of lines in ${\mathbb P}^{n-1}$. We will use the projective viewpoint in this proof.
The universal ${\mathbb P}^1$ bundle over ${\mathbb G}(1,n-1)$ is isomorphic to the flag variety ${\mathbb F}(0,1;n-1)$ of pairs $(p,\ell)$ of a point $p$ and a line $\ell$ in ${\mathbb P}^{n-1}$ with $p\in\ell$. On the other hand, there is a projection ${\mathbb F}(0,1;n-1)\rightarrow{\mathbb P}^{n-1}$ taking $(p,\ell)$ to $p$. Its fiber over a point $p$ is $\{\ell\, |\, p\in\ell\}$, which is isomorphic to ${\mathbb P}^{n-2}$. (To see this isomorphism, fix a hyperplane $H\subset{\mathbb P}^{n-1}$ not containing $p$ and map each line to its intersection with $H$.) It follows that $\operatorname{Serre}({\mathbb F}(0,1;n-1)) =[n][n-1]$. Since $\operatorname{Serre}({\mathbb F}(0,1;n-1))=\operatorname{Serre}({\mathbb G}(1,n-1))[2]$ also, we are able to conclude that $\operatorname{Serre}({\mathbb G}(1,n-1))=[n][n-1]/[2]$. $\Box$ Next, since $\operatorname{PGL}(2)$ is the complement of a quadric surface in ${\mathbb P}^3$, $\operatorname{Serre}(\operatorname{PGL}(2))=[4]-[2]^2=q^3-q$. In addition to the $\lambda$-operations, every $\lambda$-ring $R$ has {\em $\sigma$-operations} as well. These can be defined in terms of the $\lambda$-operations by $\sigma_k(x)=(-1)^k\lambda_k(-x)$. Routine checking shows that the $\sigma$-operations give $R$ the structure of a pre-$\lambda$-ring. Here we simply note the following formulas for the $\lambda$-ring ${\mathbb Z}[q]$. \[\sigma_k([n])=\chews{n+k-1}{k} \text{ and } \lambda_k([n])=q^{k \choose 2}\chews{n}{k}\text{.}\] Proofs of these formulas can be found in \cite[Section I.2]{Mac}. Next we explain why these formulas are relevant. Let $\epsilon$ be the sign representation of $S_n$. Note the identity $\epsilon^2=\textrm{\makebox[0.02in][l]{1}1}$. We will prove the following claim from \cite{GP}. \begin{lem} If $X$ is smooth and $S_2$ acts on $X^2$ by switching the factors, then \[\operatorname{Serre}_2(X^2)=\sigma_2(\operatorname{Serre}(X))\text{\em \textrm{\makebox[0.02in][l]{1}1}}+\lambda_2(\operatorname{Serre}(X))\epsilon\text{.}\] \end{lem} \noindent {\bf Proof.} Let $V$ be a vector space. Now $V\* V=S^2V\+{\Lambda}^2V$ as $S_2$-modules, with $S_2$ acting by switching the factors of $V\* V$, trivially on $S^2V$, and by sign on ${\Lambda}^2V$. If 0 is the zero representation, certainly $\lambda_i(0)=0$ for $i>0$. We use this and the properties of $\lambda$-rings in Definition \ref{lring} to obtain \begin{eqnarray*} 0 & = & \lambda_2([V]-[V]) \\ & = & \textrm{\makebox[0.02in][l]{1}1}\cdot\lambda_2(-[V])+[V]\cdot(-[V])+\lambda_2[V]\cdot\textrm{\makebox[0.02in][l]{1}1}\\ & = & \lambda_2(-[V])-[S^2V]-[{\Lambda}^2V]+\lambda_2[V] \end{eqnarray*} Since $\sigma_2[V]=\lambda_2(-[V])$, this implies $\sigma_2[V]=[S^2V]$. Since $X$ is smooth, $H^*(X^2)=H^*(X)\* H^*(X)$, with the action of $S_2$ switching the factors. Applying the above with $V=H^*(X)$ gives $[H^*(X^2)]=\sigma_2[H^*(X)]+\lambda_2[H^*(X)]$. Breaking this down by (cohomological) degree, we have $[H^i(X^2)]q^i=[\sigma_2[H^*(X)]]_iq^i+[\lambda_2[H^*(X)]]_iq^i$. We need to show that $[H^i(X^2)]q^i=[\sigma_2(\operatorname{Serre}(X))]_i\textrm{\makebox[0.02in][l]{1}1} +[\lambda_2(\operatorname{Serre}(X))]_i\epsilon$. We will show the equality of the first summands of each expression; showing equality of the terms involving $\lambda_2$ is easier. First, by induction the identity $\lambda_2(-m)=\lambda_2(m+1)$ holds. Second, note that any pre-$\lambda$-operation $\lambda_2$ acts on sums by $\lambda_2(\sum_i x_i)=\sum_i \lambda_2(x_i)+\sum_{i<j}x_ix_j$.
Third, note that vector spaces in the following computation live in the graded algebra $H^*(X)\* H^*(X)$, and we will apply the usual rules for grading in a tensor product. Finally, all of the representations below are trivial. We find \begin{eqnarray*} & & [\sigma_2[H^*(X)]]_iq^i \\ & = & \left[\sigma_2\left[\sum_j H^j(X)\right]\right]_iq^i\\ & = & \left[\sum_j[S^2H^j(X)]+\sum_{j<k}[H^j(X)\* H^k(X)]\right]_iq^i\\ & = & \begin{cases} \left([S^2H^{i/2}(X)]+\sum_{\stackrel{\scriptstyle j+k=i}{j<k}} [H^j(X)\* H^k(X)]\right)q^i & \text{if $i$ is even,}\\ \left(\sum_{\stackrel{\scriptstyle j+k=i}{j<k}}[H^j(X)\* H^k(X)]\right)q^i & \text{if $i$ is odd,} \end{cases}\\ & = & \begin{cases} \left(\lchoose{h^{i/2}(X)+1}{2}\textrm{\makebox[0.02in][l]{1}1} +\sum_{\stackrel{\scriptstyle j+k=i}{j<k}}h^j(X)h^k(X)\textrm{\makebox[0.02in][l]{1}1}\right)q^i & \text{if $i$ is even,}\\ \left(\sum_{\stackrel{\scriptstyle j+k=i}{j<k}}h^j(X)h^k(X)\textrm{\makebox[0.02in][l]{1}1}\right)q^i & \text{if $i$ is odd.} \end{cases} \end{eqnarray*} On the other hand, \begin{eqnarray*} & & [\sigma_2(\operatorname{Serre}(X))]_i\textrm{\makebox[0.02in][l]{1}1} \\ & = & [\lambda_2(-\sum h^j(X)q^j)]_i\textrm{\makebox[0.02in][l]{1}1} \\ & = & \left[\sum \lambda_2(-h^j(X))q^{2j}+\sum_{j<k}h^j(X)h^k(X)q^{j+k} \right]_i\textrm{\makebox[0.02in][l]{1}1} \\ & = & \begin{cases} \left(\lchoose{h^{i/2}(X)+1}{2}q^i +\sum_{\stackrel{\scriptstyle j+k=i}{j<k}}h^j(X)h^k(X)q^i\right)\textrm{\makebox[0.02in][l]{1}1} & \text{if $i$ is even,}\\ \left(\sum_{\stackrel{\scriptstyle j+k=i}{j<k}}h^j(X)h^k(X)q^i\right)\textrm{\makebox[0.02in][l]{1}1} & \text{if $i$ is odd.} \end{cases} \end{eqnarray*} \begin{flushright} $\Box$ \end{flushright} \noindent As a corollary, the ordinary Serre polynomial of $[X^2/S_2]$ is $\sigma_2(\operatorname{Serre}(X))$. A key fact used in our computations is the following. \begin{prop} \label{Serre00} If $d>0$, $\operatorname{Serre}({\mathcal{M}}_{0,0}({\mathbb P}^r,d))=q^{(d-1)(r+1)}\chews{r+1}{2}$. \end{prop} \noindent This follows from Pandharipande's proof in \cite{P4} that the Chow ring of the nonlinear Grassmannian $M_{{\mathbb P}^k}({\mathbb P}^r,d)$ is isomorphic to the Chow ring of the ordinary Grassmannian ${\mathbb G}(k,r)$. If $k=1$, the nonlinear Grassmannian is ${\mathcal{M}}_{0,0}({\mathbb P}^r,d)$. (The Serre polynomial grades by dimension rather than codimension. This is why the shifting factor $q^{(d-1)(r+1)}$ appears.) Notice that the proposition refers to the locus of stable maps with smooth domain curve, which is a proper (dense) subset of the compactified moduli space ${\overline{\mathcal{M}}}_{0,0}({\mathbb P}^r,d)$. Recall that ${\mathcal{M}}_{0,n}({\mathbb P}^r,0)\simeq M_{0,n}\times{\mathbb P}^r$, so that the Serre polynomials of these spaces are easy to compute. Finally, let $F(X,n)$ be the configuration space of $n$ distinct labeled points in a nonsingular variety $X$. Fulton and MacPherson show in \cite{FM} that \[\operatorname{Serre}(F(X,n))=\prod_{i=0}^{n-1}(\operatorname{Serre}(X)-i) \text{.}\] In order to compute the Serre polynomial of a moduli space of stable maps, we can stratify it according to the degeneration types of the maps and compute the Serre polynomial of each stratum separately. The degeneration types of maps in ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$ are in 1--1 correspondence with stable $(n,d)$-trees via taking the dual graph of a stable map. These concepts were defined by Behrend and Manin. 
A summary of their definitions follows; see \cite{BM} for a full development. \begin{Def} A {\em graph} $\tau$ is a quadruple $(V_{\tau},F_{\tau},j_{\tau}, \partial_{\tau})$, where $V_{\tau}$ and $F_{\tau}$ are sets, $j_{\tau}:F_{\tau}\rightarrow F_{\tau}$ is an involution, and $\partial_{\tau}:F_\tau\rightarrow V_\tau$. Elements of $V_{\tau}$ are the {\em vertices} of $\tau$, and elements of $F_{\tau}$ are its {\em flags}. \end{Def} Two additional sets associated to a graph $\tau$ are the set of {\em tails} $S_\tau=\{f\in F_\tau|j_\tau(f)=f\}$ and the set of {\em edges} $E_\tau=\{\{f_1,f_2\}\,|\,f_i\in F_\tau,\,j_\tau(f_1)=f_2,\,f_1\neq f_2\}$. Geometrically, a flag can be thought of as half of an edge. Every flag $f$ has an associated vertex $\partial_{\tau}(f)$ attached to one of its ends. The edges in the graph are given by gluing two flags $f_1$ and $f_2$ together at their open ends whenever $j_\tau(f_1)=f_2$. The fixed points of $j_\tau$ remain half-edges, or tails, with a vertex at only one end. This interpretation leads to the {\em geometric realization} $|\tau|$ of $\tau$, which is a topological graph. \begin{Def} A {\em tree} is a graph $\tau$ whose geometric realization is simply connected. Equivalently, $|\tau|$ is connected and $|E_\tau|=|V_\tau|-1$. \end{Def} \begin{Def} Let $n$ and $d$ be non-negative integers. An {\em $(n,d)$-tree} is a tree $\tau$ together with a bijection $\nu:S_\tau\rightarrow\und{n}$ and a map $d:V_\tau\rightarrow{\mathbb N}\cup\{0\}$ such that \[\sum_{v\in V_\tau}d(v)=d\text{.}\] We call $d(v)$ the {\em degree} of $v$. \end{Def} The {\em valence} of a vertex $v$ is $n(v)=|\{f\in F_\tau|\partial(f)=v\}|$. An $(n,d)$-tree $\tau$ is {\em stable} if whenever $v$ is a vertex of $\tau$ with $d(v)=0$, then the valence of $v$ is at least three. An automorphism of a stable $(n,d)$-tree is a graph automorphism that fixes the tails and preserves the degree labels. The {\em dual graph} of a stable map $(C,x_1,\ldots,x_n,f)$ has one vertex for each irreducible component of $C$. An edge connects two vertices whenever the corresponding components intersect. This includes the possibility of a loop edge, with both ends incident to the same vertex, if the corresponding component intersects itself. However, this cannot happen for stable maps from genus zero curves. The flags incident to a vertex are in 1--1 correspondence with the marked points lying on that component. Each vertex is labeled with the homology class of the pushforward of the fundamental class of the corresponding component. If $X={\mathbb P}^r$, this becomes the degree of a vertex, which is just the degree of the restriction of $f$ to the corresponding component. It follows that the dual graph of a genus zero, $n$-pointed stable map to ${\mathbb P}^r$ of degree $d$ is a stable $(n,d)$-tree. A family of stable maps is said to {\em degenerate} over a point if the curve in the fiber over that point has more nodes than the curve over the general fiber. Thus in the genus zero case, the curve of the degenerate fiber will have extra components. Two stable maps are said to have the same degeneration type if and only if their dual graphs are identical. We are now ready to compute the Poincar\'{e} polynomials of some moduli spaces of stable maps.
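Before proceeding, we record a quick sanity check of the corollary above on Serre polynomials of quotients $[X^2/S_2]$ in the simplest case. Take $X={\mathbb P}^1$, so that $\operatorname{Serre}(X)=[2]$. The quotient $({\mathbb P}^1)^2/S_2$ is the symmetric square $\operatorname{Sym}^2{\mathbb P}^1\simeq{\mathbb P}^2$, and indeed the formula for the $\sigma$-operations on ${\mathbb Z}[q]$ gives \[\sigma_2([2])=\chews{3}{2}=[3]=q^2+q+1=\operatorname{Serre}({\mathbb P}^2)\text{.}\]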
\begin{prop} The Poincar\'{e} polynomial of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$ is \[\operatorname{Serre}({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2))= \left(\sum_{i=0}^r q^i\right)\left(\sum_{i=0}^{r-1} q^i\right) \left(\sum_{i=0}^{r+2} q^i+2\sum_{i=1}^{r+1} q^i+2\sum_{i=2}^{r} q^i\right) \text{,}\] and the Euler characteristic of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$ is $r(r+1)(5r+3)$. \end{prop} \noindent {\bf Proof.} We begin by stratifying ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$ according to the degeneration type of the stable maps. Since the strata are locally closed, the compatibility of Serre polynomials with decomposition allows us to compute the Serre polynomial of each stratum separately and add up the results to obtain $\operatorname{Serre}({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2))$. Each stratum is isomorphic to a finite group quotient of a fiber product of moduli spaces ${\mathcal{M}_{0,n}(\mathbb{P}^{r},d)}$ via an inverse procedure to the gluing defined in Section \ref{sec:modintro}. Given a stable map $(C,x_1,x_2,f)$, consider the normalization of $C$. It consists of a disjoint union of smooth curves $C_i$ corresponding to the components of $C$, and there are maps $f_i$ from each curve to ${\mathbb P}^r$ naturally induced by $f$. Furthermore, auxiliary marked points are added to retain data about the node locations. The result is a collection of stable maps with smooth domain curves, one for each component. The evaluations of auxiliary marked points corresponding to the same node must agree. This gives rise to a fiber product of moduli spaces of stable maps from smooth domain curves, together with a morphism onto the stratum coming from the normalization map. There can be automorphisms of the stable maps in the stratum that are not accounted for by the fiber product. These occur when there is a collection of connected unions $U_i$ of irreducible components that all intersect a common component or point and satisfy the following conditions: \begin{enumerate} \item None of the $U_i$ contain any marked points. \item The restrictions of $f$ to $U_i$ and $U_j$ give isomorphic stable maps for any $i$ and $j$. \end{enumerate} \noindent These automorphisms correspond exactly to the automorphisms of the dual graph ${\Gamma}$ of the stratum. Proving that $\operatorname{Aut}({\Gamma})$ is the right group to quotient by appears quite complicated in general. We prove it for the strata of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$ by brute force. All the strata are listed below, and the assertion clearly holds in each case. So we can compute the Serre polynomials of the strata using Corollary \ref{serfiber} and Proposition \ref{Serre00}. When stratified according to the dual graphs of stable maps, ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$ has 9 types of strata.
The corresponding graphs are \vspace{0.5in} \begin{pspicture}(0,0)(3,2) \rput(0,1.5){\pscirclebox{1}} \pnode(0.5,1.5){a} \dotnode(1.5,1.5){c} \pnode(2.5,1.5){b} \ncline{a}{c} \ncline{c}{b} \uput{5pt}[u](1.5,1.5){2} \end{pspicture} \hspace{1in} \begin{pspicture}(0,0)(3,2.5) \rput(0,1.5){\pscirclebox{2}} \dotnode(0.5,1.5){a} \dotnode(1.5,1.5){b} \pnode(2.3,2.3){c} \pnode(2.3,0.7){d} \ncline{a}{b} \ncline{b}{c} \ncline{b}{d} \uput{5pt}[u](0.5,1.5){2} \uput{7pt}[ul](1.5,1.5){0} \end{pspicture} \hspace{1in} \begin{pspicture}(0,0)(3,2.5) \rput(0,1.5){\pscirclebox{3}} \pnode(0.5,0.7){c} \dotnode(1.5,1.5){a} \pnode(.5,2.3){d} \dotnode(2.5,1.5){b} \ncline{c}{a} \ncline{a}{d} \ncline{a}{b} \uput{7pt}[ur](1.5,1.5){1} \uput{5pt}[u](2.5,1.5){1} \end{pspicture} \vspace{0.5in} \begin{pspicture}(0,0)(4,1) \rput(0,1.5){\pscirclebox{4}} \pnode(0.5,1.5){c} \dotnode(1.5,1.5){a} \dotnode(2.5,1.5){b} \pnode(3.5,1.5){d} \ncline{c}{a} \ncline{a}{b} \ncline{b}{d} \uput{5pt}[u](1.5,1.5){1} \uput{5pt}[u](2.5,1.5){1} \end{pspicture} \hspace{.5in} \begin{pspicture}(0,0)(4,2.5) \rput(0,1.5){\pscirclebox{5}} \pnode(.7,.7){c} \dotnode(1.5,1.5){a} \pnode(.7,2.3){d} \dotnode(2.5,1.5){b} \dotnode(3.5,1.5){e} \ncline{c}{a} \ncline{a}{d} \ncline{a}{b} \ncline{b}{e} \uput{7pt}[ur](1.5,1.5){0} \uput{5pt}[u](2.5,1.5){1} \uput{5pt}[u](3.5,1.5){1} \end{pspicture} \hspace{.5in} \begin{pspicture}(0,0)(4,2) \rput(0,1.5){\pscirclebox{6}} \pnode(.5,1.5){c} \dotnode(1.5,1.5){a} \dotnode(2.5,1.5){b} \dotnode(3.5,1.5){d} \pnode(2.5,.5){e} \ncline{c}{a} \ncline{a}{b} \ncline{b}{d} \ncline{b}{e} \uput{5pt}[u](1.5,1.5){1} \uput{5pt}[u](2.5,1.5){0} \uput{5pt}[u](3.5,1.5){1} \end{pspicture} \vspace{.5in} \begin{pspicture}(0,0)(3,2) \rput(0,1.5){\pscirclebox{7}} \pnode(.7,.7){c} \dotnode(.5,1.5){a} \dotnode(1.5,1.5){b} \dotnode(2.5,1.5){e} \pnode(2.3,.7){d} \ncline{a}{b} \ncline{b}{e} \ncline{b}{c} \ncline{b}{d} \uput{5pt}[u](0.5,1.5){1} \uput{5pt}[u](1.5,1.5){0} \uput{5pt}[u](2.5,1.5){1} \end{pspicture} \hspace{1in} \begin{pspicture}(0,0)(3,3) \rput(0,1.5){\pscirclebox{8}} \pnode(.7,.7){c} \dotnode(.5,2.5){a} \dotnode(1.5,2.5){b} \dotnode(2.5,2.5){e} \dotnode(1.5,1.5){f} \pnode(2.3,.7){d} \ncline{a}{b} \ncline{b}{e} \ncline{b}{f} \ncline{f}{d} \ncline{f}{c} \uput{5pt}[u](0.5,2.5){1} \uput{5pt}[u](1.5,2.5){0} \uput{5pt}[u](2.5,2.5){1} \uput{7pt}[ur](1.5,1.5){0} \end{pspicture} \hspace{.75in} \begin{pspicture}(0,0)(4,2) \rput(0,1.5){\pscirclebox{9}} \pnode(1.5,.5){c} \dotnode(.5,1.5){a} \dotnode(1.5,1.5){b} \dotnode(2.5,1.5){e} \dotnode(3.5,1.5){f} \pnode(2.5,.5){d} \ncline{a}{b} \ncline{b}{e} \ncline{e}{f} \ncline{b}{c} \ncline{e}{d} \uput{5pt}[u](.5,1.5){1} \uput{5pt}[u](1.5,1.5){0} \uput{5pt}[u](2.5,1.5){0} \uput{5pt}[u](3.5,1.5){1} \end{pspicture} There are actually 10 strata, because there are two strata of type 6 depending on which marked point is identified with which tail. We use the same numbers to label the strata as those labeling the corresponding graphs above. Eight of the strata have no automorphisms, so we can directly compute ordinary Serre polynomials in these cases. The strata corresponding to Graphs 7 and 8 have automorphism group $S_2$. Calculating the $S_2$-equivariant Serre polynomials of these strata is necessary as an intermediate step. We now compute the Serre polynomials of the strata. Stratum 1 is ${\mathcal{M}}_{0,2}({\mathbb P}^r,2)$. It is an $F({\mathbb P}^1,2)$-bundle over ${\mathcal{M}}_{0,0}({\mathbb P}^r,2)$. 
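By the Fulton--MacPherson formula above, $\operatorname{Serre}(F({\mathbb P}^1,2))=\operatorname{Serre}({\mathbb P}^1)(\operatorname{Serre}({\mathbb P}^1)-1)=(q+1)q=q^2+q$, and Proposition \ref{Serre00} gives $\operatorname{Serre}({\mathcal{M}}_{0,0}({\mathbb P}^r,2))=q^{r+1}\chews{r+1}{2}$.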
Thus Stratum 1 has Serre polynomial \begin{eqnarray*} \operatorname{Serre}(F({\mathbb P}^1,2))\operatorname{Serre}({\mathcal{M}}_{0,0}({{\mathbb P}}^{r},2)) & = & (q^2+q)q^{r+1}\frac{[r+1][r]}{[2]}\\ & = & q^{r+2}[r+1][r] \text{.} \end{eqnarray*} Stratum 2 is isomorphic to the fiber product \[{\mathcal{M}}_{0,1}({{\mathbb P}}^{r},2)\times_{{{\mathbb P}}^{r}}{\mathcal{M}}_{0,3}({{\mathbb P}}^{r},0)\text{.}\] Now ${\mathcal{M}}_{0,3}({{\mathbb P}}^{r},0)\simeq{{\mathbb P}}^{r}$, so the Serre polynomial of this stratum is just \[\operatorname{Serre}({\mathcal{M}}_{0,1}({{\mathbb P}}^{r},2))=\operatorname{Serre}({\mathbb P}^1)\operatorname{Serre}({\mathcal{M}}_{0,0}({{\mathbb P}}^{r},2)) =q^{r+1}[r+1][r]\] since ${\mathcal{M}}_{0,1}({{\mathbb P}}^{r},2)$ is a ${\mathbb P}^1$-bundle over ${\mathcal{M}}_{0,0}({{\mathbb P}}^{r},2)$. Stratum 3 is isomorphic to the fiber product \[{\mathcal{M}}_{0,3}({\mathbb P}^r,1)\times_{{\mathbb P}^r}{\mathcal{M}}_{0,1}({\mathbb P}^r,1) \text{.}\] The $F({\mathbb P}^1,3)$-bundle ${\mathcal{M}}_{0,3}({\mathbb P}^r,1)$ over ${\mathcal{M}}_{0,0}({\mathbb P}^r,1)$ has Serre polynomial $(q^3-q)\chews{r+1}{2}$. Similarly, $\operatorname{Serre}({\mathcal{M}}_{0,1}({\mathbb P}^r,1))=(q+1)\chews{r+1}{2}=[r+1][r]$. Thus Stratum 3 has Serre polynomial \[\frac{(q^3-q)\chews{r+1}{2}[r+1][r]}{[r+1]} =(q^2-q)[r+1][r]^2\text{.}\] Stratum 4 is isomorphic to the fiber product \[{\mathcal{M}}_{0,2}({\mathbb P}^r,1)\times_{{\mathbb P}^r}{\mathcal{M}}_{0,2}({\mathbb P}^r,1) \text{.}\] The $F({\mathbb P}^1,2)$-bundle ${\mathcal{M}}_{0,2}({\mathbb P}^r,1)$ over ${\mathcal{M}}_{0,0}({\mathbb P}^r,1)$ has Serre polynomial $(q^2+q)\chews{r+1}{2}$. Thus Stratum 4 has Serre polynomial \[\frac{(q^2+q)^2[r+1]^2[r]^2}{[r+1][2]^2}=q^2[r+1][r]^2\text{.}\] Stratum 5 is isomorphic to the fiber product \[{\mathcal{M}}_{0,3}({\mathbb P}^r,0)\times_{{\mathbb P}^r}{\mathcal{M}}_{0,2}({\mathbb P}^r,1)\times_{{\mathbb P}^r}{\mathcal{M}}_{0,1}({\mathbb P}^r,1) \text{,}\] and this in turn is isomorphic to ${\mathcal{M}}_{0,2}({\mathbb P}^r,1)\times_{{\mathbb P}^r}{\mathcal{M}}_{0,1}({\mathbb P}^r,1)$. So Stratum 5 has Serre polynomial \[\frac{(q^2+q)\chews{r+1}{2}(q+1)\chews{r+1}{2}}{[r+1]} =q[r+1][r]^2\text{.}\] A stratum of type 6 is isomorphic to the fiber product \[{\mathcal{M}}_{0,2}({\mathbb P}^r,1)\times_{{\mathbb P}^r}{\mathcal{M}}_{0,3}({\mathbb P}^r,0)\times_{{\mathbb P}^r}{\mathcal{M}}_{0,1}({\mathbb P}^r,1) \text{.}\] This is isomorphic to ${\mathcal{M}}_{0,2}({\mathbb P}^r,1)\times_{{\mathbb P}^r}{\mathcal{M}}_{0,1}({\mathbb P}^r,1)$, so each stratum of type 6 has Serre polynomial \[\frac{(q^2+q)\chews{r+1}{2}(q+1)\chews{r+1}{2}}{[r+1]} =q[r+1][r]^2\text{.}\] Thus the total contribution from strata of type 6 is \[2q[r+1][r]^2\text{.}\] Stratum 9 is isomorphic to the fiber product \[{\mathcal{M}}_{0,1}({\mathbb P}^r,1)\times_{{\mathbb P}^r}{\mathcal{M}}_{0,3}({\mathbb P}^r,0)\times_{{\mathbb P}^r} {\mathcal{M}}_{0,3}({\mathbb P}^r,0)\times_{{\mathbb P}^r}{\mathcal{M}}_{0,1}({\mathbb P}^r,1) \text{.}\] It has Serre polynomial \[\frac{(q+1)^2\chews{r+1}{2}^2}{[r+1]} =[r+1][r]^2\text{.}\] We now turn our attention to the two strata with automorphisms. Stratum 8 is isomorphic to the quotient of \[X={\mathcal{M}}_{0,3}({\mathbb P}^r,0)\times_{{\mathbb P}^r}{\mathcal{M}}_{0,3}({\mathbb P}^r,0)\times_{({\mathbb P}^r)^2} {\mathcal{M}}_{0,1}({\mathbb P}^r,1)^2\] by the action of $S_2$. The first copy of ${\mathcal{M}}_{0,3}({\mathbb P}^r,0)$ is superfluous. 
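Indeed, ${\mathcal{M}}_{0,3}({\mathbb P}^r,0)\simeq{\mathbb P}^r$, and a fiber product with ${\mathbb P}^r$ over ${\mathbb P}^r$ changes nothing; in terms of Serre polynomials, its factor of $[r+1]$ in the numerator cancels the factor of $[r+1]$ that the corresponding gluing contributes to the denominator.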
The action of $S_2$ on the cohomology of the second copy of ${\mathcal{M}}_{0,3}({\mathbb P}^r,0)$ is trivial. The action switches the two factors of ${\mathcal{M}}_{0,1}({\mathbb P}^r,1)$ as well as the two factors in ${\mathbb P}^r\times{\mathbb P}^r$. Since ${\mathcal{M}}_{0,1}({{\mathbb P}}^{r},1)$ is a fiber space over ${\mathbb P}^r$, we can use Corollary \ref{serfiber} in computing the equivariant Serre polynomial of $X$ to be \begin{eqnarray*} & & \operatorname{Serre}_2({\mathcal{M}}_{0,3}({{\mathbb P}}^{r},0))\operatorname{Serre}_2(({\mathcal{M}}_{0,1}({{\mathbb P}}^{r},1)/{{\mathbb P}}^{r})^2)\\ & = & [r+1]\left(\sigma_2\left(\frac{\operatorname{Serre}({\mathcal{M}}_{0,1}({{\mathbb P}}^{r},1))}{\operatorname{Serre}({{\mathbb P}}^{r})}\right)\textrm{\makebox[0.02in][l]{1}1} +\lambda_2\left(\frac{\operatorname{Serre}({\mathcal{M}}_{0,1}({{\mathbb P}}^{r},1))}{\operatorname{Serre}({{\mathbb P}}^{r})}\right)\epsilon\right) \\ & = & [r+1](\sigma_2([r])\textrm{\makebox[0.02in][l]{1}1}+\lambda_2([r])\epsilon) \\ & = & [r+1](\chews{r+1}{2}\textrm{\makebox[0.02in][l]{1}1}+q\chews{r}{2}\epsilon) \end{eqnarray*} As in the proof of Lemma \ref{grasser}, the fiber ${\mathcal{M}}_{0,1}({{\mathbb P}}^{r},1)/{{\mathbb P}}^{r}$ is isomorphic to ${\mathbb P}^{r-1}$. Now augmentation gives \[\frac{[r+1]^2[r]}{[2]}\] as the Serre polynomial of Stratum 8. Stratum 7 is isomorphic to the quotient of \[Y={\mathcal{M}}_{0,1}({\mathbb P}^r,1)^2\times_{({\mathbb P}^r)^2}{\mathcal{M}}_{0,4}({\mathbb P}^r,0)\] by the action of $S_2$, which again switches the squared factors. In addition, it switches two of the four marked points in ${\mathcal{M}}_{0,4}({\mathbb P}^r,0)$. Now ${\mathcal{M}}_{0,4}({\mathbb P}^r,0)\simeq M_{0,4}\times{\mathbb P}^r$, and $S_2$ acts trivially on the ${\mathbb P}^r$ factor. Furthermore, $M_{0,4}\simeq{\mathbb P}^1\setminus\{0,1,\infty\}$ has Serre polynomial $q-2$. But we need to know $\operatorname{Serre}_2(M_{0,4})$ under an $S_2$-action switching two of the deleted points. It is not hard to imagine that $\operatorname{Serre}_2(M_{0,4})=(q-1)\textrm{\makebox[0.02in][l]{1}1}-\epsilon$, but this takes some work to prove. Considering $M_{0,4}$ as the parameter space of four distinct points in ${\mathbb P}^1$ modulo automorphisms of ${\mathbb P}^1$, we obtain $M_{0,4}\simeq F({\mathbb P}^1,4)/\operatorname{PGL}(2)$. Now $\operatorname{PGL}(2)$ acts freely on $F({\mathbb P}^1,4)$. As a result, \begin{equation}\label{mm04} \operatorname{Serre}_2(M_{0,4})=\frac{\operatorname{Serre}_2(F({\mathbb P}^1,4))}{\operatorname{Serre}_2(\operatorname{PGL}(2))}\text{.} \end{equation} Since the cohomology of $\operatorname{PGL}(2)$ is not affected by the action, \begin{equation}\label{pgl2} \operatorname{Serre}_2(\operatorname{PGL}(2))=\operatorname{Serre}(\operatorname{PGL}(2))=q^3-q\text{.} \end{equation} We can stratify $({\mathbb P}^1)^4$ into fifteen cells whose closures are respectively $({\mathbb P}^1)^4$, the six large diagonals, the seven ``medium diagonals'' where two coordinate identifications are made, and the small diagonal, so that $F({\mathbb P}^1,4)$ is the complement of the union of all the cells corresponding to diagonals. We examine how the action affects cells of each type, subtracting the polynomials for cells that are removed. For concreteness, suppose the first two marked points are switched.
Any pairs of Chow ring generators of $({\mathbb P}^1)^4$ which differ exactly by a multiple of $H_2-H_1$ are switched, so it is not hard to get \[\operatorname{Serre}_2(({\mathbb P}^1)^4)=(q^4+3q^3+4q^2+3q+1)\textrm{\makebox[0.02in][l]{1}1}+(q^3+2q^2+q)\epsilon\text{.}\] How does the action affect the diagonals removed from $({\mathbb P}^1)^4$? Exactly two pairs, $({\Delta}_{13},{\Delta}_{23})$ and $({\Delta}_{14},{\Delta}_{24})$, of the six large diagonals are switched, so the corresponding cells contribute \[(-4q^3+4q)\textrm{\makebox[0.02in][l]{1}1}+(-2q^3+2q)\epsilon\] to the equivariant Serre polynomial, since these diagonals have been removed. Exactly two pairs, $({\Delta}_{134},{\Delta}_{234})$ and $({\Delta}_{(13)(24)},{\Delta}_{(14)(23)})$, among the seven diagonals with two identifications are switched as well. The corresponding cells contribute \[(-5q^2-5q)\textrm{\makebox[0.02in][l]{1}1}+(-2q^2-2q)\epsilon\] to the equivariant Serre polynomial. The small diagonal is not affected by the action, so it contributes \[(-q-1)\textrm{\makebox[0.02in][l]{1}1}\text{.}\] Putting these together gives \[\operatorname{Serre}_2(F({\mathbb P}^1,4))=(q^4-q^3-q^2+q)\textrm{\makebox[0.02in][l]{1}1}+(-q^3+q)\epsilon\text{.}\] Then by (\ref{mm04}) and (\ref{pgl2}), we have the desired result $\operatorname{Serre}_2(M_{0,4})=(q-1)\textrm{\makebox[0.02in][l]{1}1}-\epsilon$. Using Corollary \ref{serfiber} again, we thus calculate the equivariant Serre polynomial of $Y$ to be \begin{eqnarray*} & & \operatorname{Serre}_2({\mathcal{M}}_{0,4}({{\mathbb P}}^{r},0))\operatorname{Serre}_2(({\mathcal{M}}_{0,1}({{\mathbb P}}^{r},1)/{{\mathbb P}}^{r})^2) \\ & = & [r+1]((q-1)\textrm{\makebox[0.02in][l]{1}1}-\epsilon)\left(\chews{r+1}{2}\textrm{\makebox[0.02in][l]{1}1}+q\chews{r}{2}\epsilon\right) \\ & = & [r+1]\left(\left((q-1)\chews{r+1}{2}-q\chews{r}{2}\right)\textrm{\makebox[0.02in][l]{1}1} +\left((q^2-q)\chews{r}{2}-\chews{r+1}{2}\right)\epsilon\right)\text{.} \end{eqnarray*} Augmentation gives \begin{eqnarray*} [r+1]\left((q-1)\chews{r+1}{2}-q\chews{r}{2}\right) & = & \frac{[r+1][r]}{[2]}((q-1)[r+1]-q[r-1])\\ & = & \frac{[r+1][r]}{[2]}(q^{r+1}+q^r)-\frac{[r+1]^2[r]}{[2]}\\ & = & [r+1][r]q^r-\frac{[r+1]^2[r]}{[2]} \end{eqnarray*} as the Serre polynomial of Stratum 7. To get the Serre polynomial for the whole moduli space, we add together the contributions from all the strata. \begin{eqnarray} & & \operatorname{Serre}({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2))\nonumber \\ & = & q^{r+2}[r+1][r]+q^{r+1}[r+1][r]+(q^2-q)[r+1][r]^2 +q^2[r+1][r]^2+q[r+1][r]^2\nonumber \\ & & +2q[r+1][r]^2+[r+1][r]^2+\frac{[r+1]^2[r]}{[2]} +[r+1][r]q^r-\frac{[r+1]^2[r]}{[2]}\nonumber \\ & = & [r+1][r](q^{r+2}+q^{r+1}+(q^2-q)[r]+q^2[r]+3q[r]+[r]+q^r)\nonumber \\ & = & [r+1][r](q^{r+2}+q^{r+1}+q^r+[r](2q^2+2q+1))\nonumber \\ & = & [r+1][r](q^{r+2}+q^{r+1}+q^r+2\sum_{i=2}^{r+1}q^i+2\sum_{i=1}^{r}q^i +\sum_{i=0}^{r-1}q^i)\nonumber \\ & = & \left(\sum_{i=0}^r q^i\right)\left(\sum_{i=0}^{r-1} q^i\right) \left(\sum_{i=0}^{r+2} q^i+2\sum_{i=1}^{r+1} q^i+2\sum_{i=2}^{r} q^i\right) \text{.} \label{ser2r2} \end{eqnarray} \noindent Evaluating this sum at $q=1$ gives the Euler characteristic $(r+1)r(5r+3)$.$\Box$ \section{Betti numbers of flag varieties of pointed lines} Let $\alpha_i$ denote the $i$'th Betti number of the flag variety ${\mathbb F}(0,1;r)$ of point-line pairs in ${\mathbb P}^r$ such that the point lies on the line. Recall from the proof of Lemma \ref{grasser} that $\operatorname{Serre}({\mathbb F}(0,1;r))=[r+1][r]$.
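For instance, for $r=2$ we have \[\operatorname{Serre}({\mathbb F}(0,1;2))=[3][2]=(q^2+q+1)(q+1)=q^3+2q^2+2q+1\text{,}\] so the Betti numbers of ${\mathbb F}(0,1;2)$ are $(1,2,2,1)$.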
The product $[r+1][r]$ also appears as a factor in the Serre polynomial (\ref{ser2r2}) of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$, making its coefficients especially relevant to our computations. It is easy to see that the Betti numbers of ${\mathbb F}(0,1;r)$ initially follow the pattern \[(1, 2, 3, \ldots)\text{,}\] so that for the first half of the Betti numbers we have $\alpha_i=i+1$. Since $\operatorname{dim}{\mathbb F}(0,1;r)=2r-1$ is always odd, it always has an even number of Betti numbers. By Poincar\'{e} duality, it follows that the middle two Betti numbers are both $r$, and the Betti numbers then decrease back to 1. It can be checked that all the Betti numbers are given by the following formula. \[\alpha_i=r+\frac{1}{2}-\left|r-\frac{1}{2}-i\right|\] for $i\in\{0\}\cup\und{2r-1}$, and $\alpha_i=0$ otherwise. \section{Formulas for the Betti numbers of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$} Let $\beta_j$ be the $j$'th Betti number of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$. We can write $\beta_j=\beta_j^1+\beta_j^2+\beta_j^3$, where $\beta_j^i$ is the contribution from the $i$'th sum in the last factor of Equation (\ref{ser2r2}). We compute each contribution separately. We can factor $2q$ out of the second sum and $2q^2$ out of the third sum, and then temporarily ignore these factors. It will be easy to recover their effect later by shifting and doubling some contributions as described below. We have reduced our computation to finding coefficients of expressions of the form $[r+1][r][m]$, where $m\in\{r-1,r+1,r+3\}$. The degree $j$ term of this polynomial is given by \[\sum_{i=\max\{0,j-m+1\}}^{\min\{j,2r-1\}}(\alpha_iq^i)q^{j-i}\text{.}\] So each coefficient is the sum of some of the Betti numbers $\alpha_i$ of ${\mathbb F}(0,1;r)$. The sum can have at most $m$ nonzero terms, since this is the number of terms in $[m]$. One could follow the convention that all sums have $m$ terms, allowing some of the terms to be $\alpha_i$ that are zero for dimension reasons. However, we are restricting our expressions to include only nonzero $\alpha_i$, so that the indices in the sums will be in the range $\{0\}\cup\und{2r-1}$. We also adopt the usual convention that empty sums---those whose upper index is smaller than their lower index---are zero. Let $\gamma_i^j$ be the coefficients resulting from these computations. Reinserting the factors of $2q$ and $2q^2$ factored out of the second and third pieces, we find $\beta_j^1=\gamma_j^1$, $\beta_j^2=2\gamma_{j-1}^2$, and $\beta_j^3=2\gamma_{j-2}^3$.
Hence we have \renewcommand{\baselinestretch}{1} \small\normalsize \[ \beta_j^1= \begin{cases} \sum_{i=0}^{j}\alpha_i & \text{ if $j\leq r+2$} \\ \sum_{i=j-r-2}^{j}\alpha_i & \text{ if $r+2\leq j\leq 2r-1$} \\ \sum_{i=j-r-2}^{2r-1}\alpha_i & \text{ if $2r-1\leq j\leq 3r+1$} \end{cases}\text{,} \] \[ \beta_j^2= \begin{cases} 2\sum_{i=0}^{j-1}\alpha_i & \text{ if $j\leq r+1$} \\ 2\sum_{i=j-r-1}^{j-1}\alpha_i & \text{ if $r+1\leq j\leq 2r$} \\ 2\sum_{i=j-r-1}^{2r-1}\alpha_i & \text{ if $2r\leq j\leq 3r$} \end{cases}\text{,} \] \[ \beta_j^3= \begin{cases} 2\sum_{i=0}^{j-2}\alpha_i & \text{ if $j\leq r$} \\ 2\sum_{i=j-r}^{j-2}\alpha_i & \text{ if $r\leq j\leq 2r+1$} \\ 2\sum_{i=j-r}^{2r-1}\alpha_i & \text{ if $2r+1\leq j\leq 3r-1$.} \end{cases} \] \vspace{0.05in} Thus we get the following formulas for the Betti numbers of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$: \renewcommand{\baselinestretch}{1} \small\normalsize \[ \beta_{j}= \begin{cases} \sum_{i=0}^{j}\alpha_i+2\sum_{i=0}^{j-1}\alpha_i+2\sum_{i=0}^{j-2}\alpha_i &\text{ if } j\leq r \\ \\ \sum_{i=0}^{r+1}\alpha_i+2\sum_{i=0}^{r}\alpha_i+2\sum_{i=1}^{r-1}\alpha_i &\text{ if } j=r+1\\ \\ \sum_{i=j-r-2}^{j}\alpha_i+2\sum_{i=j-r-1}^{j-1}\alpha_i+2\sum_{i=j-r}^{j-2} \alpha_i &\text{ if } r+2\leq j\leq 2r-1 \\ \\ \sum_{i=r-2}^{2r-1}\alpha_i+2\sum_{i=r-1}^{2r-1}\alpha_i+2\sum_{i=r}^{2r-2} \alpha_i &\text{ if } j=2r\\ \\ \sum_{i=j-r-2}^{2r-1}\alpha_i+2\sum_{i=j-r-1}^{2r-1}\alpha_i +2\sum_{i=j-r}^{2r-1}\alpha_i &\text{ if } 2r+1\leq j\leq 3r+1\text{.} \end{cases} \] \vspace{0.05in} We can come up with an especially explicit description of $\beta_j$ for $j<r$ since we know $\alpha_i=i+1$ for $i<r$. Also, $\alpha_r=r$, which gives the second part below. \begin{cor} \begin{enumerate} \item For $j<r$, the $j$'th Betti number of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$ is \[\beta_j=\frac{5}{2}j^2+\frac{3}{2}j+1\text{.}\] \item Furthermore \[\beta_r=\frac{5}{2}r^2+\frac{3}{2}r\text{.}\] \end{enumerate} \end{cor} \noindent {\bf Proof.} Simplify \[\beta_j=\frac{(j+1)(j+2)}{2}+j(j+1)+j(j-1)\] and \[\beta_r=\frac{(r+1)(r+2)}{2}+r(r+1)+r(r-1)-1\text{.}\] \begin{flushright}$\Box$\end{flushright} \noindent As a consequence of this, a particular Betti number of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$ stabilizes as $r$ becomes large. \begin{cor} For all $r>j$, the $j$'th Betti number of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$ is $\beta_j=\frac{(j+1)(j+2)}{2}+j(j+1)+j(j-1)$. \end{cor} \noindent Let $\bar{\beta}_j$ be this limiting value. We have \[\bar{\beta}_0=1, \bar{\beta}_1=5, \bar{\beta}_2=14, \bar{\beta}_3=28, \bar{\beta}_4=47, \bar{\beta}_5=71,\ldots\] \renewcommand{\baselinestretch}{1} \section{Poincar\'{e} polynomials of ${\overline{\mathcal{M}}}_{0,1}({\mathbb P}^r,2)$ and ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$ for small $r$} Using the same procedure as above, one can easily compute the Poincar\'{e} polynomial of ${\overline{\mathcal{M}}}_{0,1}({\mathbb P}^r,2)$. 
\begin{prop} If $r$ is even, the Poincar\'{e} polynomial of ${\overline{\mathcal{M}}}_{0,1}({\mathbb P}^r,2)$ is \[\operatorname{Serre}({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^r,2))= \left(\sum_{i=0}^r q^i\right)\left(\sum_{i=0}^{(r-2)/2} q^{2i}\right) \left(\sum_{i=0}^{r+2} q^i+\sum_{i=1}^{r+1} q^i+\sum_{i=2}^{r} q^i\right) \text{,}\] and if $r$ is odd, the Poincar\'{e} polynomial of ${\overline{\mathcal{M}}}_{0,1}({\mathbb P}^r,2)$ is \[\operatorname{Serre}({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^r,2))= \left(\sum_{i=0}^{r-1} q^i\right)\left(\sum_{i=0}^{(r-1)/2} q^{2i}\right) \left(\sum_{i=0}^{r+2} q^i+\sum_{i=1}^{r+1} q^i+\sum_{i=2}^{r} q^i\right) \text{.}\] \end{prop} \noindent Thus, for small values of $r$, we get the explicit Poincar\'{e} polynomials listed in Table \ref{ep012r2}. \renewcommand{\baselinestretch}{1} \small\normalsize \begin{sidewaystable} \begin{tabular}{|rrl|} \hline $r$ & $\chi(X)$ & $\operatorname{Serre}(X)$ \\ 1 & 6 & $1+2q+2q^2+q^3$ \\ 2 & 27 & $1+3q+6q^2+7q^3+6q^4+3q^5+q^6$ \\ 3 & 72 & $1+3q+7q^2+11q^3+14q^4+14q^5+11q^6+7q^7+3q^8+q^9$ \\ 4 & 150 & $1+3q+7q^2+12q^3+18q^4+22q^5+24q^6+22q^7+18q^8+12q^9+7q^{10}+3q^{11}+q^{12}$ \\ 5 & 270 & $1+3q+7q^2+12q^3+19q^4+26q^5+32q^6+35q^7$ \\ & & \hspace{0.1in} $+35q^8+32q^9+26q^{10}+19q^{11}+12q^{12}+7q^{13}+3q^{14}+q^{15}$ \\ 6 & 441 & $1+3q+7q^2+12q^3+19q^4+27q^5+36q^6+43q^7+48q^8+49q^9$ \\ & & \hspace{0.1in} $+48q^{10}+43q^{11}+36q^{12}+27q^{13}+19q^{14}+12q^{15}+7q^{16}+3q^{17}+q^{18}$ \\ 7 & 672 & $1+3q+7q^2+12q^3+19q^4+27q^5+37q^6+47q^7+56q^8+62q^9+65q^{10}$ \\ & & \hspace{0.1in} $+65q^{11}+62q^{12}+56q^{13}+47q^{14}+37q^{15}+27q^{16}+19q^{17}+12q^{18}+7q^{19}+3q^{20}+q^{21}$ \\ 8 & 972 & $1+3q+7q^2+12q^3+19q^4+27q^5+37q^6+48q^7+60q^8+70q^9+78q^{10}+82q^{11}+84q^{12}$ \\ & & \hspace{0.1in} $+82q^{13}+78q^{14}+70q^{15}+60q^{16}+48q^{17}+37q^{18}+27q^{19}+19q^{20}+12q^{21}+7q^{22}+3q^{23}+q^{24}$ \\ \hline \end{tabular} \begin{tabular}{|rrl|} \hline $r$ & $\chi(Y)$ & $\operatorname{Serre}(Y)$ \\ 1 & 16 & $1+4q+6q^2+4q^3+q^4$ \\ 2 & 78 & $1+5q+13q^2+20q^3+20q^4+13q^5+5q^6+q^7$ \\ 3 & 216 & $1+5q+14q^2+27q^3+39q^4+44q^5+39q^6+27q^7+14q^8+5q^9+q^{10}$ \\ 4 & 460 & $1+5q+14q^2+28q^3+46q^4+63q^5+73q^6+73q^7+63q^8+46q^9+28q^{10}+14q^{11}+5q^{12}+q^{13}$ \\ 5 & 840 & $1+5q+14q^2+28q^3+47q^4+70q^5+92q^6+107q^7+112q^8$ \\ & & \hspace{0.1in} $+107q^9+92q^{10}+70q^{11}+47q^{12}+28q^{13}+14q^{14}+5q^{15}+q^{16}$ \\ 6 & 1386 & $1+5q+14q^2+28q^3+47q^4+71q^5+99q^6+126q^7+146q^8+156q^9$ \\ & & \hspace{0.1in} $+156q^{10}+146q^{11}+126q^{12}+99q^{13}+71q^{14}+47q^{15}+28q^{16}+14q^{17}+5q^{18}+q^{19}$ \\ 7 & 2128 & $1+5q+14q^2+28q^3+47q^4+71q^5+100q^6+133q^7+165q^8+190q^9+205q^{10}+210q^{11}$ \\ & & \hspace{0.1in} $+205q^{12}+190q^{13}+165q^{14}+133q^{15}+100q^{16}+71q^{17}+47q^{18}+28q^{19}+14q^{20}+5q^{21}+q^{22}$ \\ 8 & 3096 & $1+5q+14q^2+28q^3+47q^4+71q^5+100q^6+134q^7+172q^8+209q^9+239q^{10}+259q^{11}+269q^{12}$ \\ & & \hspace{0.1in} $+269q^{13}+259q^{14}+239q^{15}+209q^{16}+172q^{17}+134q^{18}+100q^{19}+71q^{20}$ \\ & & \hspace{0.1in} $+47q^{21}+28q^{22}+14q^{23}+5q^{24}+q^{25}$ \\ \hline \end{tabular} \caption{Euler characteristics and Poincar\'{e} polynomials for $X={\overline{\mathcal{M}}}_{0,1}({\mathbb P}^r,2)$ and $Y={\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$ \label{ep012r2}} \end{sidewaystable} \chapter{Generators} \label{sec:gen} This section describes three types of divisor classes found in all Chow rings \linebreak[4] $A^*({\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)})$. 
We will see in Section \ref{sec:comp} that divisors of these types generate the Chow ring of the moduli space ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$. All of these divisors have equivariant lifts, produced by taking the equivariant first Chern classes of their corresponding equivariant line bundles. We will use the same symbols for these equivariant versions; the meaning should be clear from the context. \section{Hyperplane pullbacks} Let $\operatorname{ev}_1,\ldots,\operatorname{ev}_n$ be the evaluation maps on ${\overline{\mathcal{M}}_{g,n}(X,\beta)}$ defined in Section \ref{sec:modintro}. Any cohomology class on $X$ pulls back under any evaluation map to a class on ${\overline{\mathcal{M}}_{g,n}(X,\beta)}$, which may be considered as an element in the Chow ring under the isomorphism (\ref{hi}). In the case $X={\mathbb P}^r$, we can pull the hyperplane class $H$ back under each evaluation, getting the $n$ divisor classes $H_i=\operatorname{ev}_i^*(H)$. \section{Boundary divisors} \label{sec:bound} The boundary of ${\overline{\mathcal{M}}}_{0,n}(X,\beta)$ by definition consists of the locus of stable maps with reducible domain curves. By Proposition \ref{nicemod}, it is a divisor with normal crossings. The irreducible components of the boundary of ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$ are in 1--1 correspondence with quadruples $(A,d_A,B,d_B)$, where $A,B\subset\und{n}$ partition $\und{n}$, $d_A+d_B=d$, and if $d_A=0$ (resp. $d_B=0$), then $A$ (resp. $B$) has at least two elements. Such a boundary divisor and its class in the Chow ring are both denoted $D_{A,d_A,B,d_B}$. Geometrically, the divisor $D_{A,d_A,B,d_B}$ corresponds to the closure of the locus of stable maps where the domain curve has two components, one having marked points labeled by $A$ and mapping to ${\mathbb P}^r$ with degree $d_A$, and the other having marked points labeled by $B$ and mapping to ${\mathbb P}^r$ with degree $d_B$. We represent this divisor by the following picture. \begin{center} \begin{pspicture}(0,0)(5,3.5) \pnode(0.5,.5){a} \dotnode(1,1){b} \dotnode(1.5,1.5){c} \dotnode(2,2){d} \pnode(3,3){e} \pnode(2,3){f} \dotnode(3,2){g} \dotnode(3.5,1.5){h} \dotnode(4,1){i} \pnode(4.5,0.5){j} \pnode(.6,1.2){k} \pnode(.6,1.4){l} \pnode(1.6,2.4){m} \pnode(1.8,2.4){n} \pnode(4.4,1.2){o} \pnode(4.4,1.4){p} \pnode(3.4,2.4){q} \pnode(3.2,2.4){r} \ncline{a}{e} \ncline{f}{j} \ncline{k}{l} \ncline{l}{m}\naput{$A$} \ncline{m}{n} \ncline{o}{p} \ncline{q}{p}\naput{$B$} \ncline{q}{r} \uput{5pt}[d](.5,.5){$d_A$} \uput{5pt}[d](4.5,.5){$d_B$} \end{pspicture} \end{center} Note that the domain curves of stable maps lying in a boundary divisor may have more than two components. In the limit, some combinations of marked points and the node may coincide, causing new components to sprout. The marked points involved in the ``collision'' will appear on this new component. Observe that, although marked points can thus migrate to newly added components in the limit, they cannot move onto components that already existed as long as the node separating the components is maintained. Additionally, the map itself may degenerate in such a way that the number of components of the domain curve increases. The diagram representation given above for divisors can easily be extended to describe the closures of other degeneration loci. This description is an alternative to the dual graphs used in Section \ref{sec:poincare}.
We will use dual graphs when referring to the degeneration loci themselves, and we will use this diagram representation when referring to their closures. Such a diagram therefore directly describes only a generic element of the locus it represents. For example, the diagram \begin{center} \begin{pspicture}(0,0)(5,2) \pnode(1,1){a} \dotnode(2,1){c} \dotnode(3,1){d} \pnode(4,1){b} \ncline{a}{b} \uput{5pt}[r](4,1){2} \end{pspicture} \end{center} represents (a generic element of) ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$ itself. In $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2))$, there are exactly three boundary divisors. We use the notation $D_0=D_{\underline{2},0,\emptyset,2}$, $D_1=D_{\underline{2},1,\emptyset,1}$, and $D_2=D_{\underline{1},1,\{2\},1}$ for these divisors. Thus the domain of a generic stable map in $D_0$ has one collapsed component containing both marked points and one component of degree two. Generic elements of $D_1$ are maps which have degree one on each of the two components of the domain curve, with both marked points lying on the same component. Finally, $D_2$ is the boundary divisor whose generic maps have degree one on each component with one marked point on each component. Note that $D_i$ corresponds to curves with $i$ degree one components containing marked points; this aids in remembering the notation. The diagrams for these three divisors follow. \begin{center} \begin{pspicture}(0,0)(4,3) \rput(2,0.5){$D_0$} \pnode(0.5,.5){a} \dotnode(1.5,1.5){c} \dotnode(1,1){d} \pnode(2.5,2.5){e} \pnode(1.5,2.5){f} \pnode(3.5,0.5){j} \ncline{a}{e} \ncline{f}{j} \uput{5pt}[d](.5,.5){0} \uput{5pt}[d](3.5,.5){2} \end{pspicture} \begin{pspicture}(0,0)(4,3) \rput(2,0.5){$D_1$} \pnode(0.5,.5){a} \dotnode(1.5,1.5){c} \dotnode(1,1){d} \pnode(2.5,2.5){e} \pnode(1.5,2.5){f} \pnode(3.5,0.5){j} \ncline{a}{e} \ncline{f}{j} \uput{5pt}[d](.5,.5){1} \uput{5pt}[d](3.5,.5){1} \end{pspicture} \begin{pspicture}(0,0)(4,3) \rput(2,0.5){$D_2$} \pnode(0.5,.5){a} \dotnode(1.25,1.25){c} \dotnode(2.75,1.25){d} \pnode(2.5,2.5){e} \pnode(1.5,2.5){f} \pnode(3.5,0.5){j} \ncline{a}{e} \ncline{f}{j} \uput{5pt}[d](.5,.5){1} \uput{5pt}[d](3.5,.5){1} \end{pspicture} \end{center} We use $D_1$ as an example to illustrate the further degeneration that can occur within a boundary divisor. Contained within $D_1$ are loci with the following diagrams. 
\begin{center} \begin{pspicture}(0,0)(4,3.5) \pnode(1,.5){a} \pnode(1,3){b} \pnode(.5,2.5){c} \pnode(3.5,2.5){d} \pnode(3,3){e} \pnode(3,.5){f} \dotnode(1,1.5){i} \dotnode(2,2.5){j} \ncline{a}{b} \ncline{c}{d} \ncline{e}{f} \uput{5pt}[d](1,.5){1} \uput{5pt}[l](.5,2.5){0} \uput{5pt}[d](3,.5){1} \uput{5pt}[l](1,1.5){2} \uput{5pt}[u](2,2.5){1} \end{pspicture} \begin{pspicture}(0,0)(5.5,2.5) \pnode(.5,2){a} \pnode(2.2,.3){b} \pnode(1.8,.3){c} \pnode(3.7,2.2){d} \pnode(3.3,2.2){e} \pnode(5,.5){f} \dotnode(1,1.5){i} \dotnode(1.5,1){j} \ncline{a}{b} \ncline{c}{d} \ncline{e}{f} \uput{5pt}[ul](.5,2){0} \uput{5pt}[dl](1.8,.3){1} \uput{5pt}[dr](5,.5){1} \end{pspicture} \begin{pspicture}(0,0)(4,3.5) \pnode(1,.5){a} \pnode(1,3){b} \pnode(.5,2.5){c} \pnode(3.5,2.5){d} \pnode(3,3){e} \pnode(3,.5){f} \dotnode(1.67,2.5){i} \dotnode(2.33,2.5){j} \ncline{a}{b} \ncline{c}{d} \ncline{e}{f} \uput{5pt}[d](1,.5){1} \uput{5pt}[l](.5,2.5){0} \uput{5pt}[d](3,.5){1} \end{pspicture} \begin{pspicture}(0,0)(4,3.5) \pnode(1,.5){a} \pnode(1,3){b} \pnode(.5,2.5){c} \pnode(3.5,2.5){d} \pnode(3,3){e} \pnode(3,.5){f} \dotnode(1,1.5){i} \dotnode(2,2.5){j} \ncline{a}{b} \ncline{c}{d} \ncline{e}{f} \uput{5pt}[d](1,.5){1} \uput{5pt}[l](.5,2.5){0} \uput{5pt}[d](3,.5){1} \uput{5pt}[l](1,1.5){1} \uput{5pt}[u](2,2.5){2} \end{pspicture} \hspace{.25in} \begin{pspicture}(0,0)(4,3.5) \pnode(1,.5){a} \pnode(1,3){b} \pnode(.5,2.5){c} \pnode(3.5,2.5){d} \pnode(3,3){e} \pnode(3,.5){f} \pnode(2,1){g} \pnode(2,3){h} \dotnode(2,1.5){i} \dotnode(2,2){j} \ncline{a}{b} \ncline{c}{d} \ncline{e}{f} \ncline{g}{h} \uput{5pt}[d](1,.5){1} \uput{5pt}[l](.5,2.5){0} \uput{5pt}[d](3,.5){1} \uput{5pt}[d](2,1){0} \end{pspicture} \hspace{.25in} \begin{pspicture}(0,0)(4,4.5) \pnode(1,.5){a} \pnode(1,3){b} \pnode(.6,2.3){c} \pnode(2.4,3.2){d} \pnode(1.6,3.2){e} \pnode(3.4,2.3){f} \pnode(3,.5){g} \pnode(3,3){h} \dotnode(1.5,2.75){i} \dotnode(2.5,2.75){j} \ncline{a}{b} \ncline{c}{d} \ncline{e}{f} \ncline{g}{h} \uput{5pt}[d](1,.5){1} \uput{5pt}[dl](.6,2.3){0} \uput{5pt}[dr](3.4,2.3){0} \uput{5pt}[d](3,.5){1} \end{pspicture} \end{center} \noindent The marked points are not labeled in diagrams where the distinction does not affect the boundary class, either because both marked points lie on the same component or because of symmetry. We will show in Section \ref{sec:lla} that the three boundary divisor classes together with the hyperplane pullbacks $H_1$ and $H_2$ generate the linear part of the ring $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$. Even better, we will see in Section \ref{sec:comp} that these classes generate this entire Chow ring. \section{The $\psi$-classes} \label{sec:psi} We use definitions and properties from \cite{HM}. Every flat family $\phi:\mathcal{C}\rightarrow B$ of nodal curves over a scheme has a relative dualizing sheaf $\omega_{\mathcal{C}/B}$, or $\omega_{\phi}$, defined to be the sheaf of rational relative differentials. This sheaf is invertible, so we may also consider it as a line bundle. If the total space is smooth, we can write \[\omega_\phi=K_\mathcal{C}\*\phi^*K_B^\vee\text{,}\] where $K_\mathcal{C}$ and $K_B$ are the canonical bundles. If the family is equipped with sections $s_1,...,s_n$, we may consider the bundles $L_i=s_i^*(\omega_{\phi})$, called the cotangent line bundles. At a point $b\in B$, the fiber of $L_i$ is the cotangent space to the curve $\mathcal{C}_b$ at the point $s_i(b)$. We also define the {\em $\psi$-class} $\psi_i$ to be the first Chern class $c_1(L_i)$ for each $i$. 
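For example (standard facts, recorded here only for orientation): every $\psi_i$ vanishes on ${\overline{M}}_{0,3}$, which is a single point, while on ${\overline{M}}_{0,4}\simeq{\mathbb P}^1$ each $\psi_i$ is the class of a point, so that $\int_{{\overline{M}}_{0,4}}\psi_i=1$.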
We extend this setup to moduli stacks of stable maps and the universal curves of their universal families. We saw in Section \ref{sec:mod} that the universal curve of ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$ is ${\overline{\mathcal{M}}}_{0,n+1}({\mathbb P}^r,d)$, which is a smooth stack. Everything said above carries over to this stack setting. The universal curve is equipped with $n$ universal sections $\sigma_1,\ldots,\sigma_n$, so we have $n$ naturally defined $\psi$-classes. Furthermore, it is straightforward to check that these $\psi$-classes are universal as well: given any morphism $g:S\rightarrow{\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$, the pullbacks $g^*(\psi_1),\ldots,g^*(\psi_n)$ are the $\psi$-classes on the induced family. Although the $\psi$-classes are not strictly necessary as generators for the rings $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$, we include them because their geometric nature makes some of the relations much easier to understand and state. Compare the presentations of Theorem \ref{thm:prez} and Proposition \ref{altprez} for immediate evidence of the value of including them. We will give here one example of the usefulness of the $\psi$-classes in describing geometric conditions. To make this example easier to state, we first introduce a slight modification of the concept of $\psi$-classes. Restricting to the closure of a particular degeneration locus, let $p$ be a node at the intersection of components $E_1$ and $E_2$, as in the diagrams of Section \ref{sec:bound}. Then we can define classes $\tilde{\psi}_{p,E_i}$ essentially just like we defined $\psi$-classes. The only difference here is that we additionally specify which branch to consider $p$ as lying on, so that the cotangent space is one-dimensional. In fact, this type of class is defined at the end of Section \ref{sec:eq}, where it is denoted $e_F$. If, reversing the gluing process described in Section \ref{sec:modintro}, we remove, say, $E_2$ and then the associated connected component of the curve, replacing the node with an auxiliary marked point $s_{\bullet}$, then $\tilde{\psi}_{p,E_1}$ becomes a legitimate $\psi$-class $\psi_{\bullet}$ on the resulting moduli space. An important fact which will be used many times throughout the dissertation is that a collapsed rational component with exactly three special points is a {\em rigid object}. In other words, the marked points and nodes on such a component cannot be moved around internally on the component. Among other things, this says that, once such a component appears in a degeneration, it will remain in any further degeneration. We have already used this fact implicitly, for example, in describing the possible further degenerations of $D_1$ in Section \ref{sec:bound}. This rigidity is intuitively clear from at least two different perspectives. First, there exists an automorphism of ${\mathbb P}^1$ taking any three distinct points to any other three distinct points. So, up to isomorphism, any two data consisting of three marked points in ${\mathbb P}^1$ are equivalent. We might as well always take the points to be 0, 1, and $\infty$ (or any three arbitrarily chosen points). Second, if such points and nodes were allowed to move on the domain of a stable map, they could come together and force the sprouting of new components. However, the result would no longer be a stable map; there aren't enough special points left for the collapsed components.
More rigorously, the rigidity of such a component is equivalent to the vanishing of the corresponding $\psi$-classes and $\tilde{\psi}$-classes on that component. For $\psi$-classes, this is because the cotangent line bundles are trivial if and only if the marked points are fixed. A similar statement holds for $\tilde{\psi}$-classes. We will prove in Section \ref{sec:geomrel} that $\psi_1$ and $\psi_2$ vanish on $D_0$. This is the most important special case of the rigidity just described. From the argument given there, it is easy to see that this characterization of the rigidity of a component holds in general. \renewcommand{\baselinestretch}{1} \chapter{Presentations for the Chow rings of some simpler spaces} \label{sec:simpler} \renewcommand{\baselinestretch}{1} \section{Presentations for $A^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,1))$, $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,1))$, and $A^*({\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,1))$} \label{sec:0n11} The study of degree one stable maps to ${\mathbb P}^1$ is relatively simple because every degree one morphism from ${\mathbb P}^1$ to ${\mathbb P}^1$ is an isomorphism. More generally, genus zero, degree one stable maps never have nontrivial automorphisms. Thus the corresponding moduli spaces may just as well be considered as fine moduli schemes, as no loss of information results. Let us record the standard fact that \begin{equation}\label{projprod} A^*(\prod_{i=1}^n{\mathbb P}^{r_i})= \frac{{\mathbb Q}[H_1,\ldots,H_n]}{\left(H_1^{r_1+1},\ldots,H_n^{r_n+1}\right)} \text{,} \end{equation} where $H_i$ is the pullback of the hyperplane class under the $i$'th projection. See \cite[Chapter 8]{F}. First, we mention that the moduli space ${\overline{\mathcal{M}}}_{0,0}({\mathbb P}^1,1)$ is a point because the domain of such a stable map is always ${\mathbb P}^1$, and, thanks to the opening comment above, all degree one stable maps from ${\mathbb P}^1$ to itself are isomorphic to the identity. Second, this seems like a good place to note that ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,0)\simeq{\overline{M}}_{0,3}\times{\mathbb P}^1\simeq{\mathbb P}^1$ since ${\overline{M}}_{0,3}$ is also a point. Since the automorphism group of ${\mathbb P}^1$ is three-dimensional, there is an automorphism mapping any three distinct points to any other three distinct points. Thus we can fix three points, say 0, 1, and $\infty$ of ${\mathbb P}^1$, and any data $({\mathbb P}^1,x_1,x_2,x_3)$ of three distinct points in ${\mathbb P}^1$ is isomorphic to our fixed data. In effect, the marked points of ${\overline{M}}_{0,3}$, and hence by pullback those of ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,0)$, are not allowed to vary. (See the discussion at the end of Section \ref{sec:psi}. Incidentally, ${\overline{M}}_{0,4}\simeq{\mathbb P}^1$ roughly because, after fixing the first three marked points, the fourth is allowed to vary over ${\mathbb P}^1$. See \cite{K} for more detail.) Thus \[A^*({\overline{\mathcal{M}}}_{0,0}({\mathbb P}^1,1))\simeq{\mathbb Q} \text{\hspace{.5in}and\hspace{.5in}} A^*({\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,0))\simeq{\mathbb Q}[H]/(H^2)\text{.}\] The classes in the former are just multiples of its fundamental class. In the latter, $H$ corresponds to the hyperplane pullback under any of the three evaluation morphisms, which all simply record the image of the trivial stable map. We now proceed to the three main subjects of this section. 
\begin{lem} ${\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,1)\simeq {\mathbb P}^1$. \end{lem} \noindent {\bf Proof.} The family of one-pointed, degree one stable maps \vspace{0.3in} \begin{center} \psset{arrows=->} \begin{psmatrix} ${\mathbb P}^1\times\mathbb{P}^1$ & $\mathbb{P}^1$ \\ $\mathbb{P}^1$ \ncline[offset=-3pt]{1,1}{2,1}\naput{$\operatorname{pr}_1$} \ncarc[arcangle=45]{2,1}{1,1}\naput{${\Delta}$} \ncline{1,1}{1,2}\naput{$\operatorname{pr}_2$} \end{psmatrix} \end{center} \vspace{0.1in} \noindent gives a morphism $\alpha:{\mathbb P}^1\rightarrow{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,1)$. The evaluation morphism $\operatorname{ev}:{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,1)\rightarrow{\mathbb P}^1$ is easily checked to be inverse to $\alpha$, and hence is the desired isomorphism.$\Box$ \noindent Under this isomorphism, the hyperplane $H$ in ${\mathbb P}^1$ naturally corresponds to its pullback $H_1=\operatorname{ev}^*H$. Therefore \[A^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,1))\simeq\frac{{\mathbb Q}[H_1]}{(H_1^2)}\text{.}\] \begin{lem} \label{m0211} ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,1)\simeq {\mathbb P}^1\times{\mathbb P}^1$. \end{lem} \noindent {\bf Proof.} We start as before with a family \vspace{0.2in} \begin{center} \psset{arrows=->} \begin{psmatrix} $({\mathbb P}^1)^2\times\mathbb{P}^1$ & $\mathbb{P}^1$ \\ $(\mathbb{P}^1)^2$ \ncline[offset=-3pt]{1,1}{2,1}\naput{$\operatorname{pr}_1$} \ncarc[arcangle=45]{2,1}{1,1}\naput{$s_i=(\operatorname{id},\operatorname{pr}_i)$} \ncline{1,1}{1,2}\naput{$\operatorname{pr}_2$} \end{psmatrix}\text{.} \psset{arrows=-} \end{center} \vspace{0.2in} \noindent This is not a family of stable maps because the images of the $s_i$ coincide on the diagonal. To fix this, let $f:\operatorname{B\ell}_{{\Delta}}({\mathbb P}^1)^3\rightarrow({\mathbb P}^1)^3$ be the blowup of $({\mathbb P}^1)^3$ along its small diagonal. The family $((f_1,f_2):\operatorname{B\ell}_{{\Delta}}({\mathbb P}^1)^3\rightarrow({\mathbb P}^1)^2,\tilde{s}_1,\tilde{s}_2, f_3)$ induced by $f$ from the original family {\em is} a family of stable maps. (Here the sections $\tilde{s}_i$ are proper transforms.) To see this, fix $a\in{\mathbb P}^1$ and consider the hypersurface $Y=Z(z_3-a)$ in $({\mathbb P}^1)^3$. The blowup $f$ restricts to the blowup of $Y$ at $(a,a,a)$. The restrictions of the sections $s_i$ to $Y$ have distinct tangent directions at $(a,a,a)$, so their proper transforms will be disjoint on the exceptional ${\mathbb P}^1$ over $(a,a,a)$. Let $\alpha:({\mathbb P}^1)^2\rightarrow{\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,1)$ be the morphism induced by this family. We will check that $\operatorname{ev}=(\operatorname{ev}_1,\operatorname{ev}_2)$ is inverse to $\alpha$, and hence is the desired isomorphism. An alternate way to compactify ${\mathcal{M}}_{0,2}({\mathbb P}^1,1)$ is by allowing the marked points to coincide. The resulting moduli space is equivalent to ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,1)$. Once again, this is due to the rigidity of a collapsed component with three special points, as discussed in Section \ref{sec:psi}, since such a component results as a limit of stable maps when the marked points would otherwise coincide. Using this equivalent description of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,1)$, we may assume that the map associated to any stable map in ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,1)$ is the identity.
Thus $\alpha$ takes an ordered pair $(z_1,z_2)$ to the stable map $({\mathbb P}^1,z_1,z_2,\operatorname{id})$. Simple observation now confirms that the compositions $(\operatorname{ev}_1,\operatorname{ev}_2)\circ\alpha$ and $\alpha\circ(\operatorname{ev}_1,\operatorname{ev}_2)$ are both identities. $\Box$ \noindent Under this isomorphism, the hyperplanes $H_i$ in $({\mathbb P}^1)^2$ naturally correspond to the hyperplane pullbacks $H_i$ in ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,1)$. Hence \[A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,1))\simeq\frac{{\mathbb Q}[H_1,H_2]}{(H_1^2,H_2^2)}\text{.}\] \begin{prop} \label{m0311} We have an isomorphism of ${\mathbb Q}$-algebras \begin{eqnarray*} & & A^*({\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,1)) \\ & \simeq & \frac{{\mathbb Q}[H_1,H_2,H_3,D]}{(H_1^2,H_2^2,H_3^2, (H_1+H_2-D)(H_2+H_3-D), D(H_1-H_2), D(H_2-H_3))} \end{eqnarray*} \end{prop} \noindent {\bf Proof.} Since ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,1)$ is the universal curve over ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,1)$, the proof of Lemma \ref{m0211} shows that ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,1) \simeq\operatorname{B\ell}_{\Delta}{({\mathbb P}^1)^3}$. Now ${\Delta}\simeq{\mathbb P}^1$, and the restriction map corresponding to $i:{\Delta}\hookrightarrow({\mathbb P}^1)^3$ sends each $H_i$ to the hyperplane class in ${\Delta}$. So $i^*:A^* (({\mathbb P}^1)^3)\rightarrow A^*({\Delta})$ is surjective, and we can take $H_1-H_2$ and $H_2-H_3$ as generators for $\ker{i^*}$. The small diagonal is the complete intersection of any two of the large diagonals. So we may apply Keel's Lemma 1 from \cite{K}, which says that whenever $X$ is a complete intersection of two divisors $D_1$,$D_2$ in a scheme $Y$ and the restriction map $i^*:A^*(Y)\rightarrow A^*(X)$ is surjective, then \[A^*(\tilde{Y})=A^*(Y)[T]/((D_1-T)(D_2-T), \ker{i^*}\cdot T)\text{,}\] where $\tilde{Y}$ is the blowup of $Y$ along $X$. Here $T$ corresponds to the exceptional divisor. We know from \cite{F} that $A^*(({\mathbb P}^1)^3)={\mathbb Q}[H_1,H_2,H_3]/(H_1^2,H_2^2,H_3^2)$ and that we can express two of the large diagonal classes as $D_i=H_i+H_{i+1}$ for $i\in\{1,2\}$. The expression in the proposition results.$\Box$ \noindent As before, the $H_i$ in the presentation are naturally identified with the corresponding hyperplane pullbacks. Furthermore $D$ corresponds to the boundary divisor $D=D_{\und{3},0,\emptyset,1}$ whose generic stable map has all three marked points lying on the same collapsed component. \renewcommand{\baselinestretch}{1} \section{The Chow ring $A^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2))$ via equivariant cohomology} \label{sec:0112} The goal of this subsection is to give a presentation for $A^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2))$. Our main tool is the method of localization in equivariant cohomology, developed in Section \ref{sec:eq}. We will employ additional methods from \cite[Section 9.2]{CK} to aid in applying localization to this particular moduli space. From Table \ref{ep012r2}, \[\operatorname{Serre}({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2))=q^3+2q^2+2q+1\text{.}\] By Section \ref{sec:gen}, we have the divisor classes $H_1=\operatorname{ev}_1^*(H)$, the pullback of the hyperplane class, and $D=D_{\underline{1},1,\emptyset,1}$, the lone boundary divisor. We will show that these two classes generate the Chow ring, and we will also find two relations involving them. 
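In light of the isomorphism (\ref{hi}), the coefficients of this Serre polynomial are the ranks of the Chow groups: $A^0,\ldots,A^3$ of ${\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)$ have ranks $1,2,2,1$. In particular, $A^1$ has rank two, consistent with the claim that the two divisor classes $H_1$ and $D$ generate the ring.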
Let ${\overline{\mathcal{M}}}={\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)$. Let $T=({\mathbb C}^*)^2$. Consider the usual $T$-action on ${\overline{\mathcal{M}}}$. We want to compute the equivariant integrals of degree three monomials in the classes above. First, we have to find the equivariant Euler classes of the normal bundles of the fixed point components. We do this using Theorem \ref{norm}. There are six fixed components, all of them isolated points. We will label the graphs corresponding to the fixed components as follows. \[{\Gamma}_1=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{0}~[tnpos=a]{\{1\}}} { \TC*~{1}\taput{2} } }\] \[{\Gamma}_2=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{1}~[tnpos=a]{\{1\}}} { \TC*~{0}\taput{2} } }\] \[{\Gamma}_3=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{1}~[tnpos=a]{\{1\}}} { \pstree{\TC*~{0}\taput{1}} { \TC*~{1}\taput{1} } } }\] \[{\Gamma}_4=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{0}~[tnpos=a]{\{1\}}} { \pstree{\TC*~{1}\taput{1}} { \TC*~{0}\taput{1} } } }\] \[{\Gamma}_5=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{1}} { \pstree{\TC*~{0}~[tnpos=a]{\{1\}}\taput{1}} { \TC*~{1}\taput{1} } } }\] \[{\Gamma}_6=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{0}} { \pstree{\TC*~{1}~[tnpos=a]{\{1\}}\taput{1}} { \TC*~{0}\taput{1} } } }\] \vspace{0.1in} For all $i$, let $Z_i$ denote the fixed component corresponding to ${\Gamma}_i$. As we will see, all degree three classes restrict to zero on $Z_1$ and $Z_2$, so their equivariant Euler classes are not needed for our computations. Before beginning computations for the other components, we note the following fact about the term $e_F$ that appears in the formula for $e_{{\Gamma}}^{\text{F}}$ in Theorem \ref{norm}. Since it is a first Chern class with zero weight, we must have $e_F=0$ on any fixed component which is a point. We label the vertices and edges of the remaining graphs as follows. \begin{center} \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{A}} { \pstree{\TC*~{B}\taput{a}} { \TC*~{C}\taput{b} } } \end{center} Then for $Z_3$ we have \[e_{{\Gamma}_{3}}^{\text{F}}=\frac{1}{(\lambda_0-\lambda_1)^2(\lambda_1-\lambda_0)^2} \text{,}\] \[e_{{\Gamma}_{3}}^{\text{v}}=\frac{(\lambda_1-\lambda_0)^2(\lambda_0-\lambda_1)(\lambda_0-\lambda_1+\lambda_0-\lambda_1)} {\lambda_1-\lambda_0}=2(\lambda_1-\lambda_0)^3 \text{,}\] and \[e_{{\Gamma}_{3}}^{\text{e}}=(\lambda_0-\lambda_1)^4 \text{.}\] Thus \[\operatorname{Euler_T}(N_{{\Gamma}_{3}})=\frac{2(\lambda_1-\lambda_0)^3(\lambda_0-\lambda_1)^4} {(\lambda_1-\lambda_0)^2(\lambda_0-\lambda_1)^2}=2(\lambda_1-\lambda_0)^3 \text{.}\] A similar calculation shows that $\operatorname{Euler_T}(N_{{\Gamma}_{4}})=2(\lambda_0-\lambda_1)^3$. 
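(Indeed, ${\Gamma}_4$ is ${\Gamma}_3$ with the two fixed points interchanged, so its Euler class is obtained by interchanging $\lambda_0$ and $\lambda_1$, which reverses the sign of the odd power: $2(\lambda_1-\lambda_0)^3\mapsto 2(\lambda_0-\lambda_1)^3$.)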
Now for $Z_5$, \[e_{{\Gamma}_{5}}^{\text{F}}=\frac{\omega_{B,a}\omega_{B,b}}{(\lambda_0-\lambda_1)^2(\lambda_1-\lambda_0)^2} =\frac{(\lambda_0-\lambda_1)^2}{(\lambda_0-\lambda_1)^2(\lambda_1-\lambda_0)^2}=\frac{1}{(\lambda_1-\lambda_0)^2} \text{,}\] \[e_{{\Gamma}_{5}}^{\text{v}}=\frac{(\lambda_1-\lambda_0)^2(\lambda_0-\lambda_1)}{\omega_{A,a}\omega_{C,b}} =\frac{(\lambda_1-\lambda_0)^2(\lambda_0-\lambda_1)}{(\lambda_1-\lambda_0)^2}=\lambda_0-\lambda_1 \text{,}\] and \[e_{{\Gamma}_{5}}^{\text{e}}=(\lambda_0-\lambda_1)^4 \text{.}\] Thus \[\operatorname{Euler_T}(N_{{\Gamma}_{5}})=\frac{(\lambda_0-\lambda_1)(\lambda_0-\lambda_1)^4}{(\lambda_1-\lambda_0)^2} =(\lambda_0-\lambda_1)^3 \text{.}\] A similar calculation shows that $\operatorname{Euler_T}(N_{{\Gamma}_{6}})=(\lambda_1-\lambda_0)^3$. Next we need the restrictions of the divisor classes to the fixed components. ``Restriction'' applies in a loose sense here, since the weights given are actually those on the pullback to an atlas in cases where the fixed components have automorphisms. This is reconciled later by including the factors $a_j$ in the residue formula of Corollary \ref{stackloc}. We will focus our comments on the particular case of ${\overline{\mathcal{M}}}$ for simplicity, although the same ideas apply much more generally with slight modifications. Each $Z_i$ maps to a fixed point in ${\mathbb P}^1$ under evaluation. Thus the restriction of $H_1$ to $Z_i$ amounts to the pullback via the evaluation morphism of the first Chern class of the restriction of ${\mathcal O}(1)$ to that point in ${\mathbb P}_T^1$. The restriction of ${\mathcal O}(1)$ to a point is certainly a trivial bundle, but the Chern class is still non-zero since this bundle carries a $T$-action. In particular, restricting to the fixed point $q_j$ gives the weight $\lambda_j$ for the first Chern class. (See Section \ref{sec:eq} for notation.) Therefore, we need only look at the image of the marked point to determine which $\lambda_j$ to put in Table \ref{rest0112}. Since $Z_1$ and $Z_2$ correspond to stable maps with smooth domain curves, they do not lie on $D$. Thus restricting the class $D$ to them gives zero. The other four fixed points all lie on $D$. Restricting $D$ to one of these points is equivalent to taking the first Chern class of the restriction of ${\mathcal O}(D)$ to that point. In what follows, we will factor this restriction into two steps, first restricting to $D$ and then restricting to the fixed point. Let $(C,x_1,\ldots,x_n,f)\in{\overline{\mathcal{M}}}$ be a stable map. In case $D$ is smooth at $f$, ${\mathcal O}(D)$ restricts to the normal bundle $N_{D/{\overline{\mathcal{M}}}}$ on $D$. Restricting this further to $f$ gives a one-dimensional vector space which is a quotient of the tangent space $T_{{\overline{\mathcal{M}}},f}$ of ${\overline{\mathcal{M}}}$ at $f$. The tangent space $T_{{\overline{\mathcal{M}}},f}$ is the same as the space of first order deformations of the stable map $f$. Each such deformation is a combination of three basic types: deformations that deform the map while preserving the curve and its marked points, deformations that move the marked points and the nodes without smoothing them, and deformations that smooth the nodes. We are interested in isolating the deformations that smooth the nodes. This is because smoothing the node of a generic stable map in $D$ takes one outside the divisor $D$, and the space of deformations that do this corresponds to the normal space of $D$ at this point.
Toward this end, we mention that there is a surjection \[T_{{\overline{\mathcal{M}}},f}\longrightarrow H^0(C,\und{\operatorname{Ext}}^1(\Omega_C,{\mathcal O}_C))\text{,}\] where $H^0(C,\und{\operatorname{Ext}}^1(\Omega_C,{\mathcal O}_C))$ can be identified with the node-smoothing deformations. We can describe the deformations that smooth the nodes as follows. Let $p_1,\ldots,p_m$ be the nodes of $C$. For each $i$, let $C_1^i$ and $C_2^i$ be the components of $C$ intersecting at the node $p_i$. We make use of the following fact stated by Kontsevich in \cite{Ko}. (See \cite[\S 3B]{HM} for more detail.) \begin{lem} There is a natural isomorphism \begin{equation}\label{eq:sumtan} H^0(C,\und{\operatorname{Ext}}^1(\Omega_C,{\mathcal O}_C))\simeq\+_{i=1}^m(T_{p_i}C_1^i\*T_{p_i}C_2^i) \end{equation} respecting the natural $T$-actions. \end{lem} Now each node is mapped to one of the fixed points $q_0$, $q_1\in{\mathbb P}^1$. The weight of tangent space to a component on which $f$ has degree one at such a special point is inherited directly by pullback from the weight of the tangent space to the image of the node. We know from \cite[Chapter 9]{CK} that the equivariant Euler class of the tangent space to ${\mathbb P}^r$ at $q_i$ is \[\prod_{j\neq i} (\lambda_i-\lambda_j)\text{.}\] Thus in our case smoothing a node will contribute a weight (or sum of two weights) of the form $\lambda_0-\lambda_1$ or $\lambda_1-\lambda_0$. More generally, the node smoothing from any singleton fixed component will contribute weight (or sum of two weights) of the form $\omega_F$, in the notation of Section \ref{sec:eq}. Since $T$ acts equivariantly on the vector spaces involved, the weight of the quotient space $i_{Z_j}^*({\mathcal O}(D))$ will be one of the weights of the tangent space to $Z_j$. The products of these weights were computed above as the equivariant Euler classes. We simply pick the weight that corresponds to smoothing the node as described above, since this is the weight associated to the restriction of the normal bundle. For example, at $Z_3$ we obtain the weight $2(\lambda_0-\lambda_1)$ from smoothing the node. Now at $Z_5$ and $Z_6$, $D$ is not smooth. This is actually no problem at all. Each of these fixed components is an $S_2$-quotient of a point. Recall from Section \ref{sec:eq} that we simply carry out computations on this smooth variety upstairs, and then account for the group quotient when integrating by placing an extra factor of two in the denominator. Since we noted in Section \ref{sec:eq} that every $T$-fixed component in a moduli space ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$ is the quotient of a nonsingular variety by a finite group, we can always utilize Expression (\ref{eq:sumtan}) in computing restrictions of boundary classes to fixed components that do not intersect the corresponding boundary locus transversally. Let $j_{Z_5}:\operatorname{Spec} {\mathbb C}\rightarrow D$ be the composition of the natural atlas for $Z_5$ and the inclusion of $Z_5$ into $D$. It is a 2--1 morphism. It factors in a natural way through $\tilde{D}={\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,1)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,1)$ via a 2--1 morphism $j:\operatorname{Spec} {\mathbb C}\rightarrow\tilde{D}$. Let ${\mathcal N}_{D/{\overline{\mathcal{M}}}}$ be the normal sheaf of $D$ in ${\overline{\mathcal{M}}}$. 
The two sheaves ${\mathcal O}(D)|_D$ and ${\mathcal N}_{D/{\overline{\mathcal{M}}}}$ are not isomorphic on $D$, but their pullbacks to $\tilde{D}$ {\em are} isomorphic bundles since $\tilde{D}$ is smooth. In other words, it now becomes clear which node of $Z_5$ to smooth: the one which is glued under the gluing map $\tilde{D}\rightarrow D$. The collapsed component at this node contributes nothing to the smoothing weight because it is fixed by the $T$-action and thus has zero weight. Hence the weight of ${\mathcal O}(D)$ pulled back to $\tilde{D}$ is again $\lambda_0-\lambda_1$, the weight that corresponds to smoothing the node of $D$. Finally, pulling this back to $\operatorname{Spec} {\mathbb C}$ gives $2(\lambda_0-\lambda_1)$ since $j$ has degree two. The computation for $Z_6$ is similar. The results of all the restrictions are listed in Table \ref{rest0112}. \renewcommand{\baselinestretch}{1} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Fixed component & $Z_1$ & $Z_2$ & $Z_3$ & $Z_4$ & $Z_5$ & $Z_6$ \\ \hline $H_1$ & $\lambda_0$ & $\lambda_1$ & $\lambda_1$ & $\lambda_0$ & $\lambda_0$ & $\lambda_1$ \\ \hline $D$ & 0 & 0 & $2(\lambda_0-\lambda_1)$ & $2(\lambda_1-\lambda_0)$ & $2(\lambda_0-\lambda_1)$ & $2(\lambda_1-\lambda_0)$ \\ \hline \end{tabular} \caption{Restrictions of divisor classes in $A_T^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2))$ to fixed components \label{rest0112}} \end{center} \end{table} \renewcommand{\baselinestretch}{2} We already know the relation $H_1^2=0$ since $H^2=0$, so the divisor classes above give two degree three classes, $D^3$ and $D^2H_1$. The restrictions of these degree three classes to the fixed components are given in Table \ref{restdeg3}. These follow directly from the divisor restrictions in Table \ref{rest0112}. \renewcommand{\baselinestretch}{1} \begin{table} \begin{center} \noindent \begin{tabular}{|c|c|c|c|c|c|c|} \hline & $Z_1$ & $Z_2$ & $Z_3$ & $Z_4$ & $Z_5$ & $Z_6$ \\ \hline $D^3$ & 0 & 0 & $8(\lambda_0-\lambda_1)^3$ & $8(\lambda_1-\lambda_0)^3$ & $8(\lambda_0-\lambda_1)^3$ & $8(\lambda_1-\lambda_0)^3$ \\ \hline $D^2H_1$ & 0 & 0 & $4\lambda_1(\lambda_0-\lambda_1)^2$ & $4\lambda_0(\lambda_1-\lambda_0)^2$ & $4\lambda_0(\lambda_0-\lambda_1)^2$ & $4\lambda_1(\lambda_1-\lambda_0)^2$ \\ \hline \end{tabular} \caption{Restrictions of degree 3 classes in $A_T^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2))$ to fixed components \label{restdeg3}} \end{center} \end{table} \renewcommand{\baselinestretch}{2} At last we are ready to compute the integrals of the degree three monomials. Note that since components $Z_5$ and $Z_6$ have automorphism group $S_2$, we need to put an extra factor of 2 in the denominators of integrands over these components when we apply localization. 
We get \begin{eqnarray*} \int_{{\overline{\mathcal{M}}}_T}D^3 & = & \int_{(Z_{3})_T}\frac{8(\lambda_0-\lambda_1)^3}{2(\lambda_1-\lambda_0)^3}+ \int_{(Z_{4})_T}\frac{8(\lambda_1-\lambda_0)^3}{2(\lambda_0-\lambda_1)^3} \\ & & +\int_{(Z_{5})_T}\frac{8(\lambda_0-\lambda_1)^3}{2(\lambda_0-\lambda_1)^3} +\int_{(Z_{6})_T}\frac{8(\lambda_1-\lambda_0)^3}{2(\lambda_1-\lambda_0)^3} \\ & = & -4-4+4+4 = 0 \end{eqnarray*} and \begin{eqnarray*} \int_{{\overline{\mathcal{M}}}_T}D^2H_1 & = & \int_{(Z_{3})_T}\frac{4\lambda_1(\lambda_0-\lambda_1)^2}{2(\lambda_1-\lambda_0)^3}+ \int_{(Z_{4})_T}\frac{4\lambda_0(\lambda_1-\lambda_0)^2}{2(\lambda_0-\lambda_1)^3} \\ & & +\int_{(Z_{5})_T}\frac{4\lambda_0(\lambda_0-\lambda_1)^2}{2(\lambda_0-\lambda_1)^3} +\int_{(Z_{6})_T}\frac{4\lambda_1(\lambda_1-\lambda_0)^2}{2(\lambda_1-\lambda_0)^3} \\ & = & \frac{2\lambda_1}{(\lambda_1-\lambda_0)}+\frac{2\lambda_0}{(\lambda_0-\lambda_1)} +\frac{2\lambda_0}{(\lambda_0-\lambda_1)}+\frac{2\lambda_1}{(\lambda_1-\lambda_0)} = 2+2 = 4\text{.} \end{eqnarray*} Now we can construct a presentation for $A^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2))$. First we show that $D$ and $H_1$ are independent. Suppose we have a relation \[aD+bH_1=0\text{,}\] where $a,b\in{\mathbb Q}$. Multiplying both sides by $D^2$ gives $aD^3+bH_1D^2=0$. Then integrating gives $4b=0$. Similarly, multiplying by $H_1D$ and integrating the result gives $4a=0$. Clearly $a=b=0$, so $D$ and $H_1$ are indeed independent. Since the first Betti number of ${\overline{\mathcal{M}}}$ is two, it follows that $D$ and $H_1$ generate $A^1({\overline{\mathcal{M}}})$. Next we show that $DH_1$ and $D^2$ are independent. Suppose we have a relation \[aDH_1+bD^2=0\text{,}\] where $a,b\in{\mathbb Q}$. Multiplying both sides by $D$ and integrating gives $4a=0$. Multiplying both sides by $H_1$ and integrating gives $4b=0$. We see once again that $a=b=0$. Since the second Betti number is two, $DH_1$ and $D^2$ generate $A^2({\overline{\mathcal{M}}})$. Finally, there must be a relation in $A^3({\overline{\mathcal{M}}})$. In fact, since $\int D^3=0$ was computed above, we are already able to conclude that $D^3=0$ is the desired relation. Nevertheless, we will compute this by linear algebra to further illustrate the method. Suppose we have a relation \[aD^2H_1+bD^3=0\text{,}\] where $a,b\in{\mathbb Q}$. Integrating this gives $4a=0$, so that $a=0$. We can take $b=1$ to get the relation $D^3=0$. It is easy to see that $D^2H_1$ is not zero, so we have all generators and relations in degree three. Everything in higher degrees is zero, so we can give a complete presentation \[A^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2))=\frac{{\mathbb Q}[D,H_1]}{(H_1^2,D^3)}\text{.}\] \renewcommand{\baselinestretch}{1} \section{Expressions for $\psi$-classes in the above moduli spaces} \label{sec:exppsi} Below we express the $\psi$-classes in terms of the boundary and hyperplane divisors. This is possible since we have seen that these divisor classes generate the Chow rings in the cases under consideration. We will use the setup of Section \ref{sec:psi}. To avoid confusion, we will use the notation $\pi_n$ for universal projections to a space of stable maps (or more generally for contraction morphisms that forget a marked point) and $\rho_i:({\mathbb P}^1)^n\rightarrow{\mathbb P}^1$ for projections of $({\mathbb P}^1)^n$ to a factor. We will make frequent use of several identities expressing pullbacks of the standard divisor classes on ${\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)}$ under contraction morphisms in terms of the standard divisor classes on ${\overline{\mathcal{M}}}_{0,n+1}({\mathbb P}^r,d)$.
The formula governing pullback of $\psi$-classes is $\psi_i=\pi_{n+1}^*(\psi_i)+D_{i,n+1}$, which is a well-known extension of an identity in \cite{Wit}. On the left side, $\psi_i\in A^*({\overline{\mathcal{M}}}_{0,n+1}({\mathbb P}^r,d))$, while on the right side, $\psi_i\in A^*({\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)})$ and $D_{i,n+1}$ is the divisor \begin{center} \begin{pspicture}(0,0)(4,3) \pnode(0.5,.5){a} \dotnode(1.5,1.5){c} \dotnode(.75,.75){d} \pnode(2.5,2.5){e} \pnode(1.5,2.5){f} \pnode(3.5,0.5){j} \ncline{a}{e} \ncline{f}{j} \uput{5pt}[d](.5,.5){0} \uput{5pt}[d](3.5,.5){$d$} \uput{5pt}[ul](.75,.75){i} \uput{5pt}[ul](1.5,1.5){n+1} \end{pspicture} \end{center} with the remaining marked points on the degree $d$ component. For $i<j$, it is easy to see that $\operatorname{ev}_{i,n}\circ\pi_j=\operatorname{ev}_{i,n+1}$. Here the second subscript of $\operatorname{ev}_i$ indicates the number of marked points associated to its domain. It follows that $\pi_j^*(H_i)=H_i$ for $i<j$. A similar statement holds when $i>j$ (and even when $i=j$ if the marked points are monotonically relabeled as in Section \ref{sec:modintro}), but attention must be given to how the indexing changes in the wake of deleting one index. Finally, we recall the basic fact about pullbacks of boundary divisors: If $\pi$ is any contraction morphism, then \begin{equation} \label{pulldiv} \pi^*(D_{A,d_A,B,d_B})=\sum_{A\subset A^{\prime},B\subset B^{\prime}} D_{A^{\prime},d_A,B^{\prime},d_B} \text{.} \end{equation} Clearly the right-hand side is the support of the pullback. See \cite{FP}, for example, for relevant statements about why the pullback is multiplicity-free. \subsection{Expressions for $\psi$-classes in moduli spaces with $d=1$} \subsubsection{The case $d=1$, $n=1$} \label{sec:11} Recall that ${\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,1)\simeq{\mathbb P}^1$ and its universal curve is ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,1)\simeq{\mathbb P}^1\times{\mathbb P}^1$. Furthermore, the section is the diagonal map ${\Delta}$. The universal projection is the projection $\rho_1$. Then \begin{eqnarray*} \omega_{\rho_1} = K_{{\mathbb P}^1\times{\mathbb P}^1}\*\rho_1^*K_{{\mathbb P}^1}^\vee =\rho_1^*({\mathcal O}(-2))\*\rho_2^*({\mathcal O}(-2))\*\rho_1^*({\mathcal O}(-2))^\vee={\mathcal O}(0,-2) \text{.}\end{eqnarray*} Now $c_1({\mathcal O}(0,-2))=-2H_2$, and the pullback of each $H_i$ under ${\Delta}$ is $H_1$. So in this case $\psi=\psi_1={\Delta}^*(c_1({\mathcal O}(0,-2)))=-2H_1$. \subsubsection{The case $d=1$, $n=2$} Using the results stated above, in $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,1))$ we have \[\psi_1=\rho_1^*(-2H_1)+D_{1,2}=-2H_1+H_1+H_2=H_2-H_1\text{.}\] Here $D_{1,2}$ is the divisor corresponding to the locus where the marked points have the same image. This is the diagonal of ${\mathbb P}^1\times{\mathbb P}^1$, and we have used the fact that its class in $A^*({\mathbb P}^1\times{\mathbb P}^1)$ is $H_1+H_2$.
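As a quick sanity check on the diagonal class used in the last step (a standard computation, recorded here only for completeness): writing $[{\Delta}]=\alpha H_1+\beta H_2$ and using $H_1^2=H_2^2=0$ and $H_1H_2=[\mathrm{pt}]$ in $A^*({\mathbb P}^1\times{\mathbb P}^1)$, we have \[{\Delta}\cdot H_1=\beta\text{,}\qquad{\Delta}\cdot H_2=\alpha\text{,}\qquad{\Delta}^2=2\alpha\beta\text{.}\] Since the diagonal meets each ruling in one point and has self-intersection $\deg T_{{\mathbb P}^1}=2$, this forces $\alpha=\beta=1$, in agreement with $[{\Delta}]=H_1+H_2$.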
By symmetry, that is, by forgetting the first marked point via the projection $\rho_2$ instead of the second using $\rho_1$, we find \[\psi_2=H_1-H_2\text{.}\] \subsubsection{The case $d=1$, $n=3$} \label{sec:13} Pulling back from ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,1)$, we have \[\psi_1=\pi_3^*(\psi_1)+D_{1,3}=H_2-H_1+H_1+H_3-D=H_2+H_3-D \text{,}\] \[\psi_2=\pi_3^*(\psi_2)+D_{2,3}=H_1-H_2+H_2+H_3-D=H_1+H_3-D \text{,}\] and, by symmetry, \[\psi_3=H_1+H_2-D \text{.}\] Here $D_{i,j}$ is the divisor corresponding to the closure of the locus where the $i$'th and $j$'th marked points have the same image, but the image of the remaining marked point is different. This is the proper transform in $\operatorname{B\ell}_{\Delta}{({\mathbb P}^1)^3}$ of the large diagonal ${\Delta}_{ij}$ in $({\mathbb P}^1)^3$. Its class in $A^*(\operatorname{B\ell}_{\Delta}{({\mathbb P}^1)^3})$ is $H_i+H_j-D$. \subsection{Expression for the $\psi$-class in the case $d=2$, $n=1$} In this subsection let ${\overline{\mathcal{M}}}={\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)$. We know that $A^1({\overline{\mathcal{M}}})=\operatorname{Div}({\overline{\mathcal{M}}})$ is generated by the boundary divisor class $D$ and the hyperplane pullback class $H_1$. So $\psi=aD+bH_1$ for some $a,b\in{\mathbb Q}$. We use a method for finding relations developed by Pandharipande in \cite{P} to determine $a$ and $b$. Let $C$ be the curve in ${\overline{\mathcal{M}}}$ corresponding to the family \vspace{0.3in} \begin{center} \psset{arrows=->} \begin{psmatrix} ${\mathbb P}^1\times\mathbb{P}^1$ & $\mathbb{P}^1$ & $\mathbb{P}^1$ \\ $\mathbb{P}^1$ \ncline[offset=-3pt]{1,1}{2,1}\naput{$\rho_1$} \ncarc[arcangle=45]{2,1}{1,1}\naput{${\Delta}$} \ncline{1,1}{1,2}\naput{$\rho_2$} \ncline{1,2}{1,3}\naput{$f$} \end{psmatrix} \psset{arrows=-} \end{center} \vspace{0.1in} \noindent of stable maps, where $f$ is a fixed double cover of ${\mathbb P}^1$ by itself. Since $C$ is contained in the substack ${\mathcal{M}}$ of ${\overline{\mathcal{M}}}$ corresponding to stable maps with smooth domain curves, $D\cdot C=0$. Let $i_C:C\rightarrow{\overline{\mathcal{M}}}$ be the inclusion. Next, by definition the restriction $i_C^*\psi$ of $\psi$ to $C$ is the first Chern class of the pullback of the sheaf of relative differentials of the family under the section ${\Delta}$. (See Section \ref{sec:psi}.) But we computed exactly this in Section \ref{sec:11}, finding it to be $-2H_1$. Finally, the degree of $C\cdot H_1$ is two, because the map has degree two: two points lie above a general point in ${\mathbb P}^1$. So \[2b=\int C\cdot bH_1=\int C\cdot\psi=\int i_C^*\psi=-2\text{,}\] and we conclude that $b=-1$. Now we construct another curve $C^{\prime}$ which intersects $D$ but not $H_1$. (Here we consider $H_1$ as the pullback of a fixed hyperplane.) As above, we give a family of curves together with a map from the total space to ${\mathbb P}^1$. We can avoid $H_1$ by arranging for the marked point to have the same image under this map on each fiber. In order to cause $C^{\prime}$ to intersect $D$, we will initially specify a rational map with base points. Blowing up to eliminate the base points gives rise to some reducible fibers. Let $\pi=\rho_1:{\mathbb P}^1\times{\mathbb P}^1\rightarrow{\mathbb P}^1$ be our initial family of curves. Let $t$ be a coordinate on the base, considering this ${\mathbb P}^1$ as the one-point compactification of ${\mathbb C}$, and let $(x:y)$ be homogeneous coordinates on the other factor.
Associate to this family the constant section $s$ given by $s(t)=(t,(0:1))$. Define $f:{\mathbb P}^1\times{\mathbb P}^1\rightarrow{\mathbb P}^1$ by $f(t,(x:y))=((t+1)x^2+ty^2:tx^2+(t+1)xy)$. Base points will occur whenever there is a common factor in the two components. There are two ways this can happen, corresponding to the two factors $x$ and $tx+(t+1)y$ of the second component. In the fiber over $t=0$, the map is given by $(x^2:xy)$, which has a base point at $(0:1)$ since the first component also has a factor of $x$. This is the only base point of this type. The existence of three base points of the second type is more subtle. These arise when, for a certain $t$, $tx+(t+1)y$ divides $(t+1)x^2+ty^2$. For simplicity, we may assume $y=1$. A base point will occur where $tx+(t+1)$ and $(t+1)x^2+t$ simultaneously vanish. Substituting $x=-(t+1)/t$ into $(t+1)x^2+t$ and clearing denominators, we see that this happens when $(t+1)^3+t^3=2t^3+3t^2+3t+1=0$. Since $2t^3+3t^2+3t+1=(2t+1)(t^2+t+1)$, this has solutions $-\frac{1}{2}$, $e^{i2\pi/3}$, and $e^{-i2\pi/3}$. The corresponding base points are $(-\frac{1}{2},(1:1))$, $(e^{i2\pi/3},(e^{i2\pi/3}:1))$, and $(e^{-i2\pi/3},(e^{-i2\pi/3}:1))$, where in each case the second coordinate is $x=-(t+1)/t$. Notice that the section $s$ passes only through the base point $(0,(0:1))$ and not the other three. Also note that $f(t,(0:1))=(1:0)$ for all $t\neq 0$, so that the image of the marked point is $(1:0)$ on all fibers other than the fiber over $t=0$. To arrive at the family $C^{\prime}$ of stable maps, we blow up at the four base points. Let $S$ be the resulting surface and $\rho:S\rightarrow{\mathbb P}^1\times{\mathbb P}^1$ the blowup map. We take for the section $s:{\mathbb P}^1\rightarrow S$ the proper transform of the original section. By abuse of notation, keep the labels $s$, $\pi$, and $f$ for the section, the induced projection, and the map to the target as well. It is clear that $\rho$ is an isomorphism of curves on the fibers over points other than the four base points, and that the fibers over the special values of $t$ now each have two components. We will check that on each such special fiber, the restriction of $f$ has degree one on each component. Over $t=0$, the marked point is on the new component coming from the exceptional divisor since its original position was blown up. In the other three special fibers, the marked points avoided the blowup, and thus remain on the original component of those curves. We start with the fiber over $t=0$. By fixing $y=1$, we consider the affine piece of ${\mathbb P}^1\times{\mathbb P}^1$ with coordinates $t$ and $x$. The blowup introduces additional coordinates $u$ and $v$ subject to the blowup equation $tv=ux$. Looking locally on the blowup, we assume $v=1$ so that $t=ux$. The map $f$ becomes \[((ux+1)x^2+ux:ux^3+(ux+1)x)=((ux+1)x+u:ux^2+ux+1)\text{.}\] The exceptional divisor $E$ corresponds to $x=0$. Hence $f$ restricted to the exceptional divisor is $(u:1)$. Similarly, the other component in the fiber $t=0$ corresponds to $u=0$, and $f$ restricted to that component is given by $(x:1)$. Thus the map has degree 1 on each component of the fiber. Second, we consider the blowup at the point $(-\frac{1}{2},1)$. Here we use the coordinate system $((w,z),(u:v))$, where $w=t+\frac{1}{2}$, $z=x-1$, and $wv=uz$. Take $v=1$ to arrive in local coordinates, where now $w=uz$.
We find \begin{eqnarray*} & & ((t+1)x^2+t:tx^2+(t+1)x) \\ & = & ((uz+\frac{1}{2})(z+1)^2+uz-\frac{1}{2}: (uz-\frac{1}{2})(z+1)^2+(uz+\frac{1}{2})(z+1)) \\ & = & {\scriptstyle(uz^3+\frac{1}{2}z^2+2uz^2+uz+z+\frac{1}{2}+uz-\frac{1}{2}: uz^3+2uz^2+uz-\frac{1}{2}z^2-z-\frac{1}{2}+uz^2+\frac{1}{2}z+uz+\frac{1}{2})} \\ & = & (uz^2+\frac{1}{2}z+2uz+2u+1:uz^2+3uz+2u-\frac{1}{2}z-\frac{1}{2}) \text{,} \end{eqnarray*} where in the last step we cancelled the common factor of $z$ arising from the base point. The fiber over $t=-\frac{1}{2}$ has two components. On the component corresponding to the exceptional divisor, $z=0$, so $f$ restricts to $(2u+1:2u-\frac{1}{2})$. On the other component, $u=0$, so $f$ restricts to $(\frac{1}{2}z+1:-\frac{1}{2}z-\frac{1}{2})$. We see that the map has degree 1 on each component. The computations on the other two special fibers are similar. Let us return to considering the fiber over $t=0$. The value of $s$ at $t=0$ is determined by its values at nearby points in ${\mathbb P}^1\times{\mathbb P}^1$. In local coordinates $s(t)=(t,0)$. Switching to coordinates $((t,x),(u:v))$ on the blowup, we find \[\lim_{t\rightarrow 0}s(t)=\lim_{t\rightarrow 0}((t,0),(u:0)) =\lim_{t\rightarrow 0}((t,0),(1:0))=((0,0),(1:0))\text{,}\] where $v=0$ because of the blowup equation $tv=ux$. Thus $s(0)$ has value $(1:0)$ on the exceptional divisor $E$ above $t=0$. The restriction of $f$ to $E$ described in local coordinates above extends to $f((0,0),(1:0))=(1:0)$. This completes the verification that the marked point of every stable map in this family has image $(1:0)$. We have shown that $\int C^{\prime}\cdot D=4$ (one point for each of the four reducible fibers), and we have also constructed $C^{\prime}$ so that $\int C^{\prime}\cdot H_1=0$. Furthermore, by standard intersection theory on surfaces (see \cite[Ch.V,\S 3]{H}), $E\cdot s({\mathbb P}^1)=1$, and the image of the section does not intersect the other exceptional divisors. Finally, if $E_1$, $E_2$, and $E_3$ are the other exceptional divisors, then \[s^*(c_1(\omega_{\pi}))=s^*(-2H_2+E+E_1+E_2+E_3)=s^*E\] by standard results about dualizing sheaves of products and blowups and about intersection theory on blowups (\cite{H}), together with the fact that $s^*(H_2)=0$ because the section is constant in the second factor. Therefore \[4a=\int C^{\prime}\cdot aD=\int C^{\prime}\cdot\psi=\int i_{C^{\prime}}^*\psi= \int s^*(c_1(\omega_{\pi}))=\int s^*(E)=1\text{,}\] so that $a=\frac{1}{4}$. We can now conclude that \[\psi=\frac{1}{4}D-H_1\] in ${\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)$. \chapter{Relations} \label{sec:rel} Relations come from three different sources: pullbacks of relations in $A^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^r,2))$ via the contractions forgetting a marked point, relations given by the geometry of $\psi$-classes on boundary divisors, and one linear relation that so far has no {\em proven} geometric explanation. Instead, we prove this last relation using the method of localization and linear algebra, which will be explained in Section \ref{sec:lla}. \section{Relations from pullbacks} \label{sec:pb} In Section \ref{sec:0112}, we found the relations $H_1^2$ and $D^3$ in $A^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2))$. Additionally, Section \ref{sec:exppsi} gives the relation $\psi-\frac{1}{4}D+H_1$. There are two contraction morphisms $\pi_1$ and $\pi_2$ from ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2)$ to ${\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)$. Recall that $\pi_i$ forgets the $i$'th marked point. Formulas for the pullbacks of the standard divisor classes under these maps were given in Section \ref{sec:exppsi}.
First, pulling back the relation $H_1^2$ under $\pi_1$ and $\pi_2$ gives the relations $H_i^2$ for $i\in\und{2}$. (When considering the contraction $\pi_1$, either a monotonic relabeling of the marked points gives an index shift on pullback, or $H_1$ needs to be labeled as $H_2$.) We can also see these relations via pullback under the evaluation morphisms. Since $H^2=0$ in $A^*({\mathbb P}^1)$, we have $H_i^2=\operatorname{ev}_i^*(H^2)=0$. Second, applying Equation (\ref{pulldiv}) to the case at hand, we have $\pi_i^*(D)=D_1+D_2$ for $i\in\und{2}$. Pulling back the relation $D^3$ in $A^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2))$ by either one of these gives the cubic relation $(D_1+D_2)^3$ in $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$. Finally, linear relations expressing the $\psi$-classes in terms of the other divisor classes are obtained by pulling back the relation $\psi-\frac{1}{4}D+H_1$. We find \[\psi_i=\pi_j^*(\psi_i)+D_0=\pi_j^*(\frac{1}{4}D-H_1)+D_0 =\frac{1}{4}D_1+\frac{1}{4}D_2+D_0-H_i\text{,}\] where $i\neq j$. Thus we have relations $\psi_i-\frac{1}{4}D_1-\frac{1}{4}D_2-D_0+H_i$. \renewcommand{\baselinestretch}{1} \section{Relations from the geometry of $\psi$-classes on boundary divisors} \label{sec:geomrel} We can interpret a product with a factor of $D_i$ as a restriction to that divisor and, via the gluing morphisms described in Section \ref{sec:modintro}, ultimately as a class on a fiber product of simpler moduli spaces. This simplifies the computations of such products once we know the pullbacks of the remaining factors. We will use the gluing morphisms \[j_0:{\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,0)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2) \rightarrow{\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2)\] and \[j_1:{\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,1)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,1) \rightarrow{\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2)\text{,}\] whose images are $D_0$ and $D_1$ respectively. By convention, both $j_0$ and $j_1$ glue the third marked point of the first factor to the lone marked point of the second factor. It is not hard to see that $j_0$ is an isomorphism onto $D_0$, and we note further that ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,0)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)\simeq{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)$. Similarly, ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,1)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,1) \simeq{\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,1)$. However, $j_1$ is only an isomorphism away from the divisor $D\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,1)$, where $D$ is the boundary divisor of ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,1)$ as described in Section \ref{sec:0n11}. The image of this divisor is isomorphic to the global quotient stack \[[{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,1)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,4}({\mathbb P}^1,0)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,1) /S_2]\text{,}\] where the $S_2$-action switches the factors on the ends. Thus the restriction of $j_1$ to $D\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,1)$ has degree two. Note that this last fiber product is isomorphic to $[{\overline{\mathcal{M}}}_{0,4}({\mathbb P}^1,0)/S_2]$, where the $S_2$ action switches the third and fourth marked points.
For the $D_0$ case, the universal property of the moduli space shows that $\psi_1$ and $\psi_2$ pull back to what may be considered as the first and second $\psi$-classes on ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,0)$. A similar statement holds in the $D_1$ case, this time with the resulting $\psi$-classes on ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,1)$. As a first application of this technique, we will show that the $\psi$-classes vanish on $D_0$. This is because each $\psi$-class pulls back to zero under $j_0$ since the marked points lie on the rigid component corresponding to ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,0)$. More rigorously, $j_0$ induces a family $({\mathcal C},\tilde{\sigma}_1,\tilde{\sigma}_2,\operatorname{ev}_3\circ\tilde{\jmath}_0)$ of stable maps via the fiber diagram \vspace{0.3in} \psset{arrows=->} \begin{center} \begin{psmatrix} ${\mathcal C}$ & ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,2)$ & ${\mathbb P}^1$\\ ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,0)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)$ & ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2)\text{,}$ \ncline{1,1}{2,1}\naput{$\tilde{\pi}$} \ncline{1,1}{1,2}\naput{$\tilde{\jmath}_0$} \ncline{1,2}{2,2}\naput{$\pi$} \ncline{2,1}{2,2}\naput{$j_0$} \ncline{1,2}{1,3}\naput{$\operatorname{ev}_3$} \ncarc[arcangle=45]{2,1}{1,1}\naput{$\tilde{\sigma}_i$} \ncarc[arcangle=45]{2,2}{1,2}\naput{$\sigma_i$} \end{psmatrix} \end{center} \psset{arrows=-} \vspace{0.2in} \noindent It is easy to check that this family can be identified with a universal family of stable maps over $D_0$. We can obtain another family over ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,0)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)$ by fiber product: \vspace{0.3in} \psset{arrows=->} \begin{center} \begin{psmatrix} ${\overline{\mathcal{M}}}_{0,4}({\mathbb P}^1,0)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)$ & ${\overline{\mathcal{M}}}_{0,4}({\mathbb P}^1,0)$ & ${\mathbb P}^1$ \\ ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,0)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)$ & ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,0)\text{,}$ \ncline{1,1}{2,1}\naput{$\tilde{\pi}_4$} \ncline{1,1}{1,2}\naput{$\tilde{\operatorname{pr}}_1$} \ncline{1,2}{2,2}\naput{$\pi_4$} \ncline{2,1}{2,2}\naput{$\operatorname{pr}_1$} \ncline{1,2}{1,3}\naput{$\operatorname{ev}_4$} \ncarc[arcangle=45]{2,1}{1,1}\naput{$\tilde{s}_i$} \ncarc[arcangle=45]{2,2}{1,2}\naput{$s_i$} \end{psmatrix} \end{center} \psset{arrows=-} \vspace{0.2in} \noindent where the right-hand part is the universal stable map. Recall from Section \ref{sec:0n11} that the marked points in ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,0)$ cannot vary. It follows that $\psi_i=c_1(s_i^*(\omega_{\pi_4}))=0$ for $i\in\und{3}$. Note also that $\omega_{\pi_4}$ pulls back to the relative dualizing sheaf of the left-hand column. There is an inclusion \[k_0:{\overline{\mathcal{M}}}_{0,4}({\mathbb P}^1,0)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)\rightarrow {\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,2)\] very similar to the morphism $j_0$ described above.
This morphism is part of a 2-commutative diagram that guarantees the existence of a stack morphism \[\iota:{\overline{\mathcal{M}}}_{0,4}({\mathbb P}^1,0)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)\rightarrow{\mathcal C}\] over ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,0)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)$. Furthermore, $\iota$ is injective since $k_0$ is, and it is also compatible with the sections. The images of the sections $\tilde{\sigma}_i$ are contained in the image of $\iota$. Therefore we can compute the $\psi$-classes of ${\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,0)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)$ on this subfamily: \begin{eqnarray*} \psi_i & = & c_1(\tilde{\sigma}_i^*\omega_{\tilde{\pi}})\\ & = & c_1(\tilde{s}_i^*\omega_{\tilde{\pi}_4})\\ & = & \tilde{s}_i^*\tilde{\operatorname{pr}}_1^*(c_1(\omega_{\pi_4}))\\ & = & \operatorname{pr}_1^*s_i^*(c_1(\omega_{\pi_4}))\\ & = & 0\text{.} \end{eqnarray*} It follows that the pullbacks of $\psi_1$ and $\psi_2$ to $D_0$ vanish. This gives relations $D_0\psi_i$. Now we will show that the product $D_1\psi_1\psi_2$ vanishes by computing the pullback of $\psi_1\psi_2$ under $j_1$. Identifying $A^*({\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,1)\times_{{\mathbb P}^1}{\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,1))$ with $A^*({\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,1))$, the pullback is \[\psi_1\psi_2=(H_2+H_3-D)(H_1+H_3-D)=0\] according to the presentation given in Proposition \ref{m0311}. (Geometrically, the two factors are the classes of the proper transforms of the large diagonals ${\Delta}_{23}$ and ${\Delta}_{13}$, which are disjoint in the blowup.) This shows that $\psi_1\psi_2$ is in the kernel of $j_1^*$. It follows from \cite[Theorem 3.1]{Mu} that there is a group isomorphism $A^2(D_1)\rightarrow A^2({\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,1))^{S_2}$, where the target is the subgroup of $S_2$-invariants. This isomorphism is naturally identified with a monomorphism into $A^2({\overline{\mathcal{M}}}_{0,3}({\mathbb P}^1,1))$. Furthermore, Mumford's proof of this theorem shows that, up to sign, this monomorphism is the same as $\tilde{\jmath}_1^*$, where $\tilde{\jmath}_1$ is $j_1$ with the target changed to $D_1$ (and considered as a map of these Chow groups). This is enough to show that $\psi_1\psi_2$ restricts to zero on $D_1$. Thus $D_1\psi_1\psi_2$ is a relation in $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$. \section{A relation from localization and linear algebra} \label{sec:lla} Ideally, the proof that (\ref{geomprez}) is a presentation for $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$ should be uniformly geometric. Such a proof would be easy to understand and generalize to higher-dimensional targets ${\mathbb P}^r$ (and other moduli spaces). Unfortunately, a sort of ``brute force'' algebraic computation is so far still required for two aspects of the proof. The first is the derivation of the linear relation $D_2-\psi_1-\psi_2$. The second, closely related, is the verification that the set of generators and relations is complete in each degree. The relation $D_2-\psi_1-\psi_2$ should be explained geometrically by the existence of a section $s$ of the tensor product $L_1\* L_2$ of the cotangent line bundles whose zero stack is $D_2$. Then \[D_2=Z(s)=c_1(L_1\* L_2)=c_1(L_1)+c_1(L_2)=\psi_1+\psi_2\text{.}\] The completeness of the set of generators and relations could be demonstrated by giving an additive basis for $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$.
Such a basis should be attainable using the stratification of the moduli space of stable maps described in Section \ref{sec:ser}. We call the computational algebraic method developed in this section {\em localization and linear algebra}. In theory, it could be used to compute relations in any moduli space of stable maps to projective space. (In practice, the method quickly becomes too tedious as the parameters increase. See Section \ref{sec:direct} for an example.) Its first step consists of using localization (see Section \ref{sec:eq}) to find the integrals of all degree four monomials in the boundary divisors and hyperplane pullbacks in $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$. Then an arbitrary relation of given degree in these generating classes is considered (with variable coefficients). This relation is multiplied by various monomials of complementary dimension, and the result is integrated. Each integration produces a linear equation with rational coefficients in the unknown coefficients, and these equations restrict the possible relations. Ultimately, we can use these restrictions to describe precisely the form of relations that can occur in each degree in this Chow ring. We begin by computing the integrals of degree four monomials. Notice first that the relations $D_0\psi_i$ and $\psi_i-\frac{1}{4}D_1-\frac{1}{4}D_2-D_0+H_i$ imply \begin{equation}\label{d0hi} D_0H_1=D_0H_2\text{,} \end{equation} and hence $H_1H_2D_0=H_1^2D_0=0$. The integrals of the following types of monomials will be zero since the monomials themselves are zero: \begin{enumerate} \item Any monomial with a factor of $H_i^2$ for $i\in\underline{2}$. \item Any monomial with a factor of $H_1H_2D_0$. \end{enumerate} These together take care of $2\left(\left({5 \atop 1}\right)+ \left({5 \atop 2}\right)\right)-1+3=32$ of the $\left({8\atop 4}\right)=70$ degree 4 monomials. (There are $\left({5 \atop 1}\right)+\left({5 \atop 2}\right)=15$ degree two monomials in the five generating classes, hence 15 degree four monomials divisible by $H_1^2$ and 15 divisible by $H_2^2$; the monomial $H_1^2H_2^2$ is counted twice, and exactly three further monomials are divisible by $H_1H_2D_0$.) We will use localization to compute the remaining 38 integrals. In particular, we apply Corollary \ref{stackloc} to the smooth stack ${\overline{\mathcal{M}}}={\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2)$. Let $T=({\mathbb C}^*)^2$. Consider the usual $T$-action on ${\overline{\mathcal{M}}}$. First we have to find the $T$-equivariant Euler classes of the normal bundles of the fixed point components. We do this using Theorem \ref{norm}. We will label the graphs of the 14 fixed components as follows.
\[{\Gamma}_1=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{0}~[tnpos=a]{\{1\}}} { \TC*~{1}~[tnpos=a]{\{2\}}\taput{2} } }\] \[{\Gamma}_2=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{0}~[tnpos=a]{\{2\}}} { \TC*~{1}~[tnpos=a]{\{1\}}\taput{2} } }\] \[{\Gamma}_3=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{0}~[tnpos=a]{\underline{2}}} { \TC*~{1}\taput{2} } }\] \[{\Gamma}_4=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{0}} { \TC*~{1}~[tnpos=a]{\underline{2}}\taput{2} } }\] \[{\Gamma}_5=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{1}~[tnpos=a]{$\underline{2}$}} { \pstree{\TC*~{0}\taput{1}} { \TC*~{1}\taput{1} } } }\] \[{\Gamma}_6=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{0}~[tnpos=a]{$\underline{2}$}} { \pstree{\TC*~{1}\taput{1}} { \TC*~{0}\taput{1} } } }\] \[{\Gamma}_7=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{1}~[tnpos=a]{\{1\}}} { \pstree{\TC*~{0}~[tnpos=a]{\{2\}}\taput{1}} { \TC*~{1}\taput{1} } } }\] \[{\Gamma}_8=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{0}~[tnpos=a]{\{1\}}} { \pstree{\TC*~{1}~[tnpos=a]{\{2\}}\taput{1}} { \TC*~{0}\taput{1} } } }\] \[{\Gamma}_9=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{1}~[tnpos=a]{\{2\}}} { \pstree{\TC*~{0}~[tnpos=a]{\{1\}}\taput{1}} { \TC*~{1}\taput{1} } } }\] \[{\Gamma}_{10}=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{0}~[tnpos=a]{\{2\}}} { \pstree{\TC*~{1}~[tnpos=a]{\{1\}}\taput{1}} { \TC*~{0}\taput{1} } } }\] \[{\Gamma}_{11}=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{1}} { \pstree{\TC*~{0}~[tnpos=a]{$\underline{2}$}\taput{1}} { \TC*~{1}\taput{1} } } }\] \[{\Gamma}_{12}=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{0}} { \pstree{\TC*~{1}~[tnpos=a]{$\underline{2}$}\taput{1}} { \TC*~{0}\taput{1} } } }\] \[{\Gamma}_{13}=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{1}~[tnpos=a]{\{1\}}} { \pstree{\TC*~{0}\taput{1}} { \TC*~{1}~[tnpos=a]{\{2\}}\taput{1} } } }\] \[{\Gamma}_{14}=\text{ \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{0}~[tnpos=a]{\{1\}}} { \pstree{\TC*~{1}\taput{1}} { \TC*~{0}~[tnpos=a]{\{2\}}\taput{1} } } }\] \vspace{0.1in} \noindent Let $Z_i$ denote the fixed component corresponding to ${\Gamma}_i$ for all $i$. All of the fixed components are points except for $Z_{11}$ and $Z_{12}$, which are each isomorphic to the quotient $[{\mathbb P}^1/S_2]$. Fixed components $Z_1$, $Z_2$, $Z_3$, and $Z_4$ also have automorphism group $S_2$. All the degree 4 classes that are not automatically zero for the reasons above have factors $D_i$. Therefore they are supported on the boundary of ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2)$. Hence representatives of these classes will not intersect $Z_1$ or $Z_2$, which lie in the locus ${\mathcal{M}}_{0,2}({\mathbb P}^1,2)$ where the domain curves are smooth. Since restrictions of the relevant classes to $Z_1$ and $Z_2$ are thus zero, their equivariant Euler classes are not needed for our computation. Recall from Section \ref{sec:0112} that the term $e_F$ that appears in the formula for $e_{{\Gamma}}^{\text{F}}$ in Theorem \ref{norm} is zero on any fixed component which is a point. This was called a $\tilde{\psi}$-class in Section \ref{sec:psi}. 
From the discussion there, we see that, for the components that are isomorphic to ${\overline{M}}_{0,4}\simeq{\mathbb P}^1$, we can identify $e_F$ with a $\psi$-class on ${\overline{M}}_{0,4}$. We will argue that the $\psi_i$ are all equal to the class of a point for $i\in\underline{4}$. We denote this class by $\psi$, so that on these components $e_F=\psi$. Note that $\psi^2=0$. \begin{lem}\label{allpsiequal} Let $H$ be the class of a point in ${\overline{M}}_{0,4}\simeq{\mathbb P}^1$. Then $\psi_4=H$ in $A^*({\overline{M}}_{0,4})$. \end{lem} \noindent {\bf Proof.} We may fix the first three marked points to be 0, 1, and $\infty$. The universal curve for ${\overline{M}}_{0,4}$ is ${\overline{M}}_{0,5}$, which is isomorphic to the blowup $\operatorname{B\ell}_3({\mathbb P}^1\times{\mathbb P}^1)$ of ${\mathbb P}^1\times{\mathbb P}^1$ at the points 0, 1, and $\infty$ on its diagonal. (This is a simple case of the construction of Keel in \cite{K}, for example.) We abuse notation by identifying the forgetful morphism $\pi_5$ of the universal curve with the projection $\rho_2$. Similarly, the universal section $s$ corresponds to the diagonal morphism. Let $E_1$, $E_2$, and $E_3$ be the exceptional divisors. Then, using arguments like those at the end of Section \ref{sec:exppsi}, \begin{eqnarray*} \psi_4 & = & s^*c_1(\omega_{\pi_5})\\ & = & s^*c_1(\rho_1^*({\mathcal O}(-2))\*{\mathcal O}(E_1)\*{\mathcal O}(E_2)\*{\mathcal O}(E_3))\\ & = & s^*(-2H_1+E_1+E_2+E_3)\\ & = & -2H+H+H+H \\ & = & H\text{.} \end{eqnarray*} \begin{flushright} $\Box$\end{flushright} \noindent The same follows for the other $\psi_i$ by symmetry. For ${\Gamma}_3$ we label the edges and vertices as follows. \begin{center} \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{A}~[tnpos=a]{\underline{2}}} { \TC*~{B}\taput{a} } \end{center} We calculate \[e_{{\Gamma}_{3}}^{\text{F}}=\frac{\omega_{A,a}-e_{A,a}}{(\lambda_0-\lambda_1)(\lambda_1-\lambda_0)} =\frac{(\lambda_0-\lambda_1)/2}{(\lambda_0-\lambda_1)(\lambda_1-\lambda_0)}=\frac{1}{2(\lambda_1-\lambda_0)} \text{,}\] \[e_{{\Gamma}_{3}}^{\text{v}}=\frac{(\lambda_0-\lambda_1)(\lambda_1-\lambda_0)} {(\lambda_1-\lambda_0)/2}=2(\lambda_0-\lambda_1) \text{,}\] and \[e_{{\Gamma}_{3}}^{\text{e}}=\frac{2^2(\lambda_1-\lambda_0)^4}{2^4}=\frac{(\lambda_1-\lambda_0)^4}{4} \text{.}\] Thus \[\operatorname{Euler_T}(N_{{\Gamma}_{3}})=\frac{2(\lambda_0-\lambda_1)}{2(\lambda_1-\lambda_0)}\frac{(\lambda_1-\lambda_0)^4}{4} =\frac{-(\lambda_1-\lambda_0)^4}{4} \text{.}\] A similar calculation shows that $\operatorname{Euler_T}(N_{{\Gamma}_{4}})=\frac{-(\lambda_1-\lambda_0)^4}{4}$ as well. For the remaining graphs we label the vertices and edges as follows. 
\begin{center} \psset{labelsep=2pt, tnpos=b,radius=2pt} \pstree[treemode=R]{\TC*~{A}} { \pstree{\TC*~{B}\taput{a}} { \TC*~{C}\taput{b} } } \end{center} Then \[e_{{\Gamma}_{5}}^{\text{F}}=\frac{\omega_{A,a}}{(\lambda_0-\lambda_1)^2(\lambda_1-\lambda_0)^2} =\frac{\lambda_1-\lambda_0}{(\lambda_0-\lambda_1)^2(\lambda_1-\lambda_0)^2}=\frac{1}{(\lambda_1-\lambda_0)^3} \text{,}\] \[e_{{\Gamma}_{5}}^{\text{v}}=(\lambda_1-\lambda_0)^2(\lambda_0-\lambda_1)\frac{\omega_{B,a}+\omega_{B,b}} {\omega_{C,b}}=\frac{(\lambda_1-\lambda_0)^2(\lambda_0-\lambda_1)2(\lambda_0-\lambda_1)}{\lambda_1-\lambda_0} =2(\lambda_1-\lambda_0)^3 \text{,}\] and \[e_{{\Gamma}_{5}}^{\text{e}}=(\lambda_1-\lambda_0)^4 \text{.}\] Thus \[\operatorname{Euler_T}(N_{{\Gamma}_{5}})=\frac{2(\lambda_1-\lambda_0)^3(\lambda_1-\lambda_0)^4}{(\lambda_1-\lambda_0)^3} =2(\lambda_1-\lambda_0)^4 \text{.}\] A similar calculation shows that $\operatorname{Euler_T}(N_{{\Gamma}_{6}})=2(\lambda_1-\lambda_0)^4$. Next, for $Z_7$, \[e_{{\Gamma}_{7}}^{\text{F}}=\frac{\omega_{B,a}\omega_{B,b}}{(\lambda_1-\lambda_0)^2(\lambda_0-\lambda_1)^2} =\frac{1}{(\lambda_1-\lambda_0)^2} \text{,}\] \[e_{{\Gamma}_{7}}^{\text{v}}=\frac{(\lambda_1-\lambda_0)^2(\lambda_0-\lambda_1)}{\omega_{C,b}} =(\lambda_0-\lambda_1)(\lambda_1-\lambda_0) \text{,}\] and \[e_{{\Gamma}_{7}}^{\text{e}}=(\lambda_1-\lambda_0)^4 \text{.}\] Thus \[\operatorname{Euler_T}(N_{{\Gamma}_{7}})=\frac{(\lambda_0-\lambda_1)(\lambda_1-\lambda_0)(\lambda_1-\lambda_0)^4}{(\lambda_1-\lambda_0)^2} =-(\lambda_0-\lambda_1)^4 \text{.}\] Similar calculations show that $\operatorname{Euler_T}(N_{{\Gamma}_{8}})=\operatorname{Euler_T}(N_{{\Gamma}_{9}})= \operatorname{Euler_T}(N_{{\Gamma}_{10}})=-(\lambda_0-\lambda_1)^4$ as well. For $Z_{11}$ we get \[e_{{\Gamma}_{11}}^{\text{F}}=\frac{(\lambda_0-\lambda_1-\psi)^2}{(\lambda_0-\lambda_1)^4} \text{,}\] \[e_{{\Gamma}_{11}}^{\text{v}}=\frac{(\lambda_1-\lambda_0)^2(\lambda_0-\lambda_1)}{(\lambda_1-\lambda_0)^2} =\lambda_0-\lambda_1 \text{,}\] and \[e_{{\Gamma}_{11}}^{\text{e}}=(\lambda_1-\lambda_0)^4 \text{.}\] Thus \[\operatorname{Euler_T}(N_{{\Gamma}_{11}})=(\lambda_0-\lambda_1)(\lambda_0-\lambda_1-\psi)^2 =(\lambda_0-\lambda_1)^2(\lambda_0-\lambda_1-2\psi) \text{.}\] A completely symmetric calculation shows that \[\operatorname{Euler_T}(N_{{\Gamma}_{12}})=(\lambda_1-\lambda_0)^2(\lambda_1-\lambda_0-2\psi) \text{.}\] Finally, \[e_{{\Gamma}_{13}}^{\text{F}}=\frac{1}{(\lambda_0-\lambda_1)^2(\lambda_1-\lambda_0)^2} \text{,}\] \[e_{{\Gamma}_{13}}^{\text{v}}=(\lambda_0-\lambda_1)^2(\lambda_1-\lambda_0)(\omega_{B,a}+\omega_{B,b}) =2(\lambda_0-\lambda_1)^2(\lambda_1-\lambda_0)^2 \text{,}\] and \[e_{{\Gamma}_{13}}^{\text{e}}=(\lambda_1-\lambda_0)^4 \text{.}\] Thus \[\operatorname{Euler_T}(N_{{\Gamma}_{13}})=2(\lambda_0-\lambda_1)^4 \text{.}\] A similar calculation shows that $\operatorname{Euler_T}(N_{{\Gamma}_{14}})=2(\lambda_0-\lambda_1)^4$ as well. Next we need to know the restriction of each degree 4 monomial in the generating classes to each fixed component. These come immediately from the restrictions of the generating classes themselves to the fixed components, which we now compute. Results are given in Table \ref{rest0212}. The same computations were carried out for ${\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2)$ in Section \ref{sec:0112}, and much of the groundwork for the present case was laid there. However, since the current space is more complicated, we will need to extend those remarks.
For example, there are several boundary divisors to consider instead of just one. Perhaps most strikingly, two of the fixed components are one-dimensional rather than just points. We again turn to the methods of \cite[\S 9.2]{CK}. As mentioned, some of the fixed components are quotients of smooth varieties by finite groups. We must be careful to perform these calculations on the variety before quotienting; the integral formula of Corollary \ref{stackloc} takes this quotienting into account later. The restrictions of the $H_i$ are computed exactly as in Section \ref{sec:0112}, except now there are two evaluation morphisms. Just use $\operatorname{ev}_i$ for computing restrictions of $H_i$, so that the image of the $i$'th marked point determines the weight of the restriction. Many restrictions of boundary classes are zero simply because the corresponding boundary components are disjoint from the fixed components. We have $i_1^*(D_j)=i_2^*(D_j)=0$ for $j\in\{0,1,2\}$, because $Z_1$ and $Z_2$ have smooth domain curves and thus lie in the complement of the boundary. The domain of every map in $D_1$ has two degree one components, as can be observed from the degeneration diagrams in Section \ref{sec:bound}. The same holds for $D_2$. Thus $i_3^*(D_1)=i_4^*(D_1)=i_3^*(D_2)=i_4^*(D_2)=0$ since the maps of $Z_3$ and $Z_4$ do not have two degree one components. Recall that the boundary divisor $D_0$ is the closure of the locus consisting of stable maps whose domains have a collapsed component and a degree two component. By stability, such a stable map must have both marked points on the collapsed component. Since we have seen that a collapsed component with three special points is a rigid object, the property of possessing such a component is preserved under limits. Thus the domain curve of every stable map lying in $D_0$ must have both marked points on the same degree zero component. On the other hand, the maps of $Z_7$, $Z_8$, $Z_9$, and $Z_{10}$ have their marked points on different components. Thus $i_7^*(D_0)=i_8^*(D_0)=i_9^*(D_0)=i_{10}^*(D_0)=0$. The above argument also shows that the domain of every map in $D_0$ has a component without any marked points. The same holds for $D_1$. Thus $i_{13}^*(D_0)=i_{14}^*(D_0)=i_{13}^*(D_1)=i_{14}^*(D_1)=0$ since the maps of $Z_{13}$ and $Z_{14}$ each have a marked point on both components. Similarly, the domain curve of every stable map in $D_2$ must have the marked points on distinct components. Briefly, a generic stable map in $D_2$ has this property, and in the limit marked points may only move to newly sprouted components. (If both marked points collide with the node, {\em two} new components will result.) On the other hand, the maps of $Z_5$ and $Z_6$ have their marked points on the same component. Thus $i_5^*(D_2)=i_6^*(D_2)=0$. The simplest of the nonzero entries are the restrictions of $D_0$ and $D_2$ to $Z_{11}$ and $Z_{12}$. Each divisor intersects these fixed components transversely before quotienting. The intersections consist of one point in ${\mathbb P}^1$ for $D_0$ and two points for $D_2$. Recall from Lemma \ref{allpsiequal} that $\psi$ is the class of a point in ${\overline{M}}_{0,4}\simeq{\mathbb P}^1$. Thus we get $\psi$ for the restrictions of $D_0$ and $2\psi$ for the restrictions of $D_2$. The remaining entries are nonzero and are computed using Expression (\ref{eq:sumtan}). This requires a bit of extra care. 
As before, $i_j^*(D_i)=c_1(i_j^*{\mathcal O}(D_i))=c_1(i_j^*{\mathcal N}_{D_i/{\overline{\mathcal{M}}}})$ since we are working on the smooth atlases over the fixed components. Expression (\ref{eq:sumtan}) gives the results from smoothing all the nodes. The first Chern class of the restriction of the normal sheaf ${\mathcal N}_{D_i/{\overline{\mathcal{M}}}}$ to a fixed component corresponds to smoothing a node in that degeneration locus that comes from $D_i$. In particular, this involves taking the sum of the weights of the tangent directions to each component at the ``node to be smoothed.'' Intuitively, this is the node that, when smoothed, takes one outside the divisor being intersected with. This makes sense because it gives the restriction of the normal sheaf of the divisor to its intersection with the fixed component. This is straightforward for $Z_3$ and $Z_4$ since the corresponding stable maps have only one node. Since the non-collapsed component of these stable maps has degree two, the weights from the tangent space of ${\mathbb P}^1$ are divided by two. The weights of the restriction of $D_2$ to $Z_{13}$ and $Z_{14}$ are easy to determine for the same reason. The stable maps of $Z_5$ and $Z_6$ have two nodes. When restricting $D_0$, smooth the node contained in the collapsed component. When restricting $D_1$, smooth the node that is the intersection of two degree one components. Similarly, the curves associated to $Z_7$, $Z_8$, $Z_9$, and $Z_{10}$ have two nodes. When restricting $D_2$, smooth the node for which both irreducible components have a marked point. When restricting $D_1$, smooth the node contained in a component without a marked point. At any rate, smoothing either node gives the same weight for these components since both nodes map to the same fixed point and lie at the same type of intersection. Finally, we restrict $D_1$ to $Z_{11}$ and $Z_{12}$. Both of these components lie completely inside $D_1$, so we still need to use the node-smoothing description above. Smoothing either node corresponds to the restriction of ${\mathcal O}(D_1)$. For the first time, the collapsed components make a non-trivial contribution since they contain four special points. Since $\psi$ is the first Chern class of the cotangent line bundle at any marked point in ${\overline{M}}_{0,4}$, $-\psi$ is the first Chern class of the tangent bundle. We get contributions of the form $\lambda_i-\lambda_j-\psi$. As in Section \ref{sec:0112}, the $S_2$-action causes these contributions to be doubled on the overlying variety.
\begin{table} \renewcommand{\baselinestretch}{1} \small\normalsize \begin{tabular*}{5.9375in}{|c|c|c|c|c|c|c|c|c|@{\extracolsep{\fill}}c|} \hline & $Z_1$ & $Z_2$ & $Z_3$ & $Z_4$ & $Z_5$ & $Z_6$ & $Z_7$ & $Z_8$ & $Z_9$\\ \hline $H_1$ & $\lambda_0$ & $\lambda_1$ & $\lambda_0$ & $\lambda_1$ & $\lambda_1$ & $\lambda_0$ & $\lambda_1$ & $\lambda_0$ & $\lambda_0$ \\ \hline $H_2$ & $\lambda_1$ & $\lambda_0$ & $\lambda_0$ & $\lambda_1$ & $\lambda_1$ & $\lambda_0$ & $\lambda_0$ & $\lambda_1$ & $\lambda_1$ \\ \hline $D_0$ & 0 & 0 & $\frac{\lambda_0-\lambda_1}{2}$ & $\frac{\lambda_1-\lambda_0}{2}$ & $\lambda_1-\lambda_0$ & $\lambda_0-\lambda_1$ & 0 & 0 & 0 \\ \hline $D_1$ & 0 & 0 & 0 & 0 & $2(\lambda_0-\lambda_1)$ & $2(\lambda_1-\lambda_0)$ & $\lambda_0-\lambda_1$ & $\lambda_1-\lambda_0$ & $\lambda_0-\lambda_1$ \\ \hline $D_2$ & 0 & 0 & 0 & 0 & 0 & 0 & $\lambda_0-\lambda_1$ & $\lambda_1-\lambda_0$ & $\lambda_0-\lambda_1$ \\ \hline \end{tabular*} \begin{tabular}{|c|c|c|c|c|c|} \hline & $Z_{10}$ & $Z_{11}$ & $Z_{12}$ & $Z_{13}$ & $Z_{14}$ \\ \hline $H_1$ & $\lambda_1$ & $\lambda_0$ & $\lambda_1$ & $\lambda_1$ & $\lambda_0$ \\ \hline $H_2$ & $\lambda_0$ & $\lambda_0$ & $\lambda_1$ & $\lambda_1$ & $\lambda_0$ \\ \hline $D_0$ & 0 & $\psi$ & $\psi$ & 0 & 0 \\ \hline $D_1$ & $\lambda_1-\lambda_0$ & $2\lambda_0-2\lambda_1-2\psi$ & $2\lambda_1-2\lambda_0-2\psi$ & 0 & 0 \\ \hline $D_2$ & $\lambda_1-\lambda_0$ & $2\psi$ & $2\psi$ & $2(\lambda_0-\lambda_1)$ & $2(\lambda_1-\lambda_0)$ \\ \hline \end{tabular} \caption{Restrictions of divisor classes in $A_T^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$ to fixed components \label{rest0212}} \end{table} We are now ready to compute the integrals. Details are shown for two of the integrals below. Computations of the remaining integrals are relegated to Appendix \ref{sec:ints}. Note first that \[(\lambda_0-\lambda_1-2\psi)^{-1}=(\lambda_0-\lambda_1)^{-1}(1-2\psi/(\lambda_0-\lambda_1))^{-1} =\frac{1+2\psi/(\lambda_0-\lambda_1)}{(\lambda_0-\lambda_1)}\text{,}\] and similarly with $\lambda_0$ and $\lambda_1$ switched. Also note the factors of two appearing in the denominators of the integrands for $Z_{11}$ and $Z_{12}$, which are due to the $S_2$ automorphism group of these components. Similarly, in later computations, a factor of 2 is inserted into the denominators of integrands for $Z_3$ and $Z_4$, with the $S_2$ automorphisms here occurring because of the involution switching the sheets of the double covers.
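The displayed inverse is easy to check mechanically; for instance, the following snippet (a verification aid only, treating $\psi$ as nilpotent by truncating at order two) confirms it in sympy:
\begin{verbatim}
from sympy import symbols, simplify

l0, l1, psi = symbols('lambda0 lambda1 psi')

lhs = 1 / (l0 - l1 - 2*psi)
rhs = (1 + 2*psi/(l0 - l1)) / (l0 - l1)

# Agreement modulo psi**2, i.e. through the psi**1 term
diff = (lhs - rhs).series(psi, 0, 2).removeO()
print(simplify(diff))  # prints 0
\end{verbatim}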
For $D_1^2H_1H_2$, we get \begin{eqnarray*} \int_{{\overline{\mathcal{M}}}_T}D_1^2H_1H_2 & = & \int_{(Z_{5})_T}\frac{4\lambda_1^2(\lambda_1-\lambda_0)^2}{2(\lambda_0-\lambda_1)^4}+ \int_{(Z_{6})_T}\frac{4\lambda_0^2(\lambda_0-\lambda_1)^2}{2(\lambda_0-\lambda_1)^4}\\ & & +\int_{(Z_{7})_T}\frac{\lambda_0\lambda_1(\lambda_0-\lambda_1)^2}{-(\lambda_0-\lambda_1)^4} +\int_{(Z_{8})_T}\frac{\lambda_0\lambda_1(\lambda_1-\lambda_0)^2}{-(\lambda_0-\lambda_1)^4}\\ & & + \int_{(Z_{9})_T}\frac{\lambda_0\lambda_1(\lambda_0-\lambda_1)^2}{-(\lambda_0-\lambda_1)^4}+ \int_{(Z_{10})_T}\frac{\lambda_0\lambda_1(\lambda_1-\lambda_0)^2}{-(\lambda_0-\lambda_1)^4} \\ & & +\int_{(Z_{11})_T}\frac{\lambda_0^2(2\lambda_0-2\lambda_1-2\psi)^2}{2(\lambda_0-\lambda_1)^2(\lambda_0-\lambda_1-2\psi)}\\ & & +\int_{(Z_{12})_T}\frac{\lambda_1^2(2\lambda_1-2\lambda_0-2\psi)^2}{2(\lambda_1-\lambda_0)^2(\lambda_1-\lambda_0-2\psi)} \\ & = & \frac{2\lambda_1^2}{(\lambda_0-\lambda_1)^2}+\frac{2\lambda_0^2}{(\lambda_0-\lambda_1)^2} -\frac{\lambda_0\lambda_1}{(\lambda_0-\lambda_1)^2}-\frac{\lambda_0\lambda_1}{(\lambda_0-\lambda_1)^2} \\ & & -\frac{\lambda_0\lambda_1}{(\lambda_0-\lambda_1)^2} -\frac{\lambda_0\lambda_1}{(\lambda_0-\lambda_1)^2}+\int_{(Z_{11})_T} \frac{2\lambda_0^2(\lambda_0-\lambda_1-2\psi)}{(\lambda_0-\lambda_1)(\lambda_0-\lambda_1-2\psi)} \\ & & +\int_{(Z_{12})_T}\frac{2\lambda_1^2(\lambda_1-\lambda_0-2\psi)}{(\lambda_1-\lambda_0)(\lambda_1-\lambda_0-2\psi)} \\ & = & \frac{2\lambda_0^2+2\lambda_1^2-4\lambda_0\lambda_1}{(\lambda_0-\lambda_1)^2}+0+0 \\ & = & 2 \text{.}\end{eqnarray*} The integrals over $Z_{11}$ and $Z_{12}$ vanish because the integrands are not of top codimension. For $D_1D_2H_1H_2$, we get \begin{eqnarray*} \int_{{\overline{\mathcal{M}}}_T}D_1D_2H_1H_2 & = & \int_{(Z_{7})_T}\frac{\lambda_0\lambda_1(\lambda_0-\lambda_1)^2}{-(\lambda_0-\lambda_1)^4} + \int_{(Z_{8})_T}\frac{\lambda_0\lambda_1(\lambda_1-\lambda_0)^2}{-(\lambda_0-\lambda_1)^4} \\ & & +\int_{(Z_{9})_T}\frac{\lambda_0\lambda_1(\lambda_0-\lambda_1)^2}{-(\lambda_0-\lambda_1)^4} +\int_{(Z_{10})_T}\frac{\lambda_0\lambda_1(\lambda_1-\lambda_0)^2}{-(\lambda_0-\lambda_1)^4} \\ & & +\int_{(Z_{11})_T}\frac{\lambda_0^22\psi(2\lambda_0-2\lambda_1-2\psi)} {2(\lambda_0-\lambda_1)^2(\lambda_0-\lambda_1-2\psi)} \\ & & +\int_{(Z_{12})_T}\frac{\lambda_1^22\psi(2\lambda_1-2\lambda_0-2\psi)} {2(\lambda_1-\lambda_0)^2(\lambda_1-\lambda_0-2\psi)} \\ & = & -4\frac{\lambda_0\lambda_1}{(\lambda_0-\lambda_1)^2}+ \int_{(Z_{11})_T}\frac{2\lambda_0^2\psi(2\lambda_0-2\lambda_1)(1+2\psi/(\lambda_0-\lambda_1))} {2(\lambda_0-\lambda_1)^3} \\ & & +\int_{(Z_{12})_T}\frac{2\lambda_1^2\psi(2\lambda_1-2\lambda_0)(1+2\psi/(\lambda_1-\lambda_0))} {2(\lambda_1-\lambda_0)^3}\\ & = & \frac{-4\lambda_0\lambda_1}{(\lambda_0-\lambda_1)^2}+\int_{(Z_{11})_T}\frac{2\lambda_0^2\psi} {(\lambda_0-\lambda_1)^2}+\int_{(Z_{12})_T}\frac{2\lambda_1^2\psi}{(\lambda_1-\lambda_0)^2} \\ & = & \frac{2\lambda_0^2-4\lambda_0\lambda_1+2\lambda_1^2}{(\lambda_0-\lambda_1)^2} \\ & = & 2 \text{.}\end{eqnarray*} Computations of the other integrals (shown in Appendix \ref{sec:ints}) are quite similar and give the results shown in Table \ref{deg4ints}. Any integral of a degree four monomial not listed there is automatically zero for one of the reasons given at the beginning of the section. 
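These localization sums can also be replicated symbolically. The following sketch (again only a check, with the restrictions and Euler classes transcribed from Table \ref{rest0212} and the computations above) recomputes both sample integrals; the helper for $Z_{11}$ and $Z_{12}$ treats $\psi$ as nilpotent and extracts the coefficient of $\psi$, since $\int_{{\mathbb P}^1}\psi=1$:
\begin{verbatim}
from sympy import symbols, simplify

l0, l1, psi = symbols('lambda0 lambda1 psi')

def pt(numer, euler):
    # contribution of an isolated fixed point
    return numer / euler

def curve(numer, euler):
    # contribution of Z11 or Z12: the extra 2 is the S_2 factor;
    # expand modulo psi**2 and keep the psi-coefficient, which is
    # what integrating over P^1 picks out
    f = (numer / (2 * euler)).series(psi, 0, 2).removeO()
    return f.expand().coeff(psi, 1)

e5 = e6 = 2 * (l1 - l0)**4
e7 = -(l0 - l1)**4                        # = e8 = e9 = e10
e11 = (l0 - l1)**2 * (l0 - l1 - 2*psi)
e12 = (l1 - l0)**2 * (l1 - l0 - 2*psi)

I1 = (pt((2*(l0 - l1))**2 * l1**2, e5)    # Z5
      + pt((2*(l1 - l0))**2 * l0**2, e6)  # Z6
      + 4 * pt((l0 - l1)**2 * l0*l1, e7)  # Z7-Z10 contribute equally
      + curve((2*l0 - 2*l1 - 2*psi)**2 * l0**2, e11)
      + curve((2*l1 - 2*l0 - 2*psi)**2 * l1**2, e12))

I2 = (4 * pt((l0 - l1)**2 * l0*l1, e7)    # Z7-Z10
      + curve((2*l0 - 2*l1 - 2*psi) * 2*psi * l0**2, e11)
      + curve((2*l1 - 2*l0 - 2*psi) * 2*psi * l1**2, e12))

print(simplify(I1), simplify(I2))         # prints 2 2
\end{verbatim}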
\renewcommand{\baselinestretch}{1} \small\normalsize \begin{table} \begin{center} \begin{tabular}{|p{2in}p{2.8in}|} \hline \rule[-3mm]{0mm}{8mm}$\int_{{\overline{\mathcal{M}}}}D_2^4=12$ & $\int_{{\overline{\mathcal{M}}}}D_2^3H_1=-4$ \\ \rule[-3mm]{0mm}{8mm}$\int_{{\overline{\mathcal{M}}}}D_2^3D_1=-4$ & $\int_{{\overline{\mathcal{M}}}}D_2^3H_2=-4$ \\ \rule[-3mm]{0mm}{8mm} $\int_{{\overline{\mathcal{M}}}}D_2^3D_0=0$ & $\int_{{\overline{\mathcal{M}}}}D_2^2D_1H_1=0$\\ \rule[-3mm]{0mm}{8mm} $\int_{{\overline{\mathcal{M}}}}D_2^2D_1^2=-4$ & $\int_{{\overline{\mathcal{M}}}}D_2^2D_1H_2=0$ \\ \rule[-3mm]{0mm}{8mm} $\int_{{\overline{\mathcal{M}}}}D_2^2D_1D_0=0$ & $\int_{{\overline{\mathcal{M}}}}D_2^2D_0H_1=\int_{{\overline{\mathcal{M}}}}D_2^2D_0H_2=0$ \\ \rule[-3mm]{0mm}{8mm} $\int_{{\overline{\mathcal{M}}}}D_2^2D_0^2=0$ & $\int_{{\overline{\mathcal{M}}}}D_2D_1^2H_1=4$\\ \rule[-3mm]{0mm}{8mm} $\int_{{\overline{\mathcal{M}}}}D_2D_1^3=12$ & $\int_{{\overline{\mathcal{M}}}}D_2D_1^2H_2=4$\\ \rule[-3mm]{0mm}{8mm} $\int_{{\overline{\mathcal{M}}}}D_2D_1^2D_0=0$ & $\int_{{\overline{\mathcal{M}}}}D_2D_1D_0H_1=\int_{{\overline{\mathcal{M}}}}D_2D_1D_0H_2=0$ \\ \rule[-3mm]{0mm}{8mm} $\int_{{\overline{\mathcal{M}}}}D_2D_1D_0^2=0$ & $\int_{{\overline{\mathcal{M}}}}D_2D_0^2H_1=\int_{{\overline{\mathcal{M}}}}D_2D_0^2H_2=0$\\ \rule[-3mm]{0mm}{8mm} $\int_{{\overline{\mathcal{M}}}}D_2D_0^3=0$ & $\int_{{\overline{\mathcal{M}}}}D_1^3H_1=-8$\\ \rule[-3mm]{0mm}{8mm} $\int_{{\overline{\mathcal{M}}}}D_1^4=-20$ & $\int_{{\overline{\mathcal{M}}}}D_1^3H_2=-8$ \\ \rule[-3mm]{0mm}{8mm} $\int_{{\overline{\mathcal{M}}}}D_1^3D_0=0$ & $\int_{{\overline{\mathcal{M}}}}D_1^2D_0H_1=\int_{{\overline{\mathcal{M}}}}D_1^2D_0H_2=4$\\ \rule[-3mm]{0mm}{8mm} $\int_{{\overline{\mathcal{M}}}}D_1^2D_0^2=4$ & $\int_{{\overline{\mathcal{M}}}}D_1D_0^2H_1=\int_{{\overline{\mathcal{M}}}}D_1D_0^2H_2=-1$ \\ \rule[-3mm]{0mm}{8mm} $\int_{{\overline{\mathcal{M}}}}D_1D_0^3=-2$ & $\int_{{\overline{\mathcal{M}}}}D_0^3H_1=\int_{{\overline{\mathcal{M}}}}D_0^3H_2=\frac{1}{4}$ \\ \rule[-3mm]{0mm}{8mm} $\int_{{\overline{\mathcal{M}}}}D_0^4=\frac{3}{4}$ & $\int_{{\overline{\mathcal{M}}}}D_2D_1H_1H_2=2$ \\ \rule[-3mm]{0mm}{8mm} $\int_{{\overline{\mathcal{M}}}}D_2^2H_1H_2=2$ & $\int_{{\overline{\mathcal{M}}}}D_1^2H_1H_2=2$ \\ \hline \end{tabular} \end{center} \caption{Integrals of degree four classes on ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2)$ \label{deg4ints}} \end{table} Expressions for the $\psi$-classes in ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2)$ were given in Section \ref{sec:pb}, so it suffices to consider the basic divisor classes $H_1$, $H_2$, $D_0$, $D_1$, and $D_2$. Since the first Betti number is four, there must be a relation among these five divisor classes. Suppose we have a relation \[aH_1+bH_2+cD_0+dD_1+eD_2=0\text{.}\] We can place restrictions on the coefficients by multiplying the above equation by degree three monomials and then integrating. For example, multiplying by $D_2^2H_1$ gives \[aD_2^2H_1^2+bD_2^2H_2H_1+cD_2^2D_0H_1+dD_2^2D_1H_1+eD_2^3H_1=0\text{.}\] Now integration gives the equation $2b-4e=0$, using the integral values from Table \ref{deg4ints}. Continuing with some other choices of monomial, we get the system of restrictions given in Table \ref{rest}. 
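This last step is easily mechanized as well. The following sketch (a check only, with the coefficients of each equation read off from Table \ref{deg4ints} as above) encodes the four restrictions and solves the resulting system:
\begin{verbatim}
from sympy import symbols, linsolve

a, b, c, d, e = symbols('a b c d e')

# Restrictions from multiplying by D2^2*H1, D2^2*H2, D1*D0*H1,
# and D1*H1*H2 and integrating
eqs = [2*b - 4*e, 2*a - 4*e, -c + 4*d, 2*d + 2*e]
print(linsolve(eqs, a, b, c, d, e))
# {(2*e, 2*e, -4*e, -e, e)}: up to scale, (a,b,c,d,e) = (2,2,-4,-1,1)
\end{verbatim}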
\renewcommand{\baselinestretch}{1} \small\normalsize \begin{table} \begin{center} \begin{tabular}{|c|c|} \hline Monomial & Resulting relation on coefficients \\ \hline $D_2^2H_1$ & $ 2b-4e=0$ \\ $D_2^2H_2$ & $ 2a-4e=0$ \\ $D_1D_0H_1$ & $ -c+4d=0$ \\ $D_1H_1H_2$ & $ 2d+2e=0$ \\ \hline \end{tabular} \caption{Restrictions placed on coefficients of a linear relation by integration \label{rest}} \end{center} \end{table} Together these restrictions show that up to a constant multiple the only possible linear relation among these five divisor classes is \[2H_1+2H_2-4D_0-D_1+D_2=0\text{.}\] Since the Betti number forces some relation to exist, this must indeed be a relation, and the remaining four classes must be independent. Hence, we have additionally found that the classes $H_1$, $H_2$, $D_0$, and $D_1$ generate the degree one piece of the graded ring $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$. The same method will be used in Section \ref{sec:prez} to show that monomials in the basic classes generate in degrees two and three also. We can write the relation above as $D_2=4D_0+D_1-2H_1-2H_2$. Notice that this linear relation can also be written $D_2-\psi_1-\psi_2=0$ using the expressions for the $\psi$-classes from Section \ref{sec:pb}. An interesting consequence of this relation, together with the relations $D_0\psi_1=D_0\psi_2=0$, is the relation $D_0D_2=0$. We can find this relation directly by arguing that the divisors $D_0$ and $D_2$ are disjoint. We have seen that the domain curve of every stable map lying in $D_0$ must have both marked points on the same degree zero component, and that the domain curve of every stable map in $D_2$ must have the marked points on distinct components. These mutually exclusive properties of stable maps in $D_0$ and $D_2$ validate the claim of their disjointness. Indeed, one may need to use such a direct argument for the relation $D_0D_2=0$ when $r>1$, since it is not clear whether the relation $D_2-\psi_1-\psi_2=0$ holds there.
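This linear algebra is easily mechanized. As a sketch (illustrative only, assuming SymPy), one can assemble the four restrictions of Table \ref{rest} into a matrix, using the integral values of Table \ref{deg4ints} (unlisted integrals are zero), and read off the relation as its null space:
\begin{verbatim}
from sympy import Matrix

# columns: coefficients (a, b, c, d, e) of a*H1 + b*H2 + c*D0 + d*D1 + e*D2
M = Matrix([
    [0, 2,  0, 0, -4],   # multiply by D2^2*H1 and integrate
    [2, 0,  0, 0, -4],   # multiply by D2^2*H2
    [0, 0, -1, 4,  0],   # multiply by D1*D0*H1
    [0, 0,  0, 2,  2],   # multiply by D1*H1*H2
])
print(M.nullspace())     # one vector: (2, 2, -4, -1, 1),
                         # i.e. 2H1 + 2H2 - 4D0 - D1 + D2
\end{verbatim}
The null space is one-dimensional, confirming that the relation above is the only candidate.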
\chapter{The Presentation} \label{sec:prez} \renewcommand{\baselinestretch}{1} \section{Completeness of the higher degree parts of the presentation} \label{sec:comp} We have already seen in Section \ref{sec:lla} that there can be no extra divisor classes independent of the ones given in Section \ref{sec:gen}, since the first Betti number is four, and four of these classes are independent. The same method used there also demonstrates that monomials in the standard divisor classes generate in higher degrees. As in Section \ref{sec:lla}, we need not consider the $\psi$-classes since they can be expressed in terms of the other generators. We need not consider monomials involving $D_2$ for the same reason: as found in Section \ref{sec:lla}, $D_2$ can be expressed in terms of the other generators. Furthermore, relations involving the $\psi$-classes can be reformulated to reduce the number of spanning monomials in the other divisor classes. In degree 2, there are ten monomials in the remaining classes. However, the $H_i^2$ will not play a role since they vanish. Furthermore, it is easy to check that $\psi_1-\psi_2=H_1-H_2$, so that $D_0H_1-D_0H_2=D_0\psi_1-D_0\psi_2=0$. Hence we can also discount the monomial $D_0H_2$, leaving seven monomials spanning the degree two part of the Chow ring. Suppose we have a relation of the form \[aD_1^2+bD_1D_0+cD_1H_1+dD_1H_2+eD_0^2+fD_0H_1+gH_1H_2=0\text{.}\] Then, multiplying this expression by each of these seven degree two monomials and integrating the results, we obtain a system of seven linear equations in seven variables. Solving this system, we see that the only possible relation of this form (up to a constant multiple) is \[D_1D_0+4D_0^2-4D_0H_1=0\text{.}\] Using the expression for $D_2$ derived in Section \ref{sec:lla}, this can be rewritten as $D_0D_2=0$, a relation we have already discovered. Thus six of the above degree two classes are independent. Since the second Betti number is six, these six classes generate the degree two part of $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$, and so there are no other generators in degree two. In degree three, any monomial with a factor of $D_1D_0$ can be expressed in terms of other monomials. Taking this into account together with the other relations of lower degree, we have six remaining degree three monomials in the divisor classes. A generic relation among these has the form \[aD_1^3+bD_1^2H_1+cD_1^2H_2+dD_1H_1H_2+eD_0^3+fD_0^2H_1=0\text{.}\] Multiplying this expression by the four independent divisor classes and integrating, we get a system of four linear equations in six variables. The set of solutions is two-dimensional: any relation must have the form \[aD_1^3+bD_1^2H_1+bD_1^2H_2+(-6a-4b)D_1H_1H_2+(32a-8b)D_0^3+(-96a-8b)D_0^2H_1 =0\text{.}\] One can compute that the relations \begin{equation} \label{eqn:c1} (D_1+D_2)^3=2^3(D_1^3-3D_1^2H_1-3D_1^2H_2+6D_1H_1H_2+56D_0^3-72D_0^2H_1) \end{equation} and \begin{equation} \label{eqn:c2} D_1\psi_1\psi_2=\frac{1}{4}D_1^3-D_1^2H_1-D_1^2H_2+ \frac{5}{2}D_1H_1H_2+16D_0^3-16D_0^2H_1 \end{equation} found above satisfy these conditions. Moreover, they are clearly independent. So they span the space of relations. Thus four of these six degree three classes must be independent. Since the third Betti number is four, they generate the degree three part, and there are no additional generators. The degree four part is one-dimensional, so since $D_1^2H_1H_2$ is nonzero, it generates the degree four part. \section{Two presentations for $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$} We have established the following result. \begin{thm}\label{thm:prez} With notation as established in Section \ref{sec:gen}, we have an isomorphism \begin{equation}\label{geomprez} A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))\simeq\frac{{\mathbb Q}[D_0,D_1,D_2,H_1,H_2,\psi_1,\psi_2]} {\left({H_1^2, H_2^2,D_0\psi_1,D_0\psi_2,D_2-\psi_1-\psi_2, \psi_1-\frac{1}{4}D_1-\frac{1}{4}D_2-D_0+H_1, \atop \psi_2-\frac{1}{4}D_1-\frac{1}{4}D_2-D_0+H_2, (D_1+D_2)^3, D_1\psi_1\psi_2}\right)} \end{equation} of graded rings. \end{thm} There are many different presentations for a given ring, and some are more practical and satisfying than others. Here is another presentation for $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$. \begin{prop}\label{altprez} We also have an isomorphism \[A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))\simeq\frac{{\mathbb Q}[D_0,D_1,H_1,H_2]} {\left({H_1^2, H_2^2, D_0H_1-D_0H_2, D_1D_0+4D_0^2-4D_0H_1, \atop C_1(D_0,D_1,H_1,H_2), C_2(D_0,D_1,H_1,H_2)}\right)}\] of graded rings, where \[C_1(D_0,D_1,H_1,H_2)=D_1^3-3D_1^2H_1-3D_1^2H_2+6D_1H_1H_2+56D_0^3-72D_0^2H_1 \] and \[C_2(D_0,D_1,H_1,H_2)=D_1^2H_1+D_1^2H_2-4D_1H_1H_2-8D_0^3-8D_0^2H_1\] are cubic relations. \end{prop} This presentation is more efficient than (\ref{geomprez}) in the sense that it has fewer generators and relations. In fact, it has the minimum possible number of each. However, it is not very geometric; one would be hard-pressed to give a geometric explanation for some of the relations.
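One can also verify mechanically that the two presentations are compatible: after eliminating $D_2$ and the $\psi$-classes by the linear relations above, the cubic generators of the geometric presentation must lie in the ideal of Proposition \ref{altprez}. A sketch of such a check (illustrative only, assuming SymPy's groebner module):
\begin{verbatim}
from sympy import symbols, groebner, expand, Rational

D0, D1, H1, H2 = symbols('D0 D1 H1 H2')
C1 = D1**3 - 3*D1**2*H1 - 3*D1**2*H2 + 6*D1*H1*H2 + 56*D0**3 - 72*D0**2*H1
C2 = D1**2*H1 + D1**2*H2 - 4*D1*H1*H2 - 8*D0**3 - 8*D0**2*H1
G = groebner([H1**2, H2**2, D0*H1 - D0*H2,
              D1*D0 + 4*D0**2 - 4*D0*H1, C1, C2],
             D0, D1, H1, H2, order='grevlex')

# eliminate D2 and the psi-classes using the divisor relations
D2 = 4*D0 + D1 - 2*H1 - 2*H2
psi1 = Rational(1, 4)*D1 + Rational(1, 4)*D2 + D0 - H1
psi2 = Rational(1, 4)*D1 + Rational(1, 4)*D2 + D0 - H2

print(G.contains(expand((D1 + D2)**3)))    # True
print(G.contains(expand(D1*psi1*psi2)))    # True
\end{verbatim}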
A goal of efficiency also leads to some complicated relations, and this type of presentation will be difficult to generalize. Its derivation relies even more heavily on the kind of brute force linear algebra techniques described and used in Sections \ref{sec:lla} and \ref{sec:comp}. Including the $\psi$-classes as generators leads to the geometric presentation (\ref{geomprez}), which is more beautiful and will also be more useful. \renewcommand{\baselinestretch}{1} \section{Directions for generalization} \label{sec:direct} \renewcommand{\baselinestretch}{2} The most natural direction to extend Theorem \ref{thm:prez} would involve giving presentations for all the rings $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2))$, or at least for those with some other small values of $r$. We already know the Poincar\'{e} polynomials of these spaces from Chapter \ref{sec:ser}; indeed, we saw there that the degeneration strata are the same for all $r$. So the computations involved and the resulting presentations should bear many similarities to what we have seen in the case of $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$. The biggest obstacle to such an extension is the dependence on the localization and linear algebra method of Section \ref{sec:lla}. This dependence must be removed in order to obtain a general result. Localization and linear algebra can still be used in obtaining presentations one dimension at a time, but even in the next simplest case $r=2$, the magnitude of the computations increases substantially. Since $\operatorname{dim}({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^2,2))=7$, the method requires finding integrals of all degree seven monomials in the generating classes $H_1$, $H_2$, $D_0$, $D_1$, and $D_2$. There are ${11 \choose 4}=330$ such monomials, though as before many are zero or equivalent to other integrals by inspection. Any monomial with a factor of $H_i^3$ will be zero since $H^3=0$ in $A^*({\mathbb P}^2)$. There are $2{8 \choose 4}-5=135$ monomials like this. The relation $D_0H_1=D_0H_2$ still holds by a direct argument. (Roughly, both marked points must have the same image when they are on the same collapsed component.) Thus we need not worry about any monomial with a factor of $D_0H_2$. There are ${9 \choose 4}-{7 \choose 3} -{6 \choose 2}+1=77$ of these monomials not already counted above. Finally, $D_0$ and $D_2$ are still disjoint, so the ${8 \choose 3}-{5 \choose 3}=46$ monomials (not already counted above) with a factor of $D_0D_2$ are zero as well. Taking these relations into account, 72 integrals remain to be calculated.
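The bookkeeping behind this count of 72 remaining integrals can be checked mechanically; a short sketch (illustrative only, Python):
\begin{verbatim}
from itertools import product

# exponents (h1, h2, d0, d1) of a degree-7 monomial in H1, H2, D0, D1, D2;
# the exponent of D2 is determined by the total degree
count = 0
for h1, h2, d0, d1 in product(range(8), repeat=4):
    d2 = 7 - h1 - h2 - d0 - d1
    if d2 < 0:
        continue
    if h1 >= 3 or h2 >= 3:       # H^3 = 0 in A*(P^2)
        continue
    if d0 >= 1 and h2 >= 1:      # D0*H2 = D0*H1
        continue
    if d0 >= 1 and d2 >= 1:      # D0*D2 = 0
        continue
    count += 1
print(count)                     # 72
\end{verbatim}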
While the effort necessary to perform these calculations is not too unreasonable, it is clear that the workload will quickly explode as $r$ increases. It could be reduced somewhat by calculating only the integrals actually needed to find relations by linear algebra. However, the benefit of such picking and choosing would be fleeting as $r$ grows. Including the $\psi$-classes and making full use of the existing algorithms for computing gravitational correlators may afford more significant efficiency. At any rate, this approach cannot extend beyond some computational proofs for low $r$. The steps needed to free our method from dependence on localization and linear algebra were described at the beginning of Section \ref{sec:lla}. Making rigorous the heuristic calculations of \cite{Wit} may give a way to construct the desired section of $L_1\* L_2$. At least for low $r$, the methods of Mumford in \cite{Mu} are appropriate for finding an additive basis for $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2))$. Using the excision sequence (\ref{eksiz}), the author has already used these methods to construct an additive basis for $A^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^1,2))$, and work on such a construction for $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$ is in progress. It should not be too hard to get an additive basis for general $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2))$ after seeing the pattern of the first few cases, although a different method of proof will be required. It should also be possible to find a presentation for all Chow rings $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2))$ by building from already existing general presentations for $A^*({\overline{\mathcal{M}}}_{0,0}({\mathbb P}^r,2))$ (\cite{BO}) or $A^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^r,2))$ (\cite{M}, \cite{MM}). The contraction morphisms provide a key connection between ${\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2)$ and these moduli spaces, and would play a central role in such a method. Once a presentation for a Chow ring $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^r,2))$ is known, the techniques in the previous paragraph should also apply to give a presentation for $A^*({\overline{\mathcal{M}}}_{0,3}({\mathbb P}^r,2))$. Ultimately, presentations of Chow rings $A^*({\overline{\mathcal{M}}}_{0,n}({\mathbb P}^r,2))$ for small $n$ and arbitrary $r$ should be computable using these ideas. Since Musta\c{t}\v{a} has given a presentation for $A^*({\overline{\mathcal{M}}}_{0,1}({\mathbb P}^r,d))$ with $r$ and $d$ arbitrary, it may be possible to let $d$ increase beyond two as well in this vision. The final goal in this realm is a description of presentations for all the Chow rings $A^*({\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{r},d)})$. (Presentations for positive $g$ are also desirable, but this seems to be in a somewhat different realm.) Doing so requires more general methods. The recent work of Oprea (\cite{O},\cite{O2}) may be a substantial step in the right direction. \renewcommand{\baselinestretch}{1} \chapter{Computation of gravitational correlators using the presentation} \label{sec:app} Let $X$ be a smooth variety, $\beta\in H_2(X)$, and $\gamma_1,\ldots,\gamma_n\in H^*(X)$. Recall the $\psi$-classes $\psi_1,\ldots,\psi_n\in A^*({\overline{\mathcal{M}}_{g,n}(X,\beta)})$ defined in Section \ref{sec:psi}. The gravitational correlator $\langle\tau_{d_1}\gamma_1,\ldots,\tau_{d_n}\gamma_n\rangle_{g,\beta}$ is a rational number given by \[\langle\tau_{d_1}\gamma_1,\ldots,\tau_{d_n}\gamma_n\rangle_{g,\beta} =\int_{[{\overline{\mathcal{M}}_{g,n}(X,\beta)}]^\text{vir}}\prod_{i=1}^n \left(\psi_i^{d_i}\operatorname{ev}_i^*(\gamma_i)\right)\text{.}\] If $\beta=0$, we must require either $g>0$ or $n\geq 3$. We usually suppress $\tau_0$ from the notation, and we suppress the fundamental class of $X$ or write it as $1$. We define any gravitational correlator including an argument $\tau_{-1}$ to be zero. Physicists are actually interested in correlation functions of operators called {\em gravitational descendants}. A certain model of string theory associates to each cohomology class $\gamma\in H^*(X)$ a {\em local operator} ${\mathcal O}_{\gamma}$, which is roughly a function from a coordinate patch in $X$ to the space of operators on a Hilbert space of states. Associated to these operators are other operators ${\mathcal O}_{i,\gamma}$ called gravitational descendants.
Moreover, all these operators act as distributions in their coordinates. As such, we can take the correlation functions $\langle{\mathcal O}_{d_1,\gamma_1},\ldots,{\mathcal O}_{d_n,\gamma_n}\rangle_{g,\beta}$ of any number of gravitational descendants by integrating. In string theory they describe how various particles interact. In fact, according to \cite{CK}, ``In a very real sense, one can argue that the $n$-point functions contain all of the physical predictions of the theory.'' (A correlation function with $n$ variables is also called an $n$-point function. See \cite{CK} for more on correlation functions.) These are related to gravitational correlators as follows: \[\langle\tau_{d_1}\gamma_1,\ldots,\tau_{d_n}\gamma_n\rangle_{g,\beta} =\frac{\langle{\mathcal O}_{d_1,\gamma_1},\ldots, {\mathcal O}_{d_n,\gamma_n}\rangle_{g,\beta}}{\prod_{i=1}^n d_i!}\text{.}\] Generalizing gravitational correlators, we can define {\em gravitational classes} in \linebreak[4] $H^*({\overline{\mathcal{M}}}_{g,n},{\mathbb Q})$. Assume that $2g+n\geq 3$. Let $\pi:{\overline{\mathcal{M}}_{g,n}(X,\beta)}\rightarrow X^n\times {\overline{\mathcal{M}}}_{g,n}$ be defined using the evaluation morphisms and the morphism forgetting the map. Denote the projection from $X^n\times {\overline{\mathcal{M}}}_{g,n}$ onto its $i$'th factor by $p_i$. Let $PD:H^*({\overline{\mathcal{M}}}_{g,n},{\mathbb Q})\rightarrow H_{6g-6+2n-*}({\overline{\mathcal{M}}}_{g,n},{\mathbb Q})$ be the Poincar\'{e} duality isomorphism on ${\overline{\mathcal{M}}}_{g,n}$. Then given $\beta$ and $\gamma_1,\ldots,\gamma_n$ as above, the gravitational class $I_{g,n,\beta}(\tau_{d_1}\gamma_1,\ldots,\tau_{d_n}\gamma_n)$ is defined to be \[PD^{-1} {p_2}_*\left(\prod_{i=1}^n \psi_i^{d_i}\cup p_1^*(\gamma_1\*\cdots\*\gamma_n)\cap\pi_*([{\overline{\mathcal{M}}_{g,n}(X,\beta)}]^\text{vir})\right)\text{.}\] The gravitational correlators and gravitational classes are related by \[\langle\tau_{d_1}\gamma_1,\ldots,\tau_{d_n}\gamma_n\rangle_{g,\beta} =\int_{{\overline{\mathcal{M}}}_{g,n}}I_{g,n,\beta}(\tau_{d_1}\gamma_1,\ldots,\tau_{d_n}\gamma_n)\text{.}\] Gromov-Witten invariants, defined in Section \ref{sec:intro}, are just gravitational correlators with $d_i=0$ for all $i\in\und{n}$, so that there are no $\psi$-classes in the corresponding integral. Gromov-Witten classes can also be defined as a special case of gravitational classes in the same way. The above information and more can be found in \cite[Chapter 10]{CK}. In this chapter, we will use the presentation given in Theorem \ref{thm:prez} to compute all the genus zero, degree two, two-point gravitational correlators of ${\mathbb P}^1$. Algorithms for computing gravitational correlators have already been constructed using indirect methods. We will show that our results agree with the numbers computed by these existing methods. This provides a check on the validity of the presentation. Gravitational correlators are known to satisfy certain axioms, and the algorithms mentioned above make use of some of these axioms in computing the correlators. We now list the axioms that we will use. For the most part, these are the same axioms listed in \cite[Chapter 10]{CK}. \noindent {\bf Degree Axiom.} Assume that the $\gamma_i$ are homogeneous.
A gravitational correlator $\langle\tau_{d_1}\gamma_1,\ldots,\tau_{d_n}\gamma_n\rangle_{g,\beta}$ can be nonzero only if the cohomological degrees of the classes being integrated add up to twice the virtual dimension of the moduli space, {\em i.e.}, \[\sum_{i=1}^n (\operatorname{deg}(\gamma_i)+2d_i)=2(1-g)(\operatorname{dim}_{{\mathbb C}}(X)-3) -2\int_\beta K_X+2n\text{.}\] When $X={\mathbb P}^r$, which is the case of interest, all the $\gamma_i$ have even degrees. So we can divide the above equation by two and use algebraic degrees for the $\gamma_i$. \noindent {\bf Equivariance Axiom.} Assume the $\gamma_i$ are homogeneous. Then for $i\in\und{n-1}$ \begin{eqnarray*} & & \langle\tau_{d_1}\gamma_1,\ldots,\tau_{d_{i+1}}\gamma_{i+1}, \tau_{d_i}\gamma_{i},\ldots,\tau_{d_n}\gamma_n\rangle_{g,\beta}\\ & = & (-1)^{\operatorname{deg}\gamma_i\cdot\operatorname{deg}\gamma_{i+1}}\langle\tau_{d_1}\gamma_1, \ldots,\tau_{d_i}\gamma_{i},\tau_{d_{i+1}}\gamma_{i+1}, \ldots, \tau_{d_n}\gamma_n\rangle_{g,\beta}\text{.} \end{eqnarray*} This has an obvious extension to any permutation of the entries, so that ``equivariance'' refers to $S_n$-equivariance. Again, we consider only cases where the cohomology lives in even degrees, so that the gravitational correlators are in fact {\em invariant} under permutation of the entries. \noindent {\bf Fundamental Class Axiom.} Assume that either $\beta\neq 0$ and $n\geq 1$ or $n+2g\geq 4$. Recall that $1\in H^0(X,{\mathbb Q})$ denotes the fundamental class of $X$. Then \begin{eqnarray*} & & \langle\tau_{d_1}\gamma_1,\ldots,\tau_{d_{n-1}}\gamma_{n-1},1\rangle_{g,\beta}\\ & = & \sum_{i=1}^{n-1}\langle\tau_{d_1}\gamma_1,\ldots,\tau_{d_{i-1}}\gamma_{i-1}, \tau_{d_i-1}\gamma_{i},\tau_{d_{i+1}}\gamma_{i+1},\ldots,\tau_{d_{n-1}}\gamma_{n-1} \rangle_{g,\beta}\text{.} \end{eqnarray*} \noindent {\bf Divisor Axiom.} Let $D\in H^2(X,{\mathbb Q})$ be a divisor class. Again assume that either $\beta\neq 0$ and $n\geq 1$ or $n+2g\geq 4$. Then \begin{eqnarray*} & & \langle\tau_{d_1}\gamma_1,\ldots,\tau_{d_{n-1}}\gamma_{n-1},D\rangle_{g,\beta}\\ & = & (\int_\beta D)\langle\tau_{d_1}\gamma_1,\ldots,\tau_{d_{n-1}}\gamma_{n-1}\rangle_{g,\beta}\\ & & +\sum_{i=1}^{n-1}\langle\tau_{d_1}\gamma_1,\ldots,\tau_{d_{i-1}}\gamma_{i-1}, \tau_{d_i-1}\gamma_{i}\cup D,\tau_{d_{i+1}}\gamma_{i+1},\ldots,\tau_{d_{n-1}}\gamma_{n-1} \rangle_{g,\beta}\text{.} \end{eqnarray*} \noindent {\bf Splitting Axiom.} This axiom is easier to state in terms of gravitational classes. Recall the gluing morphisms \[\phi:{\overline{\mathcal{M}}}_{g_1,n_1+1}\times{\overline{\mathcal{M}}}_{g_2,n_2+1}\rightarrow{\overline{\mathcal{M}}}_{g_1+g_2,n_1+n_2}\] defined in Section \ref{sec:modintro}. Let $T_i$ be a homogeneous basis for $H^*(X,{\mathbb Q})$. For each $i$ and $j$, let $g_{ij}=\int_X T_i\cup T_j$ and $(g^{ij})$ be the inverse of the matrix $(g_{ij})$.
Then \begin{eqnarray*} & & \phi^*I_{g,n,\beta}(\tau_{d_1}\gamma_1,\ldots,\tau_{d_n}\gamma_n)\\ & = & \sum_{\beta=\beta_1+\beta_2}\sum_{i,j}(g^{ij} I_{g_1,n_1+1,\beta_1}(\tau_{d_1}\gamma_1,\ldots,\tau_{d_{n_1}}\gamma_{n_1},T_i)\\ & & \text{\makebox[1in]{}} \* I_{g_2,n_2+1,\beta_2}(T_j,\tau_{d_{n_1+1}}\gamma_{n_1+1},\ldots, \tau_{d_{n_1+n_2}}\gamma_{n_1+n_2})) \text{.} \end{eqnarray*} \noindent {\bf Dilaton Axiom.} For $n\geq 1$, \[\langle\tau_1,\tau_{d_1}\gamma_1,\ldots,\tau_{d_{n}}\gamma_{n}\rangle_{g,\beta} =(2g-2+n)\langle\tau_{d_1}\gamma_1,\ldots,\tau_{d_{n}}\gamma_{n}\rangle_{g,\beta}\text{.}\] There are sixteen genus zero, degree two, two-point gravitational correlators of ${\mathbb P}^1$. First, we compute them using the presentation given in Section \ref{sec:prez}. We invoke the Equivariance Axiom in order to reduce the number of calculations to nine. This is purely a matter of convenience; the other integrals can be calculated just as easily and have the expected values. The computations were carried out using the algebraic geometry software system Macaulay 2 (\cite{GS}) by entering the presentation for the Chow ring and instructing the program to perform multiplications in this ring. The Macaulay 2 input and output for these calculations are shown in Appendix \ref{sec:m2}. The top codimension ({\em i.e.} degree) piece of the Chow ring is generated by any non-trivial class of that codimension. Thus, once we know the degree of one such class, all integrals can be computed in terms of it. The degrees of many such classes are given in Table \ref{deg4ints}. For our computations in Macaulay 2, it is convenient to use the value \[\int_{{\overline{\mathcal{M}}}}D_1^4=-20\text{.}\] One may wish to avoid dependence on localization by using instead a ``geometrically obvious'' degree. The value \[\int_{{\overline{\mathcal{M}}}}D_2D_1H_1H_2=2\] is appropriate for such a purpose. Indeed, it is rather clear that there are two points in the moduli space that satisfy the conditions these classes impose. They correspond to the following stable maps. \begin{center} \begin{pspicture}(0,-2.5)(4,4) \rput(0,2){$C$} \pnode(0.5,1){a} \pnode(3.5,1){b} \pnode(3,0.5){d} \pnode(3,3.5){c} \pnode(.5,3){f} \pnode(3.5,3){e} \dotnode(1.5,3){z} \dotnode(3,2){y} \ncline{a}{b} \ncline{c}{d} \ncline{e}{f} \uput{5pt}[l](.5,1){1} \uput{5pt}[l](.5,3){1} \uput{5pt}[u](3,3.5){0} \uput{5pt}[u](1.5,3){1} \uput{5pt}[r](3,2){2} \rput(0,-2){${\mathbb P}^1$} \pnode(0.5,-2){g} \pnode(4,-2){h} \dotnode(1.5,-2){x} \dotnode(3,-2){w} \ncline{g}{h} \uput{5pt}[d](3,-2){$p_2$} \uput{5pt}[d](1.5,-2){$p_1$} \pnode(2,0){i} \pnode(2,-1.5){j} \ncline{->}{i}{j} \end{pspicture} \hspace{1in} \begin{pspicture}(0,-2.5)(4,4) \rput(4,2){$C^{\prime}$} \pnode(0.5,1){a} \pnode(3.5,1){b} \pnode(1,0.5){d} \pnode(1,3.5){c} \pnode(.5,3){f} \pnode(3.5,3){e} \dotnode(2.5,3){z} \dotnode(1,2){y} \ncline{a}{b} \ncline{c}{d} \ncline{e}{f} \uput{5pt}[r](3.5,1){1} \uput{5pt}[r](3.5,3){1} \uput{5pt}[u](1,3.5){0} \uput{5pt}[u](2.5,3){2} \uput{5pt}[l](0.5,2){1} \rput(4,-2){${\mathbb P}^1$} \pnode(0,-2){g} \pnode(3.5,-2){h} \dotnode(1,-2){x} \dotnode(2.5,-2){w} \ncline{g}{h} \uput{5pt}[d](2.5,-2){$p_2$} \uput{5pt}[d](1,-2){$p_1$} \pnode(2,0){i} \pnode(2,-1.5){j} \ncline{->}{i}{j} \end{pspicture} \end{center} Note also that these stable maps have no non-trivial automorphisms. We show details for one example, the gravitational correlator $\langle\tau_2H,\tau_1\rangle_{0,2}$.
By definition, \[\langle\tau_2H,\tau_1\rangle_{0,2}=\int_{\overline{\mathcal{M}}} \psi_1^2 H_1 \psi_2\text{.}\] Macaulay 2 reduces the integrand to $\frac{1}{80}D_1^4$. Since $D_1^4$ has degree $-20$, $\langle\tau_2H,\tau_1\rangle_{0,2}=-\frac{1}{4}$. Similar computations result in the values found in Table \ref{gravcor}. (See Appendix \ref{sec:m2}.) \begin{table} \begin{center} \begin{tabular}{|cccclcr|} \hline $\langle\tau_4,1\rangle_{0,2}$ & = & $\langle 1,\tau_4\rangle_{0,2}$ & = & $-20\cdot\frac{3}{80}$ & = & $-\frac{3}{4}$\\ $\langle\tau_3H,1\rangle_{0,2}$ & = & $\langle 1,\tau_3H\rangle_{0,2}$ & = & $-20\cdot-\frac{1}{80}$ & = & $\frac{1}{4}$\\ $\langle\tau_3,H\rangle_{0,2}$ & = & $\langle H,\tau_3\rangle_{0,2}$ & = & $-20\cdot\frac{1}{16}$ & = & $-\frac{5}{4}$ \\ $\langle\tau_3,\tau_1\rangle_{0,2}$ & = & $\langle\tau_1,\tau_3\rangle_{0,2}$ & = & $-20\cdot-\frac{3}{80}$ & = & $\frac{3}{4}$\\ $\langle\tau_2H,\tau_1\rangle_{0,2}$ & = & $\langle\tau_1,\tau_2H\rangle_{0,2}$ & = & $-20\cdot\frac{1}{80}$ & = & $-\frac{1}{4}$\\ $\langle\tau_2H,H\rangle_{0,2}$ & = & $\langle H,\tau_2H\rangle_{0,2}$ & = & $-20\cdot-\frac{1}{40}$ & = & $\frac{1}{2}$\\ & & $\langle\tau_2,\tau_2\rangle_{0,2}$ & = & $-20\cdot-\frac{1}{16}$ & = & $\frac{5}{4}$\\ $\langle\tau_2,\tau_1H\rangle_{0,2}$ & = & $\langle\tau_1H,\tau_2\rangle_{0,2}$ & = & $-20\cdot\frac{3}{80}$ & = & $-\frac{3}{4}$\\ & & $\langle\tau_1H,\tau_1H\rangle_{0,2}$ & = & $-20\cdot-\frac{1}{40}$ & = & $\frac{1}{2}$\\ \hline \end{tabular} \end{center} \caption{Gravitational correlators via the presentation for $A^*({\overline{\mathcal{M}}}_{0,2}({\mathbb P}^1,2))$ \label{gravcor}} \end{table} Now we will verify the values of these gravitational correlators by computing them using previously established methods. We will use the Equivariance Axiom again to reduce the number of computations. First, the following identities are derived in \cite[Chapter 10]{CK} using the axioms and a method attributed to R. Pandharipande: \[\langle\tau_{2d-1}H,1\rangle_{0,d}=\frac{1}{(d!)^2}\] and \[\langle\tau_{2d},1\rangle_{0,d}=\frac{-2}{(d!)^2}\left(1+\frac{1}{2}+\cdots+ \frac{1}{d}\right)\text{.}\] For $d=2$ these give $\langle\tau_3H,1\rangle_{0,2}=\frac{1}{4}$ and $\langle\tau_4,1\rangle_{0,2}=-\frac{3}{4}$. From these, the Fundamental Class Axiom gives $\langle\tau_2H\rangle_{0,2}=\frac{1}{4}$ and $\langle\tau_3\rangle_{0,2}=-\frac{3}{4}$ as well. By the Divisor Axiom, \[\langle\tau_3,H\rangle_{0,2}=2\langle\tau_3\rangle_{0,2}+\langle\tau_2H\rangle_{0,2} =-\frac{5}{4}\text{.}\] Similarly, \[\langle\tau_2H,H\rangle_{0,2}=2\langle\tau_2H\rangle_{0,2}+\langle\tau_1H^2\rangle_{0,2} =\frac{1}{2}\text{,}\] where the second term in the sum is zero because $H^2=0$. By the Dilaton Axiom, \[\langle\tau_1,\tau_3\rangle_{0,2}=-\langle\tau_3\rangle_{0,2} =\frac{3}{4}\text{.}\] Similarly, \[\langle\tau_1,\tau_2H\rangle_{0,2}=-\langle\tau_2H\rangle_{0,2} =-\frac{1}{4}\text{.}\] For the remaining calculations, we use results of Kontsevich and Manin in \cite{KM2}. Let $X$ be a smooth projective variety, and let $\langle{\Delta}_a\rangle$ and $\langle{\Delta}^a\rangle$ be Poincar\'{e} dual bases of $H^*(X)$.
\begin{prop}[Kontsevich-Manin]\label{wellknownid} For $g=0$, $n=3$, and $d_1\geq 1$, we have \[\langle\tau_{d_1}\gamma_1,\tau_{d_2}\gamma_2,\tau_{d_3}\gamma_3\rangle_{0,\beta} =\sum_{\beta_1+\beta_2=\beta,a} \langle\tau_{d_1-1}\gamma_1,{\Delta}_a\rangle_{0,\beta_1} \langle{\Delta}^a,\tau_{d_2}\gamma_2,\tau_{d_3}\gamma_3\rangle_{0,\beta_2}\text{.}\] \end{prop} Choose some divisor $\gamma_0$ such that $(\gamma_0,\beta)=\int_\beta \gamma_0\neq 0$. Using the Divisor Axiom, they derive the following identity for genus zero, two-point correlators with $d_1>0$: \begin{eqnarray*} \langle\tau_{d_1}\gamma_1,\tau_{d_2}\gamma_2\rangle_{0,\beta} & = & \frac{1}{(\gamma_0,\beta)}(\langle\gamma_0,\tau_{d_1}\gamma_1,\tau_{d_2}\gamma_2\rangle_{0,\beta}\\ & & -\langle\tau_{d_1-1}(\gamma_0\cup\gamma_1),\tau_{d_2}\gamma_2\rangle_{0,\beta} -\langle\tau_{d_1}\gamma_1,\tau_{d_2-1}(\gamma_0\cup\gamma_2)\rangle_{0,\beta})\text{.} \end{eqnarray*} A gravitational correlator is called {\em primary} if all the $d_i$ are zero, {\em i.e.}, if it is a Gromov-Witten invariant. The identity above can be used repeatedly to reduce to an expression in terms of primary correlators, whose calculations are relatively straightforward. In the last two terms, the sums of the $\tau$ subscripts are already smaller. To reduce the first term, we apply Proposition \ref{wellknownid} together with the Equivariance Axiom to get \[\langle\gamma_0,\tau_{d_1}\gamma_1,\tau_{d_2}\gamma_2\rangle_{0,\beta} =\sum_{\beta_1+\beta_2=\beta,a}\langle\tau_{d_1-1}\gamma_1,{\Delta}_a\rangle_{0,\beta_1} \langle{\Delta}^a,\gamma_0,\tau_{d_2}\gamma_2\rangle_{0,\beta_2}\text{.}\] We take the dual bases $\langle 1,H\rangle$ and $\langle H,1\rangle$ for $H^*({\mathbb P}^1)$, and $\gamma_0=H$ is the obvious choice. Applying the above procedure gives \begin{eqnarray*} \langle\tau_1H,\tau_1H\rangle_{0,2} & = & \frac{1}{2}(\langle H,\tau_1H,\tau_1H\rangle_{0,2} -\langle H^2,\tau_1H\rangle_{0,2}-\langle \tau_1H,H^2\rangle_{0,2})\\ & = & \frac{1}{2}\sum_{d_1+d_2=2,a}\langle H,{\Delta}_a\rangle_{0,d_1}\langle{\Delta}^a,H,\tau_1 H \rangle_{0,d_2}-0-0\\ & = & \frac{1}{2}(\langle H,H\rangle_{0,1}\langle 1,H,\tau_1H\rangle_{0,1})\\ & = & \frac{1}{2}\cdot 1\cdot\langle H,H\rangle_{0,1}\\ & = & \frac{1}{2}\text{,} \end{eqnarray*} where we used the Fundamental Class Axiom in going from the third line to the fourth. Notice also that the Degree Axiom substantially limits the number of terms in the sum that can be nonzero. Finally, $\langle H,H\rangle_{0,1}=1$ is the degree of the class of a point in ${\mathbb P}^1\times{\mathbb P}^1$. We will use the same procedure to calculate $\langle\tau_1H,\tau_2\rangle_{0,2}$ and $\langle\tau_2,\tau_2\rangle_{0,2}$. 
We get \begin{eqnarray*} \langle\tau_1H,\tau_2\rangle_{0,2} & = & \frac{1}{2}(\langle H,\tau_1H,\tau_2\rangle_{0,2} -\langle H^2,\tau_2\rangle_{0,2}-\langle \tau_1H,\tau_1H\rangle_{0,2})\\ & = & \frac{1}{2}\sum_{d_1+d_2=2,a}\langle H,{\Delta}_a\rangle_{0,d_1}\langle{\Delta}^a,H,\tau_2 \rangle_{0,d_2}-0-\frac{1}{4}\\ & = & \frac{1}{2}\langle H,H\rangle_{0,1}\langle 1,H,\tau_2\rangle_{0,1}-\frac{1}{4}\\ & = & \frac{1}{2}\cdot 1\cdot\langle H,\tau_1\rangle_{0,1}-\frac{1}{4}\\ & = & -\frac{3}{4} \end{eqnarray*} and \begin{eqnarray*} \langle\tau_2,\tau_2\rangle_{0,2} & = & \frac{1}{2}(\langle H,\tau_2,\tau_2\rangle_{0,2} -\langle \tau_1H,\tau_2\rangle_{0,2}-\langle \tau_2,\tau_1H\rangle_{0,2})\\ & = & \frac{1}{2}\sum_{d_1+d_2=2,a}\langle \tau_1,{\Delta}_a\rangle_{0,d_1}\langle{\Delta}^a,H,\tau_2 \rangle_{0,d_2}+\frac{3}{8}+\frac{3}{8}\\ & = & \frac{1}{2}\langle \tau_1,H\rangle_{0,1}\langle 1,H,\tau_2\rangle_{0,1}+\frac{3}{4}\\ & = & \frac{1}{2}\cdot(-1)\cdot(-1)+\frac{3}{4}\\ & = & \frac{5}{4}\text{,} \end{eqnarray*} where the last steps are obtained using the Fundamental Class Axiom, the Dilaton Axiom, and $\langle H\rangle_{0,1}=1$ in each case. (The gravitational correlator $\langle H\rangle_{0,1}$ is just the degree of the class of a point in ${\mathbb P}^1$.) Observe that all of the values computed by these standard methods agree with those in Table \ref{gravcor}.
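The Macaulay 2 reductions used above can also be reproduced in other computer algebra systems. The sketch below (illustrative only, and not the Macaulay 2 input of Appendix \ref{sec:m2}; it assumes SymPy's groebner module) reduces the integrand of the worked example and the reference class $D_1^4$ modulo the ideal of Theorem \ref{thm:prez}; since the degree four part of the ring is one-dimensional, the two normal forms are proportional, and the printed ratio recovers the factor $\frac{1}{80}$:
\begin{verbatim}
from sympy import symbols, groebner, Rational, simplify

D0, D1, D2, H1, H2, p1, p2 = symbols('D0 D1 D2 H1 H2 psi1 psi2')
gens = (D0, D1, D2, H1, H2, p1, p2)
rels = [H1**2, H2**2, D0*p1, D0*p2, D2 - p1 - p2,
        p1 - Rational(1, 4)*D1 - Rational(1, 4)*D2 - D0 + H1,
        p2 - Rational(1, 4)*D1 - Rational(1, 4)*D2 - D0 + H2,
        (D1 + D2)**3, D1*p1*p2]
G = groebner(rels, *gens, order='grevlex')

nf_int = G.reduce(p1**2 * H1 * p2)[1]   # normal form of psi1^2 H1 psi2
nf_ref = G.reduce(D1**4)[1]             # normal form of D1^4
print(simplify(nf_int / nf_ref))        # 1/80, so the value is -20/80 = -1/4
\end{verbatim}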
\section{Introduction} When constructing a stochastic mathematical model, a certain problem arises: how to introduce a stochastic term which is interpreted not as an external random impact on the system, but is directly related to the system's structure. In order to construct such a model, we consider the processes occurring in the system as one-step Markov processes. This approach allows us to obtain stochastic differential equations with consistent stochastic and deterministic parts, since both parts are derived from the same equation. The theory of stochastic differential equations allows a qualitative analysis of the solutions of these equations. Runge--Kutta methods are used to obtain numerical solutions of the stochastic differential equations and to illustrate the results. In previous studies, the authors developed a method for constructing mathematical models based on one-step stochastic processes, which describes a wide class of phenomena~\cite{L_lit13, L_lit10::en}. This method gave good results for population dynamics models~\cite{L_lit14::en, L_lit12::en, L_lit11::en}. It is also applicable to technical problems such as the simulation of peer-to-peer networks, in particular of the FastTrack and BitTorrent protocols~\cite{kulyabov:2013:conference:mephi::en}. In this paper we apply this method to construct models of the FastTrack and BitTorrent protocols and to study the influence of the stochastic term on the deterministic model. \section{Notations and conventions} \label{sec:2} \begin{enumerate} \item In this paper the notation of abstract indices is used~\cite{penrose-rindler-1987::en}. In this notation a tensor as a whole object is denoted simply by an index (e.g., $x^{i}$), while its components are denoted by an underlined index (e.g., $x^{\crd{i}}$). \item Latin indices from the middle of the alphabet (e.g., $i$, $j$, $k$) refer to the space of system states. Latin indices from the beginning of the alphabet (e.g., $a$) refer to the space of the Wiener process. Latin indices from the end of the alphabet (e.g., $p$, $q$) are the indices of the Runge--Kutta methods. Greek indices (e.g., $\alpha$) enumerate the different kinds of interactions in the kinetic equations. \item A dot over a symbol (e.g., $\dot{x}$) denotes differentiation with respect to time. \item A comma in an index denotes the partial derivative with respect to the corresponding coordinate. \end{enumerate} \section{One-step processes modeling} \label{sec:onestep} By one-step processes we understand Markov processes with continuous time, taking values in the integers, whose transition matrix allows only transitions between neighboring states. These processes are also known as birth--death processes. The state of the system is described by the state vector $x^{i} \in \mathbb{R}^n$, where $n$ is the dimension of the system. The idea of the method is as follows. For the system under study we write down an interaction scheme, i.e., a symbolic record of all possible interactions between the elements of the system, showing how many elements of which types enter each kind of interaction and what the result of the interaction is. For this purpose the system state operators are used. The operator $n^{i \alpha}_{j} \in \mathbb{Z}^{n}_{{}\geqslant 0} \times \mathbb{Z}^{n}_{{}\geqslant 0} \times \mathbb{Z}^{s}_{0}$ sets the state of the system before the interaction, and the operator $m^{i \alpha}_{j} \in \mathbb{Z}^{n}_{{}\geqslant 0} \times \mathbb{Z}^{n}_{{}\geqslant 0} \times \mathbb{Z}^{s}_{0}$ sets the state after the interaction.
It is also assumed that $s$ kinds of different interactions may occur in the system, where $s\in \mathbb{Z}_{+}$. As a result of an interaction the system switches to the state $x^{i} + r^{i \crd{\alpha}}_{j} x^{j}$ or $x^{i} - r^{i \crd{\alpha}}_{j} x^{j}$, where $r_j^{i \alpha} = m_j^{i \alpha} -n_j^{i \alpha}$ is the operator of the change of the system state. Let us introduce the probabilities of transition from the state $x^{i}$ to the state $x^{i} + r^{i \crd{\alpha}}_{j} x^{j}$ (respectively, to the state $x^{i} - r^{i \crd{\alpha}}_{j} x^{j}$). These transition probabilities are assumed to be proportional to the number of possible interactions between the elements. Based on the interaction scheme and the transition probabilities, we write down the master equation and expand it in a series, keeping only the terms up to the second derivative inclusive. The resulting equation is the Fokker--Planck equation, which has the form: \begin{equation} \label{eq:FP} \frac{\partial p}{\partial t} = - \partial_{i} \left[ A^{i} p \right] + \frac{1}{2} \partial_{i} \partial_{j} \left[ B^{i j}p \right], \end{equation} where \begin{equation} \label{eq:kFP} \begin{gathered} A^{i} := A^{i}(x^{k}, t) = r^{i \crd{\alpha}} \left[ s^+_{\crd{\alpha}} - s^-_{\crd{\alpha}} \right], \\ B^{i j} := B^{i j}(x^{k},t) = r^{i \crd{\alpha}} r^{j \crd{\alpha}} \left[ s^+_{\crd{\alpha}} - s^-_{\crd{\alpha}} \right]. \end{gathered} \end{equation} Here $p := p(x^{i},t)$ is the density function of the random variable $x^{i}$, $A^{i}$ is the drift vector, and $B^{i j}$ is the diffusion matrix. As is evident from \eqref{eq:kFP}, the coefficients of the Fokker--Planck equation can be obtained immediately from the interaction scheme and the transition probabilities; that is, for practical calculations there is no need to write out the master equation itself. To obtain a more familiar form of the model, we write down the corresponding Langevin equation: \begin{equation} \label{eq:langevin} \d x^{i} = a^{i} \d t + b^i_{a} \d W^{a}, \end{equation} where $a^{i} := a^{i} (x^k, t)$, $b^{i}_{a} := b^{i}_{a} (x^k, t)$, $x^i \in \mathbb{R}^n $, and $W^{a} \in \mathbb{R}^m$ is an $m$-dimensional Wiener process. The Wiener process is realized as $\d W = \varepsilon \sqrt{\d t}$, where $\varepsilon \sim N(0,1)$ is a normal random variable with mean $0$ and variance $1$. The relationship between the equations \eqref{eq:FP} and \eqref{eq:langevin} is expressed by the following: \begin{equation} \label{eq:k-langevin} A^{i} = a^{i}, \qquad B^{i j} = b^{i}_{a} b^{j a}. \end{equation} Thus, a stochastic differential equation describing the system can be derived from general considerations. This equation consists of two parts, one of which describes the deterministic behaviour of the system and the other the stochastic one. Moreover, the two parts are consistent with each other, since they are derived from the same equation (Figure~\ref{fig:met}). \begin{figure} \centering \includegraphics[width=\linewidth]{met} \caption{Method's diagram} \label{fig:met} \end{figure}
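The passage from an interaction scheme to the coefficients \eqref{eq:kFP} is purely mechanical. The following sketch (an illustration only, assuming NumPy; the birth--death scheme and its rates are invented for the example, and $s^-_{\crd{\alpha}}=0$) assembles the drift vector and the diffusion matrix from the matrix $r^{i\crd{\alpha}}$ and the rate vector $s^+_{\crd{\alpha}}(x)$:
\begin{verbatim}
import numpy as np

# toy birth-death scheme: 0 -> X with rate lam, X -> 0 with rate mu*x
r = np.array([[ 1],    # alpha = 1: birth, x -> x + 1
              [-1]])   # alpha = 2: death, x -> x - 1

def s_plus(x, lam, mu):
    return np.array([lam, mu * x[0]])

def fokker_planck_coefficients(x, lam, mu):
    s = s_plus(x, lam, mu)
    A = r.T @ s           # A^i  = r^{i alpha} s^+_alpha
    B = (r.T * s) @ r     # B^ij = r^{i alpha} r^{j alpha} s^+_alpha
    return A, B

print(fokker_planck_coefficients(np.array([10.0]), 2.0, 0.3))
# A = [lam - mu*x] = [-1.0],  B = [[lam + mu*x]] = [[5.0]]
\end{verbatim}
The same two lines of linear algebra yield $A^i$ and $B^{ij}$ for each of the schemes considered below.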
\section{FastTrack protocol} FastTrack is a peer-to-peer (P2P) network protocol for cooperative file sharing over the Internet. Files are downloaded only from sources that possess the complete file. FastTrack was originally implemented in the KaZaA software. A network based on the FastTrack protocol has a decentralized topology, which makes it very reliable. All FastTrack users are divided into two classes: supernodes and ordinary nodes. Designating the supernodes is one of the functions of the protocol; the nodes chosen for this role are those with a fast network connection, high bandwidth, and the capability for fast data processing. The users themselves do not know that their computer has been designated as a supernode. To download a file, a node sends a request to a supernode, which in turn communicates with other nodes, and so on. In this way the request propagates up to a level of the network determined by the protocol, called the time to live (TTL) of the request. Once the desired file is found, it is transferred directly from the node possessing it to the node that requested it, bypassing the supernode~\cite{ft1, ft2}. \subsection{FastTrack modeling} Assume that the file consists of one part. Thus, during one interaction, a node wishing to download the file can download it entirely; when the download is completed, the node becomes a seed (a node distributing the file). Let $N$ denote a new node, $L$ a seed, and $\beta$ the interaction coefficient. New nodes appear in the system with intensity $\lambda$, and seeds leave it with intensity $\mu$. Then the interaction scheme and the vectors $\mathbf{r}$ are: \begin{equation} \label{ft:1} \begin{cases} 0 \xrightarrow{\lambda } N, & r^{\crd{i}1}=(1,0) \\ N+L \xrightarrow{\beta } 2L, & r^{\crd{i}2}=(-1,1)\\ L \xrightarrow{\mu} 0, & r^{\crd{i}3}=(0,-1). \end{cases} \end{equation} The first line in the scheme describes the appearance of a new client in the system. The second line reflects the interaction of a new client with a seed, after which a new seed appears. The third line indicates the departure of a seed from the system. Let us introduce the transition probabilities: \begin{equation} \label{ft:2} \begin{gathered} s^{+}_1 (n,l) = \lambda \\ s^{+}_2 (n,l) = \beta nl \\ s^{+}_3 (n,l) = \mu l. \end{gathered} \end{equation} It is now possible to write out the Fokker--Planck equation for our model: \begin{equation} \label{ft:3} \frac{\partial p(n,l)}{\partial t} = -{\partial_i} (A^i(n,l) p(n,l)) + \frac{1}{2} {\partial_i \partial_j} (B^{ij}(n,l) p(n,l)), \end{equation} where the drift vector and the diffusion matrix are as follows: \begin{equation} \begin{gathered} A^i := A^i(x^k,t)= r^{i\crd{\alpha}}s^+_{\crd{\alpha}} (n,l) ,\\ B^{ij} := B^{ij}(x^k,t) = r^{i\crd{\alpha}}r^{j\crd{\alpha}} s^+_{\crd{\alpha}} (n,l), \qquad \crd{\alpha}=1,2,3. \end{gathered} \end{equation} Finally, we obtain: \begin{equation} \label{ft:4} \begin{gathered} \mathbf A = \begin{pmatrix} 1\\ 0 \end{pmatrix} \lambda + \begin{pmatrix} -1\\ 1 \end{pmatrix} \beta n l + \begin{pmatrix} 0\\ -1 \end{pmatrix} \mu l = \begin{pmatrix} \lambda - \beta n l\\ \beta n l - \mu l \end{pmatrix}, \\ \begin{multlined} \mathbf B = \begin{pmatrix} 1\\ 0 \end{pmatrix} (1,0) \lambda + \begin{pmatrix} -1\\ 1 \end{pmatrix} (-1,1) \beta n l + \begin{pmatrix} 0\\ -1 \end{pmatrix} (0,-1) \mu l = \\ = \begin{pmatrix} \lambda + \beta n l & - \beta n l \\ - \beta n l & \beta n l + \mu l \end{pmatrix}. \end{multlined} \end{gathered} \end{equation} The stochastic differential equation in Langevin form is then obtained via \eqref{eq:k-langevin}. \subsection{Deterministic behavior} Since the drift vector $A$ completely describes the deterministic behavior of the system, we can write down the system of ordinary differential equations describing the dynamics of the numbers of new nodes and seeds: \begin{equation} \label{ft:5} \left \{ \begin{aligned} \frac{dn}{d t}&= \lambda - \beta n l\\ \frac{dl}{d t}&= \beta n l - \mu l \end{aligned} \right. \end{equation}
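Before analysing \eqref{ft:5} qualitatively, one may simply integrate it numerically. A minimal sketch (illustrative only, assuming SciPy; the parameter values $\lambda=100$, $\beta=0.01$, $\mu=1$ are invented for the example):
\begin{verbatim}
from scipy.integrate import solve_ivp

lam, beta, mu = 100.0, 0.01, 1.0   # illustrative: beta*lam < 4*mu^2

def rhs(t, x):
    n, l = x
    return [lam - beta*n*l, beta*n*l - mu*l]

sol = solve_ivp(rhs, (0.0, 60.0), [20.0, 20.0])
print(sol.y[:, -1])   # approaches the steady state (mu/beta, lam/mu) = (100, 100)
\end{verbatim}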
\subsubsection{Steady states} Let us find the steady states of the system~\eqref{ft:5} from the following system of equations: \begin{equation} \label{ft:6} \left \{ \begin{aligned} \lambda - \beta n l &=0\\ \beta n l - \mu l &=0 \end{aligned} \right. \end{equation} The system~\eqref{ft:5} has a unique steady state: \begin{equation} (\bar{n},\bar{l})= \left ( \frac{\mu }{\beta }, \frac{\lambda }{\mu } \right ). \end{equation} To linearize the system~\eqref{ft:5}, let $n=\bar{n} + \xi$, $l=\bar{l} + \eta$, where $\bar{n}$ and $\bar{l}$ are the coordinates of the steady state, and $\xi$ and $\eta$ are small perturbations: \begin{equation} \label{ft:7} \left\{ \begin{aligned} \frac{d\xi }{d t}&=-\beta \bar{n} \eta- \beta \bar{l} \xi \\ \frac{d\eta }{d t}&=\beta \bar{n} \eta + \beta \bar{l} \xi - \mu \eta \end{aligned} \right. \end{equation} In the neighborhood of the equilibrium point the linearized system takes the form: \begin{equation} \label{ft:8} \left\{ \begin{aligned} \frac{d\xi }{d t}&= -\frac{\beta \lambda }{\mu}\xi - \mu \eta \\ \frac{d\eta }{d t}&= \frac{\beta \lambda }{\mu}\xi \end{aligned} \right. \end{equation} The characteristic equation is \begin{equation} \label{ft:9} s^2+\frac{\beta \lambda }{\mu} s + \beta \lambda =0, \end{equation} with roots \begin{equation} \label{ft:10} s_{1,2}= \frac{1}{2} \left( -\frac{\beta \lambda }{\mu} \pm \sqrt{ \left( \frac{\beta \lambda }{\mu} \right)^2 - 4 \beta \lambda} \right). \end{equation} Thus, depending on the choice of parameters, the critical point is of different types. When $\beta\lambda < 4\mu^2$ the critical point is a stable focus, and in the opposite case a stable node. In both cases the singular point is stable, since the real parts of the roots are negative. Hence, depending on the choice of coefficients, the variables approach their stationary values along one of two kinds of trajectories. If the critical point is a focus, damped oscillations of the numbers of nodes and seeds occur (Fig.~\ref{fig:ft1}); if it is a node, the trajectories are non-oscillating (Fig.~\ref{fig:ft2}). Phase portraits of the system for the two cases are plotted in Figs.~\ref{fig:ft3} and~\ref{fig:ft4}, respectively. \begin{figure} \centering \includegraphics[width=\linewidth]{1} \caption{The time dependence of the numbers of nodes and seeds in the FastTrack network for the deterministic case $\beta \lambda < 4\mu^2$.} \label{fig:ft1} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{2} \caption{The time dependence of the numbers of nodes and seeds in the FastTrack network for the deterministic case $\beta \lambda > 4\mu^2$.} \label{fig:ft2} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{3} \caption{Phase portraits of the deterministic FastTrack system with various deviations $(\Delta x, \Delta y)$ from the stationary point for $\beta \lambda < 4\mu^2$.} \label{fig:ft3} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{4} \caption{Phase portraits of the deterministic FastTrack system with various deviations $(\Delta x, \Delta y)$ from the stationary point for $\beta \lambda > 4\mu^2$.} \label{fig:ft4} \end{figure} \subsubsection{Numerical simulation of the stochastic model} To illustrate the results obtained, numerical modelling of the stochastic differential equation in Langevin form was performed. The extension of the Runge--Kutta methods to stochastic differential equations was applied~\cite{L_lit04, L_lit01}, and a Fortran program implementing this extension was written. The results are presented in Figures~\ref{fig:sft_graph}, \ref{fig:ft5} and~\ref{fig:all_sft}. They clearly indicate that a small stochastic term does not substantially affect the behaviour of the system in the neighbourhood of the stationary point: the influence of the stochastic term is visible only in the early evolution of the system. After a relatively short period of time the system enters the steady-state regime and differs little from the deterministic case.
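The authors' Fortran implementation is not reproduced here. For orientation only, the following minimal sketch integrates the Langevin equation \eqref{eq:langevin} with the FastTrack coefficients \eqref{ft:4}, taking $b$ as a Cholesky factor of $B$ (so that $b^i_a b^{ja}=B^{ij}$); it uses the simpler Euler--Maruyama scheme rather than the stochastic Runge--Kutta methods cited above, assumes NumPy, and the parameter values are invented:
\begin{verbatim}
import numpy as np

lam, beta, mu = 100.0, 0.01, 1.0
rng = np.random.default_rng(0)

def drift(x):
    n, l = x
    return np.array([lam - beta*n*l, beta*n*l - mu*l])

def diffusion(x):                  # any b with b b^T = B works
    n, l = x
    B = np.array([[lam + beta*n*l, -beta*n*l],
                  [-beta*n*l, beta*n*l + mu*l]])
    return np.linalg.cholesky(B)

x, dt = np.array([90.0, 110.0]), 1e-3
for _ in range(50_000):
    dW = rng.normal(0.0, np.sqrt(dt), size=2)
    x = x + drift(x)*dt + diffusion(x) @ dW
print(x)   # fluctuates near the stationary point (100, 100)
\end{verbatim}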
\textbf{Conclusions.} The results obtained indicate that the introduction of stochasticity has little effect on the behaviour of the system in the steady-state regime, so the deterministic model provides appropriate results there. Furthermore, the proposed method extends the set of tools that can be used for the analysis: it becomes possible to use the ordinary stochastic differential equation (Langevin) and the partial differential equation (Fokker--Planck) simultaneously. Moreover, as the above example indicates, in some cases the deterministic approach defined by the drift vector is sufficient. \begin{figure} \centering \includegraphics[width=\linewidth]{sft_graph} \caption{The time dependence of the numbers of new nodes and seeds in the FastTrack network for the stochastic case.} \label{fig:sft_graph} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{5} \caption{Phase trajectories of the stochastic FastTrack model with various deviations $(\Delta x, \Delta y)$ from the stationary point for $\beta \lambda > 4\mu^2$.} \label{fig:ft5} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{all_sft} \caption{Phase trajectories of the stochastic FastTrack model with various deviations $(\Delta x, \Delta y)$ from the stationary point for $\beta \lambda > 4\mu^2$.} \label{fig:all_sft} \end{figure} \section{BitTorrent protocol} BitTorrent is a P2P network protocol for file sharing over the Internet. Files are transferred in chunks. Each torrent client simultaneously downloads the chunks it needs from one node and uploads the chunks it already has to another node. This makes the BitTorrent protocol more flexible than the FastTrack one. \subsection{Modeling} First, we consider a simplified model of a closed system, in which the total number of clients is constant. Furthermore, we assume that the file consists of one chunk; thus a leecher downloads the whole file during a single time step and then becomes a seeder. Let $N$ denote a new leecher, $C$ a seeder, and $\beta$ the interaction coefficient. Then the interaction scheme is: \begin{equation} \label{bt:1} N+C \xrightarrow{\beta } 2C, \qquad r^{\crd{i}1}=(-1,1). \end{equation} The scheme reflects that after a leecher interacts with a seeder, the leecher disappears and another seeder appears. Next, let $n$ be the number of new nodes and $c$ the number of seeders in the system. The transition probability is: \begin{equation} \label{bt:2} s^{+} (n,c) = \beta nc.
\end{equation} The Fokker--Planck equation for this model is \begin{equation} \label{bt:3} \frac{\partial p(n,c)}{\partial t} = -{\partial_i} (A^i(n,c) p(n,c)) +\frac{1}{2} {\partial_i \partial_j} (B^{ij}(n,c) p(n,c)), \end{equation} where the drift vector and the diffusion matrix are \begin{equation} \begin{gathered} A^i(n,c)= r^{i\crd{\alpha}}s^+_{\crd{\alpha}} (n,c) ,\\ B^{ij}(n,c) = r^{i\crd{\alpha}}r^{j\crd{\alpha}} s^+_{\crd{\alpha}} (n,c). \end{gathered} \end{equation} Thus, we obtain: \begin{equation} \label{bt:4} \begin{gathered} \mathbf A = \begin{pmatrix} -1\\ 1 \end{pmatrix} \beta n c = \begin{pmatrix} - \beta n c\\ \beta n c \end{pmatrix}, \\ \mathbf B = \begin{pmatrix} -1\\ 1 \end{pmatrix} (-1,1) \beta n c = \begin{pmatrix} \beta n c & - \beta n c \\ - \beta n c& \beta n c \end{pmatrix}. \end{gathered} \end{equation} The stochastic differential equation in Langevin form is again obtained via \eqref{eq:k-langevin}. It is also possible to write out the differential equations describing the deterministic behaviour of the system: \begin{equation} \label{bt:5} \left \{ \begin{aligned} \frac{dn}{d t}&= - \beta n c\\ \frac{dc}{d t}&= \beta n c \end{aligned} \right. \end{equation} Next, we consider an open system, in which new clients appear with intensity $\lambda $ and seeders leave the system with intensity $\mu $. Now the interaction scheme has the form: \begin{equation} \label{bt:6} \begin{aligned} 0 \xrightarrow{\lambda } N, & r^{\crd{i}1}=(1,0),\\ N+C \xrightarrow{\beta } 2C, & r^{\crd{i}2}=(-1,1),\\ C \xrightarrow{\mu } 0, & r^{\crd{i}3}=(0,-1). \end{aligned} \end{equation} The first line of the scheme describes the appearance of a new peer in the system; the second line describes the interaction of a new peer with a seeder, after which a new seeder appears; and the third line describes a seeder leaving the system. Let $n$ denote the number of new clients and $c$ the number of seeders in the system. Up to notation, this system is equivalent to the FastTrack model. Now consider a system in which the downloaded file consists of $m$ chunks. The system consists of: \begin{itemize} \item Peers ($N$) are the clients without any chunk of the file. \item Leechers ($L$) are the clients who have already downloaded some chunks of the file and can share them with new peers or other leechers. \item Seeders ($C$) are the clients who have the whole file and can only share it. \end{itemize} In addition, $n$ is the number of new peers, $c$ is the number of seeders in the system, and $l_i$ is the number of leechers possessing exactly $i$ chunks of the file, where $i = \overline{1, m-1}$. Also, let $\bar{L}_i$ denote the leechers possessing a chunk of the file that is of interest to a leecher of class $L_i$, and let $\bar{l}_i$ be their number. For this scheme the following kinds of relations can be written out: \begin{equation} \label{bt:7} \begin{aligned} 0 \xrightarrow{\lambda } & N, \\ N+C \xrightarrow{\beta } & L_1+C, \\ N+L_i \xrightarrow{\beta_i } & L_1+L_i, \\ L_i + \bar{L}_i \xrightarrow{\delta_i } & L_{i+1}+\bar{L}_i, \\ L_i + C \xrightarrow{\gamma_i } & L_{i+1}+C, \\ L_{m-1} + \bar{L}_{m-1} \xrightarrow{\gamma_{m-1} } & C+\bar{L}_{m-1}, \\ L_{m-1} + C \xrightarrow{\gamma } & 2C, \\ C \xrightarrow{\mu } & 0. \end{aligned} \end{equation} On every interaction step one chunk of the file is transferred from one peer to another.
The first relation describes the appearance of a new peer in the system with intensity $\lambda$. The second and third relations describe the interaction of a new peer with a seeder or with a leecher, with interaction coefficients $\beta$ and $\beta_i$, $i=\overline{1, m-1}$; as a result of the interaction the peer becomes a leecher of class $L_1$. The fourth and fifth relations describe the interaction of a leecher $L_i$ with other leechers and with a seeder, with coefficients $\delta_i$ and $\gamma_i$, $i=\overline{1, m-2}$; as a result of this interaction the leecher obtains one more chunk of the file and becomes a leecher of class $L_{i+1}$. The sixth and seventh relations describe the transformation of a leecher into a seeder (the leecher downloads the last chunk of the file) with coefficients $\gamma_{m-1}$ and $\gamma$. The last relation describes the departure of a seeder from the system with intensity $\mu$. In the coordinates $(n,l_1,l_2,\ldots,l_{m-1},c)$ the vectors $r^{i\crd{\alpha}}$ and the transition probabilities $s^+_{\crd{\alpha}}$ are: \begin{equation} \label{bt:8} \begin{gathered} r^{1} =(1,0,0,...,0), \\ r^{2} =r_i^3=(-1,1,0,...,0), \quad i=\overline{1, m-1}, \\ r_i^4 =r_i^5=(0,...,-1,1,...,0), \quad i=\overline{1, m-2}, \\ r^{6} =r^7=(0,0,...,-1,1), \\ r^{8} =(0,0,...,-1). \end{gathered} \end{equation} \begin{equation} \label{bt:9} \begin{gathered} s^{+}_1 =\lambda, \\ s^{+}_2 =\beta n c, \\ s^{+}_{3i} =\beta_i n l_i, \quad i=\overline{1, m-1},\\ s^{+}_{4i} =\delta_i l_i \bar{l}_i, \quad i=\overline{1, m-2},\\ s^{+}_{5i} =\gamma_i l_i c, \quad i=\overline{1, m-2},\\ s^{+}_{6} =\gamma_{m-1} l_{m-1} \bar{l}_{m-1}, \\ s^{+}_{7} =\gamma l_{m-1} c, \\ s^{+}_{8} =\mu c. \end{gathered} \end{equation} For this model, as before, we could write out the Fokker--Planck equation; but to describe the deterministic behaviour it is enough to write out the drift vector $A$: \begin{equation} \label{bt:10} \mathbf A = \begin{pmatrix} \lambda - \beta n c - \sum_{i=1}^{m-1} \beta_i n l_i \\ \beta n c + \sum_{i=1}^{m-1} \beta_i n l_i -\delta_1 l_1 \bar{l}_1 - \gamma_1 l_1 c \\ \delta_1 l_1 \bar{l}_1 + \gamma_1 l_1 c - \delta_2 l_2 \bar{l}_2 - \gamma_2 l_2 c \\ \ldots \\ \begin{multlined} \delta_{m-2} l_{m-2} \bar{l}_{m-2} + \gamma_{m-2} l_{m-2} c - {} \\ {} - \gamma_{m-1} l_{m-1} \bar{l}_{m-1} - \gamma l_{m-1} c \end{multlined} \\ \gamma_{m-1} l_{m-1} \bar{l}_{m-1} + \gamma l_{m-1} c - \mu c \end{pmatrix}. \end{equation} As a result, we obtain a system of differential equations describing the dynamics of new peers, leechers, and seeders: \begin{equation} \label{bt:11} \left \{ \begin{gathered} \frac{d n}{d t} = \lambda - \beta n c - \sum_{i=1}^{m-1} \beta_i n l_i, \\ \frac{d l_1}{d t}= \beta n c + \sum_{i=1}^{m-1} \beta_i n l_i -\delta_1 l_1 \bar{l}_1 - \gamma_1 l_1 c, \\ \frac{d l_2}{d t}= \delta_1 l_1 \bar{l}_1 + \gamma_1 l_1 c - \delta_2 l_2 \bar{l}_2 - \gamma_2 l_2 c, \\ \ldots \\ \begin{multlined} \frac{d l_{m-1}}{d t}= \delta_{m-2} l_{m-2} \bar{l}_{m-2} + \gamma_{m-2} l_{m-2} c - {} \\ {} - \gamma_{m-1} l_{m-1} \bar{l}_{m-1} - \gamma l_{m-1} c, \end{multlined} \\ \frac{d c}{d t}= \gamma_{m-1} l_{m-1} \bar{l}_{m-1} + \gamma l_{m-1} c - \mu c. \end{gathered} \right. \end{equation} Let us now suppose that all the coefficients of interaction with new peers coincide: $\beta_1=\beta_2=\dots=\beta_{m-1}=\beta$. Summing the equations of the system from the second one to the last and denoting the total number of leechers by $l = l_1 + l_2 + \dots + l_{m-1}$, we observe that the internal transfer terms (with coefficients $\delta_i$, $\gamma_i$, $\gamma_{m-1}$, and $\gamma$) cancel pairwise, and the system simplifies to: \begin{equation} \label{bt:12} \left \{ \begin{aligned} \frac{d n}{d t}&= \lambda - \beta n (l+c), \\ \frac{d (l+c)}{d t}&= \beta n (l+c) - \mu c. \end{aligned} \right. \end{equation}
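The simplification \eqref{bt:12} can be verified symbolically for any fixed $m$. A sketch for $m=4$ (illustrative only, assuming SymPy; the symbol names are invented for the check):
\begin{verbatim}
from sympy import symbols, simplify

m = 4
n, c, lam, beta, mu, gam = symbols('n c lambda beta mu gamma', positive=True)
l = list(symbols(f'l1:{m}', positive=True))        # l_1 .. l_{m-1}
lb = list(symbols(f'lbar1:{m}', positive=True))    # lbar_1 .. lbar_{m-1}
delta = list(symbols(f'delta1:{m-1}', positive=True))
gamma = list(symbols(f'gamma1:{m}', positive=True))

# drift components of (bt:11) with beta_i = beta for every i
inflow = beta*n*c + sum(beta*n*li for li in l)
dn = lam - inflow
dl = []
for i in range(m - 1):
    if i < m - 2:
        out = delta[i]*l[i]*lb[i] + gamma[i]*l[i]*c
    else:                                  # last leecher class L_{m-1}
        out = gamma[m-2]*l[i]*lb[i] + gam*l[i]*c
    dl.append(inflow - out)
    inflow = out
dc = inflow - mu*c

# the aggregated equations of (bt:12), with l = l_1 + ... + l_{m-1}
print(simplify(dn - (lam - beta*n*(sum(l) + c))))              # 0
print(simplify(sum(dl) + dc - (beta*n*(sum(l) + c) - mu*c)))   # 0
\end{verbatim}
The transfer terms telescope in the sum, so only the inflow of new peers and the outflow of seeders survive.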
\section{Conclusion} \begin{enumerate} \item In this paper a method for constructing stochastic models based on one-step stochastic processes is described. The proposed method provides a universal algorithm for deriving stochastic differential equations for such systems. It is also shown that there are two ways of describing a stochastic system: by a partial differential equation (Fokker--Planck) and by ordinary stochastic differential equations (Langevin). \item In order to study the influence of the stochastic term, the FastTrack and BitTorrent protocol models were discussed. The results of this study indicate that near the stationary points the stochastic influence is minimal, so the deterministic model gives very good results there. In addition, as was shown by the above example, in some cases it is enough to study the deterministic approximation of the system, which is described by the drift vector. \end{enumerate} \bibliographystyle{abbrvnat}
\item A dot over a symbol denotes differentiation with respect to time. \item A comma in an index denotes the partial derivative with respect to the corresponding coordinate. \end{enumerate} \section{Modelling one-step processes} \label{sec:onestep} By one-step processes we mean continuous-time Markov processes taking values in the integers, whose transition matrix admits only transitions between neighbouring states. These processes are also known as birth--death processes. The idea of the method is as follows. For the system under study, whose state is described by a state vector $x^{i} \in \mathbb{R}^n$, where $n$ is the dimension of the system, one writes down an interaction scheme, i.e., a symbolic record of all admissible interactions between the elements of the system, showing how many elements of which kind enter an interaction of each type and what results from it. For this purpose the system state operators are used. The operator $n^{i \alpha}_{j} \in \mathbb{Z}^{n}_{{}\geqslant 0} \times \mathbb{Z}^{n}_{{}\geqslant 0} \times \mathbb{Z}^{s}_{0}$ specifies the state of the system before the interaction, and the operator $m^{i \alpha}_{j} \in \mathbb{Z}^{n}_{{}\geqslant 0} \times \mathbb{Z}^{n}_{{}\geqslant 0} \times \mathbb{Z}^{s}_{0}$ the state after it. It is assumed that $s$ kinds of interactions may occur in the system, where $s\in \mathbb{Z}_{+}$, and as a result of an interaction the system passes to the state $x^{i} \rightarrow x^i + r^{i \crd{\alpha}}_{j} x^{j}$ or $x^{i} \rightarrow x^{i} - r^{i \crd{\alpha}}_{j} x^{j}$, where $r_j^{i \alpha} = m_j^{i \alpha} -n_j^{i \alpha}$ is the operator of the change of the system state. Next one writes down the probabilities of the transitions from the state $x^{i}$ to the state $x^{i} + r^{i \crd{\alpha}}_{j} x^{j}$ (to the state $x^{i} - r^{i \crd{\alpha}}_{j} x^{j}$), which are assumed to be proportional to the number of possible interactions between the elements. From the interaction scheme and the transition probabilities we construct the master equation and expand it in a series, keeping only terms up to the second derivative inclusive. The resulting equation is the Fokker--Planck equation, which has the form: \begin{equation} \label{eq:FP} \frac{\partial p}{\partial t} = - \partial_{i} \left[ A^{i} p \right] + \frac{1}{2} \partial_{i} \partial_{j} \left[ B^{i j}p \right], \end{equation} where \begin{equation} \label{eq:kFP} \begin{gathered} A^{i} := A^{i}(x^{k}, t) = r^{i \crd{\alpha}} \left[ s^+_{\crd{\alpha}} - s^-_{\crd{\alpha}} \right], \\ B^{i j} := B^{i j}(x^{k},t) = r^{i \crd{\alpha}} r^{j \crd{\alpha}} \left[ s^+_{\crd{\alpha}} + s^-_{\crd{\alpha}} \right]. \end{gathered} \end{equation} Here $p := p(x^{i},t)$ has the meaning of the probability density of the random variable $x^{i}$, $A^{i}$ is the drift vector, and $B^{i j}$ is the diffusion matrix. As seen from \eqref{eq:kFP}, the coefficients of the Fokker--Planck equation can be obtained as soon as the interaction scheme and the transition probabilities have been written down; that is, in practical calculations there is no need to write out the master equation itself.
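To make this recipe concrete, the following minimal Python sketch (our addition; the original computations were done in Fortran) assembles $A^{i}$ and $B^{ij}$ of \eqref{eq:kFP} for an arbitrary interaction scheme. As a preview it is applied to the FastTrack scheme \eqref{ft:1} of the next section, with arbitrary illustrative rates.
\begin{verbatim}
import numpy as np

def fokker_planck_coefficients(r, s_plus, s_minus):
    """r: (s, d) array, one state-change vector per interaction;
    s_plus, s_minus: length-s arrays of forward/backward rates."""
    A = r.T @ (s_plus - s_minus)             # A^i    = r^{i a}(s+_a - s-_a)
    B = r.T @ np.diag(s_plus + s_minus) @ r  # B^{ij} = r^{i a} r^{j a}(s+_a + s-_a)
    return A, B

# Preview: the FastTrack scheme (ft:1), illustrative rates, state (n, l).
lam, beta, mu = 100.0, 0.1, 4.0
n, l = 30.0, 20.0
r = np.array([[ 1.0,  0.0],   # 0 -> N
              [-1.0,  1.0],   # N + L -> 2L
              [ 0.0, -1.0]])  # L -> 0
A, B = fokker_planck_coefficients(r,
        np.array([lam, beta*n*l, mu*l]),  # s+ rates
        np.zeros(3))                      # all interactions one-directional
print(A)  # [lam - beta*n*l, beta*n*l - mu*l]
print(B)  # [[lam + beta*n*l, -beta*n*l], [-beta*n*l, beta*n*l + mu*l]]
\end{verbatim}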
To obtain a more familiar form of the model, we write the corresponding Langevin equation: \begin{equation} \label{eq:langevin} \d x^{i} = a^{i} \d t + b^i_{a} \d W^{a}, \end{equation} where $a^{i} := a^{i} (x^k, t)$, $b^{i}_{a} := b^{i}_{a} (x^k, t)$, $x^i \in \mathbb{R}^n $ is the state vector of the system, and $W^{a} \in \mathbb{R}^m$ is an $m$-dimensional Wiener process. The Wiener process is realised as $\d W = \varepsilon \sqrt{\d t}$, where $\varepsilon \sim N(0,1)$ is a normal random variable with mean $0$ and variance $1$. Latin indices from the middle of the alphabet denote quantities referring to the state vectors (a space of dimension $n$), while Latin indices from the beginning of the alphabet denote quantities referring to the Wiener process (a space of dimension $m \leqslant n$). The connection between equations \eqref{eq:FP} and \eqref{eq:langevin} is expressed by the relations: \begin{equation} \label{eq:k-langevin} A^{i} = a^{i}, \qquad B^{i j} = b^{i}_{a} b^{j a}. \end{equation} Thus, a stochastic differential equation describing the system can be obtained from general considerations. This equation consists of two parts, one describing the deterministic behaviour of the system and the other the stochastic behaviour. Moreover, the two parts are mutually consistent, since both are derived from one and the same equation (see the scheme in Fig.~\ref{fig:met}). \begin{figure} \centering \includegraphics[width=\linewidth]{met} \caption{Scheme of the method.} \label{fig:met} \end{figure} \section{The FastTrack protocol} FastTrack is a peer-to-peer (P2P) network protocol for cooperative file exchange over the Internet. Data are downloaded only from sources holding complete files. FastTrack was first implemented in the KaZaA application. A network based on the FastTrack protocol has a decentralised topology, which makes its operation very reliable. The users are divided into two classes: supernodes and ordinary nodes. The selection of supernodes is one of the functions of the protocol, and nodes with a fast network connection, high bandwidth and the capacity for rapid data processing are chosen for this role. The owners of the computers do not know that their machine has been designated a supernode. To download a file, a node sends a request to a supernode, which in turn communicates with other nodes, and so on. In this way the request propagates down to a network depth fixed by the protocol, called the time-to-live of the request. Once the requested file has been found, it is transferred directly from the node holding it to the node that requested it, bypassing the supernode~\cite{ft1, ft2}. \subsection{Modelling} Assume that the file consists of a single part. Then in one interaction step between a new node wishing to download the file and a node sharing it, the new node downloads the entire file and becomes a sharing node. Let $N$ denote a new node, $L$ a sharing node, and $\beta$ the interaction coefficient. New nodes arrive in the system with intensity $\lambda$, and sharing nodes leave it with intensity $\mu$. The interaction scheme and the vectors $\mathbf r$ then take the form: \begin{equation} \label{ft:1} \begin{cases} 0 \xrightarrow{\lambda } N, & r^{\crd{i}1}=(1,0) \\ N+L \xrightarrow{\beta } 2L, & r^{\crd{i}2}=(-1,1)\\ L \xrightarrow{\mu} 0, & r^{\crd{i}3}=(0,-1). \end{cases} \end{equation} The first line of the scheme describes the arrival of a new node in the system.
The second line describes the interaction of a new node with a sharing node, which produces one more sharing node. The third is the departure of a sharing node from the system. The transition probabilities are: \begin{equation} \label{ft:2} \begin{gathered} s^{+}_1 (n,l) = \lambda \\ s^{+}_2 (n,l) = \beta nl \\ s^{+}_3 (n,l) = \mu l. \end{gathered} \end{equation} The Fokker--Planck equation for this model then reads: \begin{equation} \label{ft:3} \frac{\partial p(n,l)}{\partial t} = -{\partial_i} (A^i(n,l) p(n,l)) + \frac{1}{2} {\partial_i \partial_j} (B^{ij}(n,l) p(n,l)), \end{equation} where the drift vector and the diffusion matrix have the form: \begin{equation} \begin{gathered} A^i := A^i(x^k,t)= r^{i\crd{\alpha}}s^+_{\crd{\alpha}} (n,l) ,\\ B^{ij} := B^{ij}(x^k,t) = r^{i\crd{\alpha}}r^{j\crd{\alpha}} s^+_{\crd{\alpha}} (n,l), \qquad \crd{\alpha}=1,2,3. \end{gathered} \end{equation} Thus we obtain: \begin{equation} \label{ft:4} \begin{gathered} \mathbf A = \begin{pmatrix} 1\\ 0 \end{pmatrix} \lambda + \begin{pmatrix} -1\\ 1 \end{pmatrix} \beta n l + \begin{pmatrix} 0\\ -1 \end{pmatrix} \mu l = \begin{pmatrix} \lambda - \beta n l\\ \beta n l - \mu l \end{pmatrix}, \\ \begin{multlined} \mathbf B = \begin{pmatrix} 1\\ 0 \end{pmatrix} (1,0) \lambda + \begin{pmatrix} -1\\ 1 \end{pmatrix} (-1,1) \beta n l + \begin{pmatrix} 0\\ -1 \end{pmatrix} (0,-1) \mu l = \\ = \begin{pmatrix} \lambda + \beta n l & - \beta n l \\ - \beta n l & \beta n l + \mu l \end{pmatrix}. \end{multlined} \end{gathered} \end{equation} The stochastic differential equation in Langevin form is then obtained via relation \eqref{eq:k-langevin}. \subsection{Deterministic behaviour} Since the drift vector $A$ completely describes the deterministic behaviour of the system, we can write the system of ordinary differential equations governing the numbers of new and sharing nodes: \begin{equation} \label{ft:5} \left \{ \begin{aligned} \frac{dn}{d t}&= \lambda - \beta n l\\ \frac{dl}{d t}&= \beta n l - \mu l \end{aligned} \right. \end{equation} \subsubsection{Stationary states} The stationary states of system~\eqref{ft:5} are the solutions of the system of equations: \begin{equation} \label{ft:6} \left \{ \begin{aligned} \lambda - \beta n l &=0\\ \beta n l - \mu l &=0 \end{aligned} \right. \end{equation} System~\eqref{ft:5} has a single stationary state: \begin{equation} (\bar{n},\bar{l})= \left ( \frac{\mu }{\beta }, \frac{\lambda }{\mu } \right ). \end{equation} \subsubsection{Linearized stability analysis} Let us linearize system~\eqref{ft:5}. Put $n=\bar{n} + \xi $ and $l=\bar{l} + \eta$, where $\bar{n}$ and $\bar{l}$ are the coordinates of the equilibrium point, and $\xi $ and $\eta $ are small perturbations: \begin{equation} \label{ft:7} \left\{ \begin{aligned} \frac{d\xi }{d t}&=-\beta \bar{n} \eta- \beta \bar{l} \xi \\ \frac{d\eta }{d t}&=\beta \bar{n} \eta + \beta \bar{l} \xi - \mu \eta \end{aligned} \right. \end{equation} Substituting the coordinates of the equilibrium point, the linearized system in its neighbourhood becomes: \begin{equation} \label{ft:8} \left\{ \begin{aligned} \frac{d\xi }{d t}&= - \frac{\beta \lambda }{\mu}\xi - \mu \eta \\ \frac{d\eta }{d t}&= \frac{\beta \lambda }{\mu}\xi \end{aligned} \right. \end{equation} The characteristic equation has the form: \begin{equation} \label{ft:9} s^2+\frac{\beta \lambda }{\mu} s + \beta \lambda =0. \end{equation}
The roots of this characteristic equation are: \begin{equation} \label{ft:10} s_{1,2}= \frac{1}{2} \left( -\frac{\beta \lambda }{\mu} \pm \sqrt{ \left( \frac{\beta \lambda }{\mu} \right)^2 - 4 \beta \lambda} \right). \end{equation} Thus, depending on the choice of parameters, the singular point can be of different types. For $\beta \lambda < 4\mu^2$ the singular point is a stable focus, while for the opposite inequality it is a stable node. In both cases the singular point is stable, since the real parts of the roots are negative. Hence, depending on the values of the coefficients, the variables of the system evolve along one of two kinds of trajectories. If the singular point is a focus, the numbers of new and sharing nodes undergo damped oscillations (Fig.~\ref{fig:ft1}). In the nodal case the numbers approach their stationary values without oscillations (Fig.~\ref{fig:ft2}). The phase portraits of the system in the two cases are shown in Figs.~\ref{fig:ft3} and~\ref{fig:ft4}, respectively. \begin{figure} \centering \includegraphics[width=\linewidth]{1} \caption{Number of new and sharing nodes versus time in the FastTrack network: deterministic case, $\beta \lambda < 4\mu^2$.} \label{fig:ft1} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{2} \caption{Number of new and sharing nodes versus time in the FastTrack network: deterministic case, $\beta \lambda > 4\mu^2$.} \label{fig:ft2} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{3} \caption{Phase portraits of the deterministic FastTrack system for various deviations $(\Delta x, \Delta y)$ from the stationary point, $\beta \lambda < 4\mu^2$.} \label{fig:ft3} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{4} \caption{Phase portraits of the deterministic FastTrack system for various deviations $(\Delta x, \Delta y)$ from the stationary point, $\beta \lambda > 4\mu^2$.} \label{fig:ft4} \end{figure} \subsubsection{Numerical simulation of the stochastic model} To illustrate the results, a numerical simulation of the stochastic differential equation in Langevin form was carried out. The stochastic differential equations were solved numerically by a method that extends Runge--Kutta schemes to stochastic differential equations~\cite{L_lit04, L_lit01}, implemented in Fortran. The simulation results are presented in Figs.~\ref{fig:ft5} and~\ref{fig:all_sft}. Figures~\ref{fig:ft5} and~\ref{fig:all_sft} show clearly that the introduction of small stochastic terms does not substantially affect the behaviour of the system near the nodal point when the number of sharing nodes is large. The effect of the stochastics is felt only at the beginning of the evolution of the system. After a comparatively short time the system enters the stationary regime and differs little from the deterministic case.
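The Fortran implementation used for these figures is not reproduced here; instead, the following minimal Python sketch integrates the Langevin equation \eqref{eq:langevin} with drift and diffusion from \eqref{ft:4} by the simpler Euler--Maruyama scheme rather than the stochastic Runge--Kutta methods cited above. The matrix $b$ is taken as the Cholesky factor of $B$, one possible choice with $b\,b^{T}=B$; the parameter values are arbitrary and satisfy $\beta\lambda<4\mu^{2}$ (stable focus).
\begin{verbatim}
import numpy as np

lam, beta, mu = 100.0, 0.1, 4.0  # beta*lam = 10 < 4*mu^2 = 64: stable focus
dt, steps = 1e-3, 20000
rng = np.random.default_rng(1)

def drift(x):                    # A from (ft:4)
    n, l = x
    return np.array([lam - beta*n*l, beta*n*l - mu*l])

def diffusion(x):                # b with b b^T = B, via Cholesky
    n, l = x
    q = beta*n*l
    B = np.array([[lam + q, -q], [-q, q + mu*l]])
    return np.linalg.cholesky(B)

x = np.array([10.0, 5.0])        # initial numbers of new/sharing nodes
for _ in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal(2)  # dW = eps * sqrt(dt)
    x = x + drift(x)*dt + diffusion(x) @ dW
    x = np.maximum(x, 1e-9)      # keep the populations non-negative
print("final state:", x, "stationary point:", (mu/beta, lam/mu))
\end{verbatim}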
\subsubsection{Conclusions} The results obtained show that in the stationary regime the introduction of stochastics has little effect on the behaviour of the system, so the deterministic model can be used to study it. Furthermore, the proposed method enlarges the set of tools available for analysing a model: applying this approach to the description of a system yields simultaneously an ordinary stochastic differential equation and a partial differential equation in Fokker--Planck form. In addition, as the example considered shows, in some cases it suffices to study the deterministic approximation of the system, which is determined by the drift matrix. \begin{figure} \centering \includegraphics[width=\linewidth]{sft_graph} \caption{Number of new and sharing nodes versus time in the FastTrack network: stochastic case.} \label{fig:sft_graph} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{5} \caption{Phase portraits of the stochastic FastTrack system for various deviations $(\Delta x, \Delta y)$ from the stationary point, $\beta \lambda > 4\mu^2$.} \label{fig:ft5} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{all_sft} \caption{Phase portraits of the stochastic FastTrack system for various deviations $(\Delta x, \Delta y)$ from the stationary point, $\beta \lambda > 4\mu^2$.} \label{fig:all_sft} \end{figure} \section{The BitTorrent protocol} BitTorrent is a peer-to-peer (P2P) network protocol for cooperative file exchange over the Internet. Files are transferred in parts: each torrent client, while receiving (downloading) parts, simultaneously gives (uploads) them to other clients, which reduces the load on, and the dependence on, each source client and provides data redundancy. \subsection{Modelling} We first consider a simplified model of a closed system, i.e., one in which no new clients arrive and no sharing clients leave. Assume in addition that the file consists of a single part. Then in one interaction step between a new client (leecher) wishing to download the file and a client sharing it (seed), the new client downloads the entire file and becomes a seed. Let $N$ denote a new client (leecher), $C$ a sharing client (seed), and $\beta$ the interaction coefficient. The interaction scheme then has the form: \begin{equation} \label{bt:1} N+C \xrightarrow{\beta } 2C, \qquad r^{\crd{i}}=(-1,1). \end{equation} The scheme reflects that after the interaction of a leecher with a seed, the leecher disappears from the system and one more seed appears. Further, let $n$ be the number of new clients and $c$ the number of seeds in the system. The transition probability is: \begin{equation} \label{bt:2} s^{+} (n,c) = \beta nc. \end{equation} The Fokker--Planck equation for this model then reads: \begin{equation} \label{bt:3} \frac{\partial p(n,c)}{\partial t} = -{\partial_i} (A^i(n,c) p(n,c)) +\frac{1}{2} {\partial_i \partial_j} (B^{ij}(n,c) p(n,c)), \end{equation} where the drift vector and the diffusion matrix have the form: \begin{equation} \begin{gathered} A^i(n,c)= r^{i\crd{\alpha}}s^+_{\crd{\alpha}} (n,c) ,\\ B^{ij}(n,c) = r^{i\crd{\alpha}}r^{j\crd{\alpha}} s^+_{\crd{\alpha}} (n,c). \end{gathered} \end{equation} Thus we obtain: \begin{equation} \label{bt:4} \begin{gathered} \mathbf A = \begin{pmatrix} -1\\ 1 \end{pmatrix} \beta n c = \begin{pmatrix} - \beta n c\\ \beta n c \end{pmatrix}, \\ \begin{multlined} \mathbf B = \begin{pmatrix} -1\\ 1 \end{pmatrix} (-1,1) \beta n c = \begin{pmatrix} \beta n c & - \beta n c \\ - \beta n c& \beta n c \end{pmatrix}.
\end{multlined} \end{gathered} \end{equation} The stochastic differential equation in Langevin form is again obtained via relation \eqref{eq:k-langevin}. One can also write the system of differential equations describing the deterministic behaviour of the system: \begin{equation} \label{bt:5} \left \{ \begin{aligned} \frac{dn}{d t}&= - \beta n c\\ \frac{dc}{d t}&= \beta n c \end{aligned} \right. \end{equation} Next we consider an open system, in which new clients arrive with intensity $\lambda$ and seeds leave with intensity $\mu$. The interaction scheme is: \begin{equation} \label{bt:6} \begin{aligned} 0 \xrightarrow{\lambda } N, & r^{\crd{i}1}=(1,0),\\ N+C \xrightarrow{\beta } 2C, & r^{\crd{i}2}=(-1,1),\\ C \xrightarrow{\mu } 0, & r^{\crd{i}3}=(0,-1). \end{aligned} \end{equation} The first line of the scheme describes the arrival of a new client in the system; the second, the interaction of a new client with a seed, which produces one more seed; and the third, the departure of a seed from the system. As before, let $n$ be the number of new clients and $c$ the number of seeds. Up to notation, this system coincides with the FastTrack model. Now consider a system in which the transferred files consist of $m$ parts. The system contains the following participants: \begin{itemize} \item New clients ($N$): clients that have no part of the file. \item Leechers ($L$): clients that have already downloaded some parts of the file and can share them with new clients or with other leechers. \item Seeds ($C$): clients that hold the entire file, i.e., they only share. \end{itemize} As before, $n$ is the number of new clients and $c$ the number of seeds in the system, and $l_i$ is the number of leechers holding exactly $i$ parts of the file, $i=\overline{1, m-1}$. Let $\bar{L}_i$ denote the leechers holding parts of the file that are of interest to a leecher $L_i$, and let $\bar{l}_i$ be their number. For this system the interaction scheme contains the following types of relations: \begin{equation} \label{bt:7} \begin{aligned} 0 \xrightarrow{\lambda } & N, \\ N+C \xrightarrow{\beta } & L_1+C, \\ N+L_i \xrightarrow{\beta_i } & L_1+L_i, \\ L_i + \bar{L}_i \xrightarrow{\delta_i } & L_{i+1}+\bar{L}_i, \\ L_i + C \xrightarrow{\gamma_i } & L_{i+1}+C, \\ L_{m-1} + \bar{L}_{m-1} \xrightarrow{\delta_{m-1} } & C+\bar{L}_{m-1}, \\ L_{m-1} + C \xrightarrow{\gamma_{m-1} } & 2C, \\ C \xrightarrow{\mu } & 0. \end{aligned} \end{equation} One interaction step is the transfer of one part of the file from one client to another. The first relation describes the arrival of a new client in the system with intensity $\lambda$. The second and third relations describe the interaction of a new client with a seed or with a leecher, with coefficients $\beta$ and $\beta_i$, $i=\overline{1, m-1}$, as a result of which the new client becomes a leecher of class $L_1$. The fourth and fifth relations describe the interaction of a leecher $L_i$ with another leecher or with a seed, with coefficients $\delta_i$ and $\gamma_i$, $i=\overline{1, m-2}$, whereby the leecher acquires one more part of the file and moves to class $L_{i+1}$. The sixth and seventh describe the transition of a leecher to the class of seeds, with coefficients $\delta_{m-1}$ and $\gamma_{m-1}$, i.e., the leecher downloads the last part of the file. The last relation is the departure of a seed from the system with intensity $\mu$.
Let us write out the vectors $r^{i\crd{\alpha}}$, with coordinates ordered as $(n,l_1,l_2,\dots,l_{m-1},c)$, and the transition probabilities $s^+_{\crd{\alpha}}$: \begin{equation} \label{bt:8} \begin{gathered} r^{1} =(1,0,0,\dots,0), \\ r^{2} =r_i^3=(-1,1,0,\dots,0), \quad i=\overline{1, m-1}, \\ r_i^4 =r_i^5=(0,\dots,-1,1,\dots,0), \quad i=\overline{1, m-2}, \\ r^{6} =r^7=(0,0,\dots,-1,1), \\ r^{8} =(0,0,\dots,-1). \end{gathered} \end{equation} \begin{equation} \label{bt:9} \begin{gathered} s^{+}_1 =\lambda, \\ s^{+}_2 =\beta n c, \\ s^{+}_{3i} =\beta_i n l_i, \quad i=\overline{1, m-1},\\ s^{+}_{4i} =\delta_i l_i \bar{l}_i, \quad i=\overline{1, m-2},\\ s^{+}_{5i} =\gamma_i l_i c, \quad i=\overline{1, m-2},\\ s^{+}_{6} =\delta_{m-1} l_{m-1} \bar{l}_{m-1}, \\ s^{+}_{7} =\gamma_{m-1} l_{m-1} c, \\ s^{+}_{8} =\mu c. \end{gathered} \end{equation} For this model the Fokker--Planck equation can be written in the same way as before. Since the deterministic behaviour is completely described by the drift vector $A$, we write only it: \begin{equation} \label{bt:10} \mathbf A = \begin{pmatrix} \lambda - \beta n c - \sum_{i=1}^{m-1} \beta_i n l_i \\ \beta n c + \sum_{i=1}^{m-1} \beta_i n l_i -\delta_1 l_1 \bar{l}_1 - \gamma_1 l_1 c \\ \delta_1 l_1 \bar{l}_1 + \gamma_1 l_1 c - \delta_2 l_2 \bar{l}_2 - \gamma_2 l_2 c \\ \ldots \\ \begin{multlined} \delta_{m-2} l_{m-2} \bar{l}_{m-2} + \gamma_{m-2} l_{m-2} c - {} \\ {} - \delta_{m-1} l_{m-1} \bar{l}_{m-1} - \gamma_{m-1} l_{m-1} c \end{multlined} \\ \delta_{m-1} l_{m-1} \bar{l}_{m-1} + \gamma_{m-1} l_{m-1} c - \mu c \end{pmatrix}. \end{equation} As a consequence we obtain the system of differential equations describing the dynamics of the numbers of new clients, leechers and seeds: \begin{equation} \label{bt:11} \left \{ \begin{gathered} \frac{d n}{d t} = \lambda - \beta n c - \sum_{i=1}^{m-1} \beta_i n l_i, \\ \frac{d l_1}{d t}= \beta n c + \sum_{i=1}^{m-1} \beta_i n l_i -\delta_1 l_1 \bar{l}_1 - \gamma_1 l_1 c, \\ \frac{d l_2}{d t}= \delta_1 l_1 \bar{l}_1 + \gamma_1 l_1 c - \delta_2 l_2 \bar{l}_2 - \gamma_2 l_2 c, \\ \ldots \\ \begin{multlined} \frac{d l_{m-1}}{d t}= \delta_{m-2} l_{m-2} \bar{l}_{m-2} + \gamma_{m-2} l_{m-2} c - {} \\ {} - \delta_{m-1} l_{m-1} \bar{l}_{m-1} - \gamma_{m-1} l_{m-1} c, \end{multlined} \\ \frac{d c}{d t}= \delta_{m-1} l_{m-1} \bar{l}_{m-1} + \gamma_{m-1} l_{m-1} c - \mu c. \end{gathered} \right. \end{equation} Assume now that $\beta=\beta_{1}=\beta_{2}=\dots=\beta_{m-1}$. Summing equations two through $m+1$ of this system (the $\delta$- and $\gamma$-terms telescope and cancel) and denoting the total number of leechers by $l = l_1 + l_2 + \dots + l_{m-1}$, we obtain a simplified system of the following form: \begin{equation} \label{bt:12} \left \{ \begin{aligned} \frac{d n}{d t}&= \lambda - \beta n (l+c), \\ \frac{d (l+c)}{d t}&= \beta n (l+c) - \mu c. \end{aligned} \right. \end{equation}
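The telescoping step behind this reduction can be verified symbolically. The following sympy sketch (our illustration, with the file split into $m=4$ parts, an arbitrary choice) sums the equations for $l_1,\dots,l_{m-1},c$ of \eqref{bt:11} under the assumption $\beta_i=\beta$ and recovers the right-hand side of the second equation of \eqref{bt:12}; note that the unknown quantities $\bar l_i$ cancel and never need to be specified.
\begin{verbatim}
import sympy as sp

m = 4                                   # number of file parts (arbitrary)
n, c, beta, mu = sp.symbols('n c beta mu', positive=True)
l     = sp.symbols(f'l1:{m}', positive=True)     # l_1, ..., l_{m-1}
lbar  = sp.symbols(f'lbar1:{m}', positive=True)  # \bar l_1, ..., \bar l_{m-1}
delta = sp.symbols(f'delta1:{m}', positive=True)
gamma = sp.symbols(f'gamma1:{m}', positive=True)

# Right-hand sides of (bt:11) for l_1, ..., l_{m-1}, c, with beta_i = beta:
rhs = [beta*n*c + sum(beta*n*li for li in l)
       - delta[0]*l[0]*lbar[0] - gamma[0]*l[0]*c]
for i in range(1, m - 1):               # equations for l_2, ..., l_{m-1}
    rhs.append(delta[i-1]*l[i-1]*lbar[i-1] + gamma[i-1]*l[i-1]*c
               - delta[i]*l[i]*lbar[i] - gamma[i]*l[i]*c)
rhs.append(delta[m-2]*l[m-2]*lbar[m-2]  # equation for c
           + gamma[m-2]*l[m-2]*c - mu*c)

total = sp.expand(sum(rhs))             # d(l_1 + ... + l_{m-1} + c)/dt
target = beta*n*(sum(l) + c) - mu*c     # second equation of (bt:12)
assert sp.simplify(total - target) == 0
print(total)
\end{verbatim}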
\section{Conclusion} \begin{enumerate} \item The paper describes a method for obtaining stochastic models for systems that can be described by one-step processes. The proposed method provides universal rules for writing down stochastic differential equations for systems whose internal processes are representable as one-step processes. It also enlarges the set of tools available for analysing a model, since applying this approach to the description of a system yields simultaneously an ordinary stochastic differential equation and a partial differential equation in Fokker--Planck form. \item The effect of introducing stochastics into deterministic models was studied on the example of the FastTrack and BitTorrent protocol models. The results obtained show that in the stationary regime the introduction of stochastics has little effect on the behaviour of the system, so the deterministic model can be used to study it. In addition, as the example considered shows, in some cases it suffices to study the deterministic approximation of the system, which is determined by the drift matrix. \end{enumerate} \bibliographystyle{gost2008l}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Let $Z=(Z_1,\ldots,Z_p)$ be a \(p\)-dimensional real-valued random variable with a \(p\times p\) covariance matrix $\Sigma$. Given $n$ i.i.d. samples of $Z$, denoted $\{{\bf z}_i\}_{i=1}^n$, a fundamental task in statistical inference is to estimate $\Sigma$. A standard estimator of $\Sigma$ is the sample covariance matrix \(S=\frac{1}{n-1}\sum_i ({\bf z}_i-\bar{\bf z})({\bf z}_i-\bar{\bf z})^T\), where $\bar{\bf z}$ is the sample mean. Direct computation of all $p^2$ entries of $S$ requires $O(np^2)$ operations. In various contemporary data analysis applications, where both \(p\) and $n$ are large, this computation may be prohibitively slow and challenging in terms of memory and storage. In several applications, however, the population covariance matrix is approximately sparse, whereby only its few large entries are of interest, and the remaining entries are either small or even precisely equal to zero. Applications leading to sparse covariance matrices include, among others, gene arrays and biological networks \cite{butte_2000}, social networks, climate data and fMRI scans. As we are only interested in the large entries, a key question is whether these can be computed significantly faster, possibly by cleverly detecting their locations, without directly computing the entire matrix. In this paper we present and theoretically analyze two different randomized algorithms with sub-quadratic time complexity to detect and subsequently compute the large entries of a sparse sample covariance matrix. First, in Section \ref{sec:sfft_based_algo}, we present a reduction of this problem to the sparse-Fast-Fourier-Transform (sFFT) \cite{Nearly_Optimal_sFFT}. A solution to our task then directly follows by invoking multiple calls to the recently developed randomized sFFT sub-linear time algorithm. Next, in Section \ref{sec:tree_algo} we present a simpler and more direct algorithm, based on the construction of $O(\log p)$ binary random trees. We prove that under suitable assumptions on the sparsity of the matrix \(S\), both algorithms are guaranteed, with high probability, to locate its large entries. Furthermore, their runtimes are $O(nrp\log^3p)$ operations for the sFFT-based algorithm, and \(O(nrp\log^2 p)\) operations for the tree-based method, where $r$ is a bound on the number of large entries in each row of \(S\). By suitable normalization of the input data, both algorithms can also detect large entries of a sparse sample \textit{correlation matrix}, whose entries are $\frac{S_{ij}}{\sqrt{S_{ii}S_{jj}}}$. The theoretical analysis of the two algorithms of Sections \ref{sec:sfft_based_algo} and \ref{sec:tree_algo} relies on the assumption that $S$ is approximately sparse. In reality, in various applications one may only assume that the population matrix $\Sigma$ is approximately sparse. To this end, in Section \ref{sec:num_samples} we provide sufficient conditions on $\Sigma$, the underlying random variable \(Z\) and the number of samples $n$ that ensure, w.h.p., the approximate sparsity of $S$. Finally, in Section \ref{sec:simulations} we empirically compare the tree-based algorithm with other methods for detecting the large entries of $S$. In addition, we illustrate, on artificially generated data that includes near duplicates, the potential applicability of our algorithm to the \textit{Near Duplicate Detection} problem, common in the analysis of document corpora \cite{xiao_2011_PPJOIN}.
\subsection{Related works} In the statistics literature, the problem of sparse covariance estimation has received significant attention, see for example \cite{Bickel_Levina_2008,Cai_Liu_2011,Karoui_2008,Bien_2011,Chaudhur_07}. As discussed in Sections \ref{sec:sfft_based_algo} and \ref{sec:tree_algo} below, under suitable sparsity assumptions, our algorithms are guaranteed to find with high probability all entries of the sample covariance matrix which are larger, in absolute value, than some threshold \(\mu\). The resulting matrix is then nothing but the sample covariance matrix, hard-thresholded at the value \(\mu\). The statistical properties of such thresholding were intensively studied, see for example \cite{Bickel_Levina_2008,Karoui_2008,Cai_Liu_2011} and references therein. The main motivation of these works was to derive a more accurate estimate of the population covariance matrix \(\Sigma\), assuming it is sparse, thus overcoming the asymptotic inconsistency of $S$ in the operator norm, in the joint limit \(p,n\to\infty\) with $p/n\to c$, see for example \cite{karoui_2008_spectrum}. Moreover, hard-thresholding with a threshold that slowly tends to zero as \(p,n\to\infty\) was proven to be asymptotically minimax optimal under various sparsity models. Our work, in contrast, is concerned with the {\em computational effort} of computing this thresholded estimator, mainly for finite \(p,n\) and relatively large thresholds. Focusing on the computational aspects, first of all note that the sample covariance matrix $S$ can be represented as a product of two suitable matrices. Hence, using fast matrix multiplication methods, all its entries can be computed faster than \(O(np^{2})\). Currently, the fastest matrix multiplication algorithm for square matrices of size $N\times N$ has a time-complexity of $O(N^{2.3727})$ \cite{williams_2012_multiplying}. Hence, by expanding (with zero padding) the input sample matrix to a square matrix, all entries of $S$ can be exactly evaluated using $O(\max\{n,p\}^{2.3727})$ operations. In recent years, several works developed fast methods to \textit{approximate} the product of two matrices, see for example \cite{drineas_2006}, as well as \cite{iwen_spencer_2009} and \cite{pagh_2013} which assume that the product is approximately sparse. In particular, in a setting similar to the one considered in this paper, the method of \cite{pagh_2013}, based on fast approximate matrix multiplication, can detect the large entries of a sparse covariance matrix in time complexity comparable to ours. In addition, since the entries of $S$ can be represented as inner products between all pairs of vectors, our problem is directly related to the \textit{Maximum Inner Product Search} (MIPS) problem \cite{ram_2012, shrivastava_2014}. In the MIPS problem, given a large dataset \(\{{\bf x}_i\}\), the goal is to quickly find, for any query vector \({\bf y}\), its maximal inner product $\max_i \langle {\bf y},{\bf x}_i\rangle$. \cite{ram_2012} presented three algorithms to retrieve the maximal inner product, which can be generalized to find the $k$ largest values. While \cite{ram_2012} did not provide a theoretical analysis of the runtime of their algorithms, empirically on several datasets they were significantly faster than direct computation of all inner products.
Recently, \cite{shrivastava_2014} presented a simple reduction from the \textit{approximate}-MIPS problem, of finding $\max_i \langle {\bf y},{\bf x}_i\rangle$ up to a small distortion \(\epsilon\), to the well-studied $k$ \textit{nearest neighbour} problem ($k$NN). This makes it possible to solve the approximate-MIPS problem using any $k$NN procedure. The (exact or approximate) $k$NN search problem has been extensively studied in the last decades, with several fast algorithms, see \cite{shakhnarovich_book_2006,osipov_rokhlin_2013,arya_1998} and references therein. For example, the $kd$-tree \cite{kd_tree} is a popular exact algorithm for low dimensional data, whereas \textit{Locality-sensitive hashing} (LSH) approximation methods \cite{datar_indyk_2004} are more suitable for high dimensions. Combining the reduction of \cite{shrivastava_2014} with the LSH-based algorithm of \cite{peled_indyk_2012} yields an approximate solution of the MIPS problem with $O(np^\gamma\text{ poly} \log p)$ query time, where $\gamma \in (0,1)$ controls the quality of the approximation. These methods can be exploited to approximate the sample covariance matrix, perhaps with no assumptions on the matrix but with weaker (or no) runtime guarantees. In Section \ref{sec:simulations}, we empirically compare our sub-quadratic tree-based algorithm to some of the above methods. Another line of work related to fast estimation of a sparse covariance matrix is the matrix sketching problem \cite{nowak_2013_sketching}. Here, the goal is to recover an unknown matrix $X$, given only partial knowledge of it, in the form of a linear projection $AXB$ with known matrices $A$ and $B$. The matrix sketching problem is more challenging, since we are given only partial access to the input. In fact, recent solutions to this problem \cite{adler_2013} have time complexity at least $O(np^2)$. A more closely related problem is the \textit{all-pairs similarity search} with respect to the cosine similarity or the Pearson-correlation. Here, given a set of vectors \(\{\x_i\}\), the task is to find the pair with highest cosine similarity, \(\max_{i\neq j}{\x_i}^t{\x_j}/\|\x_i\| \|\x_j\|\). Two popular exact methods, suitable mainly for sparse inputs, are \textit{All-Pairs} \cite{Bayardo_2007} and \textit{PPJoin+} \cite{xiao_2011_PPJOIN}. As in the nearest-neighbours search problem, hashing can be adapted to obtain various approximation algorithms \cite{charikar_2002}. These algorithms can be used to rapidly compute a sparse correlation matrix. In a special case of the all-pairs similarity search, known as the \textit{Light Bulb Problem} \cite{valiant_1988}, one is given \(p\) boolean vectors all of length \(n\) and taking values $\pm 1$. The assumption is that all vectors are uniformly distributed on the \(n\)-dimensional boolean hypercube, apart from one pair of vectors with Pearson correlation $\rho\gg\sqrt{\log p/n}$. The question is how fast one can detect this pair of correlated vectors. LSH type methods as well as bucketing coding \cite{dubiner_2010} solve this problem with a sub-quadratic time complexity of $O(np^{g(\rho)})$, for a suitable function $g(\rho)$ of the correlation coefficient. More recently, \cite{valiant_2015} developed a method with expected sub-quadratic time complexity of $O(np^{1.62})$ operations, where the exponent value 1.62 is independent of $\rho$ and directly related to the complexity of the fastest known matrix multiplication algorithm.
It is easy to show that the methods proposed in our paper can solve the Light Bulb Problem using only $O(np\,\text{poly}\log p)$ operations, but assuming a significantly stronger correlation of $\rho > O(\sqrt{\frac{p}{n}})$. Furthermore, our methods offer an interesting tradeoff between time complexity and correlation strength \(\rho\). For example, our tree-based algorithm with \(O(p^\alpha)\) trees (instead of \(O(\log p)\)) can detect weaker correlations of strength \(\rho>O(\sqrt{\frac{p^{1-\alpha}}{n}})\) with a time complexity of \(O(np^{1+\alpha}\text{ poly}\log p)\) operations. \section{Notation and Problem Setup} \label{sec:notation} For simplicity, in this paper we assume the input data $(\z_1, \dots, \z_n)$ is real-valued, though the proposed methods can be easily modified to the complex-valued case. For a vector $\x\in\mathbb{R}^n$, we denote its $i$-th entry by $\x_i$ (or $(\x)_i$), its $L_2$-norm by $||\x|| := \sqrt{\sum_{i=1}^n\x_i^2}$, and its $L_0$-norm by $||\x||_0:=| \{ i:\x_i \neq 0\} | $. Similarly, for a matrix $A\in\mathbb{R}^{p\times p}$ we denote its $k$-th row by $A_k$, its $(i,j)$-th entry by $A_{ij}$ (or $(A_{i})_{j}$), its $L_2$-norm by $||A||: = \sup_{\x \neq 0}\frac{||A\x||}{||\x||}$, and its Frobenius norm by $||A||_F := \sqrt{\sum_{ij}A_{ij}^2}$. For an integer $a\in\mathbb{N}$, let $ [a]:=\{1, 2, \dots, a\}$. To simplify notation, the inner product between two complex vectors $\x,\y\in \mathbb{C}^n$ is defined with a normalization factor, $$ \langle \x, \y\rangle = \frac{1}{n-1}\sum_{i=1}^n \x_i \y_i ^\dagger$$ where $\dagger$ denotes complex conjugation; the conjugation is relevant only when we discuss the Fourier transform, which involves complex-valued vectors. Given the input data $\{\z_1,\dots, \z_n\}$, let $\x_i=((\z_{1})_i - \bar{\z}_i,\cdots,(\z_{n})_i - \bar{\z}_i )\in\mathbb{R}^n$ be the mean-centered vector of observations of the $i$-th coordinate of $Z$, where $\bar\z_i$ is the $i$-th entry of the sample mean $\bar\z$. This leads to the simple representation of $S$, in terms of inner products \begin{equation} \label{eq:s_rep_ip} S_{ij}=\langle \x_i, \x_j\rangle, \qquad 1\leq i,j\leq p. \end{equation} For a matrix $A\in\mathbb{R}^{p\times p}$ and a threshold parameter $\mu$, we define the set of large entries at level $\mu$ in the $k$-th row to be$$J_\mu(A_{k}) = \{j \ :\ |A_{kj}|\geq\mu\}$$ and the set of all its large entries at level $\mu$ by $$ J_\mu(A) = \bigcup_{k=1}^p \{k\}\times J_\mu(A_k).$$ As for the definition of matrix sparsity, we say that a matrix $A$ is $(r,\mu)$\textit{-sparse} if for every row $k$, $|J_{\mu}(A_{k})| \leq r$. We say that $A$ is $(r, \mu, R,q)$-sparse if it is $(r,\mu)$-sparse and for every $k\in[p]$ the remaining small entries in the $k$-th row are contained in an $L_q$-ball of radius $R$, $$\left(\sum_{j\not\in J_\mu(A_k)}|A_{kj}|^q\right)^{1/q}\leq R.$$ Here we only consider the case where $q=2$ and $R <\mu/2$. \paragraph{Problem Setup.} Let $\{\z_1,\dots, \z_n\}$ be $n$ input vectors whose sample covariance matrix \(S\) is $(r,\mu)$-sparse, with $r\ll p$. In this paper we consider the following task: Given $\{\z_i\}_{i=1}^n$ and the threshold value \(\mu\), find the set $J_\mu(S)$, which consists of all entries of $S$ which are larger than $\mu$ in absolute value. A naive approach is to explicitly compute all $p^2$ entries of \(S\) and then threshold at level $\mu$. This requires $O(np^2)$ operations and may be quite slow when $p \gg1$. In contrast, if an oracle gave us the precise locations of all large entries, computing them directly would require only \(O(npr)\) operations.
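As a point of reference for what follows, a short Python sketch of this naive $O(np^2)$ baseline is given below; the sizes and the threshold $\mu$ are arbitrary, and the diagonal entries, which typically exceed a small threshold, are included in the output.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p, mu = 200, 512, 0.5
Z = rng.standard_normal((n, p))     # n samples of a p-dimensional Z

X = (Z - Z.mean(axis=0)).T          # row k of X is the centered x_k
S = X @ X.T / (n - 1)               # all p^2 entries: O(n p^2) work
rows, cols = np.nonzero(np.abs(S) >= mu)
J_mu = {(i, j): S[i, j] for i, j in zip(rows, cols)}
print(len(J_mu), "entries of S exceed the threshold in absolute value")
\end{verbatim}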
The key question studied in this paper is whether we can find the set $J_\mu(S)$, and compute the corresponding entries significantly faster than \(O(np^{2})\). In what follows, we present and analyze two sub-quadratic algorithms to discover $J_\mu(S)$, under the assumption that the matrix $S$ is approximately sparse. \section{Sparse covariance estimation via sFFT} \label{sec:sfft_based_algo} The first solution we present is based on a reduction of our problem to that of multiple independent instances of sparse Fourier transform calculations. Whereas standard fast-Fourier-transform (FFT) of a vector of length \(p\) requires \(O(p\log p)\) operations, if the result is a-priori known to be approximately sparse, it can be computed in sub-linear time. This problem, known as sparse-FFT (sFFT), has been intensively studied in the past few years, see \cite{Nearly_Optimal_sFFT, akavia_2010, gilbert_2002, iwen_2010}. As described below, the computational gains for our problem then directly follow by an application of one of the available sFFT algorithms. In what follows we present this reduction, and focusing on the algorithm of \cite{Nearly_Optimal_sFFT} we analyze under which sparsity conditions on \(S\) it is guaranteed to succeed w.h.p. To this end, we use the following definitions for the discrete Fourier transform (DFT) $\mathcal{F}:\mathbb{C}^p\rightarrow\mathbb{C}^p$ and its inverse $\mathcal{F}^{-1}$, $$ (\mathcal{F}[\x])_j=\sum_{l=1}^{p}\x_l\omega^{-jl}\qquad (\mathcal{F}^{-1}[\x])_j=\frac{1}{p}\sum_{l=1}^{p}\x_l\omega^{jl} $$ where $\omega=\exp(2\pi {\bf i}/p)$, \({\bf i}=\sqrt{-1} \). Without any assumptions on the input $\x$, the fastest known method to compute its DFT is the FFT which requires $O(p\log p)$ operations. If the vector \(\mathcal F[\x]\) is approximately $r$-sparse, with only $r$ large entries, it is possible to estimate it in \textit{sub-linear time}, by detecting and approximately evaluating only its large entries. In particular, \cite{Nearly_Optimal_sFFT} developed a randomized algorithm to compute the sparse Fourier transform with time complexity of $O(r\log p\log(p/r))$, which we refer to here as the sFFT algorithm. Formally, for any input vector $\x\in\mathbb{C}^p$ and parameters $\alpha, \delta,r$, the sFFT algorithm returns a sparse vector $\hat \x$ such that, w.p. $\geq 2/3$ \begin{equation} \label{eq:sfft_algo_res} ||\mathcal{F}[ \x]-\hat \x||\leq (1+\alpha)\min_{||\y||_0\leq r}||\mathcal{F}[\x]-\y|| +\delta||\mathcal{F}[\x]||. \end{equation} The algorithm time complexity is $O(\frac{r}{\alpha}\log(p/r)\log(p/\delta))$, though for our purposes we assume that $\alpha$ is fixed (e.g. $\alpha=1$) and hence ignore the dependence on it. The output \(\hat\x\) of the sFFT algorithm is represented as a pair \((J,\y)\) where \(J\subset\{1,\ldots,p\}\) is a set of indices and $\y\in\mathbb{C}^{|J|}$ are the values of $\hat\x$ at these indices, namely $\hat\x|_J=\y$ and \(\hat\x|_{J^c}=0.\) We now present the reduction from the sparse covariance estimation problem to sparse FFT. To this end, let us define a matrix $W = ({\bf w}_1, \dots, \w_p)\in \mathbb{C}^{n\times p}$ whose $j$-th column is $\w_j=\frac{1}{p}\sum_{l=1}^{p}\x_l\, \omega^{-jl}$. Note that using standard FFT methods, $W$ can be calculated in $O(np\log p)$ operations. The following lemma describes the relation between the matrix $S$ and the matrix $W$. \begin{lemma}\label{lemma:sfft_reduction} For any $k\in[p]$, let ${\bf u}_k=(\langle \x_k, \w_{1}\rangle,\dots,\langle\x_k, \w_{p}\rangle)\in\mathbb{C}^p$. 
Then, \begin{equation} \label{eq:sfft_red} \mathcal{F}[{\bf u}_k]=S_{k}. \end{equation} \end{lemma} \begin{proof} Eq. (\ref{eq:sfft_red}) follows directly by applying the inverse DFT on $S_k$, $$ (\mathcal{F}^{-1}[S_k])_j = \frac{1}{p}\sum_{l=1}^{p}\langle \x_k, \x_l\rangle\omega^{jl}=\langle\x_k,\frac{1}{p}\sum_{l=1}^{p} \x_l\omega^{-jl} \rangle=\langle \x_k, \w_j\rangle=({\bf u}_k)_j.$$ \end{proof} According to Lemma \ref{lemma:sfft_reduction}, each row $S_k$ of $S$ is the DFT of an appropriate vector \({\bf u}_k\). Since by assumption $S_k$ is approximately sparse, we may thus find its set of large indices in sublinear time by applying the sFFT algorithm to the input \({\bf u}_k\). We then explicitly compute the corresponding entries in \(S_{k}\) directly from the original data \(\x_{1},\ldots,\x_p\). Computing all \(p\) entries of all \(p\) vectors \(\{{\bf u}_k\}\) requires a total of \(O(np^{2})\) operations. However, with this time complexity we could have computed all entries of the matrix \(S\) to begin with. The key point that makes this reduction applicable is that the sFFT algorithm is sub-linear in time and to compute its output it reads at most $O(r\log(p/\delta)\log(p/r))$ coordinates of ${\bf u}_k$. In particular, it does not require a-priori evaluation of all \(p\) entries of each vector \({\bf u}_k\). Hence, we may compute on-demand only the entries of ${\bf u}_k$ requested by the algorithm. Since computing a single entry of ${\bf u}_k$ can be done in $O(n)$ operations, the total number of operations required for a single sFFT run is \(O(nr\log(p/\delta)\log(p/r))\). To detect the large entries in all rows of \(S\), all \(p\) outputs of the sFFT algorithm should simultaneously satisfy Eq. (\ref{eq:sfft_algo_res}) with high probability. Given that each sFFT run succeeds w.p. $\geq 2/3$, we show below that it suffices to invoke $m=O(\log p)$ independent queries of the sFFT algorithm on each input ${\bf u}_k$. The output for each row is the union of the large indices found by all sFFT runs. In more detail, for each row $k$ let $(J_{1},\y_1), \dots, (J_m,\y_m)$ be the outputs (indices and values) of sFFT on $m$ independent runs with the same input ${\bf u}_k$. Our approximation for the \(k\)-th row of \(S\) is then \begin{equation} \tilde S_{kj}= \left\{ \begin{array}{cl} S_{kj} & j\in \bigcup_i J_i \\ 0 & \mbox{otherwise} \end{array} \right. \end{equation} The following lemma and corollary prove that $m=O(\log p)$ runs suffice to detect all large entries of $S$ with a constant success probability. \begin{lemma}\label{lemma:sfft_prob} Let $m= \lceil\log(3p)/\log(3)\rceil=O(\log(p))$. Then, for each $k\in[p]$ the following inequality holds w.p. $\geq1- \frac{1}{3p}$, $$ ||S_{k} -\tilde S_{k } ||\leq (1+\alpha)\min_{|| \y||_0\leq r}||S_{k}-\y|| +\delta||S_k||. $$ \end{lemma} \begin{corollary}\label{cor:sfft_cor}Let $m= \lceil\log(3p)/\log(3)\rceil$ as in the previous lemma. Then with probability $\geq 2/3$, simultaneously for all rows $k\in[p]$ \begin{equation} \label{eq:sfft_cor} ||S_{k} -\tilde S_{k } ||\leq (1+\alpha)\min_{\y:||\y||_0\leq r}||S_{k}-\y||+\delta||S_k||. \end{equation} \end{corollary} The proof of Corollary \ref{cor:sfft_cor} follows from a standard union-bound argument; details are omitted. To conclude, we use multiple runs of the sFFT algorithm to find a set \(I\) of indices of potentially large entries \(S_{ij}\), which we then evaluate explicitly.
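Before summarizing the full procedure, the following small numerical sketch (with 0-based indices in place of the paper's 1-based ones, arbitrary sizes, and assuming that numpy's FFT uses the same sign convention as the definition of $\mathcal F$ above) verifies the reduction of Lemma \ref{lemma:sfft_reduction} and the on-demand computation of single entries of ${\bf u}_k$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 64
Z = rng.standard_normal((n, p))
X = (Z - Z.mean(axis=0)).T                 # rows are the centered x_k
S = X @ X.T / (n - 1)

W = np.fft.fft(X, axis=0) / p              # row j is w_j = (1/p) sum_l x_l w^{-jl}
k = 3
u_k = X[k] @ W.conj().T / (n - 1)          # (u_k)_j = <x_k, w_j>
assert np.allclose(np.fft.fft(u_k), S[k])  # Lemma 1: F[u_k] = S_k

# Algorithm 1 never forms u_k in full: a single entry costs O(n) work.
j = 17
assert np.isclose(X[k] @ W[j].conj() / (n - 1), u_k[j])
\end{verbatim}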
The resulting approximation of $S$ is compactly represented as $\{((i,j), S_{ij})\}_{(i,j)\in I}$. This procedure is summarized in Algorithm \ref{algo:sfft_algo}. The following theorem, proven in the appendix, provides a bound on its runtime and a guarantee for the accuracy of its output. \begin{algorithm}[t] \caption{\tt sFFTCovEstimation($\x_1, \dots, \x_p, r, R, \epsilon$)} \label{algo:sfft_algo} \begin{algorithmic}[1] \REQUIRE \ \\$(\x_1, \dots, \x_p)$: $p$ vectors of dimension $n$. \\$r$: bound on the number of large entries in each row.\\ $R, \epsilon$: sparsity parameters. \ENSURE $\tilde S$: Compact representation of the large entries of $S$. \STATE Compute $W = (\w_1, \dots, \w_p)\in \mathbb{C}^{n\times p}$ using FFT, where $\w_j=\frac{1}{p}\sum_{l=1}^{p}\x_l\omega^{-jl}$ \STATE Set $I=\emptyset$ \STATE Compute all diagonal entries \(S_{ii}\) and the value $M=\max_{i}{S_{ii}}$ \FOR {$k=1, \dots, p$} \STATE Calculate $(J_{1},\y_1), \dots, (J_m,\y_m)$ by running $m=O(\log p)$ sFFT queries with input ${\bf u}_k=(\langle \x_k, \w_{1}\rangle,\dots,\langle \x_k, \w_{p}\rangle)$, $\delta = \frac{\epsilon}{R+\sqrt{r}M}$ and $\alpha = 1$ \STATE Add $\{ k\}\times(\bigcup_{i=1}^{m} J_i)$ to $I$ \ENDFOR \FOR {$(i,j)\in I$} \STATE Calculate $S_{ij}=\langle \x_i, \x_j\rangle$ \ENDFOR \RETURN $\tilde S = \{((i,j), S_{ij})\}_{(i,j)\in I}$\end{algorithmic} \end{algorithm} \begin{theorem}\label{claim:sfft_algo_correctness} Assume $S$ is $(r, \mu,R,2)$-sparse, where $\mu > 2R+\epsilon$ for known $R, \epsilon>0$, and let \(M=\max_i S_{ii}\). Then, Algorithm \ref{algo:sfft_algo}, which invokes the sFFT algorithm with parameters $\delta = \frac{\epsilon}{R+\sqrt{r}M}$ and $\alpha =1$, has a runtime of $O(nrp\log^2p\log((R+\sqrt{r}M)p/\epsilon))$ operations, and w.p. $\geq$ 2/3, its output set $I$ is guaranteed to include the set $J_\mu(S)$ of all large entries of $S$. \end{theorem} \section{Tree-based Algorithm} \label{sec:tree_algo} We now present a second, more direct method to efficiently detect and compute the large entries of a sparse covariance matrix. This method, which assumes the threshold $\mu$ is a priori known, is based on a bottom-up construction of $m$ random binary trees. To construct the $l$-th tree, we first place at its $p$ leaves the following $n$-dimensional vectors $\{\eta_{lj}\x_j\}_{j=1}^p$, where $\eta_{lj}\overset{i.i.d.}{\sim}N(0,1)$. Then, at higher levels, the $n$-dimensional vector in each parent node is the sum of the vectors of its two children. After the construction of the trees, the main idea of the algorithm is to make recursive coarse-to-fine statistical group tests, where entire subsets $\mathcal{A}\subseteq[p]$ of indices are simultaneously tested for whether they contain at least one large entry of $S$ or not. In more detail, given as input a row $k\in[p]$ and a set $\mathcal{A}\subseteq[p]$, we consider the following query or hypothesis testing problem: $$ \mathcal{H}_0:J_\mu(S_k)\cap \mathcal{A}=\emptyset \quad \text{vs.} \quad \mathcal{H}_1:J_\mu(S_k)\cap \mathcal{A}\neq\emptyset. $$ Assuming $S$ is $(r, \mu, R, 2)$-sparse, with $R<\mu/2$, one way to resolve this query is by computing the following quantity \begin{equation} \label{eq:tree_alg_sum} F(k, \mathcal{A})=\sum_{j\in \mathcal{A}}\langle \x_{k}, \x_{j}\rangle^2. \end{equation} Indeed, under $\mathcal{H}_0$, $F(k,\mathcal{A})\leq R^2$, whereas under $\mathcal{H}_1$, $F(k, \mathcal{A})\geq \mu^2$.
A direct calculation of (\ref{eq:tree_alg_sum}) requires $O(n|\mathcal{A}|)$ operations, which may reach up to $O(n p)$ when $|\mathcal{A}|$ is large. Instead, the algorithm {\em estimates} the value of $F(k, \mathcal{A})$ in Eq. (\ref{eq:tree_alg_sum}) using $m$ i.i.d. samples $\{ y_l\}_{l=1}^{m}$ of the random variable \begin{equation} \label{eq:tree_alg_sum_rv} Y(\mathcal{A}) = \langle \x_k,\sum_{j\in \mathcal{A}}\eta_j\x_j\rangle \end{equation} where $(\eta_1, \dots, \eta_p)\sim N(0,I_p)$. Since by definition $\mathbb{E}Y(\mathcal{A})=0$ and $Var[Y(\mathcal{A})]=F(k, \mathcal{A})$, Eq. (\ref{eq:tree_alg_sum}) can be estimated by the empirical variance of the $m$ variables $\{y_l\}_{l=1}^m$. Again, given a specific realization of the $\eta_j$'s, direct calculation of Eq. (\ref{eq:tree_alg_sum_rv}) also requires $O(n|\mathcal{A}|)$ operations. Here our construction of $m$ binary trees comes into play, as it efficiently provides us with samples of $\sum_{j\in \mathcal{A}}\x_j\eta_j$, for any subset of the form $\mathcal{A}=\{2^hi, \dots, 2^h(i+1) \}$, where $h\in[\log_{2} p]$ and $i\in[p/2^h] $. As described in Section \ref{sec:quary} below, after the pre-processing stage of constructing the \(m\) trees, each query requires only $O(nm)$ operations. Moreover, we show that $m=O(\log p)$ trees suffice for our purpose, leading to $O(n\log p)$ query time. To find large entries in the $k$-th row, a divide-and-conquer method is applied: start with $\mathcal{A}=\{1, \dots, p\}$, and check if $F(k, \mathcal{A})$ of Eq. (\ref{eq:tree_alg_sum}) is large by invoking an appropriate query. If so, divide $\mathcal{A}$ into two disjoint sets and continue recursively. With this approach we reduce the number of required operations for each row from $O(np)$ to $O(nr\log^2 p)$, leading to an overall time complexity of $O(nrp\log^2p)$. \subsection{Preprocessing Stage - Constructing the trees} To efficiently construct the $m$ trees, first $mp$ i.i.d. samples $\{\eta_{lj}\}$ are generated from a Gaussian distribution $N(0,1)$. For simplicity, we assume that $p=2^L$, for some integer $L$, leading to a full binary tree with $L+1$ levels. Starting at the bottom, the value at the $j$-th leaf in the $l$-th tree is set to be the $n$-dimensional vector $\eta_{lj}\x_j$. Then, in a bottom-up construction, the vector of each node is the sum of its two offspring. Since each tree has $2p-1$ nodes, and calculating the vector of each node requires $O(n)$ operations, the construction of $m$ i.i.d. random trees requires $O(nmp)$ operations. For future use, we introduce the following notation: for a given tree $T$, we denote by $T(h,i)$ the $i$-th node at the $h$-th level, where the root is considered to be at level zero. Furthermore, we denote by $I(h,i)$ the set of indices corresponding to the leaves of the subtree starting from $T(h,i)$. The vector stored at the node $T(h,i)$ is the following $n$-dimensional random variable $$ Val(T(h,i)) = \sum_{j\in I(h,i)}\eta_j\x_j$$ whose randomness comes only from the random variables $\eta_j$ (here we consider the samples $\{ \x_j\}$ as fixed). The entire tree construction procedure is described in Algorithm \ref{algo:tree_preprocess}.
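A short Monte-Carlo sketch (arbitrary sizes; $m$ deliberately taken much larger than the $O(\log p)$ used by the algorithm) illustrates that the empirical variance of the samples of $Y(\mathcal A)$ in \eqref{eq:tree_alg_sum_rv} indeed estimates $F(k,\mathcal A)$ of \eqref{eq:tree_alg_sum}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 100, 32, 2000
Z = rng.standard_normal((n, p))
X = (Z - Z.mean(axis=0)).T

k, A = 0, np.arange(8, 16)           # row k, subset A = {8, ..., 15}
G = X @ X.T / (n - 1)                # inner products <x_i, x_j>
F_exact = np.sum(G[k, A] ** 2)       # F(k, A)

eta = rng.standard_normal((m, p))    # m draws of (eta_1, ..., eta_p)
y = X[k] @ (eta[:, A] @ X[A]).T / (n - 1)  # y_l = <x_k, sum_j eta_lj x_j>
F_hat = np.mean(y ** 2)              # empirical variance; E[Y] = 0
print(F_exact, F_hat)                # close for large m
\end{verbatim}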
\begin{figure}[t!] \centering \begin{tikzpicture}[auto, level 1/.style={sibling distance=80mm}, level 2/.style={sibling distance=40mm}] \node [ellipse,draw] (n1){$\x_1\eta_{1}+\x_2\eta_{2}+\x_3\eta_{3}+\x_4\eta_{4}$} [sibling distance=60mm] child { node[ellipse,draw] (n2) {$\x_1\eta_{1}+\x_2\eta_{2}$} child { node[ellipse,draw] (n4) {$\x_1\eta_{1}$} } child { node[ellipse,draw] (n5) {$\x_2\eta_{2}$} } } child { node[ellipse,draw] (n3) {$\x_3\eta_{3}+\x_4\eta_{4}$} child { node[ellipse,draw] (n6) {$\x_3\eta_{3}$} } child { node[ellipse,draw] (n7) {$\x_4\eta_{4}$} } } ; \end{tikzpicture} \caption{Illustration of a random tree, for $p=4$.} \label{fig:tree_example} \end{figure} \begin{algorithm} \caption{\tt ConstructTrees$(\x_1, \dots, \x_p, m)$} \label{algo:tree_preprocess} \begin{algorithmic}[1] \REQUIRE \ \\ $(\x_1, \dots, \x_p)$: $p$ vectors of dimension $n$. \\ $m$: number of trees. \ENSURE $m$ random binary trees $(T_1, \dots, T_m).$ \STATE Generate $\eta_{lj}$, for $j\in[p]$ and $l\in[m]$, where $\eta_{lj}\overset{i.i.d.}{\sim}N(0,1)$ \FOR {$l=1,\dots,m$} \FOR {$j=1,\dots,p$} \STATE Set $Val(T_l(L, j)) = \eta_{lj} \x_j$ \ENDFOR \FOR {$h=L-1,\dots, 0$} \FOR {$i=1,\dots, 2^{h}$} \STATE Set $Val(T_l(h, i))=Val(T_l(h+1, 2i-1))+Val(T_l(h+1, 2i))$ \ENDFOR \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Algorithm description} \label{sec:quary} As mentioned before, we assume that the threshold $\mu$ is an input parameter to the algorithm. Given a set $\mathcal{A}=I(h,i)$ and a vector $\x_k$, to estimate (\ref{eq:tree_alg_sum}) the algorithm uses $m$ i.i.d. samples of (\ref{eq:tree_alg_sum_rv}) obtained from the relevant node $T(h,i)$ of the tree. For this, let $y_1, \dots, y_m$ be the $m$ i.i.d. samples of $Y=Y(I(h,i))$, $$ y_l = \langle\x_k, Val(T_l(h,i))\rangle=\sum_{j\in I(h,i)} \langle\x_{k}, \x_{j}\rangle\eta_{lj}.$$ Since $\mathbb{E}Y=0$ and $$\sigma^2:=Var(Y) =\sum_{j\in I(h,i)} \langle \x_{k}, \x_{j}\rangle^{2},$$ a natural estimator for (\ref{eq:tree_alg_sum}) is the sample variance $${\hat\sigma}^2:=\frac{1}{m}\sum_{l=1}^my_l^2.$$ Recall that for a matrix $S$ which is $(r, \mu, R,2)$-sparse with \(R<\mu/2\), the exact value of Eq. (\ref{eq:tree_alg_sum}) allows us to perfectly distinguish between the following two hypotheses \begin{equation} \label{eq:tree_hyp} \mathcal{H}_0: I(h,i)\cap J_\mu(S_{k})= \emptyset \quad\text{vs.}\quad \mathcal{H}_1: I(h,i)\cap J_\mu(S_{k})\neq \emptyset. \end{equation} Given the estimate $\hat \sigma^2$, the algorithm considers the following test \begin{equation} \label{eq:tree_test} {\hat\sigma}^2\mathop{\gtrless}_{\mathcal{H}_0}^{\mathcal{H}_1} \frac{3\mu^2}{4} \end{equation} If $\hat\sigma^2 \geq \frac{3\mu^2}{4}$, the algorithm continues recursively in the corresponding subtree. Lemma \ref{lemma:tree_test} below shows that $m=O(\log \frac{1}{\delta})$ trees suffice to correctly distinguish between the two hypotheses w.p. at least $1-\delta$. In summary, the algorithm detects large entries in the $k$-th row by processing the \(m\) random trees with the vector \(\x_k\); starting from the root, it checks if one of its children contains a large entry using the query presented in Eq. (\ref{eq:tree_test}), and continues recursively as required. This recursive procedure is described in Algorithm \ref{algo:tree_rec}. Then, as described in Algorithm \ref{algo:tree_algo}, the complete algorithm to detect all large entries of a sparse covariance matrix applies Algorithm \ref{algo:tree_rec} separately to each row.
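For concreteness, the following compact NumPy sketch (our illustration, not an optimized implementation) mirrors this pipeline; it uses 0-based node indices, so that the children of node $i$ are $2i$ and $2i+1$, and assumes $p$ is a power of two.
\begin{verbatim}
import numpy as np

def construct_trees(X, m, rng):
    """Each tree is a list of levels; level h holds the 2^h node vectors."""
    trees = []
    for _ in range(m):
        level = rng.standard_normal((X.shape[0], 1)) * X  # leaves eta_j * x_j
        levels = [level]
        while level.shape[0] > 1:
            level = level[0::2] + level[1::2]             # parent = child sums
            levels.append(level)
        trees.append(levels[::-1])                        # levels[h]: 2^h nodes
    return trees

def find(xk, trees, mu, n, h=0, i=0):
    L = len(trees[0]) - 1                                 # leaf level
    if h == L:
        return [i]
    found = []
    for child in (2 * i, 2 * i + 1):                      # 0-based children
        # sample variance of y_l = <x_k, Val(T_l(h+1, child))>
        var_hat = np.mean([(xk @ t[h + 1][child]) ** 2
                           for t in trees]) / (n - 1) ** 2
        if var_hat >= 0.75 * mu ** 2:                     # test (eq:tree_test)
            found += find(xk, trees, mu, n, h + 1, child)
    return found

rng = np.random.default_rng(0)
n, p, m, mu = 200, 64, 32, 0.5
Z = rng.standard_normal((n, p))
X = (Z - Z.mean(axis=0)).T
trees = construct_trees(X, m, rng)
row_k = find(X[0], trees, mu, n)    # candidate large entries in row 0 of S
S_0 = {j: X[0] @ X[j] / (n - 1) for j in row_k}
\end{verbatim}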
As in Algorithm \ref{algo:sfft_algo}, the output of Algorithm \ref{algo:tree_algo} is a compact representation of the large entries of the matrix, as a set of indices $I\subset[p]\times[p]$ and their corresponding entries $\{S_{ij}\}_{(i,j)\in I}$. \begin{algorithm}[t] \caption{{\tt Find}$(\x, h, i,m, \{T_l\},\mu)$} \label{algo:tree_rec} \begin{algorithmic}[1] \REQUIRE \ \\ $\x$: input vector of dimension $n$.\\ $(h,i)$: tree-node index. \\ $m$: number of trees. \\ $\{ T_l\}$: a collection of $m$ trees. \\ $\mu$: threshold parameter. \ENSURE The set of large entries in the current sub-tree. \IF {$h= L$} \RETURN $\{ i\}$ \ENDIF \STATE Set $a =\frac{1}{m} \sum_{l=1}^m \langle \x, Val(T_{l}(h+1, 2i-1))\rangle^2$, $b =\frac{1}{m} \sum_{l=1}^m \langle \x, Val(T_{l}(h+1, 2i))\rangle^2$ \STATE Set $\mathcal{S}_a=\emptyset, \mathcal{S}_b=\emptyset$ \IF {$a \geq \frac{3\mu^2}{4}$} \STATE $\mathcal{S}_a = {\tt Find}(\x, h+1, 2i-1,m, \{T_l\},\mu)$ \ENDIF \IF {$b \geq \frac{3\mu^2}{4}$} \STATE $\mathcal{S}_b = {\tt Find}(\x, h+1, 2i, m, \{T_l\},\mu)$ \ENDIF \RETURN $\mathcal{S}_a \cup \mathcal{S}_b$ \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \caption{{\tt SparseCovTree}$(\x_1, \dots, \x_p, m, \mu)$} \label{algo:tree_algo} \begin{algorithmic}[1] \REQUIRE \ \\ $(\x_1, \dots, \x_p)$: $p$ vectors of dimension $n$. \\ $m$: number of trees. \\ $\mu$: threshold parameter. \ENSURE $\tilde S$: compact representation of large entries of $S$. \STATE Construct \(m\) trees: $\{T_l\}={\tt ConstructTrees}(\x_1, \dots, \x_p,m)$ \STATE Set $I=\emptyset$ \FOR {$k=1,\dots, p$} \STATE Add $\{k\} \times {\tt Find}(\x_k, 0, 1, m, \{T_l\},\mu)$ to $I$ \ENDFOR \FOR {$(i,j)\in I$} \STATE Calculate $S_{ij}=\langle \x_i, \x_j\rangle$ \ENDFOR \RETURN $\{((i,j), S_{ij})\}_{(i,j)\in I}$ \end{algorithmic} \end{algorithm} \subsection{Theoretical analysis of the Tree-Based algorithm}\label{sec:algo_analysis} Two key questions related to Algorithm \ref{algo:tree_algo} are: i) can it detect w.h.p. the large entries of $S$; and ii) what is its runtime. In this section we study these questions under the assumption that the matrix $S$ is approximately sparse. We start our analysis with the following lemma, proven in the appendix, which shows that $m=O(\log \frac{1}{\delta})$ trees suffice for the test in Eq. (\ref{eq:tree_test}) to succeed w.p. at least $1-\delta$. \begin{lemma} \label{lemma:tree_test} Assume $S$ is $(r, \mu, R, 2)$-sparse, where $R< \mu/2$. Then, for $m\geq64\log(\frac{1}{\delta})$, \begin{align*} \Pr[\textbf{false alarm}]=\Pr\left[{\hat\sigma}^2\geq \frac{3\mu^2}{4}\;|\;\mathcal{H}_0\right] \leq \delta\\ \Pr[\textbf{misdetection}] =\Pr\left[{\hat\sigma}^2< \frac{3\mu^2}{4}\;|\;\mathcal{H}_1\right] \leq\delta \end{align*} \end{lemma} \begin{remark} While the constant of 64 is not sharp, the logarithmic dependence on $\delta$ is. \end{remark} \begin{remark} The choice of $\eta$ to be Gaussian is rather arbitrary. Standard concentration inequalities \cite{vershynin_2010} imply that every sub-Gaussian distribution with zero mean and variance $1$ will yield similar results, albeit with different constants. \end{remark} Next, assuming $S$ is approximately sparse, Theorem \ref{theorem:tree_prob} below shows that w.h.p., Algorithm \ref{algo:tree_algo} indeed succeeds in finding all large entries of $S$. Most importantly, its runtime is bounded by $O(nrp\log^{2}p)$ operations, both w.h.p. and in expectation. \begin{theorem} \label{theorem:tree_prob} Assume $S$ is $(r, \mu, R, 2)$-sparse, where $R< \mu/2$.
Let $I(m)$ be the set of indices returned by Algorithm \ref{algo:tree_algo} with threshold $\mu$ and $m$ trees. For a suitably chosen constant $C$ that is independent of $n$, $m, r$ and $p$, define $$ f(p,m) = \Pr\left[ J_\mu(S) \subseteq I(m) \text{ and } \text{Runtime}\leq Cnpr\log ^{2}p\right].$$ Then, for $m(p) = \lceil64\log(2rpL^3) \rceil$, where $L=\log_2p + 1$, \begin{enumerate}[(i)] \item For all $p\geq 8$, $f(p, m(p))\geq 2/3$. \item $f(p, m(p))\xrightarrow{p\rightarrow\infty}1.$ \item $\mathbb{E}[\text{Runtime}]\leq C'nrp\log ^{2}p$, for some absolute constant $C'$. \end{enumerate} \end{theorem} \begin{remark}The sparsity of $S$ is required only to bound the false alarm probability in Lemma \ref{lemma:tree_test}. If $S$ is not necessarily sparse, the algorithm will still locate w.h.p. all of its large entries. However, its runtime can increase significantly, up to $O(np^2\log^2p)$ operations in the worst case when $r=p$. \end{remark} \begin{remark} Note that since our tree-based algorithm analyzes each row of \(S\) separately, in fact $S$ need not be globally $(r, \mu, R, 2)$-sparse. Our algorithm is still applicable to a matrix \(S\) with $k$ non-sparse rows and all other rows $(r, \mu, R, 2)$-sparse. On the non-sparse rows our algorithm will be quite slow, and would analyze all the leaves of the tree. However, on the sparse rows, it would detect the large entries in time $O(npr\log^2 p)$. If $k=O(\log p)$, the overall run time is, up to logarithmic factors in $p$, still $O(npr)$. \end{remark} The runtime guarantees of Theorem \ref{theorem:tree_prob} are probabilistic ones, and the algorithm runtime can reach up to $O(np^2\log^2p)$ operations, even when $S$ is sparse. For an a-priori known upper bound on the sparsity parameter $r$, one can slightly modify the algorithm to stop after $O(nrp\log^2p)$ operations. This modification may decrease the probability of detecting all large entries, but keeps it at least $2/3$. This is true since the event of finding all large entries contains the event of finding all large entries using a bounded number of operations, and the latter event is not affected by the modification. \subsection{Comparison between the sFFT-based and tree-based algorithms} Table \ref{table:comparison} compares the main properties of the sFFT-based and the tree-based algorithms. Interestingly, even though the two algorithms are very different, their success relies on precisely the same definition of matrix sparsity. Moreover, the difference in the input parameters is not significant, since knowledge of $R$ yields a lower bound for $\mu$, and vice versa. Although the tree-based algorithm has a lower runtime complexity and is easier to understand and implement, the sFFT-based algorithm is still of interest as it illustrates a connection between two problems which seem unrelated at first glance: estimating the sparse Fourier transform of a given signal and detecting the large entries of a sparse covariance matrix. Hence, advances in sFFT can translate to faster sparse covariance estimation.
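For concreteness, here is a matching Python sketch of the recursive search ({\tt Find}, Algorithm~\ref{algo:tree_rec}) and the full procedure ({\tt SparseCovTree}, Algorithm~\ref{algo:tree_algo}). It reuses \texttt{construct\_trees} and \texttt{node\_stat} from the earlier sketch and is meant only to illustrate the control flow (again with $0$-indexed nodes), not as an optimized implementation.
\begin{verbatim}
def find(x, h, i, trees, mu, L):
    # Recursively collect leaves j whose subtree passes the test
    # sigma_hat^2 >= 3 mu^2 / 4; at a leaf, sigma_hat^2 estimates <x, x_j>^2.
    if h == L:
        return {i}
    hits = set()
    for child in (2 * i, 2 * i + 1):            # 0-indexed children of (h, i)
        if node_stat(trees, x, h + 1, child) >= 0.75 * mu ** 2:
            hits |= find(x, h + 1, child, trees, mu, L)
    return hits

def sparse_cov_tree(X, m, mu):
    # Detect the large entries S_kj = <x_k, x_j> of S, row by row.
    n, p = X.shape
    L = int(np.log2(p))
    trees = construct_trees(X, m)
    out = {}
    for k in range(p):
        for j in find(X[:, k], 0, 0, trees, mu, L):
            out[(k, j)] = float(X[:, k] @ X[:, j])
    return out
\end{verbatim}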
\begin{center} \begin{table}[H] \small \begin{tabular}{ |l | c |c |} \hline \textit{} & \textit{\textbf{sFFT-based }} & \textit{\textbf{SparseCovTree }} \\\hline \textit{Sparsity assumptions} & $(r, \mu, R, 2)$-sparse with $\mu > 2R+\epsilon$ & $(r, \mu, R, 2)$-sparse with $\mu > 2R$ \\\hline \textit{Runtime bound} & \multicolumn{1}{c|}{$O(nrp\log^2p\log((R+\sqrt{r}M)p/\epsilon))$} & \multicolumn{1}{c|}{$O(nrp\log^2p)$} \\\hline \textit{Probability to detect all large entries}& \multicolumn{1}{c|}{$\geq2/3$} & \multicolumn{1}{c|}{$\geq2/3$}\\\hline \textit{Required input parameters }& $r, R, \epsilon$ and $M=\max\{S_{ii}\}$& $\mu$ only \\\hline \textit{Dependencies on other algorithms}& Based on the sFFT algorithm & Standalone\\\hline \end{tabular} \caption{Comparison between the sFFT-based and Tree-based algorithms.} \label{table:comparison} \end{table} \end{center} \section{Relation between sample size, sparsity of $S$ and of $\Sigma$} \label{sec:num_samples} The theoretical analysis in the previous two sections assumed that the sample covariance matrix $S$ is approximately sparse. Typically, however, one may only assume that the population matrix $\Sigma$ is sparse, whereas $S$ is computed from a sample of $n$ observations $\z_1, \dots, \z_n$. In general, this may lead to a non-sparse matrix $S$, even when $\Sigma$ is sparse. Nonetheless, we show below that if the underlying r.v. $Z$ is sub-gaussian with a sparse covariance matrix $\Sigma$, then given a sufficient number of samples $n=O(p \log p)$, w.h.p. the corresponding sample covariance matrix $S$ is also approximately sparse, albeit with different parameters. To this end, recall that a random variable $Y$ is said to be sub-gaussian if \begin{equation*} ||Y||_{\psi_2}:=\sup _{q\geq 1}q^{-1/2}(\mathbb{E}|Y|^q)^{1/q}<\infty \end{equation*} and similarly, a random variable $Y$ is said to be sub-exponential if $$ ||Y||_{\psi_1}:=\sup _{q\geq 1}q^{-1}(\mathbb{E}|Y|^q)^{1/q}<\infty.$$ See for example \cite{vershynin_2010}. Last, a random vector $Z=(Z_1, \dots, Z_p)$ is said to be sub-gaussian if $Z_i$ is sub-gaussian for every $i\in[p]$. The following theorem provides sufficient conditions on $\Sigma$, the underlying random variable \(Z\) and the number of samples $n$ to ensure, w.h.p., the approximate sparsity of $S$. \begin{theorem} \label{theorem:num_of_samples} Assume $Z$ is a sub-gaussian random vector with covariance matrix $\Sigma$ which is $(r,\mu,R,2)$-sparse, where $2R < \mu$. Let $K=\max_i||Z_i||_{\psi_2}$ and $t=\min\{\frac{\mu -2 R}{2\sqrt{p-r}+1},\frac{\mu - R}{4},K^{2}\}$. Then, for $n> C\frac{K^4}{t^2}\log (54p^2)$, where $C$ is an absolute constant, w.p. $\geq 2/3$, $S$ is $(r,\mu-t,\frac{1}{2}(\mu-t),2)$-sparse. Moreover, with high probability, every large entry of $\Sigma$ (w.r.t. $\mu$) is a large entry of $S$ (w.r.t. $\mu-t$), and vice versa. \end{theorem} In addition to the probabilistic guarantees of Theorem \ref{theorem:num_of_samples} for fixed $p$ and $n$, we can also deduce stronger asymptotic results, as $n,p\rightarrow \infty$. \begin{theorem} \label{thm:assym_num_of_samples} Assume $Z$ is a sub-gaussian random vector with covariance matrix $\Sigma$ which is $(r,\mu,R,2)$-sparse, where $2R < \mu$, and let $t$ be as in Theorem \ref{theorem:num_of_samples}. Then, as $n\rightarrow \infty$ with $p$ fixed, or as $n,p\rightarrow \infty $ with $\frac{p\log p}{n} \rightarrow0$, the probability that $S$ is $(r,\mu-t,\frac{1}{2}(\mu-t),2)$-sparse with $J_{\mu - t}(S) = J_{\mu}(\Sigma)$ converges to one.
\end{theorem} Note that for large \(p\gg 1\), the parameter \(t\) in Theorem \ref{theorem:num_of_samples} is equal to \((\mu-2R)/(2\sqrt{p-r}+1)\). Hence, the required number of samples for $S$ to be approximately sparse is $n>O(\log(p)/t^2)=O(p\log p)$. With such a number of samples, we can replace the assumptions on $S$ with corresponding assumptions on $\Sigma$ and obtain the following result analogous to Theorem \ref{theorem:tree_prob}. \begin{corollary} Assume $\Sigma$ is $(r,\mu, R, 2)$-sparse, where $R < \mu/2$. Then, as $n,p\rightarrow \infty$ with $\frac{p\log p}{n} \rightarrow0$, with probability tending to 1, Algorithm \ref{algo:tree_algo} finds all large entries of $S$, which correspond to large entries of $\Sigma$, using at most $O(nrp\log ^2p)$ operations. \end{corollary} In various contemporary statistical applications, the number of samples $n$ is comparable to the dimension $p$. In such a case, we may still detect the large entries of the population covariance matrix in sub-quadratic time. For example, we can divide the \(p\) variables into $K=p^{1-\alpha}$ distinct groups, each of size $p^{\alpha}$, and separately find the largest entries in each of the \(K^2\) sub-matrices of the full covariance matrix. For each such pair, Corollary 5.1 applies, since if \(p,n\to\infty \) with $p/n\to const$, then $p^\alpha\log p/n\to 0$. The overall run time, up to logarithmic factors in \(p\), is now higher, \(O(np^{2-\alpha}r)\), but still sub-quadratic. Finally, note that throughout this section we assumed that the underlying random variable \(Z\) is sub-Gaussian. It is an interesting question whether some of the above theorems continue to hold under weaker tail conditions. \section{Simulations} \label{sec:simulations} \begin{figure}[t] \includegraphics[scale=0.53]{gaussian_3.eps} \caption{Average runtime to detect large entries of a sparse covariance matrix as a function of the dimension for several algorithms, presented in a logarithmic scale in both axes. The input data was drawn from a Gaussian distribution with a random population covariance matrix and $r=\lfloor\frac{\log_2p}{3}\rfloor$. In the left panel, the number of samples increases with the dimension, $n=\lfloor p \log p\rfloor$, whereas in the right panel the number of samples is fixed, $n=50,000$.} \label{fig:cov_runtime} \end{figure} In this section we illustrate the empirical performance of Algorithm 4, denoted SparseCovTree, on input drawn from a Gaussian distribution with a sparse covariance matrix. Furthermore, we compare it to other algorithms that can be used to locate the large entries of $S$: (1) LSH-$k$NN \cite{datar_indyk_2004}; (2) $kd$-tree \cite{kd_tree}; (3) Dual-Ball-Ball (DBB) \cite{ram_2012}; (4) Dual-Ball-Cone (DBC) \cite{ram_2012}; and (5) direct calculation of all entries of \(S\), at a cost of \(O(np^{2})\) operations. To the best of our knowledge, no implementation of the sFFT algorithm of \cite{Nearly_Optimal_sFFT} is currently publicly available; thus we did not include it in the comparison. For the LSH-$k$NN and $kd$-tree algorithms, we used the mlpack library \cite{mlpack}, whereas for the DBB and DBC algorithms we used the implementation kindly provided to us by the authors of \cite{ram_2012}. All codes are in C++ and were compiled with the {\tt -O3} optimization flag. We generated random population covariance matrices $\Sigma$ as follows: first $r=\lfloor\frac{\log_2p}{3}\rfloor$ entries in each row (and their corresponding transposed entries) were selected uniformly to be $\pm 1$, with all others set to zero.
Then, the diagonal entries were set to be $\pm 1$ as well. Last, to have a valid positive-definite covariance matrix, all diagonal entries were increased by the absolute value of the smallest eigenvalue of the resulting symmetric matrix plus one. Following this, we chose the threshold to be $\mu=0.5$. The space complexity of the SparseCovTree algorithm, which involves storing $m$ trees, is $O(npm)$. This may exceed the available memory for large $n, p$ and $m$. Thus, instead of constructing and processing all \(m\) trees simultaneously from the root, we can construct and process the sub-trees rooted at the nodes of some coarse level $h\in[\log_{2} p]$ separately. By doing so, the space complexity is reduced by a factor of $2^h$. These modifications can only increase the probability of locating large entries, whereas the theoretical runtime bound of Theorem \ref{theorem:tree_prob} increases to $O\left(nmp(\log_{2} p-h)2^{h}\right)$ operations. In particular, in our experiments we used $h=5$, reducing the space complexity by a factor of $32$. Moreover, the value of $m$ as stated in Theorem \ref{theorem:tree_prob} is required mostly for the theoretical analysis. Since the multiplicative factor appearing in Lemma \ref{lemma:tree_test} is not sharp, and the probability of failure converges to zero as the dimension increases, we can in practice use a smaller number of trees $m$ and still detect most large entries of $S$, in sub-quadratic time. Here we chose a fixed value of $m=20$, leading to the discovery of more than $99\%$ of the large entries for all values of $p$ considered. As for the LSH-$k$NN tuning parameters, we chose the number of projections to be 10, hash width 4 and bucket size 3500. For the second hash size, we picked the large prime number 424577, whereas the number of tables was configured according to \cite{LSH_lecture_notes_2008} with $\delta = \frac{1}{\log p}$. This configuration led to a discovery rate of more than $99\%$ of all large entries, in all tested dimensions. Figure \ref{fig:cov_runtime} shows the average runtime, in logarithmic scale, for various values of $p$ from 1000 to 10,000. Surprisingly, all alternative solutions, apart from SparseCovTree, yield slower runtimes than the direct calculation. We raise here several possible explanations for this. First, in all methods, except SparseCovTree and direct calculation, to locate negative large entries of $S$ we duplicate the input $\{\x_i\}$ with the opposite sign, thus potentially increasing the runtime by a factor of $2$. Moreover, it was suggested (see \cite{shrivastava_2014}) that space-partitioning-based methods, such as $kd$-tree, DBB and DBC, may lead to slow runtime in high dimensions. For an empirical illustration of this issue, see \cite{Shrivastava_Li_2015}. In our case, the dimension of the search problem is the number of samples $n$, which may indeed be very high. In contrast to these algorithms, the LSH method was originally designed to cope well in high dimensions. However, to locate most of the $O(pr)$ large entries of $S$, we tuned the LSH-$k$NN algorithm to have a small misdetection probability, by setting $\delta = \frac{1}{\log p}$. Such a small value of $\delta$ led to a large number of tables, thus increasing the runtime by a significant factor. As for SparseCovTree, in low dimensions direct calculation is preferable. However, for larger values of $p$, SparseCovTree is clearly faster.
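For completeness, the random population covariance matrices used above can be generated with the following short \texttt{numpy} sketch (the function name is ours; note that, due to the symmetrization, a row may end up with slightly more than $r$ off-diagonal non-zeros, consistent with the on-average sparsity used in our experiments):
\begin{verbatim}
import numpy as np

def random_sparse_sigma(p, rng=None):
    # r random +/-1 off-diagonal entries per row (symmetrized), +/-1 diagonal,
    # then a diagonal shift of |lambda_min| + 1 for positive definiteness.
    rng = np.random.default_rng() if rng is None else rng
    r = int(np.log2(p)) // 3
    sigma = np.zeros((p, p))
    for i in range(p):
        cols = rng.choice(np.delete(np.arange(p), i), size=r, replace=False)
        signs = rng.choice([-1.0, 1.0], size=r)
        sigma[i, cols] = signs
        sigma[cols, i] = signs                  # keep Sigma symmetric
    np.fill_diagonal(sigma, rng.choice([-1.0, 1.0], size=p))
    sigma += (abs(np.linalg.eigvalsh(sigma).min()) + 1.0) * np.eye(p)
    return sigma
\end{verbatim}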
Regarding the slopes: on a log-log scale, the slope of the direct approach is roughly two, whereas our method has a slope closer to one. In the above simulation, the underlying population matrix \(\Sigma\) was exactly sparse. Next, we empirically study the ability of the SparseCovTree algorithm to detect the large entries when the underlying population covariance matrix is not perfectly sparse, but only approximately so. Specifically, we generated a population covariance matrix following the procedure described above, except that the previously zero entries are now $\pm\epsilon$. Figure \ref{fig:Sigma_epsilon} presents both the average run-time and the misdetection rate of our algorithm with \(m=10,25,50\) trees, as a function of $\epsilon$, for a covariance matrix of size \(p=2000\) and \(n=20000\) samples. Each row of $\Sigma$ has an average of $r=8$ large entries, all of size $\mu=1$. Hence, \(R=R(\epsilon)\approx \sqrt{p-r}\,\epsilon\). The critical value $\epsilon_{crit}$ at which $R=\mu/2$ is thus $\epsilon_{crit} = 1/\sqrt{4(p-r)}$. As seen in the left panel, the run-time scales roughly linearly with the number of trees, is nearly constant for \(\epsilon<\epsilon_{crit}\) and slowly increases for larger values of $\epsilon$. The right panel shows that, in accordance with our theory, the misdetection rate is also nearly constant as long as \(\epsilon<\epsilon_{crit}\). \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{run_time_vs_epsilon.eps} \includegraphics[width=0.4\textwidth]{misdetection_vs_epsilon.eps} \caption{Average run-time and misdetection rate (in percentage points) as a function of $\epsilon$, for fixed $p=2000$ and $n=20000$. The dashed vertical line is the critical value $\epsilon_{crit}$ where for the population matrix \(R(\epsilon_{crit})=\mu/2\).} \label{fig:Sigma_epsilon} \end{figure} Finally, we demonstrate an application of the SparseCovTree algorithm to detect near-duplicates in a large artificial data set: Given a reference data set with \(p\) elements, \(\{{\bf x}_i\}_{i=1}^p\), with each \(\x_i\in\mathbb{R}^n\), the goal is to quickly decide, for any query vector \({\bf y}\in\mathbb{R}^n\), whether there exists $\x_i$ such that $Sim(\x_i,\y)$ is larger than a given threshold $\mu$, where $Sim$ is the \textit{Cosine Similarity}. In contrast to the previous simulation, here the key quantity of interest is the average query time, and the computational effort of the pre-processing stage is typically ignored. Specifically, we considered the following illustrative example: We first generated a reference set $\{\x_i\}_{i=1}^p$, whose $p$ elements were all independently and uniformly distributed on the unit sphere $S^{n-1}$. Next, we evaluated the average run-time per query in two scenarios: a) a general query \(\y\), uniformly distributed on \(S^{n-1}\), and hence with high probability, not a near-duplicate; b) a near-duplicate query ${\bf u}$, generated as follows: $$ {\bf u} = \sqrt{1-\epsilon}\x_{j} + \sqrt{\epsilon}\frac{\z - (\z^T \x_{j} )\x_{j}}{||\z - (\z^T \x_{j})\x_{j}||} $$ where $\z$ is uniformly distributed on $S^{n-1}$ and the duplicate index $j$ is chosen uniformly from the set of elements $[p]$. In our simulations, the parameters $\epsilon = 0.25$, \(n=40,000\) and $m=20$ trees were kept fixed, and we varied the size $p$ of the reference set. For a fair comparison with LSH, we decreased its number of projections to 7, and set its misdetection probability per single projection to be $\delta=0.3$.
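The near-duplicate queries are generated exactly as in the displayed formula above; a minimal \texttt{numpy} sketch (function names are ours), for which one can check that $\|{\bf u}\|=1$ and $\langle {\bf u}, \x_j\rangle = \sqrt{1-\epsilon}$ by construction:
\begin{verbatim}
import numpy as np

def unit_vector(n, rng):
    z = rng.standard_normal(n)
    return z / np.linalg.norm(z)          # uniform on the sphere S^{n-1}

def near_duplicate(x_j, eps, rng):
    # u = sqrt(1-eps) x_j + sqrt(eps) * (unit part of z orthogonal to x_j),
    # so <u, x_j> = sqrt(1-eps) and ||u|| = 1 (the two parts are orthogonal).
    z = rng.standard_normal(x_j.shape[0])
    z_perp = z - (z @ x_j) * x_j
    return np.sqrt(1 - eps) * x_j + np.sqrt(eps) * z_perp / np.linalg.norm(z_perp)
\end{verbatim}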
Empirically, both our method and LSH correctly detected a near-duplicate with probability larger than 99\%. \begin{figure}[t!] \centering \includegraphics[scale=0.5]{near_dup_1.eps} \caption{Average runtime per single query to detect near-duplicates in a large artificial data set with $p$ elements of dimension $n=40,000$, presented in a logarithmic scale in both axes. For SparseCovTree, we present the average runtime both for a near-duplicate and for a general input query. For the other two methods, this distinction did not affect the runtime significantly.} \label{fig:near_dup} \end{figure} Figure \ref{fig:near_dup} shows the average query runtime for SparseCovTree, LSH-$k$NN and the direct approach. For both LSH-$k$NN and the direct approach, the average query runtime did not depend on whether the query was a near-duplicate or not, and hence we show a single curve for each. For SparseCovTree, in contrast, the average runtime for a general query is lower than for a near-duplicate query. The reason is that for a general query our algorithm detects that it is not a near-duplicate by processing only the nodes at the first few top levels of the tree. For this problem, the LSH method is clearly preferable to direct calculation for all tested values of \(p\). When the number of elements \(p\) becomes sufficiently large, our method becomes faster. Most importantly, the runtime curves of SparseCovTree, both for the true and false near-duplicates, seem \textit{almost constant} compared to the other runtime curves, which are approximately linear in \(p\). This is consistent with the theoretical analysis: the SparseCovTree query time is $O(n\log^2p)$ operations, which is sub-linear; the direct calculation query time is $O(np)$ operations, which is linear and thus significantly larger; and the LSH-$k$NN query time is $O(np^\rho\text{ poly} \log p)$ operations, for some constant $\rho > 0$, which is also sub-linear but not poly-logarithmic like the SparseCovTree query time. \section{Summary} In this paper we considered the computational aspects of detecting the large entries of a sparse covariance matrix from given samples. We derived and theoretically analyzed two different algorithms for this task, under sparsity assumptions either on $S$ itself, or on $\Sigma$ together with additional requirements on the underlying random variable $Z$ and the number of samples $n$. We next demonstrated the time-efficiency of the algorithms, theoretically for both of them and empirically only for the tree-based one. Our work raises several interesting questions for future research. If an oracle gave us the precise locations of all large entries, computing them directly would require $\Omega(npr)$ operations, a quantity lower than our guarantees by a $\text{poly}\log p$ factor. This raises the following fundamental question: is it possible to reduce the poly-log factor, or is there a theoretical lower bound on the required number of operations? In addition, in this article we considered only an $L_2$-norm in the definition of matrix sparsity. It may be of interest to consider different norms, e.g. $L_1$, which could lead to different algorithmic results. Finally, both algorithms require partial knowledge of the sparsity parameters. It is thus of interest to efficiently estimate these parameters from the data, in case they are unknown.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} An asymptotic future observer perceives thermal emission in a black hole spacetime when one considers quantum fields in such a classical geometry. This phenomenon is known as the Hawking effect \cite{hawking1975}. Usually, a very large number of microstates are needed to understand thermal emission from a body. However, a classical black hole can be described by only a few parameters in Einstein's general theory of relativity \cite{book:carroll,book:Schutz,Fulling:1989nb,book:wald}. So one expects that the study of the Hawking effect might, in principle, allow one to understand the possible, yet unknown, quantum theory of gravity, and significant efforts have been made to understand the Hawking effect in many different ways \cite{Singh:2014paa,Dray1985,Kawai:2013mda, book:parker, Singleton:2011vh,Bhattacharya:2013tq, Singh:2013pxf,Lapedes:1977ip, Ho:2015fja,Jacobson:2012ei,PhysRevD.46.2486, Lambert:2013uaa, fredenhagen1990, Jacobson:2003vx, Kiefer:2002fp, Traschen:1999zr, Chakraborty:2015nwa,Chakraborty:2017pmn,Carlip:2014pma, DEWITT1975295, Ford:1997hb, Hollands:2014eia,Padmanabhan:2009vy, Fulling1987135,Hinton:1982, Parikh:1999mf,Visser:2001kq, Davies:1974th, Wald1975}. In the canonical approaches to quantum gravity, one decomposes the spacetime into spatial hyper-surfaces labeled by a suitable time parameter. Consequently, in order to explore the techniques that are often employed in such a canonical quantization framework, it is desirable to have a Hamiltonian-based canonical derivation of the Hawking effect. In such an approach, however, one faces multiple hurdles. Firstly, the hyper-surfaces for fixed Schwarzschild time are not always spacelike \cite{Melnikov:2001ex,Melnikov:2002qd,Weinstein:2001kw}, and consequently Hamiltonian dynamics is not well-posed in such coordinates. Secondly, in the standard derivation of the Hawking effect one needs to find the relation between the ingoing and outgoing massless field modes as seen by two asymptotic observers at the past and the future null infinity, respectively \cite{hawking1975}. These field modes follow null trajectories and are conveniently described using null coordinates. However, null coordinates do not lead to a true matter Hamiltonian that can describe the dynamics of these modes. In order to overcome these difficulties, a set of near-null coordinates was recently introduced in \cite{Barman:2017fzh}, which allows one to perform an exact canonical derivation of the Hawking effect. Firstly, these near-null coordinates lead to a non-trivial matter Hamiltonian which describes the dynamics of the field modes. Secondly, these coordinates, being structurally close to the null coordinates, allow one to follow methods similar to those employed for null coordinates. Nevertheless, the usage of these near-null coordinates leads to off-diagonal terms in the spacetime metric. The corresponding spacetime decomposition involves both the \emph{lapse} function and a non-vanishing \emph{shift} vector. Consequently, the dynamics of the field modes depends not just on the matter Hamiltonian but also on the matter diffeomorphism generator. This article is organized as follows. In Section \ref{Schwarzchild-spacetime}, we review the key aspects of a Schwarzschild black hole spacetime. Then we discuss the difficulties that one faces while using Schwarzschild time for spacetime foliation. Subsequently, we introduce a new set of coordinates which allows an exact canonical derivation of the Hawking effect.
The spacetime decomposition into spatial hyper-surfaces using these coordinates does not involve any shift vector. Therefore, the usage of these coordinates leads to a much simpler Hamiltonian-based derivation of the Hawking effect. \section{Schwarzschild spacetime}\label{Schwarzchild-spacetime} Let us consider a Schwarzschild spacetime which is formed at some finite time in the past, possibly due to the collapse of a matter shell, whose exact dynamics, however, is not important for understanding the Hawking effect. The invariant distance element in the Schwarzschild spacetime is given by \begin{equation}\label{SchwarzschildMetric0} ds^2 = - f(r) dt^2 + f(r)^{-1} dr^2 + r^2 d\theta^2 + r^2 \sin^2\theta d\phi^2 ~, \end{equation} where $f(r) = \left(1- r_s /r\right)$ and $r_s = 2 G M$ is the Schwarzschild radius. Throughout the paper, we use \emph{natural units} where $c=\hbar=1$. It is well-known that the Hawking effect is ultimately connected with the structure of the Schwarzschild metric in the $t-r$ plane. Therefore, for simplicity, from now on we consider the $1+1$ dimensional Schwarzschild spacetime with the metric $g_{\mu \nu}$ along with the invariant distance \begin{equation}\label{SchwarzschildMetric1} ds^2 = g_{\mu \nu} dx^{\mu} dx^{\nu} = - f(r) dt^2 + f(r)^{-1} dr^2~. \end{equation} In order to represent the Hawking quanta, here we consider a minimally coupled massless scalar field $\Phi(x)$ whose dynamics is governed by the action \begin{equation}\label{ScalarActionFull} S_{\Phi} = \int d^{2}x \left[ -\frac{1}{2} \sqrt{-g} g^{\mu \nu} \partial_{\mu}\Phi(x) \partial_{\nu}\Phi(x) \right] ~. \end{equation} We shall ignore the back-reaction of this scalar field on the spacetime metric, as is also done in the standard derivation of the Hawking effect \cite{hawking1975}. \section{Canonical formulation}\label{Hamilton-formulation} It turns out that the Schwarzschild time $t$ is not a good choice of time parameter for the canonical formulation, as the hyper-surfaces with a fixed Schwarzschild time $t$ are not always spacelike. We may easily see this from the expression $ds^2_{|dt=0} = f(r)^{-1} dr^2$, where hyper-surfaces for fixed Schwarzschild time are spacelike when $r>r_{s}$ and timelike when $r<r_{s}$ \cite{Melnikov:2001ex,Melnikov:2002qd,Weinstein:2001kw}. In order to consider the spatial region only outside the horizon, usually one defines the so-called \emph{tortoise coordinate} $r_{\star}$ such that $dr_{\star} = f(r)^{-1} dr$. By choosing a suitable constant of integration, $r_{\star}$ can be expressed as \begin{equation}\label{TortoiseCoordinate} r_{\star} = r + r_s \ln \left(r/r_{s} - 1\right) ~. \end{equation} The domain of $r_{\star}$ being $(-\infty,\infty)$, it covers only a part of the full Schwarzschild spacetime, and the corresponding metric becomes \begin{equation}\label{SchwarzschildMetric} ds^2 = f(r) \left[ - dt^2 + dr_{\star}^2 \right] ~, \end{equation} which differs from the $1+1$ dimensional Minkowski metric by a conformal transformation. \subsection{Null coordinates} In the standard derivation \cite{hawking1975}, the Hawking effect is realized by computing the Bogoliubov transformation coefficients between the ingoing field modes that originate from the past null infinity ($\mathscr{I}^{-}$) and the outgoing field modes that arrive at the future null infinity ($\mathscr{I}^{+}$) respectively.
For a massless scalar field, these field modes follow null trajectories and are conveniently described using ingoing and outgoing null coordinates, defined as \begin{equation}\label{NullCoordinates} v = t + r_{\star} ~~;~~ u = t - r_{\star} ~. \end{equation} Subsequently, using these Bogoliubov coefficients, one computes the expectation value of the number density operator corresponding to an observer near future null infinity in the vacuum state corresponding to an observer near past null infinity. This expectation value turns out to be the same as the blackbody spectrum at the Hawking temperature. Therefore, these null coordinates play a key role even in the basic formulation of the Hawking effect in the covariant approach. However, these null coordinates do not lead to a true Hamiltonian for the matter field (\ref{ScalarActionFull}) that can describe the field dynamics. Consequently, these null coordinates are not suitable for performing a Hamiltonian-based canonical derivation of the Hawking effect. \subsection{Timelike and spacelike coordinates} In order to perform an exact canonical derivation of the Hawking effect, a set of near-null coordinates was introduced in Ref. \cite{Barman:2017fzh}. In particular, a timelike coordinate $\tau_{-}$ and a spacelike coordinate $\xi_{-}$ used by an observer near the past null infinity $\mathscr{I}^{-}$, referred to as the observer $\mathbb{O}^{-}$, are given by \begin{equation}\label{NearNullCoordinatesMinus} \tau_{-} = t - (1-\epsilon)r_{\star}~;~~ \xi_{-} = -t - (1+\epsilon)r_{\star} ~, \end{equation} where the parameter $\epsilon$ is taken to be small and positive such that $\epsilon \ll 1$, which justifies calling these coordinates `near-null'. Similarly, one introduces another pair, a timelike coordinate $\tau_{+}$ and a spacelike coordinate $\xi_{+}$, for an observer near the future null infinity $\mathscr{I}^{+}$. These coordinates are given by \begin{equation}\label{NearNullCoordinatesPlus} \tau_{+} = t + (1-\epsilon)r_{\star} ~~;~~ \xi_{+} = -t + (1+\epsilon)r_{\star} ~, \end{equation} and the corresponding observer is referred to as the observer $\mathbb{O}^{+}$. We note that the domains of the coordinates $\tau_{\pm}$ and $\xi_{\pm}$ are both $(-\infty,\infty)$. \subsubsection{Domain of the parameter $\epsilon$}\label{spatial} The main motivation for choosing the parameter $\epsilon$ to be very small in Ref. \cite{Barman:2017fzh} was to keep these coordinates structurally `near' to the null coordinates so that one could employ methods similar to those used for null coordinates. However, in general, any value of the parameter $\epsilon$ in the domain $0<\epsilon<2$ allows one to maintain the timelike and spacelike characteristics of the coordinates $\tau_{\pm}$ and $\xi_{\pm}$ respectively. Therefore, these coordinates can, in principle, be used for the study of the Hawking effect in the canonical formulation over the entire allowed domain of $\epsilon$, which is not necessarily small. However, such coordinates would then lose their `near-null' characteristics. We note that for both the observers $\mathbb{O}^{+}$ and $\mathbb{O}^{-}$, the $1+1$ dimensional Schwarzschild metric (\ref{SchwarzschildMetric}) can be expressed as \begin{equation}\label{GeneralNewMetric} ds^2 = \frac{f(r)}{4} \left[ -\alpha d\tau_{\pm}^2 + \beta d\tau_{\pm} d\xi_{\pm} + \gamma d\xi_{\pm}^2 \right]~, \end{equation} where $\alpha = (2\epsilon+\epsilon^2)$, $\beta = 2(2-\epsilon^2)$ and $\gamma = (2\epsilon-\epsilon^2)$.
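The coefficients $\alpha$, $\beta$ and $\gamma$ follow from direct substitution of the inverse coordinate transformation into Eq. (\ref{SchwarzschildMetric}); the following short \texttt{sympy} sketch (ours, not part of the derivation) verifies them for the observer $\mathbb{O}^{-}$:
\begin{verbatim}
import sympy as sp

eps, tau, xi, t, rs = sp.symbols('epsilon tau xi t r_star', real=True)
dtau, dxi = sp.symbols('dtau dxi')

# Invert tau_- = t - (1 - eps) r_*,  xi_- = -t - (1 + eps) r_*
sol = sp.solve([sp.Eq(tau, t - (1 - eps) * rs),
                sp.Eq(xi, -t - (1 + eps) * rs)], [t, rs])
dt  = sp.diff(sol[t], tau) * dtau + sp.diff(sol[t], xi) * dxi
drs = sp.diff(sol[rs], tau) * dtau + sp.diff(sol[rs], xi) * dxi

# -dt^2 + dr_*^2 should equal (-alpha dtau^2 + beta dtau dxi + gamma dxi^2)/4
quad = sp.expand(-dt**2 + drs**2)
alpha, beta, gamma = 2*eps + eps**2, 2*(2 - eps**2), 2*eps - eps**2
print(sp.simplify(quad - (-alpha*dtau**2 + beta*dtau*dxi + gamma*dxi**2)/4))
# -> 0; and beta = 2*(2 - eps**2) indeed vanishes at eps = sqrt(2)
\end{verbatim}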
For small values of the parameter $\epsilon$, \emph{i.e.}, $\epsilon \ll 1$, the parameter $\beta$ is non-vanishing. Therefore, if one foliates the spacetime into spatial hyper-surfaces by using the time variables $\tau_{\pm}$, the presence of the \emph{off-diagonal} terms in the metric leads to a non-vanishing shift vector. This in turn forces one to deal with the non-vanishing matter diffeomorphism generator \cite{Barman:2017fzh}. \subsubsection{Parameter $\epsilon = \sqrt{2}$} However, one may notice that the \emph{off-diagonal} terms in the metric (\ref{GeneralNewMetric}) vanish identically for both observers if one chooses $\epsilon=\sqrt{2}$, which implies $\beta=0$. Then the corresponding metric becomes \begin{equation}\label{metric} ds^2 = \frac{f(r)}{4} \left[ -\alpha d\tau_{\pm} ^2 + \gamma d\xi_{\pm}^2 \right] ~\equiv~ g^{\pm}_{\mu\nu}dx^{\mu}dx^{\nu} ~, \end{equation} where $\alpha = 2(\sqrt{2} + 1)$ and $\gamma = 2(\sqrt{2} - 1)$. Clearly, if we use $\tau_{\pm}$ as time parameters with $\epsilon = \sqrt{2}$, then the foliation of the spacetime into spatial hyper-surfaces does not involve any shift vector. \subsubsection{Relation between spatial coordinates $\xi_{-}$ and $\xi_{+}$} In order to perform the canonical derivation of the Hawking effect, a key task is to find the relation between the spatial coordinates $\xi_{-}$ and $\xi_{+}$ which are used by the two asymptotic observers. Firstly, from the equations (\ref{NearNullCoordinatesMinus}, \ref{NearNullCoordinatesPlus}), we note that \begin{equation}\label{xirstarRelation} {d\xi_{-}} _{|\tau_{-}} = - 2 d {r_{\star}} _{|\tau_{-}} ~~,~~ {d\xi_{+}} _{|\tau_{+}} = 2 d {r_{\star}} _{|\tau_{+}} ~. \end{equation} However, we may emphasize here that there was no black hole when the ingoing modes relevant for the Hawking effect left $\mathscr{I}^{-}$ as seen by the observer $\mathbb{O}^{-}$. So one should view the coordinates ($\tau_{-},\xi_{-}$) subject to the condition $r_s\rightarrow 0$, which implies $f(r)\rightarrow 1$ and $r_{\star}\rightarrow r$. Now, using the metric (\ref{SchwarzschildMetric1}), one can calculate the non-vanishing Christoffel symbols given by \begin{equation}\label{ChristoffelSymbols} \Gamma^{t}_{tr} = \Gamma^{t}_{rt} = -\Gamma^{r}_{rr} = \frac{f'(r)}{2f(r)} ~~;~~ \Gamma^{r}_{tt} = \frac{1}{2} f(r) f'(r) ~. \end{equation} By introducing an affine parameter $\sigma$ along the null trajectories which are defined by $ds^2=0$, the geodesic equations can be expressed as \begin{equation}\label{GeodesicEquation} \frac{d}{d\sigma} \left(f(r) \frac{dt}{d \sigma}\right) = 0 ~~,~~ \frac{d^2 r}{d \sigma^2} = 0 ~. \end{equation} The Eqns. (\ref{GeodesicEquation}) admit solutions for $r$ as \begin{equation}\label{rsolution} r = C\sigma + D ~, \end{equation} where $C, D$ are constants of integration. Since affine transformations are of the form $\sigma\to\sigma' = C\sigma + D$, the coordinate $r$ itself can be viewed as an affine parameter. We have mentioned that for the observer $\mathbb{O}^{-}$, one should view the coordinates ($\tau_{-},\xi_{-}$) subject to the condition $r_s\rightarrow 0$. Now if we consider a pivotal point $\xi^0_{-}$ on a constant $\tau_{-}$ hyper-surface, with $r^0$ being the corresponding value of the radial coordinate, then the Eqn.
(\ref{xirstarRelation}) implies \begin{equation} \label{iv} (\xi_{-}-\xi^0_{-})_{|\tau_-} = 2(r^0-r)_{|\tau_-} ~, \end{equation} where $(\xi_{-}-\xi^0_{-})_{|\tau_-}$ is to be viewed as the spatial separation between any two ingoing null rays which were at the locations $\xi_{-}$ and $\xi^0_{-}$ respectively on the spatial hyper-surfaces labelled by the time parameter $\tau_-$. On the other hand, when the relevant outgoing modes for Hawking radiation arrive at $\mathscr{I}^{+}$, as seen by the observer $\mathbb{O}^{+}$, the black hole has already been formed. So if we consider a pivotal point $\xi^0_{+}$ on a constant $\tau_{+}$ hyper-surface, then using the Eqns. (\ref{TortoiseCoordinate}) and (\ref{xirstarRelation}) one can express the spatial separation between two given outgoing null rays along the hyper-surface as \begin{equation}\label{xiplusdiff} (\xi_{+}-\xi_{+}^0){_{|\tau_{+}}} = 2(r-r^0)_{|\tau_{+}} +2 r_s\ln\left(1+\frac{r-r^0}{r^0-r_{s}}\right)_{|\tau_{+}} ~. \end{equation} We have already shown that the coordinate $r$ along both ingoing and outgoing null trajectories can be considered as an affine parameter. Therefore, using the geometric optics approximation we can relate the spatial separations of the ingoing and the outgoing modes as \begin{equation}\label{rr0relation} (r-r^0)_{|\tau_{+}} = C' (r^0-r)_{|\tau_{-}} ~, \end{equation} where $C'$ is some constant. Since this constant $C'$ does not affect the final result, for simplicity we set its value to unity. By choosing $\xi_{-}^0 = 2(r^0-r_{s})_{|\tau_{+}}$ and $\xi_{+}^0 = \xi_{-}^0 + 2r_{s}\ln\left(\xi_{-}^0/2r_s\right)$ in the Eqn. (\ref{xiplusdiff}), we can express it as \begin{equation}\label{xiplusximinusrelation} \xi_{+} = \xi_{-} + 2 r_{s}\ln\left(\frac{\xi_{-}}{2r_s} \right) ~. \end{equation} In the domain where $|\xi_{-}|\ll 2r_{s}$, we may approximate the relation (\ref{xiplusximinusrelation}) between the spatial coordinates $\xi_{-}$ and $\xi_{+}$ used by the two asymptotic observers $\mathbb{O}^{-}$ and $\mathbb{O}^{+}$ respectively, as \begin{equation}\label{xirelationapprox} \xi_{-} \approx 2 r_{s} e^{\xi_{+}/2 r_{s}}~. \end{equation} Eq. (\ref{xirelationapprox}) is the key relation which ultimately leads to the Hawking effect. \begin{figure} \includegraphics[width=8.5cm]{Coordinates.pdf} \caption{(a) Spatial separation between two ingoing null rays along a $\tau_{-}$ constant hyper-surface. (b) Spatial separation between two outgoing null rays along a $\tau_{+}$ constant hyper-surface. (c) The spacelike and timelike coordinates for $\epsilon=\sqrt{2}$ drawn on a Penrose diagram together with a collapsing shell of matter denoted by the shaded region. } \label{fig:CoordinatesPenrose} \end{figure} \subsubsection{Scalar matter field} We note that by using a conformally transformed spacetime metric $g^{0}_{\mu\nu}$ such that $g^{\pm}_{\mu\nu} = \tfrac{1}{4} \gamma f(r)~g^{0}_{\mu\nu}$, the scalar field action (\ref{ScalarActionFull}) for both the observers can be written in the form \begin{equation}\label{ReducedScalarAction2DFlat} S_{\varphi} = \int d\tau_{\pm} d\xi_{\pm} \left[-\frac{1}{2} \sqrt{-g^{0}} g^{0\mu\nu} \partial_{\mu}\varphi \partial_{\nu} \varphi \right] ~, \end{equation} where the metric $g^{0}_{\mu\nu}$ is flat and consequently we can use the standard techniques of Fock quantization for the matter field.
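Before proceeding, we note as a quick numerical sanity check (a sketch, with $r_s$ set to unity) that inserting $\xi_{+}$ from Eq. (\ref{xiplusximinusrelation}) into the right-hand side of Eq. (\ref{xirelationapprox}) recovers $\xi_{-}$ up to a relative error of order $\xi_{-}/2r_{s}$:
\begin{verbatim}
import numpy as np

r_s = 1.0
xi_minus = 2 * r_s * np.array([1e-4, 1e-3, 1e-2])            # |xi_-| << 2 r_s
xi_plus = xi_minus + 2 * r_s * np.log(xi_minus / (2 * r_s))  # exact relation
approx = 2 * r_s * np.exp(xi_plus / (2 * r_s))               # approximate inverse
print(np.abs(approx / xi_minus - 1))   # relative error ~ xi_minus / (2 r_s)
\end{verbatim}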
Using the time coordinates $\tau_{\pm}$, we can compute the scalar matter Hamiltonian as \begin{equation}\label{ScalarHamiltonianFullMinus} H^{\pm}_{\varphi} = \int d\xi_{\pm} ~ N \left[\frac{\Pi^2}{2\sqrt{q}} + \frac{\sqrt{q}}{2}(\partial_{\xi_{\pm}}\varphi)^2 \right] ~, \end{equation} where the \emph{lapse function} $N = \sqrt{\alpha/\gamma} = (\sqrt{2}+1) $ and the determinant of the spatial metric $q = 1$. The Poisson bracket between the field $\varphi$ and its conjugate momentum $\Pi$ for both the observers can be expressed as \begin{equation}\label{PoissonBracketMinus} \{\varphi(\tau_{\pm},\xi_{\pm}), \Pi(\tau_{\pm},\xi_{\pm}')\} = \delta(\xi_{\pm} - \xi_{\pm}') ~. \end{equation} Using the equations of motion, the field momentum $\Pi$ can be expressed as \begin{equation}\label{FieldMomentumMinus} \Pi(\tau_{\pm},\xi_{\pm}) = \frac{\sqrt{q}}{N} (\partial_{\tau_{\pm}}\varphi) ~. \end{equation} \subsubsection{Fourier modes} The spatial volume $V_{\pm} = \int d\xi_{\pm}\sqrt{q}$ is formally divergent. Therefore, to avoid dealing with explicitly divergent quantity, we choose a fiducial box with finite volume as \begin{equation}\label{SpatialVoumeMinus} V_{\pm} = \int_{\xi_{\pm}^L}^{\xi_{\pm}^R} d\xi_{\pm}\sqrt{q} = {\xi_{\pm}^R} - {\xi_{\pm}^L} \equiv L_{\pm} ~, \end{equation} where ${\xi_{\pm}^L}$ and ${\xi_{\pm}^R}$ are left and right coordinate edges associated with the box. We may now define the Fourier modes for the scalar field as \cite{Hossain:2010eb} \begin{eqnarray} \varphi(\tau_{\pm},\xi_{\pm}) &=& \frac{1}{\sqrt{V_{\pm}}}\sum_{k} \tilde{\phi}^{\pm}_{k} (\tau_{\pm})~ e^{i k \xi_{\pm}}~,\nonumber\\ \Pi (\tau_{\pm},\xi_{\pm})&=& \frac{1}{\sqrt{V_{\pm}}} \sum_{k} \sqrt{q}~ \tilde{\pi}^{\pm}_{k} (\tau_{\pm})~ e^{i k \xi_{\pm}} ~,\label{FourierModesDefinitionMinus} \end{eqnarray} where complex-valued Fourier modes $\tilde{\phi}^{\pm}_{k}$ and $\tilde{\pi}^{\pm}_{k}$ are subject to the reality condition as we are considering the scalar field $\varphi$ to be a real-valued field. One may check that the Kronecker delta and the Dirac delta can now be expressed as \begin{eqnarray} \int d\xi_{\pm}\sqrt{q} ~ e^{i(k-k')\xi_{\pm}} = V_{\pm} \delta_{k,k'} ~,\label{KroneckerDeltasMinus}\\ \sum_k e^{ik(\xi_{\pm}-\xi_{\pm}')} = V_{\pm} \delta(\xi_{\pm}-\xi_{\pm}')/\sqrt{q} ~.\label{DiracDeltasMinus} \end{eqnarray} The Eqns. (\ref{KroneckerDeltasMinus}) and (\ref{DiracDeltasMinus}) together allow the values of the wave-vector to be $k \in \{k_l ~| k_l = 2\pi l/L_{\pm}\}$ with $l$ being a non-zero integer. Using Fourier modes, the scalar field Hamiltonian (\ref{ScalarHamiltonianFullMinus}) for both the observers can be expressed as $ H^{\pm}_{\varphi} = \sum_k N \mathcal{H}_k^{\pm}$ where the Hamiltonian density for the $k^{th}$ mode is \begin{equation}\label{FourierHamiltonianDensity} \mathcal{H}_k^{\pm} = \frac{1}{2} \tilde{\pi}^{\pm}_{k} \tilde{\pi}^{\pm}_{-k} + \frac{1}{2} |k|^2 \tilde{\phi}^{\pm}_{k} \tilde{\phi}^{\pm}_{-k} ~. \end{equation} The Poisson bracket between the Fourier modes and their conjugate momenta can be expressed as \begin{equation}\label{FourierPoissonBracketMinus} \{\tilde{\phi}^{\pm}_{k}, \tilde{\pi}^{\pm}_{-k'}\} = \delta_{k,k'} ~. \end{equation} \subsubsection{Relation between Fourier modes} In order to establish the relation between the Fourier modes of two asymptotic observers, firstly we note that the matter field being scalar, it can be expressed in general as $\varphi(\tau_{-}(\tau_{+},\xi_{+}),\xi_{-}(\tau_{+},\xi_{+})) = \varphi(\tau_{+},\xi_{+})$. 
Further, in the standard formulation of the Hawking effect, the observer near $\mathscr{I}^{-}$ deals with the \emph{ingoing} field modes, for which $v = t + r_{\star} = (\tau_{-} - (\sqrt{2}-1)\xi_{-})/\sqrt{2}$ is \emph{constant}. On the other hand, the observer near $\mathscr{I}^{+}$ deals with the \emph{outgoing} field modes, for which $u = t - r_{\star} = (\tau_{+} - (\sqrt{2}-1)\xi_{+})/\sqrt{2}$ is \emph{constant}. This aspect allows one to get a relation between the field momenta \cite{Barman:2017fzh} as \begin{equation} \Pi(\tau_{+},\xi_{+}) = (\partial \xi_{-}/\partial \xi_{+}) \Pi(\tau_{-},\xi_{-})~.\nonumber \end{equation} The Fourier modes and the conjugate momenta on a given hyper-surface labeled by $\tau_{+}^0$, as seen by the observer $\mathbb{O}^{+}$, can be expressed using the modes corresponding to the observer $\mathbb{O}^{-}$, on a given hyper-surface labeled by $\tau_{-}^0$, as \begin{eqnarray} \tilde{\phi}^{+}_{\kappa}(\tau_{+}^0) &=& \sum_{k} \tilde{\phi}^{-}_{k}(\tau_{-}^0) F_{0}(k,-\kappa) ~,\label{FieldModesRelation}\\ \tilde{\pi}^{+}_{\kappa}(\tau_{+}^0) &=& \sum_{k} \tilde{\pi}^{-}_{k} (\tau_{-}^0)F_{1}(k,-\kappa) ~,\label{FieldMomentaModesRelation} \end{eqnarray} where the coefficient functions $F_{m}(k,\kappa)$ are given by \begin{equation}\label{FFunctionGeneral} F_{m}(k,\kappa) = \frac{1}{\sqrt{V_{-} V_{+}}} \int d\xi_{+} \left(\frac{\partial \xi_{-}}{\partial \xi_{+}} \right)^m ~e^{i k \xi_{-} + i \kappa \xi_{+}} ~, \end{equation} with $m=0,1$. The coefficient functions $F_{m}(k,\kappa)$ play a role similar to that of the Bogoliubov coefficients. Using the expression (\ref{FFunctionGeneral}), it can be shown that $F_{0}(k,\kappa)$ and $F_{1}(k,\kappa)$ are related as \cite{Barman:2018ina} \begin{equation}\label{F0F1Relation} F_{1}(\pm|k|,\kappa) = \mp \frac{\kappa}{|k|}~F_{0}(\pm|k|,\kappa) ~. \end{equation} The coefficient function $F_{0}(k,\kappa)$ is formally divergent as the integrand is purely oscillatory. However, it can be computed by introducing a suitable regulator $\delta$ such that $\lim_{\delta\to 0} F_{0}^{\delta}(\pm|k|,\kappa) = F_{0}(\pm|k|,\kappa)$, and the regulated coefficient function evaluates to \cite{Hossain:2014fma,Barman:2017fzh} \begin{equation}\label{FFunctionEvaluated} F_{0}^{\delta}(\pm|k|,\kappa) = \frac{(2r_s)^{-\beta} |k|^{-\beta-1} } {\sqrt{V_{-} V_{+}}} e^{\pm i \pi(\beta+1)/2} ~\Gamma(\beta+1) ~, \end{equation} where $\Gamma(\beta+1)$ is the Gamma function and $\beta = (2i\kappa r_s + \delta - 1)$. From the Eqn. (\ref{FFunctionEvaluated}), one can deduce an important relation as follows \begin{equation}\label{F0F0Relation} F_{0}^{\delta}(-|k|,\kappa) = e^{2\pi r_s\kappa-i\delta\pi} ~F_{0}^{\delta}(|k|,\kappa) ~. \end{equation} \subsubsection{Number density of Hawking quanta} Using the Eqns.
(\ref{FieldModesRelation}), (\ref{FieldMomentaModesRelation}), (\ref{F0F1Relation}) and (\ref{F0F0Relation}) one can express the Hamiltonian density (\ref{FourierHamiltonianDensity}) corresponding to the \emph{positive} frequency modes, \emph{i.e.}, $\kappa>0$, for the observer $\mathbb{O}^{+}$ in terms of the Fourier modes of the observer $\mathbb{O}^{-}$ as \cite{Barman:2017fzh} \begin{equation}\label{ModesHamiltonianRelations0} \frac{\mathcal{H}_{\kappa}^{+}}{\kappa} = \frac{h_{\kappa}^1}{\kappa}+ \frac{e^{2\pi\kappa/\mathrm{\varkappa}} + 1}{e^{2\pi\kappa/\mathrm{\varkappa}} - 1} \left[ \frac{1}{\zeta(1+2\delta)} \sum_{l=1}^{\infty} \frac{1}{l^{1+2\delta}} ~ \frac{\mathcal{H}_{k_l}^{-}}{k_l} \right]~, \end{equation} where $\mathrm{\varkappa}=1/(2r_s)$ is the \emph{surface gravity} at the Schwarzschild event horizon and $\zeta(1+2\delta) = \sum_{l=1}^{\infty} l^{-(1+2\delta)}$ is the \emph{Riemann zeta function}. The term $ h_{\kappa}^1 = \sum_{k\neq k'} [ \frac{1}{2} F_{1}(k,-\kappa) F_{1}(-k',\kappa) ~ \tilde{\pi}^{-}_{k} \tilde{\pi}^{-}_{-k'} + \frac{1}{2} |\kappa|^2 F_{0}(k,-\kappa) F_{0}(-k',\kappa) ~ \tilde{\phi}^{-}_{k} \tilde{\phi}^{-}_{-k'}]$, which contains only off-diagonal ($k\neq k'$) products of the Fourier modes and their conjugate momenta, drops out of the vacuum expectation value. It is well known that the Fourier modes corresponding to a massless free scalar field can be viewed as a system of decoupled harmonic oscillators, as can also be seen from the Eqn. (\ref{FourierHamiltonianDensity}). Therefore, in Fock quantization $\langle\hat{\mathcal{H}}_{k}^{-}\rangle \equiv \langle 0_{-}| \hat{\mathcal{H}}_{k}^{-}|0_{-}\rangle = \frac{1}{2}|k|$, where the state $|0_{-}\rangle$ refers to the vacuum state of the observer $\mathbb{O}^{-}$. Consequently, the expectation value of the number density operator $\hat{N}^{+}_{\kappa} \equiv \hat{\mathcal{H}}_{\kappa}^{+}/\kappa - \frac{1}{2} $ corresponding to the observer $\mathbb{O}^{+}$, in the vacuum state of the observer $\mathbb{O}^{-}$, can be evaluated as \begin{equation}\label{NumberOperatorVEV} N_{\omega} \equiv \langle \hat{N}^{+}_{\omega=\kappa}\rangle = \frac{1}{e^{2\pi\omega/\mathrm{\varkappa}} - 1} = \frac{1}{e^{(4\pi r_s)\omega} - 1} ~. \end{equation} The Eqn. (\ref{NumberOperatorVEV}) corresponds to a thermal spectrum of bosons at the temperature $T_H = \mathrm{\varkappa}/(2\pi k_B) = 1/(4\pi r_s k_B)$. This phenomenon is referred to as the Hawking effect and the associated temperature is known as the Hawking temperature. \bigskip \section{Discussions}\label{discussion} In this article we have presented an exact analytical derivation of the Hawking effect in the canonical formulation, where one does not need to deal with the matter diffeomorphism generator. In order to achieve this simplification, we have introduced a new set of coordinates in which the resultant spacetime metric is diagonal. Consequently, the foliation of the spacetime into spatial hyper-surfaces, which is required for the canonical derivation, does not introduce any shift vector. Therefore, these new coordinates lead to a much simpler canonical derivation of the Hawking effect compared to the one reported in Ref. \cite{Barman:2017fzh}, where one uses the so-called near-null coordinates. Clearly, these coordinates would be quite useful for testing various new quantization techniques \cite{Ashtekar:2002sn,HALVORSON200445,Hossain:2010eb, Hossain:2014fma, Hossain:2016klt, Hossain:2015xqa,Barman:2017vqx}.
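For completeness, the phase identity behind the relation (\ref{F0F0Relation}), which is the sole source of the thermal factor in Eq. (\ref{NumberOperatorVEV}), can be checked symbolically; a minimal \texttt{sympy} sketch (ours, comparing the exponents of the two sides):
\begin{verbatim}
import sympy as sp

kappa, r_s, delta = sp.symbols('kappa r_s delta', positive=True)
beta = 2 * sp.I * kappa * r_s + delta - 1

# F_0(-|k|, kappa) / F_0(|k|, kappa) = exp(-i pi (beta + 1)), by the
# regulated expression; its exponent should equal 2 pi r_s kappa - i delta pi.
lhs = sp.expand(-sp.I * sp.pi * (beta + 1))
rhs = 2 * sp.pi * r_s * kappa - sp.I * delta * sp.pi
print(sp.simplify(lhs - rhs))   # -> 0
\end{verbatim}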
We have mentioned earlier that the spacetime metric is diagonal in these new coordinates and up to a scaling the metric is similar to a conformally transformed Minkowski metric. However, it can be checked that these new coordinates cannot be obtained simply by applying a Lorentz boost from $(t,r_{\star})$ coordinates. In this context we may mention that it would be quite interesting to use the canonical formulation as given here, to study the issue of ambiguity in the expression of Hawking temperature due to inequivalent choices of the inertial frames as shown by 't Hooft \cite{THOOFT198445,tHooft:1984kcu,Akhmedov:2006pg,Akhmedov:2008ru}. \begin{acknowledgments} We would like to thank Gopal Sardar, Subhajit Barman and Saumya Ghosh for many useful discussions. CS would like to thank IISER Kolkata for supporting this work through a doctoral fellowship. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Graph colouring is one of the most fundamental and studied problems in combinatorics and computer science. A graph $G$ is called $k$-colourable if there is an assignment of colours $\{1,2,\ldots,k\}$ to the vertices of $G$ so that any two adjacent vertices are assigned different colours. The chromatic number of $G$, denoted by $\chi(G)$, is the smallest integer $k$ for which $G$ is $k$-colourable. Deciding whether $\chi(G)\leq k$ appeared on Karp's original list of 21 NP-complete problems~\cite{Karp72:reducibility}, and is NP-hard for every $k\geq 3$. In particular, it is NP-hard to decide whether $\chi(G)\leq 3$ or $\chi(G)>3$. Put differently (thanks to self-reducibility of graph colouring), it is NP-hard to find a $3$-colouring of $G$ even if $G$ is promised to be $3$-colourable. In the \emph{approximate graph colouring} problem, we are allowed to use more colours than needed. For instance, given a $3$-colourable graph $G$ on $n$ vertices, can we find a colouring of $G$ using significantly fewer than $n$ colours? On the positive side, the currently best polynomial-time algorithm of Kawarabayashi and Thorup~\cite{Kawarabayashi17:jacm} finds a colouring using $O(n^{0.19996})$ colours. Their work continues a long line of research and is based on a semidefinite relaxation. On the negative side, it is believed that finding a $c$-colouring of a $k$-colourable graph is NP-hard for all constants $3\leq k\leq c$. Already in this regime (let alone for non-constant $c$) our understanding remains rather limited, despite lots of work and the development of complex techniques, as we will survey in Section~\ref{sec:related}. A natural and studied generalisation of graph colourings is that of graph homomorphisms and, more generally, constraint satisfaction problems~\cite{Hell08:survey}. Given two graphs $G$ and $H$, a map $h:V(G)\to V(H)$ is a \emph{homomorphism} from $G$ to $H$ if $h$ preserves edges; that is, if $\{h(u),h(v)\}\in E(H)$ whenever $\{u,v\}\in E(G)$~\cite{hell2004homomorphism_book}. A celebrated result of Hell and Ne\v{s}et\v{r}il established a dichotomy for the homomorphism problem with a fixed target graph~$H$, also known as the \emph{$H$-colouring problem}: deciding whether an input graph $G$ has a homomorphism to $H$ is solvable in polynomial time if $H$ is bipartite or if $H$ has a loop; for all other $H$ this problem is NP-hard~\cite{HellN90}. Note that the $H$-colouring problem for $H=K_k$, the complete graph on $k$ vertices, is precisely the graph colouring problem with $k$ colours. The constraint satisfaction problem (CSP) is a generalisation of the graph homomorphism problem from graphs to arbitrary relational structures. One type of CSP that has attracted a lot of attention is the one with a fixed target structure, also known as the \emph{non-uniform} CSP; see, e.g., the work of Jeavons, Cohen, and Gyssens~\cite{Jeavons97:jacm}, Bulatov~\cite{Bulatov06:3-elementJACM,Bulatov11:conservative}, and Barto and Kozik~\cite{Barto14:jacm,Barto16:sicomp}. Following the above mentioned dichotomy of Hell and Ne\v{s}et\v{r}il for the $H$-colouring~\cite{HellN90} and a dichotomy result of Schaefer for Boolean CSPs~\cite{Schaefer78:complexity}, Feder and Vardi famously conjectured a dichotomy for all non-uniform CSPs~\cite{Feder98:monotone}. The Feder-Vardi conjecture was recently confirmed independently by Bulatov~\cite{Bulatov17:focs} and Zhuk~\cite{Zhuk17:focs}. 
In fact, both proofs establish the so-called ``algebraic dichotomy'', conjectured by Bulatov, Jeavons, and Krokhin~\cite{Bulatov05:classifying}, which delineates the tractability boundary in algebraic terms. A high-level idea of the tractability boundary is that of higher-order symmetries, called polymorphisms, which allow one to combine several solutions to a CSP instance into a new solution. The lack of non-trivial\footnote{We note that projections/dictators are not the only trivial polymorphisms, cf.~\cite[Example~41]{bkw17:survey}.} polymorphisms guarantees NP-hardness, as shown already in~\cite{Bulatov05:classifying}. The works of Bulatov and Zhuk show that \emph{any} non-trivial polymorphism guarantees tractability. We refer the reader to a recent accessible survey by Barto, Krokhin, and Willard on the algebraic approach to CSPs~\cite{bkw17:survey}. Given two graphs $G$ and $H$ such that $G$ is $H$-colourable (i.e., there is a homomorphism from $G$ to~$H$), the \emph{promise constraint satisfaction problem} parametrised by $G$ and $H$, denoted by $\PCSP(G,H)$, is the following computational problem: given a $G$-colourable graph, find an $H$-colouring of this graph.\footnote{What we described is the ``search version'' of PCSPs. In the ``decision version'', the goal is to say \textsf{YES} if the input graph is $G$-colourable and \textsf{NO} if the input graph is not $H$-colourable. The decision PCSP reduces to the search PCSP but they are not known to be equivalent in general. However, as far as we know, all known positive results are for the search version, while all known negative results, including the new results from this paper, are for the decision version.} More generally, $G$ and $H$ do not have to be graphs but arbitrary relational structures. Note that if $G=H$ then we obtain the (search version of the) standard $H$-colouring and constraint satisfaction problem. PCSPs have been studied as early as in the classic work of Garey and Johnson~\cite{Garey76:jacm} on approximate graph colouring but a systematic study originated in the paper of Austrin, Guruswami, and H{\aa}stad~\cite{Austrin17:sicomp}, who studied a promise version of $(2k+1)$-SAT, called $(2+\epsilon)$-SAT. In a series of papers~\cite{Brakensiek16:ccc,Brakensiek18:soda,Brakensiek19:soda}, Brakensiek and Guruswami linked PCSPs to the universal-algebraic methods developed for the study of non-uniform CSPs~\cite{bkw17:survey}. In particular, the notion of weak polymorphisms, identified in~\cite{Austrin17:sicomp}, allowed for some ideas developed for CSPs to be used in the context of PCSPs. The algebraic theory of PCSPs was then lifted to an abstract level by Bul\'in, Krokhin, and Opr\v{s}al in~\cite{BulinKO18}. Consequently, this theory was used by Ficak, Kozik, Ol\v{s}\'ak, and Stankiewicz to obtain a dichotomy for symmetric Boolean PCSPs~\cite{Ficak19:icalp}, thus improving on an earlier result from~\cite{Brakensiek18:soda}, which gave a dichotomy for symmetric Boolean PCSPs with folding (negations allowed). \subsection{Prior and related work} \label{sec:related} While the NP-hardness of finding a $3$-colouring of a $3$-colourable graph was obtained by Karp~\cite{Karp72:reducibility} in 1972, the NP-hardness of finding a $4$-colouring of a $3$-colourable graph was only proved in 2000 by Khanna, Linial, and Safra~\cite{Khanna00:combinatorica} (see also the work of Guruswami and Khanna for a different proof~\cite{Guruswami04:sidma}).
This result implied NP-hardness of finding a $(k+2\lfloor k/3\rfloor-1)$-colouring of a $k$-colourable graph for $k\geq 3$~\cite{Khanna00:combinatorica}. Early work of Garey and Johnson established NP-hardness of finding a $(2k-5)$-colouring of a $k$-colourable graph for $k\geq 6$~\cite{Garey76:jacm}. In 2016, Brakensiek and Guruswami proved NP-hardness of finding a $(2k-2)$-colouring of a $k$-colourable graph for $k\geq 3$~\cite{Brakensiek16:ccc}. Only very recently, Bul\'in, Krokhin, and Opr\v{s}al showed that finding a $5$-colouring of a $3$-colourable graph, and more generally, finding a $(2k-1)$-colouring of a $k$-colourable graph for any $k\geq 3$, is NP-hard~\cite{BulinKO18}. In 2001, Khot gave an asymptotic result -- he showed that for sufficiently large $k$, finding a $k^{\frac{1}{25}(\log k)}$-colouring of a $k$-colourable graph is NP-hard~\cite{Khot01}. In 2013, Huang improved the gap. For sufficiently large $k$, he showed that finding a $2^{\Omega(k^{1/3})}$-colouring of a $k$-colourable graph is NP-hard~\cite{Huang13}. The NP-hardness of colouring ($k$-colourable graphs) with $(2k-1)$ colours for $k\geq 3$ from~\cite{BulinKO18} and with $2^{\Omega(k^{1/3})}$ colours for sufficiently large $k$ from~\cite{Huang13} constitute the currently strongest known NP-hardness results for approximate graph colouring. Under stronger assumptions (Khot's 2-to-1 Conjecture~\cite{Khot02stoc} for $k\geq 4$ and its non-standard variant for $k=3$), Dinur, Mossel, and Regev showed that finding a $c$-colouring of a $k$-colourable graph is NP-hard for all constants $3\leq k\leq c$~\cite{Dinur09:sicomp}. A variant of Khot's 2-to-1 Conjecture with imperfect completeness has recently been proved~\cite{DinurKKMS18,KhotMS18}, which implies hardness for approximate colouring variants where most but not all of the graph is guaranteed to be $k$-colourable. Hypergraph colouring, a special case of PCSPs, is another intensively studied line of work. A $k$-colouring of a hypergraph is an assignment of colours $\{1,2,\ldots,k\}$ to its vertices that leaves no hyperedge monochromatic. Dinur, Regev, and Smyth showed that for any constants $2\leq k\leq c$, it is NP-hard to find a $c$-colouring of a given $3$-uniform $k$-colourable hypergraph~\cite{Dinur05:combinatorica}. Other notions of colourings (such as different types of rainbow colourings) for hypergraphs were studied by Brakensiek and Guruswami~\cite{Brakensiek16:ccc,Brakensiek17:approx}, Guruswami and Lee~\cite{Guruswami18:combinatorica}, and Austrin, Bhangale, and Potukuchi~\cite{Austrin18:arxiv}. Some results are also known for colourings with a super-constant number of colours. For graphs, conditional hardness was obtained by Dinur and Shinkar~\cite{Dinur10:approx}. For hypergraphs, NP-hardness results were obtained in the recent work of Bhangale~\cite{Bhangale18:icalp} and Austrin, Bhangale, and Potukuchi~\cite{Austrin19:arxiv}. \section{Results} For two graphs or digraphs $G$, $H$, we write $G \to H$ if there exists a homomorphism from $G$ to $H$.\footnote{In this paper, we allow graphs to have loops: the existence of homomorphisms for such graphs is trivial, but this allows us to make statements about graph constructions that will work without exceptions.} We are interested in the following computational problem. \begin{definition} Fix two graphs $G$ and $H$ with $G\to H$. The (decision variant of the) $\PCSP(G,H)$ is the following: given an input graph $I$, output $\textsf{YES}$ if $I\to G$, and $\textsf{NO}$ if $I\not\to H$.
\end{definition} To state our results it will be convenient to use the following definition. \begin{definition} A graph $H$ is \emph{left-hard} if for every non-bipartite graph $G$ with $G \to H$, $\PCSP(G,H)$ is NP-hard. A graph $G$ is \emph{right-hard} if for every loop-less graph $H$ with $G \to H$, $\PCSP(G,H)$ is NP-hard. \end{definition} If $G \to G'$ and $H' \to H$, then $\PCSP(G,H)$ trivially reduces to $\PCSP(G',H')$ (this is called \emph{homomorphic relaxation}~\cite{BulinKO18}; intuitively, {increasing} the promise gap makes the problem {easier}). Therefore, if $H$ is a left-hard graph, then all graphs left of $H$ (that is, $H'$ such that $H' \to H$) are trivially left-hard.\footnote{Note that by our definition, bipartite graphs are vacuously left-hard.} If $G$ is right-hard, then all graphs right of $G$ are right-hard. For the same reason, since every non-bipartite graph admits a homomorphism from an odd cycle, to show that $H$ is left-hard it suffices to show that $\PCSP(C_n,H)$ is NP-hard for arbitrarily large odd $n$, where $C_n$ denotes the cycle on $n$ vertices. Dually, since every loop-less graph admits a homomorphism to a clique, to show that $G$ is right-hard it suffices to show that $\PCSP(G,K_k)$ is NP-hard for arbitrarily large $k$. It is conjectured that all non-trivial PCSPs for (undirected) graphs are NP-hard, greatly extending Hell and Ne\v{s}et\v{r}il's theorem: \begin{conjecture}[Brakensiek and Guruswami~\cite{Brakensiek18:soda}]\label{conj:main} $\PCSP(G,H)$ is NP-hard for every non-bipartite loop-less $G,H$. Equivalently, every loop-less graph is left-hard. Equivalently, every non-bipartite graph is right-hard. \end{conjecture} In addition to the results on classical colourings discussed above (the case where $G$ and $H$ are cliques), the following result was recently obtained in a novel application of topological ideas. \begin{theorem}[Krokhin and Opr\v{s}al~\cite{KrokhinO19}]\label{thm:K3} $K_3$ is left-hard. \end{theorem} \subsection{Improved hardness of classical colouring} In Section~\ref{sec:arc}, we focus on right-hardness. We use a simple construction called the \emph{arc digraph} or \emph{line digraph}, which decreases the chromatic number of a graph in a controlled way. The construction allows us to conclude the following, in a surprisingly simple way: \begin{proposition}\label{prop:right-hard} There exists a right-hard graph if and only if $K_4$ is right-hard.% \footnote{Jakub Opr\v{s}al and Andrei Krokhin realised that in this Proposition, 4 can be improved to 3 by using the fact that $\delta(\delta(K_4))$ is 3-colourable, as proved by Rorabaugh, Tardif, Wehlau, and Zaguia~\cite{RTWZ16}. Details will appear in a future journal version.} \end{proposition} More concretely, we show that $\PCSP(K_6,K_{2^k})$ log-space reduces to $\PCSP(K_4,K_k)$, for all $k \geq 4$. This contrasts with~\cite[Proposition~10.3]{BartoBKO19},\footnote{\cite{BartoBKO19} is a full version of~\cite{BulinKO18}. Proposition~10.3 in~\cite{BartoBKO19} is Proposition 5.31 in the previous two versions of~\cite{BartoBKO19}.} where it is shown to be impossible to obtain such a reduction with \emph{minion homomorphisms}: an algebraic reduction, described briefly in Section~\ref{subsec:category}, central to the framework of~\cite{BulinKO18,BartoBKO19} (in particular, there exists a $k$ such that $\PCSP(K_4,K_k)$ admits no minion homomorphism to any $\PCSP(K_{n'},K_{k'})$ for $4 < n' \leq k'$).
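To make the promise structure concrete, the following brute-force Python sketch (our illustration only, not part of the results; the neighbour-dictionary encoding and the names \texttt{hom\_exists} and \texttt{clique} are ours) tests homomorphism existence and exhibits an instance in the promise gap of $\PCSP(K_3,K_4)$. It is exponential-time and meant only for very small graphs.
\begin{verbatim}
from itertools import product

# Graphs as dicts {vertex: set of neighbours}; symmetric, loops allowed.
def hom_exists(G, H):
    """Brute-force test for a graph homomorphism G -> H."""
    VG, VH = list(G), list(H)
    for image in product(VH, repeat=len(VG)):
        f = dict(zip(VG, image))
        if all(f[v] in H[f[u]] for u in G for v in G[u]):
            return True
    return False

def clique(n):
    return {i: {j for j in range(n) if j != i} for i in range(n)}

# Decision PCSP(G, H) on an instance I: answer YES if I -> G, NO if
# I -/-> H; on instances in the promise gap (not G-colourable but
# H-colourable), either answer is accepted.  Here I = K_4 lies in the
# gap of PCSP(K_3, K_4):
G, H, I = clique(3), clique(4), clique(4)
print(hom_exists(I, G), hom_exists(I, H))  # False True
\end{verbatim}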
Furthermore, we strengthen the best known asymptotic hardness: Huang~\cite{Huang13} showed that for all sufficiently large $n$, $\PCSP(K_n, K_{2^{n^{1/3}}})$ is NP-hard. We improve this in two ways, using Huang's result as a black box. First, we improve the asymptotics from sub-exponential $2^{n^{1/3}}$ to single-exponential $\B{n} \sim \frac{2^n}{\sqrt{\pi n/2}}$. Second, we show the claim holds for $n$ as low as $4$. \newcommand{\ThmAsymp}{ For all $n \geq 4$, $\PCSP(K_n, K_{\B{n}-1})$ is NP-hard. } \begin{theorem}[\textbf{Main Result \#1}]\label{thm:asymp} \ThmAsymp \end{theorem} In comparison, the previous best result relevant for all integers $n$ was proved by Bul\'in, Krokhin, and Opr\v{s}al~\cite{BulinKO18}: $\PCSP(K_n, K_{2n-1})$ is NP-hard for all $n\geq 3$. For $n=3$ we are unable to obtain any results; for $n=4$ the new bound $\B{n}-1=5$ is worse than $2n-1=7$, while for $n=5$ the two bounds coincide at~9. However, already for $n=6$ we improve the bound from $2n-1=11$ to $\B{n}-1=19$. \subsection{Left-hardness and topology} In Section~\ref{sec:functors}, we focus on left-hardness. The main idea behind Krokhin and Opr\v{s}al's~\cite{KrokhinO19} proof that $K_3$ is left-hard is simple to state. To prove that $\PCSP(C_n,H)$ is NP-hard for all odd $n$, the algebraic framework of~\cite{BulinKO18} shows that it is sufficient to establish certain properties of \emph{polymorphisms}: homomorphisms $f \colon C_n^{L} \to H$ for $L \in \mathbb{N}$ (where $G^L=G \times \dots \times G$ is the $L$-fold tensor product\footnote{ The \emph{tensor} (or \emph{categorical}) \emph{product} $G \times H$ of graphs $G,H$ has pairs $(g,h) \in V(G) \times V(H)$ as vertices and $(g,h)$ is adjacent to $(g',h')$ whenever $g$ is adjacent to $g'$ (in $G$) and $h$ is adjacent to $h'$ (in $H$). }). For large $n$ the graph $C_n^{L}$ looks like an $L$-torus: an $L$-fold product of circles, so the pertinent information about $f$ seems to be subsumed by its topological properties (such as \emph{winding numbers}, when $H$ is a cycle). We refer to~\cite{KrokhinO19} for further details, but this general principle applies to any $H$ and in fact we prove (in Theorem~\ref{thm:topoMain} below) that whether $H$ is left-hard or not depends \emph{only} on its topology.\looseness=-1 The topology we associate with a graph is its \emph{box complex}. See Appendix~\ref{app:topo} for formal definitions and statements. Intuitively, the box complex $\BBox{H}$ is a topological space built from $H$ by taking the tensor product $H \times K_2$ and then gluing faces to each four-cycle and, more generally, gluing higher-dimensional faces to complete bipartite subgraphs. The added faces ensure that the box complex of a product of graphs is the same as the product space of their box complexes: thanks to this, $\BBox{C_n^{L}}$ is indeed equivalent to the $L$-torus. The product with $K_2$ equips the box complex with a symmetry that swaps the two sides of $H \times K_2$. This makes the resulting space a $\mathbb{Z}_2$-space: a topological space together with a continuous involution from the space to itself, which we denote simply as $-$. A \emph{$\mathbb{Z}_2$-map} between two $\mathbb{Z}_2$-spaces is a continuous function which preserves this symmetry: $f(-x)=-f(x)$. This allows us to state concisely that a given map is ``non-trivial'' (in contrast, there is always \emph{some} continuous function from one space to another: just map everything to a single point).
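As a concrete illustration of the last definition (a standard example, not specific to~\cite{KrokhinO19}): view $\Sphere^1$ as the unit circle in $\mathbb{C}$ with the antipodal involution $z \mapsto -z$. Then $z \mapsto z^3$ is a $\mathbb{Z}_2$-map, since $(-z)^3 = -z^3$, whereas $z \mapsto z^2$ is not, since $(-z)^2 = z^2 \neq -z^2$; both maps are continuous, but only the first is ``non-trivial'' in the equivariant sense.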
The main use of the box complex is then the statement that every graph homomorphism $G \to H$ induces a $\mathbb{Z}_2$-map from $\BBox{G}$ to $\BBox{H}$. Graph homomorphisms can thus be studied with tools from algebraic topology. The classical example of this is an application of the Borsuk-Ulam theorem: there is no $\mathbb{Z}_2$-map from $\Sphere^n$ to $\Sphere^m$ for $n > m$, where $\Sphere^n$ denotes the $n$-dimensional sphere with antipodal symmetry. Hence if $G$ and $H$ are graphs such that $\BBox{G}$ and $\BBox{H}$ are equivalent to $\Sphere^n$ and $\Sphere^m$, respectively, then there can be no graph homomorphism $G\to H$. See Figure~\ref{fig:box}. This is essentially the idea in Lov\'{a}sz' proof~\cite{Lovasz78} of Kneser's conjecture that the chromatic number of Kneser graphs $KG(n,k)$ is $n-2k+2$. In the language of box complexes, the proof amounts to showing that the box complex of a clique $K_c$ is equivalent to $\Sphere^{c-2}$, while the box complex of a Kneser graph contains $\Sphere^{n-2k}$. We refer to~\cite{matousek2008using} for an in-depth, yet accessible reference. We show that the left-hardness of a graph depends only on the topology of its box complex (in fact, only the $\mathbb{Z}_2$-maps it admits matter, which is a significantly coarser invariant than $\mathbb{Z}_2$-homotopy equivalence): \begin{theorem}[\textbf{Main Result \#2}]\label{thm:topoMain} If $H$ is left-hard and $H'$ is a graph such that $\BBox{H'}$ admits a $\mathbb{Z}_{2}$-map to $\BBox{H}$, then $H'$ is left-hard. \end{theorem} \begin{figure}[t!] \centering \makebox[\textwidth][c]{ \input{figBox.tex} } \caption{The box complex of $K_4$ is the hollow cube (informally speaking; the drawing skips some irrelevant faces). It is equivalent ($\mathbb{Z}_2$-homotopy equivalent) to the sphere. The box complex of the circular clique $K_{7/2}$ is equivalent to the circle. Thus there cannot be a homomorphism from $K_4$ to $K_{7/2}$ (of course in this case it is easier to show this directly).} \label{fig:box} \end{figure} Using Krokhin and Opr\v{s}al's result that $K_3$ is left-hard (Theorem~\ref{thm:K3}), since $\BBox{K_3}$ is the circle $\Sphere^1$ (up to $\mathbb{Z}_2$-homotopy equivalence), we immediately obtain the following: \begin{corollary}\label{cor:S1} Every graph $H$ for which $\BBox{H}$ admits a $\mathbb{Z}_2$-map to $\mathcal{S}^1$ is left-hard. \end{corollary} Two examples of such graphs (other than 3-colourable graphs) are loop-less square-free graphs and circular cliques $K_{p/q}$ with $2<\frac{p}{q}<4$ (see Lemma~\ref{lem:boxes} for proofs), which we introduce next. \emph{Square-free graphs} are graphs with no cycle of length exactly 4. In particular, this includes all graphs of girth at least 5 and hence graphs of arbitrarily high chromatic number (but incomparable to $K_4$ and larger cliques, in terms of the homomorphism $\to$ relation). The \emph{circular clique} $K_{p/q}$ (for $p,q\in \mathbb{N}, \frac{p}{q}>2$) is the graph with vertex set $\mathbb{Z}_p$ and an edge from $i$ to every integer at least $q$ apart: $i+q, i+q+1, \dots, i+p-q$. They generalise cliques $K_n = K_{n/1}$ and odd cycles $C_{2k+1} \simeq K_{(2k+1)/k}$. Their basic property is that $K_{p/q} \to K_{p'/q'}$ if and only if $\frac{p}{q} \leq \frac{p'}{q'}$. Thus circular cliques refine the chain of cliques and odd cycles, corresponding to rational numbers between integers.
For example: $$ \dots \to C_7 \to C_5 \to C_3 = K_3 \to K_{7/2} \to K_4 \to K_{9/2} \to K_5 \to \dots $$ The \emph{circular chromatic number} $\chi_c(G)$ is the infimum over $\frac{p}{q}$ such that $G \to K_{p/q}$. Therefore: \begin{corollary} For every $2<r \leq r'<4$, it is NP-hard to distinguish graphs $G$ with $\chi_c(G) \leq r$ from those with $\chi_c(G) > r'$. \end{corollary} In this sense, we conclude that $K_{4-\varepsilon}$ is left-hard, thus extending the result for $K_3$. However, the closeness to $K_4$ is deceptive and no conclusions on 4-colourings follow. For $K_4$, since the box complex is equivalent to the standard 2-dimensional sphere, we can at least conclude that to prove left-hardness of $K_4$ it would be enough to prove left-hardness of any other graph with the same topology: these include all non-bipartite quadrangulations of the projective plane, in particular the Gr\"{o}tzsch graph, 4-chromatic generalised Mycielskians, and 4-chromatic Schrijver graphs~\cite{matousek2008using,BjornerL03}. In this sense, the exact geometry of $K_4$ is irrelevant. However, the fact that it is a finite graph, with only finitely many possible maps from $C_n^L$ for any fixed $n,L$, should still be relevant, as it is for $K_3$. It is also quite probable that any proof for a ``spherical'' graph would apply just as well to $K_4$, for which the proof could just be notationally simpler. \bigskip Finally, in Appendix~\ref{app:topo} we rephrase Krokhin and Opr\v{s}al's~\cite{KrokhinO19} proof of Theorem~\ref{thm:K3} in terms of the box complex. In particular, left-hardness of $K_3$ follows from some general principles and the fact that $\BBox{K_3}$ is a circle. The proof also extends to all graphs $H$ such that $\BBox{H}$ admits a $\mathbb{Z}_2$-map to $\Sphere^1$, giving an independent, self-contained proof of Corollary~\ref{cor:S1} (and Theorem~\ref{thm:K3} in particular). The general principle is that a homomorphism $C_n^L \to H$ induces a $\mathbb{Z}_2$-map $(\Sphere^1)^L \to \BBox{H}$, in a way that preserves minors (identifications within the $L$ variables) and automorphisms. (In the language of category theory, the box complex is a functor from the category of graphs to that of $\mathbb{Z}_2$-spaces, and the functor preserves products). In turn, the $\mathbb{Z}_2$-map induces a group homomorphism between the fundamental group of $(\Sphere^1)^L$, which is just $\mathbb{Z}^L$, and that of $\BBox{H}$. This is essentially the map $\mathbb{Z}^L \to \mathbb{Z}$ obtained in~\cite{KrokhinO19}. While this rephrasing requires a few more technical definitions, the main advantage is that it allows us to replace a tedious combinatorial argument (about winding numbers preserving minors) with straightforward statements about preserving products. \subsection{Methodology -- adjoint functors} While the proof of the first main result is given elementarily in Section~\ref{sec:arc}, it fits together with the second main result in a much more general pattern. The underlying principle is that pairs of graph constructions satisfying a simple duality condition give reductions between PCSPs. To introduce them, let us consider a concrete example.
For a graph $G$ and an odd integer $k$, $\Lambda_k G$ is the graph obtained by subdividing each edge into a path of $k$ edges; $\Gamma_k G$ is the graph obtained by taking the $k$-th power of the adjacency matrix (with zeroes on the diagonal); equivalently, the vertex set remains unchanged and two vertices are adjacent if and only if there is a walk of length exactly $k$ between them in $G$. (For example $\Gamma_3 G$ has loops if $G$ has triangles). We say a graph construction $\Lambda$ (a function from graphs to graphs) is a \emph{thin (graph) functor} if $G\to H$ implies $\Lambda G \to \Lambda H$ (for all $G,H$). A pair of thin functors $(\Lambda,\Gamma)$ is a \emph{thin adjoint pair} if \begin{center} $\Lambda G \to H$ if and only if $G \to \Gamma H$. \end{center} We call $\Lambda$ the \emph{left adjoint} of $\Gamma$ and $\Gamma$ the \emph{right adjoint} of $\Lambda$. For all odd $k$, $(\Lambda_k,\Gamma_k)$ is a thin adjoint pair. For example, since $\Gamma_3 C_5 = K_5$, we have $G \to K_5$ if and only if $\Lambda_3 G \to C_5$. This is a basic reduction that shows the NP-hardness of $C_5$-colouring; in fact, adjointness of various graph constructions is the principal tool behind the original proof of Hell and Ne\v{s}et\v{r}il's theorem (characterising the complexity of $H$-colouring)~\cite{HellN90}. In category theory, there is a stronger and more technical notion of (non-thin) \emph{functors} and \emph{adjoint pairs}. A thin graph functor is in fact a functor in the \emph{thin category of graphs}, that is, the category whose objects are graphs, and with at most one morphism from one graph to another, indicating whether a homomorphism exists or not. In other words, we are only interested in the existence of homomorphisms, and not in their identity and how they compose. Equivalently, we look only at the preorder of graphs by the $G \to H$ relation (we can also make this a poset by considering graphs up to homomorphic equivalence). In order-theoretic language, thin functors are just order-preserving maps, while thin adjoint functors are known as Galois connections. We prefer the categorical language as most of the constructions we consider are in fact functors (in the non-thin category of graphs), which is important for connections to the algebraic framework of~\cite{BulinKO18}, as we discuss in Section~\ref{subsec:category}. While unnecessary for our main results, we believe it may be important to understand these deeper connections to resolve the conjectures completely. Thin adjoint functors give us a way to reduce one PCSP to another. We say that a graph functor $\Gamma$ is log-space computable if, given a graph $G$, $\Gamma G$ can be computed in logarithmic space in the size of $G$. \begin{observation}\label{obs:adj1} Let $\Lambda,\Gamma$ be thin adjoint graph functors and let $\Lambda$ be log-space computable. Then $\PCSP(G, \Gamma H)$ reduces to $\PCSP(\Lambda G, H)$ in log-space, for all graphs $G,H$.\looseness=-1 \end{observation} \begin{proof} Let $F$ be an instance of $\PCSP(G, \Gamma H)$. Then $\Lambda F$ is an appropriate instance of $\PCSP(\Lambda G, H)$. Indeed, if $F \to G$, then $\Lambda F \to \Lambda G$ (because $\Lambda$ is a thin functor). If $\Lambda F \to H$, then $F \to \Gamma H$ by adjointness. \end{proof} In some cases, a thin functor $\Gamma$ that is a thin right adjoint in a pair $(\Lambda, \Gamma)$ is also a thin left adjoint in a pair $(\Gamma,\Omega)$.
This allows us to obtain a reduction in the opposite direction: \begin{observation}\label{obs:adj2} Let $(\Lambda,\Gamma)$ and $(\Gamma,\Omega)$ be thin adjoint pairs of functors. Then $\PCSP(\Gamma G, H)$ and $\PCSP(G, \Omega H)$ are log-space equivalent (assuming $\Lambda$ and $\Gamma$ are log-space computable). \end{observation} \begin{proof} The previous observation gives a reduction from $\PCSP(G, \Omega H)$ to $\PCSP(\Gamma G, H)$. For the other direction, let $F$ be an instance of $\PCSP(\Gamma G, H)$. Then $\Lambda F$ is an appropriate instance of $\PCSP(G, \Omega H)$. Indeed, if $F \to \Gamma G$, then $\Lambda F \to G$. If $\Lambda F \to \Omega H$, then $F \to \Gamma \Omega H \to H$. The last arrow follows from the trivial $\Omega H \to \Omega H$ by adjointness of $(\Gamma,\Omega)$. \end{proof} The proofs of Observations~\ref{obs:adj1} and~\ref{obs:adj2} of course extend to digraphs and general relational structures. Note that the above proofs reduce decision problems; they work just as well for search problems: all the thin adjoint pairs $(\Lambda,\Gamma)$ we consider with $\Lambda$ log-space computable also have the property that a homomorphism $\Lambda F \to H$ can be computed from a homomorphism $F \to \Gamma H$ and vice versa, in space logarithmic in the size of $F$. As we discuss in Section~\ref{sec:functors}, all of our results follow from reductions that are either trivial (homomorphic relaxations) or instantiations of Observation~\ref{obs:adj1}. While for the first main result we prefer to first give a direct proof that avoids this formalism (in Section~\ref{sec:arc}), it will be significantly more convenient for the second main result (in Section~\ref{subsec:secondMainProof}), where we use a certain right adjoint $\Omega_k$ to the $k$-th~power~$\Gamma_k$. \subsection{Hedetniemi's conjecture} Another leitmotif of this paper is the application of various tools developed in research around Hedetniemi's conjecture. A graph $K$ is \emph{multiplicative} if $G \times H \to K$ implies $G \to K$ or $H \to K$. The conjecture states that all cliques $K=K_n$ are multiplicative. Equivalently, $\chi(G \times H) = \min(\chi(G),\chi(H))$; see~\cite{zhu1998survey,Sauer01,Tardif08:survey} for surveys. In a very recent breakthrough, Shitov~\cite{Shitov19} proved that the conjecture is false (for large $n$). The arc digraph construction, which we will use in Section~\ref{sec:arc} to prove Theorem~\ref{thm:asymp}, was originally used by Poljak and R\"odl~\cite{PoljakR81} to show certain asymptotic bounds on chromatic numbers of products. The functors $\Lambda_k,\Gamma_k,\Omega_k$ were applied by Tardif~\cite{Tardif05:jctb} to show that colourings to circular cliques $K_{p/q}$ ($2<\frac{p}{q}<4$) satisfy the conjecture. Matsushita~\cite{Matsushita17} used the box complex to show that Hedetniemi's conjecture would imply an analogous conjecture in topology. This was independently proved by the first author~\cite{Wrochna17b} using $\Omega_k$ functors, while the box complex was used to show that square-free graphs are multiplicative~\cite{Wrochna17}. See~\cite{FoniokT17} for a survey on applications of adjoint functors to the conjecture. The refutation of Hedetniemi's conjecture and the fact that methods for proving the multiplicativity of $K_3$ extend to $K_{4-\varepsilon}$ and square-free graphs, but fail to extend to $K_4$, might suggest that Conjecture~\ref{conj:main} is doomed to the same fate.
However, it now seems clear that proving multiplicativity requires more than just topology~\cite{TardifW18}: known methods do not even extend to all graphs $H$ such that $\BBox{H}$ is a circle. This contrasts with Theorem~\ref{thm:topoMain}: topological tools work much more gracefully in the setting of PCSPs. \section{The arc digraph construction}\label{sec:arc} Let $D$ be a digraph. The \emph{arc digraph} (or \emph{line digraph}) of $D$, denoted $\delta D$, is the digraph whose vertices are arcs (directed edges) of $D$ and whose arcs are pairs of the form $((u,v),(v,w))$. We think of undirected graphs as symmetric relations: digraphs in which for every arc $(u,v)$ there is an arc $(v,u)$. So for an undirected graph $G$, $\delta(G)$ has $2|E(G)|$ vertices and is a directed graph: the directions will not be important in this section, but will be in Section~\ref{subsec:otherExamples}. The chromatic number of a digraph is the chromatic number of the underlying undirected graph (obtained by symmetrising each arc; so $\chi(D) \leq n$ if and only if $D \to K_n$). The crucial property of the arc digraph construction is that it decreases the chromatic number in a controlled way (even though it is computable in log-space!). We include a short proof for completeness. We denote by $[n]$ the set $\{1,2,\ldots,n\}$. \begin{lemma}[Harner and Entringer~\cite{HarnerE72}]\label{lem:approxPoljakRodl} For any graph $G$: \begin{itemize} \item if $\chi(\delta(G)) \leq n$, then $\chi(G) \leq 2^n$; \item if $\chi(G) \leq \binom{n}{\lfloor n/2\rfloor}$, then $\chi(\delta(G)) \leq n$. \end{itemize} \end{lemma} \begin{proof} Suppose $\delta G$ has an $n$-colouring. Recall that we think of $G$ as a digraph with two arcs $(u,v)$ and $(v,u)$ for each edge $\{u,v\} \in E(G)$; thus $\delta G$ contains two vertices $(u,v)$ and $(v,u)$, as well as (by definition of $\delta$) two arcs from one pair to the other. In particular, an $n$-colouring of $\delta G$ gives distinct colours to $(u,v)$ and $(v,u)$. Define a $2^n$-colouring $\phi$ of $G$ by assigning to each vertex $v$ the set $\phi(v)$ of colours of incoming arcs. For any edge $\{u,v\}$ of $G$, $\phi(v)$ contains the colour $c$ of the arc $(u,v)$. Since every arc incoming to $u$ gets a different colour from $(u,v)$ (each such arc forms an arc of $\delta G$ together with $(u,v)$), the set $\phi(u)$ does not contain $c$. Hence $\phi(u) \neq \phi(v)$, so $\phi$ is a proper colouring. Suppose $G$ has a $\binom{n}{\lfloor n/2\rfloor}$-colouring $\phi$. We interpret colours $\phi(v)$ as $\lfloor n/2\rfloor$-element subsets of $[n]$. Define an $n$-colouring of $\delta G$ by assigning to each arc $(u,v)$ an arbitrary colour in $\phi(u)\setminus \phi(v)$ (the minimum, say). Such a colour exists because $\phi(u)$ and $\phi(v)$ are distinct sets of the same size, so neither contains the other. For arcs $(u,v)$, $(v,w)$, clearly $\phi(u)\setminus \phi(v)$ is disjoint from $\phi(v) \setminus \phi(w)$, so this is a proper colouring of $\delta(G)$.\looseness=-1 \end{proof} The proofs in fact work for digraphs as well. For graphs, it is not much harder to show an exact correspondence (we note however that most conclusions only require the above approximate correspondence). Let us denote $b(n)\vcentcolon= \B{n}$. \begin{lemma}[Poljak and R\"odl~\cite{PoljakR81}]\label{lem:PoljakRodl} For a (symmetric) graph $G$, \[\chi(\delta(G)) = \min\{n \mid \chi(G) \leq b(n)\}.\] In other words, $\delta G \to K_n$ if and only if $G \to K_{b(n)}$.
\end{lemma} This immediately gives the following implication for approximate colouring: \begin{lemma}\label{lem:red} $\PCSP(K_{b(n)},K_{b(k)})$ log-space reduces to $\PCSP(K_n,K_k)$, for all $n,k \in \mathbb{N}$. \end{lemma} \begin{proof} Let $G$ be an instance of the first problem. Then $\delta G$ is a suitable instance of $\PCSP(K_n,K_k)$: if $G \to K_{b(n)}$, then $\delta G \to K_n$. If $\delta G \to K_k$, then $G \to K_{b(k)}$. \end{proof} \begin{remark} As a side note, adding a universal vertex gives the following obvious reduction: $\PCSP(K_n,K_k)$ log-space reduces to $\PCSP(K_{n+1},K_{k+1})$, for $n,k \in \mathbb{N}$. \end{remark} Recall also that if $n \leq n' \leq k' \leq k$, then $\PCSP(K_n,K_k)$ trivially reduces to $\PCSP(K_{n'},K_{k'})$. One corollary of Lemma~\ref{lem:red} is that if any clique of size at least 4 is right-hard, then all of them are: \begin{proposition}\label{prop:rightHard} For all integers $n,n' \geq 4$, $\PCSP(K_n,K_k)$ is NP-hard for all $k\geq n$ if and only if $\PCSP(K_{n'},K_{k'})$ is NP-hard for all $k' \geq n'$. \end{proposition} \begin{proof} Let $n \leq n'$. For one direction, right-hardness of $K_n$ trivially implies right-hardness of $K_{n'}$. On the other hand, we claim that if $K_{b(n)}$ is right-hard, then so is $K_n$. Indeed, suppose $\PCSP(K_{b(n)},K_k)$ is hard for all $k \geq b(n)$. In particular it is hard for all $k$ of the form $k=b(k')$ for an integer $k' \geq n$. Hence by Lemma~\ref{lem:red}, $\PCSP(K_n,K_{k'})$ is hard for all $k' \geq n$. Suppose $K_n$ is not right-hard. Then $K_{b(n)}$ is not right-hard, $K_{b(b(n))}$ is not right-hard, and so on. Since, starting from any $n\geq 4$, the sequence $n, b(n), b(b(n)), \dots$ grows to infinity, we conclude that $K_{n''}$ is not right-hard for some $n'' \geq n'$. Therefore, trivially $K_{n'}$ is not right-hard. \end{proof} In other words, if any loop-less graph $H$ is right-hard, then trivially some large enough clique $K_{\chi(H)}$ is right-hard; by the above, $K_4$ and all graphs right of it are right-hard. This proves Proposition~\ref{prop:right-hard}. The proof fails to extend to $K_3$ because $b(3)=\B{3}$ is not strictly greater than 3. \bigskip The other consequence we derive from Lemma~\ref{lem:red} is a strengthening of Huang's result: \begin{theorem}[Huang~\cite{Huang13}]\label{thm:Huang} For all sufficiently large $n$, $\PCSP(K_n, K_{2^{\Omega(n^{1/3})}})$ is NP-hard. \end{theorem} {\renewcommand{\thetheorem}{\getrefnumber{thm:asymp}} \begin{theorem}[\textbf{Main Result \#1}] \ThmAsymp \end{theorem} \addtocounter{theorem}{-1} } We thus improve the asymptotics from sub-exponential $f(n) \vcentcolon= 2^{n^{1/3}}$ to single-exponential $b(n) = \B{n} \sim \frac{2^n}{\sqrt{\pi n/2}}$. The informal idea of the proof is that any $f(n)$ can be improved to $b^{-1}(f(b(n)))$. Since $b(n)$ is roughly exponential and $b^{-1}(n)$ is roughly logarithmic, starting from a function $f(n)$ of order $\exp^{(i+1)}(\alpha \cdot \log^{(i)}(n))$ with $i$-fold compositions and a constant $\alpha > 0$, such as $f(n)=2^{n^{1/3}} = 2^{2^{\frac{1}{3} \log n}}$ from Huang's hardness, results~in \[b^{-1}(f(b(n))) \approx \log\Big(\exp^{(i+1)}\big(\alpha \cdot \log^{(i)}(\exp(n))\big)\Big) = \exp^{(i)}(\alpha \cdot \log^{(i-1)}(n)),\] so a similar composition but with $i$ decreased. In a constant number of steps, this results in a single-exponential function. In fact using one more step, but without approximating the function $b(n)$, this results in exactly $b(n)-1$.
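Before the formal proof, the improvement step can be seen numerically in a small Python sketch (our illustration only; the Huang-type starting bound is taken with the hidden constant set to $1$, an assumption made purely for the demonstration):
\begin{verbatim}
from math import comb

def b(n):              # b(n) = binomial(n, floor(n/2))
    return comb(n, n // 2)

def b_inv(m):          # least n with b(n) >= m
    n = 1
    while b(n) < m:
        n += 1
    return n

def improve(f):        # one improvement step: f(n) |-> b_inv(f(b(n)))
    return lambda n: b_inv(f(b(n)))

f = lambda n: 2 ** round(n ** (1 / 3))  # Huang-type bound, constant 1 assumed
g = improve(f)
for n in (10, 20, 30):
    print(n, f(n), g(n))   # e.g. n=20: f(20)=8 but g(20)=61
\end{verbatim}
Each application of \texttt{improve} removes one level from the tower of logarithms, matching the informal calculation above.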
We note it would not be sufficient to start from a quasi-polynomial $f(n)$, like $n^{\Theta(\log n)}$ in Khot's~\cite{Khot01} result. \begin{proof}[Proof of Theorem~\ref{thm:asymp}] By Lemma~\ref{lem:red}: \begin{center} $\PCSP(K_{b(n)},K_{b(m)})$ log-space reduces to $\PCSP(K_n, K_m)$, for all $n,m \in \mathbb{N}$. \end{center} For any $k \in \mathbb{N}$, let $m = \lfloor \log k\rfloor$ (all logarithms are base-2); then $b(m) \leq 2^m \leq k$, hence $\PCSP(K_{b(n)},K_{k})$ trivially reduces to $\PCSP(K_{b(n)},K_{b(m)})$. Therefore, composing the two reductions: \begin{center} $\PCSP(K_{b(n)},K_{k})$ reduces to $\PCSP(K_{n},K_{\lfloor \log k \rfloor})$, for any $n,k \in \mathbb{N}$. \end{center} Starting from Theorem~\ref{thm:Huang} we have a constant $C$ such that: \begin{center} $\PCSP(K_{n}, K_{2^{\lfloor C \cdot n^{1/3}\rfloor}})$ is NP-hard, for sufficiently large $n$. \end{center} Hence, substituting $n=b(k)$: \begin{center} $\PCSP(K_{b(k)}, K_{2^{\lfloor C \cdot b(k)^{1/3}\rfloor }})$ is NP-hard, for sufficiently large $k$. \end{center} Applying the above reduction, since $\lfloor \log 2^{\lfloor C \cdot b(k)^{1/3}\rfloor} \rfloor = \lfloor C \cdot b(k)^{1/3}\rfloor \geq (\frac{2^k}{k})^{1/3}\geq 2^{k/4}$ for sufficiently large $k$, we conclude: \begin{center} $\PCSP(K_{k}, K_{2^{k/4}})$ is NP-hard, for sufficiently large $k$. \end{center} We repeat this process to bring the constant further ``down''. That is, we substitute $b(k)$ for $k$ and apply the above reduction again. Since $\lfloor \log 2^{b(k)/4} \rfloor = \lfloor b(k)/4 \rfloor \geq 2^k/4k$ for sufficiently large $k$, we conclude: \begin{center} $\PCSP(K_{k}, K_{2^{k}/4k})$ is NP-hard, for sufficiently large $k$. \end{center} To apply the reduction one more time, notice that for large $k$, $b(k) \geq \frac{3}{2} b(k-1)$ (because $b(2k) = \binom{2k}{k} = \binom{2k-1}{k-1} \frac{2k}{k} = 2 \cdot b(2k-1) \geq \frac{3}{2} b(2k-1)$ and $b(2k+1) = \binom{2k+1}{k} = \binom{2k}{k} \frac{2k+1}{k+1} = b(2k) (2-\frac{1}{k+1})\geq \frac{3}{2} b(2k)$). Therefore $\lfloor \log (2^{b(k)}/4b(k)) \rfloor \geq b(k) - \log b(k) \geq \frac{2}{3} b(k) \geq b(k-1)$ for sufficiently large $k$, hence: \begin{center} $\PCSP(K_{k}, K_{b(k-1)})$ is NP-hard, for sufficiently large $k$. \end{center} Substituting $b(k)$ for $k$ one last time: \begin{center} $\PCSP(K_{b(k)}, K_{b(b(k)-1)})$ is NP-hard, for sufficiently large $k$. \end{center} Composing with Lemma~\ref{lem:red} one last time: \begin{center} $\PCSP(K_{k}, K_{b(k)-1})$ is NP-hard, for sufficiently large $k$. \end{center} This concludes the improvement in asymptotics. Moreover, one can notice that the requirement of ``sufficiently large $k$'' gets relaxed whenever we substitute $b(k)$ for~$k$. Formally, let $k$ be the largest value such that $\PCSP(K_{k}, K_{b(k)-1})$ is not NP-hard. Then because of Lemma~\ref{lem:red}, $\PCSP(K_{b(k)}, K_{b(b(k)-1)})$ is not NP-hard, and because $b(b(k)-1) \leq b(b(k))-1$, trivially $\PCSP(K_{b(k)}, K_{b(b(k))-1})$ is not NP-hard either. That is, $\PCSP(K_n,K_{b(n)-1})$ is not NP-hard for $n=b(k)$. By maximality of $k$, $k \geq n$. But $k \geq b(k)$ is only possible when $k < 4$. Hence hardness holds for all $k \geq 4$. \end{proof} \section{Adjoint functors and topology}\label{sec:functors} \subsection{\texorpdfstring{Thin functors $\Lambda_k,\Gamma_k,\Omega_k$}{Thin functors}} \label{subsec:secondMainProof} Recall that $\Lambda_k$ denotes $k$-subdivision and $\Gamma_k$ denotes the $k$-th power of a graph.
For all odd $k$, they are thin adjoint graph functors: \begin{center} $\Lambda_k G \to H$ if and only if $G \to \Gamma_k H$. \end{center} \noindent More surprisingly, $\Gamma_k$ is itself the thin \emph{left} adjoint of a certain thin functor $\Omega_k$: \begin{center} $\Gamma_k G \to H$ if and only if $G \to \Omega_k H$. \end{center} This characterises $\Omega_k G$ up to homomorphic equivalence. The exact definition is irrelevant, but we state it for completeness: for $k=2\ell+1$, the vertices of $\Omega_k G$ are tuples $(A_0,\dots,A_\ell)$ of vertex subsets $A_i \subseteq V(G)$ such that $A_0$ contains exactly one vertex. Two such tuples $(A_0,\dots,A_\ell)$ and $(B_0,\dots,B_\ell)$ are adjacent if $A_i \subseteq B_{i+1}$, $B_i \subseteq A_{i+1}$ (for $i=0\dots \ell-1$) and $A_\ell$ is fully adjacent to $B_\ell$ (meaning $a$ is adjacent to $b$ in $G$, for $a \in A_\ell, b \in B_\ell$). We note that $\Lambda_k$ and $\Gamma_k$ are log-space computable, for all odd $k$; however, $\Omega_k$ is not: $\Omega_k G$~is exponentially larger than $G$. See~\cite{Wrochna17b} for more discussion about the thin functors $\Lambda_k,\Gamma_k,\Omega_k$ and their properties. Observation~\ref{obs:adj1} tells us that $\PCSP(G, \Omega_k H)$ log-space reduces to $\PCSP(\Gamma_k G, H)$ (in fact, by Observation~\ref{obs:adj2}, they are equivalent). To give conclusions on left-hardness, we will need to observe only two more facts about the functors $\Lambda_k,\Gamma_k,\Omega_k$. First, $\Omega_k G \to G$ for all $G$ (it suffices to map $(A_0,\dots,A_{\ell-1},A_\ell) \in V(\Omega_{2\ell+1} G)$ to the unique vertex in $A_0$). Second, it is not hard to check that $\Gamma_k \Lambda_k G \to G$ and hence by adjointness $\Lambda_k G \to \Omega_k G$ for all $G$ and odd $k$ (see Lemma 2.3 in~\cite{Wrochna17b}). \begin{lemma}\label{lem:left-hard} For every odd $k$, $\Omega_k H$ is left-hard if and only if $H$ is left-hard. \end{lemma} \begin{proof} If $H$ is left-hard, then trivially so is $\Omega_k H$ because $\Omega_k H \to H$. For the other implication, suppose $\Omega_k H$ is left-hard, that is, $\PCSP(G, \Omega_k H)$ is hard for every non-bipartite $G$ such that $G \to \Omega_k H$. By Observation~\ref{obs:adj1}, this implies $\PCSP(\Gamma_k G, H)$ is hard. Let $G'$ be any non-bipartite graph such that $G' \to H$. We want to show that $\PCSP(G', H)$ is hard. Observe that $\Omega_k G'$ is non-bipartite, because $\Lambda_k G' \to \Omega_k G'$ and $\Lambda_k$ subdivides each edge of $G'$ an odd number of times. Since $\Omega_k G' \to \Omega_k H$, using $G := \Omega_k G'$ we conclude that $\PCSP(\Gamma_k \Omega_k G', H)$ is hard. Since $\Gamma_k \Omega_k G' \to G'$, this implies $\PCSP(G', H)$ is hard. \end{proof} As an example, consider the circular clique $K_{7/2}$ (we have $K_3 \to K_{7/2} \to K_4$). Knowing that $K_3$ is left-hard, one could check that $\Omega_3(K_{7/2})$ is 3-colourable and hence left-hard as well; the above lemma then allows us to conclude that $K_{7/2}$ is left-hard. What other graphs could one use in place of $K_{7/2}$? The answer turns out to be topological. Intuitively, while the operation $\Gamma_k$ gives a ``thicker'' graph, the operation $\Omega_k$ gives a ``thinner'' one. In fact, $\Omega_k$ behaves like barycentric subdivision in topology: it preserves the topology of a graph (formally: its box complex is $\mathbb{Z}_2$-homotopy equivalent to the original graph's box complex) but refines its geometry.
With increasing $k$, this eventually allows one to model any continuous map with a graph homomorphism; in particular: \begin{theorem}[\cite{Wrochna17b}]\label{thm:approx} There exists a $\mathbb{Z}_2$-map $\BBox{G} \to_{\mathbb{Z}_2} \BBox{H}$ if and only if for some odd $k$, $\Omega_k G \to H$. \end{theorem} \noindent This yields our second main result: \begin{proof}[Proof of Theorem~\ref{thm:topoMain}] Let $H$ be left-hard and let $H'$ be a graph such that $\BBox{H'}$ admits a $\mathbb{Z}_{2}$-map to $\BBox{H}$. By Theorem~\ref{thm:approx}, $\Omega_k H' \to H$ for some odd $k$. Trivially then, $\Omega_k H'$ is left-hard. By Lemma~\ref{lem:left-hard}, $H'$ is left-hard. \end{proof} \subsection{Other examples of adjoint functors}\label{subsec:otherExamples} The arc construction $\delta$ is also an example of a digraph functor which admits both a thin left adjoint $\delta_L$ and a thin right adjoint $\delta_R$;% \footnote{For the interested reader: $\delta_L D$ is obtained by making a new arc $(s_v,t_v)$ for each vertex of $D$ and then for each arc $(u,v)$ of $D$, gluing $t_u$ with $s_v$ (which results in many transitive gluings); $\delta_R D$ has a vertex for each pair $S,T \subseteq V(D)$ such that $S \times T \subseteq E(D)$, and an arc from $(S,T)$ to $(S',T')$ iff $T \cap S' \neq \emptyset$.} this adjointness essentially gives a proof of Lemma~\ref{lem:approxPoljakRodl}, see~\cite[Proposition~3.3]{FoniokT17}. In fact, Lemma~\ref{lem:red}, and hence all results of Section~\ref{sec:arc}, can be deduced as instantiations of Observation~\ref{obs:adj1} and homomorphic relaxations as follows. Let $\sym(D)$ be the symmetric closure of a digraph $D$ and let $\sub(D)$ be the maximal symmetric subgraph of $D$; note $\sub(D) \to D \to \sym(D)$. Observe that they are thin adjoint functors: $\sym(D) \to D'$ if and only if $D \to \sub(D')$, for all digraphs $D,D'$.\footnote{As Jakub Opr\v{s}al observed, this is in fact the composition of two adjoint pairs: taking $\sym$ and $\sub$ as functors from digraphs to graphs and the inclusion functor $\iota$ from graphs to digraphs, we have $\sym(D) \to G$ iff $D \to \iota(G)$ and $\iota(G) \to D$ iff $G \to \sub(D)$.} Poljak and R\"odl~\cite{PoljakR81} showed that $\sub(\delta_R(K_{k})) \to K_{b(k)}$ (the $\sub$ is essential here); recall also that $\delta(\sym(K_{b(n)})) \to K_n$. Therefore, $\PCSP(K_{b(n)},K_{b(k)})$ trivially reduces to $\PCSP(K_{b(n)},\sub(\delta_R(K_{k})))$, which by Observation~\ref{obs:adj1} log-space reduces to $\PCSP(\delta(\sym(K_{b(n)})),K_k)$, which trivially reduces to $\PCSP(K_n,K_k)$, proving Lemma~\ref{lem:red}. From Observation~\ref{obs:adj2} we also have: \begin{corollary} $\PCSP(\delta(G),H)$ is log-space equivalent to $\PCSP(G, \delta_R(H))$, for all digraphs~$G, H$. \end{corollary} Another example of a thin adjoint pair (but not triple) of functors is given by products and exponential graphs (see e.g.~\cite{FoniokT13} for definitions): for any graphs $F,G,H$, we have $F \times G \to H$ if and only if $G \to H^F$. That is, for any graph $F$, the operations $G \mapsto F \times G$ and $H \mapsto H^F$ are left and right adjoints, respectively. By Observation~\ref{obs:adj1}: \begin{corollary} $\PCSP(G, H^F)$ reduces to $\PCSP(F \times G, H)$ in log-space. \end{corollary} Here $\times$ is the \emph{tensor} (or \emph{categorical}) product, in particular $G \to H_1 \times H_2$ if and only if $G \to H_1$ and $G \to H_2$. Nevertheless, a few other products have an associated exponentiation as well.
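This last thin adjunction can be sanity-checked by brute force on very small graphs. The following Python sketch is our illustration only (graphs are again encoded as neighbour dictionaries, and all names are ours); it verifies $F \times G \to H$ if and only if $G \to H^F$ on one tiny instance, not in general.
\begin{verbatim}
from itertools import product

def hom_exists(G, H):            # brute-force test for G -> H (as earlier)
    VG, VH = list(G), list(H)
    for im in product(VH, repeat=len(VG)):
        f = dict(zip(VG, im))
        if all(f[v] in H[f[u]] for u in G for v in G[u]):
            return True
    return False

def tensor(F, G):                # categorical product F x G
    return {(a, b): {(a2, b2) for a2 in F[a] for b2 in G[b]}
            for a in F for b in G}

def exponential(H, F):           # H^F: vertices are all maps V(F) -> V(H)
    VF = list(F)
    maps = [dict(zip(VF, im)) for im in product(list(H), repeat=len(VF))]
    adj = lambda f, g: all(g[v] in H[f[u]] for u in F for v in F[u])
    return {i: {j for j, g in enumerate(maps) if adj(f, g)}
            for i, f in enumerate(maps)}

def clique(n):
    return {i: {j for j in range(n) if j != i} for i in range(n)}

def cycle(n):
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

# F x G -> H  iff  G -> H^F, on a small example:
F, G, H = clique(2), cycle(5), clique(3)
print(hom_exists(tensor(F, G), H) == hom_exists(G, exponential(H, F)))  # True
\end{verbatim}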
These and other examples fall into a pattern known as \emph{Pultr functors} -- see~\cite{FoniokT13} for an extended discussion (we note here that \emph{central Pultr functors}, like $\Gamma_k$ or $\delta$, are a kind of pp-interpretation). Foniok and Tardif~\cite{FoniokT15} studied which digraph functors admit both thin left and right adjoints. The box complex also admits a left adjoint, though this involves two categories. More precisely, the functor $G \mapsto \Hom(K_2, G)$ (see definitions in Appendix~\ref{app:topo}) gives a $\mathbb{Z}_2$-simplicial complex that is $\mathbb{Z}_2$-homotopy equivalent to the box complex. As proved by Matsushita~\cite{Matsushita17}, it admits a left adjoint $A$ from the category of $\mathbb{Z}_2$-simplicial complexes (with $\mathbb{Z}_2$-simplicial maps as morphisms) to the category of graphs. \subsection{Relation to the algebraic framework}\label{subsec:category} We will need basic concepts from the algebraic approach to (P)CSPs, such as polymorphisms~\cite{Austrin17:sicomp,Brakensiek18:soda}, minions, and minion homomorphisms~\cite{BulinKO18}. We shall define them only for graphs as we do not need them for relational structures. We refer the reader to~\cite{bkw17:survey,BulinKO18} for more details, examples, and general definitions. An $n$-ary \emph{polymorphism} of two graphs $G$ and $H$ is a homomorphism from $G^n$ to $H$; that is, a map $f:V(G)^n\to V(H)$ such that, for all edges $(u_1,v_1),\ldots,(u_n,v_n)$ in $G$, $(f(u_1,\ldots,u_n),f(v_1,\ldots,v_n))$ is an edge in $H$. We denote by $\Pol(G,H)$ the set of all polymorphisms of $G$ and $H$. Given an $n$-ary function $f:A^n\to B$, the, say, first coordinate is called \emph{essential} if there exist $a,a'\in A$ and $\vec{a}\in A^{n-1}$ such that $f(a,\vec{a})\neq f(a',\vec{a})$; otherwise, the first coordinate is called \emph{inessential} or \emph{dummy}. Analogously, one defines the $i$-th coordinate to be (in)essential. The \emph{essential arity} of $f$ is the number of essential coordinates. Let $f:A^n\to B$ and $g:A^m\to B$ be $n$-ary and $m$-ary functions, respectively. We call $f$ a \emph{minor} of $g$ if $f$ can be obtained from $g$ by identifying variables, permuting variables, and introducing inessential variables. More formally, $f$ is a minor of $g$ given by a map $\pi:[m]\to[n]$ if $f(x_1,\ldots,x_n)=g(x_{\pi(1)},\ldots,x_{\pi(m)})$. A \emph{minion} on a pair of sets $(A,B)$ is a non-empty set of functions (of possibly different arities) from $A$ to $B$ that is closed under taking minors. A minion is said to have \emph{bounded essential arity} if there is some $k$ such that every function from the minion has essential arity at most $k$. Let $\mathscr M $ and $\mathscr N $ be two minions, not necessarily on the same pairs of sets. A map $\xi:\mathscr M\to\mathscr N$ is called a \emph{minion homomorphism} if (1) it preserves arities; i.e., maps $n$-ary functions to $n$-ary functions, for all $n$; and (2) it preserves taking minors; i.e., for each $\pi:[m]\to[n]$ and each $m$-ary $g\in\mathscr M$, we have $\xi(g)(x_{\pi(1)},\ldots,x_{\pi(m)})=\xi(g(x_{\pi(1)},\ldots,x_{\pi(m)}))$. Minion homomorphisms provide an algebraic way to give reductions between PCSPs. \begin{theorem}[\cite{BulinKO18}] If there is a minion homomorphism $\xi \colon \Pol(G_1,H_1) \to \Pol(G_2,H_2)$, then $\PCSP(G_2,H_2)$ is log-space reducible to $\PCSP(G_1,H_1)$. \end{theorem} The following hardness result is a special case of a result obtained in~\cite{BulinKO18} via a reduction from Gap Label Cover.
It gives an algebraic tool to prove hardness for PCSPs. \begin{theorem}[\cite{BulinKO18}]\label{thm:minbndarity} Let $G$ and $H$ be two graphs with $G\to H$. Assume that there exists a minion homomorphism $\xi:\Pol(G,H)\to\mathscr M$ for some minion $\mathscr M$ on a pair of (possibly infinite) sets such that $\mathscr M$ has bounded essential arity and does not contain a constant function (i.e., a function without essential variables). Then $\PCSP(G,H)$ is NP-hard. \end{theorem} Our methods do not give minion homomorphisms in general: while Observation~\ref{obs:adj1} gives a reduction from $\PCSP(G,\Gamma H)$ to $\PCSP(\Lambda G, H)$, it does not give a minion homomorphism from which the reduction would follow (from $\Pol(\Lambda G, H)$ to $\Pol(G,\Gamma H)$). Indeed it cannot, as discussed below Proposition~\ref{prop:right-hard}. However, adjoint functors in the (non-thin) category of graphs do imply such a minion homomorphism. In the remainder of this section, we assume knowledge of basic definitions in category theory. One can define minions in any Cartesian category $\mathcal{C}$ (i.e. a category with all finite products), using morphisms of $\mathcal{C}$ in place of functions. For objects $G,H \in \mathcal{C}$, $\Pol_{\mathcal{C}}(G,H)$ is the minion of morphisms from $G^L$ (the $L$-fold categorical product of $G$) to $H$. A function $\pi\colon [L] \to [L']$ induces a morphism $\pi_G \colon G^{L'} \to G^L$. For a graph $G$, it maps $(v_1,\dots,v_{L'})$ to $(v_{\pi(1)},\dots,v_{\pi(L)})$. In general, it can be defined as the product morphism $\langle p_{\pi(1)},\dots,p_{\pi(L)} \rangle$ of appropriate projections $p_i \colon G^L \to G$. For a polymorphism $f \colon G^L \to H$, the minor of $f$ by $\pi$ is then simply $f \circ \pi_G \colon G^{L'} \to H$. For objects $G$ and $H$ of a category, we denote by $\hom(G,H)$ the set of morphisms from $G$ to $H$. \begin{lemma}\label{lem:cat} Let $\Gamma\colon \mathcal{C}\to\mathcal{D}$ and $\Omega\colon \mathcal{D}\to\mathcal{C}$ be adjoint functors between Cartesian categories $\mathcal{C},\mathcal{D}$. Then for all objects $G$ in $\mathcal{C}$ and $H$ in $\mathcal{D}$, there is a minion homomorphism from $\Pol_{\mathcal{D}}(\Gamma G, H)$ to $\Pol_{\mathcal{C}}(G,\Omega H)$. If, moreover, $\Gamma$ preserves products then this is a minion isomorphism. \end{lemma} \begin{proof} This essentially amounts to checking definitions. We have a natural morphism $\psi_L \colon \Gamma(G^L) \to (\Gamma G)^L$ defined as the product morphism $\langle \Gamma p_1,\dots, \Gamma p_L \rangle$ for projections $p_i \colon G^L \to G$. It is natural in the following sense: for every function $\pi\colon [L] \to [L']$, the following diagram commutes:\vspace*{-\baselineskip} $$\begin{tikzcd} \Gamma(G^{L'}) \ar[d, "\Gamma \pi_G"] \ar[r, "\psi_{L'}"] & (\Gamma G)^{L'} \ar[d, "\pi_{\Gamma G}"] \\ \Gamma(G^L) \ar[r, "\psi_L"] & (\Gamma G)^L \end{tikzcd}$$ \noindent Indeed, $\psi_L \circ \Gamma \pi_G = \pi_{\Gamma G} \circ \psi_{L'}$, because it is the unique morphism whose composition with $p'_i \colon (\Gamma G)^L \to \Gamma G$ is $\Gamma p_{\pi(i)}$ (in other words, it is the product morphism $\langle \Gamma p_{\pi(1)},\dots, \Gamma p_{\pi(L)} \rangle$). Let $\Omega$ be a right adjoint of $\Gamma$. Let $\Phi_{G^L\kern-.2em,H} \colon \hom(\Gamma(G^L),H) \to \hom(G^L,\Omega H)$ be the natural isomorphism given by definition of adjunction. 
Naturality here means that in particular the right square in the following diagram commutes: \[\begin{tikzcd} \hom((\Gamma G)^L, H) \ar[d, "-\circ\pi_{\Gamma G}"] \ar[r, "-\circ\psi_L"] & \hom(\Gamma(G^L), H) \ar[d, "-\circ\Gamma\pi_G"] \ar[r, "\Phi_{G^L\kern-.2em,H}"] & \hom(G^L, \Omega H) \ar[d, "-\circ\pi_G"] \\ \hom((\Gamma G)^{L'}, H) \ar[r, "-\circ\psi_{L'}"] & \hom(\Gamma(G^{L'}), H) \ar[r, "\Phi_{G^{L'}\kern-.2em,H}"] & \hom(G^{L'},\Omega H) \end{tikzcd}\] \noindent That is, for $f\colon \Gamma(G^L) \to H$, we have $\Phi_{G^L\kern-.2em,H}(f) \circ \pi_G = \Phi_{G^{L'}\kern-.2em,H}(f \circ \Gamma \pi_G)$. The left square also commutes because of the previously discussed commutation. Therefore, we can define a minion homomorphism $\xi \colon \hom((\Gamma G)^L, H) \to \hom(G^L, \Omega H)$ as $\xi(f) \vcentcolon= \Phi_{G^L\kern-.2em,H}(f \circ \psi_L)$. Indeed, $\xi$ preserves minors, because $\xi(f \circ \pi_{\Gamma G}) = \xi(f) \circ \pi_G$ as seen on the perimeter of the above diagram. If $\Gamma$ preserves products, then $\psi_L$ is an isomorphism. Since $\Phi_{G^L\kern-.2em,H}$ is a bijection, this means $\xi$ is a minion isomorphism. \end{proof} A basic lemma in category theory says that if a functor $\Gamma$ admits a left adjoint, then it preserves products (indeed, all limits). So a pair of adjoint pairs $(\Lambda,\Gamma)$, $(\Gamma,\Omega)$ implies a minion isomorphism. Hence the first part of Lemma~\ref{lem:cat} is analogous to Observation~\ref{obs:adj1}, while the second part is analogous to Observation~\ref{obs:adj2}. We can also derive the second direction as a corollary of the following lemma. \begin{lemma}\label{lem:cat2} Let $\Gamma\colon \mathcal{C} \to \mathcal{D}$ be a functor which preserves products. Then there is a minion homomorphism $\Pol_\mathcal{C}(G,H) \to \Pol_{\mathcal{D}}(\Gamma G, \Gamma H)$, for all $G,H \in \mathcal{C}$. \end{lemma} \begin{proof} Recall from the proof of Lemma~\ref{lem:cat} the following diagram, for $G \in \mathcal{C}$, $L,L' \in \mathbb{N}$, and $\pi\colon [L] \to [L']$: $$\begin{tikzcd} \Gamma(G^{L'}) \ar[d, "\Gamma \pi_G"] \ar[r, "\psi_{L'}"] & (\Gamma G)^{L'} \ar[d, "\pi_{\Gamma G}"] \\ \Gamma(G^L) \ar[r, "\psi_L"] & (\Gamma G)^L \end{tikzcd}$$ Since $\Gamma$ preserves products, $\psi_L$ is an isomorphism, so we can define a minion homomorphism $\xi\colon \Pol_\mathcal{C}(G,H) \to \Pol_{\mathcal{D}}(\Gamma G, \Gamma H)$ as follows: $\xi(f) \vcentcolon= \Gamma(f) \circ \psi_{L}^{-1}$, for $f \colon G^L \to H$. This preserves minors, because from the diagram's commutation we have: \[\xi(f \circ \pi_G) = \Gamma(f \circ \pi_G) \circ \psi_{L'}^{-1} = \Gamma(f) \circ \Gamma(\pi_G) \circ \psi_{L'}^{-1} = \Gamma(f) \circ \psi_L^{-1} \circ \pi_{\Gamma G} = \xi(f) \circ \pi_{\Gamma G}.\vspace*{-\baselineskip}\] \end{proof} \begin{corollary}\label{cor:cat} Let $\Gamma\colon \mathcal{C} \to \mathcal{D}$ be a functor which preserves products. Let $\Omega$ be a thin right adjoint to $\Gamma$. Then there is a minion homomorphism $\Pol_\mathcal{C}(G,\Omega H) \to \Pol_{\mathcal{D}}(\Gamma G, H)$ for all $G \in \mathcal{C}, H \in \mathcal{D}$. \end{corollary} \begin{proof} Since $\Gamma$ has a \emph{thin} right adjoint $\Omega$, there exists a morphism $\varepsilon_H\colon \Gamma \Omega H \to H$ for all $H$ (we don't need it to be natural in any way).
Hence we can compose the minion homomorphism $\Pol_\mathcal{C}(G,\Omega H) \to \Pol_{\mathcal{D}}(\Gamma G, \Gamma \Omega H)$ from Lemma~\ref{lem:cat2} with the trivial minion homomorphism $\Pol_{\mathcal{D}}(\Gamma G, \Gamma \Omega H) \to \Pol_{\mathcal{D}}(\Gamma G, H)$ obtained by composing with $\varepsilon_H$. \end{proof} If we have adjoint functors in the (non-thin) category of graphs (or multigraphs), then Lemma~\ref{lem:cat} implies a minion homomorphism between the standard polymorphism minions (because a morphism is associated with a function between vertex sets). One could also apply Lemma~\ref{lem:cat} to the \emph{thin} category of graphs, but the conclusion is then about minions of polymorphisms in that thin category, which is useless, since it does not distinguish between different projections $G^L \to G$. All the thin functors we have considered are in fact functors in the category of graphs or digraphs: in particular $\Lambda_k,\Gamma_k,\Omega_k,\delta_L,\delta,\delta_R$. The definitions can also be extended to give functors in the category of multi(di)graphs. The pairs $(\Lambda_k,\Gamma_k)$ and $(\delta_L,\delta)$ are adjoint pairs in the categories of multi(di)graphs (this fails in the category of (di)graphs; e.g. the number of homomorphisms $\Lambda_3 G \to H$ is not always equal to the number of homomorphisms $G \to \Gamma_3 H$). This implies minion homomorphisms $\Pol(\Lambda_k G, H) \to \Pol(G,\Gamma_k H)$ and $\Pol(\delta_L G, H) \to \Pol(G,\delta H)$. In contrast, the pairs $(\Gamma_k,\Omega_k)$ and $(\delta,\delta_R)$ are not adjoint pairs; they are only thin adjoints. Since $\Gamma_k$ and $\delta$ are right adjoints (of $\Lambda_k$ and $\delta_L$), they preserve products. Hence applying Corollary~\ref{cor:cat} at least gives minion homomorphisms $\Pol(G, \Omega_k H) \to \Pol(\Gamma_k G, H)$ and $\Pol(G, \delta_R H) \to \Pol(\delta G, H)$. However, our results would only follow from the opposite direction. This is impossible to obtain in general: a minion homomorphism $\Pol(\delta G, H) \xrightarrow{?} \Pol(G, \delta_R H)$ would imply the following minion homomorphism \[\Pol(K_4, K_k) \to \Pol(\delta K_6, K_k) \xrightarrow{?} \Pol(K_6, \delta_R K_k) \to \Pol(K_6, K_{2^k})\] (trivially from $\delta K_6 \to K_4$ and $\delta_R K_k \to K_{2^k}$), which is impossible by~\cite[Proposition~10.3]{BartoBKO19}. Thus the seemingly technical difference between adjoints and thin adjoints turns out to be crucial. As proved by Matsushita~\cite{Matsushita17}, the hom complex $\Hom(K_2,-)$ has a left adjoint from the category of $\mathbb{Z}_2$-simplicial complexes with $\mathbb{Z}_2$-simplicial maps to the category of graphs; the left adjoint preserves products. \section{Conclusions} The reduction in Lemma~\ref{lem:red}, on which our first main result relies, does not have a corresponding minion homomorphism. Given the simplicity of the reduction itself, this contrasts with the success of minion homomorphisms in explaining other reductions between promise constraint satisfaction problems. It remains to be seen whether this notion can be extended to a more general relation between polymorphism sets in a way that would imply Lemma~\ref{lem:red}. The question of whether $K_4$ is left-hard remains open. In principle, it may be possible to extend the proof in Appendix~\ref{app:topo} using more tools from algebraic topology to analyse $\mathbb{Z}_2$-maps $(\Sphere^1)^L \to \Sphere^2$ and deduce an appropriate minion homomorphism.
It could also be interesting to consider how $\delta$ or $\delta_R$ affect the topology of a graph, of cliques in particular. Another direction could be to look at Huang's Theorem~\ref{thm:Huang} not as a black box: could constructions like $\delta$ be useful to say something directly about PCPs?
\section{Introduction}\label{sec:intro} One of the main challenges of modern cosmology is to explain the late-time acceleration of the expansion of the Universe. There are several independent methods to probe dark energy: Baryonic Acoustic Oscillations (BAOs), weak and strong gravitational lensing, cluster counts and supernovae. However, BAO measurements appear to be the most powerful cosmological tool at low redshift because they are limited by statistical rather than systematic errors. Using the BAOs as a standard ruler, one can measure the expansion of the Universe as a function of redshift and so constrain the properties of dark energy (e.g. \citealp{Weinberg2013}). A complementary method to the usual large optical surveys of galaxies to study BAOs is HI intensity mapping \citep{Battye2004,Peterson2006,Chang2008,Loeb2008,Peterson2009}. This method aims to give a tomographic distribution of the neutral HI emission present in the recent Universe over large angular scales. Simulations indicate that the HI intensity mapping technique will give very precise constraints on cosmological parameters, and in particular, on the dark energy equation of state at low redshift \citep{Chang2008}, and at high redshift \citep{McQuinn2006,Bowman2007,Mao2008}. This sensitivity comes from the large volume of the survey. Some HI experiments are currently underway, such as BAOradio\footnote{http://arxiv.org/pdf/1209.3266v1.pdf}, BINGO\footnote{http://www.jb.man.ac.uk/research/BINGO/}, CHIME\footnote{http://chime.phas.ubc.ca/}, FAST\footnote{http://fast.bao.ac.cn/en/}, TIANLAI\footnote{http://tianlai.bao.ac.cn/}. Using intensity mapping, the Green Bank Telescope (GBT)\footnote{https://science.nrao.edu/facilities/gbt/} has provided the first detection of the HI signal at $z \sim 0.8$ cross-correlated with the WiggleZ Dark Energy Survey \citep{Masui2013}. This detection establishes HI intensity mapping as a promising tool and gives a lower limit on the fluctuation power of the HI signal. The first phase of the SKA instrument will be built in the next decade and it will offer a broad range of frequencies and a large survey area. Thus, this instrument has great potential to deliver maps of the HI intensity \citep{Bull2015}. To convert the promise into reality, radio observations will have to deal with different contaminants, which can dominate the signal of interest in the data, such as astrophysical foregrounds, radio frequency interference (RFI) and instrumental noise. Another important contaminant is time-variable noise introduced during propagation of the signal through the atmosphere, which gives an additional contribution to the 1/$f$ noise of the instrument. The amplitude of the atmospheric effects depends on the observing frequency, on the elevation of the instrument above sea level and on the instantaneous weather conditions. The most important challenge for any intensity mapping experiment is the control of foreground emissions. Thus, the data analysis must include a robust foreground subtraction algorithm. At $\sim$ 1 GHz, the most relevant foregrounds are a combination of Galactic emission, mostly synchrotron, and emission from the background of extragalactic point sources. See, for example, the discussion in \citet{Battye2013}. These emissions are at least four orders of magnitude larger ($T_\textrm{b} \sim 1000$\,mK) than the HI signal fluctuations ($\delta T_\textrm{b} \sim 1$\,mK).
In order to subtract the foreground, the high spectral resolution offered by any HI experiment allows us to exploit the frequency information. In particular, the spectra of the foregrounds are expected to be smooth and can be approximated over the frequency range of interest to first order by a modified power law with a spectral curvature as a function of frequency \citep{Kogut2012}. This spectral smoothness can be used to separate the HI signal from foreground signals. The most common approach is to fit a smooth function to the data in frequency space and remove it. Several component separation techniques have been discussed in the literature for removal of the Galactic foreground, both specific to low-redshift intensity mapping and to epoch-of-reionisation experiments. We can classify the foreground cleaning methods into two different categories: parametric and blind methods. The parametric methods are model-dependent and consist of applying a parametric fitting to each pixel in the maps \citep{Ansari2012}. The blind methods do not require any assumptions about the properties of the foregrounds or the instrument response. Examples of such methods are FASTICA \citep{Chapman2012,Wolz2014}, the Correlated Component Analysis (CCA) method \citep{Bonaldi2014}, Karhunen-Loeve Decomposition \citep{Shaw2014}, GMCA \citep{Chapman2012}, principal component analysis (PCA) and independent component analysis (ICA) \citep{Alonso2015}. All methods must also deal with instrumentally induced effects such as the mode-mixing of angular and frequency fluctuations caused by the frequency dependence of the beam. The component separation methods are based on the spectral smoothness of the foregrounds, so the calibration will be a critical step in order not to compromise this smoothness. There are two instrumental approaches to HI mapping: single dishes with multiple feeds, or interferometer arrays. The single-dish approach offers a relatively cheap and simple way of doing intensity mapping. Unlike single-dish experiments, which naturally have good surface-brightness sensitivity, interferometer arrays suitable for intensity mapping will require many close-packed elements in order to detect the very low surface brightness HI signal, and thus need large correlators \citep{thompson2008interferometry}. Both kinds of experiment will have to deal with potential systematics similar to those encountered in cosmic microwave background (CMB) imaging experiments, such as gain variations, spillover, ground pickup and correlated noise in frequency. Single-dish experiments will require stable receiver systems, good calibration and an appropriate scanning strategy. On the other hand, interferometers are known to deal more naturally with systematics and foregrounds than single dishes, and hence receiver stability is not such an important issue (e.g. \citealp{Dickinson2004,Readhead2004}). However, we point out that existing interferometers are limited by the small number of their smallest baselines and hence fail to provide the required surface brightness sensitivity \citep{Bull2015}. In this paper, we focus on the concept of a single-dish experiment with the BINGO (BAO from Integrated Neutral gas Observations) instrument, which aims at mapping the HI emission at frequencies from 960 to 1260\,MHz ($z= 0.12-0.48$). This experiment will measure the HI power spectrum, and will detect for the first time the Baryon Acoustic Oscillations around 1 GHz.
Some of the details of the BINGO experiment can be found in \citet{Battye2013}. Though we use the BINGO instrument as a concrete example, nearly all the following analysis concepts can be applied to other single-dish instruments. We organise the paper as follows. In Section~\ref{sec:simu}, we describe our simulations, in which we incorporate foreground and instrumental noise models. We also make some predictions of the total atmospheric contribution to the instrumental noise level and of the atmospheric fluctuations coming from the inhomogeneous distribution of water vapour. In Section~\ref{sec:fg_noise_sep}, we focus on two simple foreground and noise subtraction procedures: parametric fitting and principal component analysis. The success of these methods depends on the smoothness of the frequency spectrum of the noise and the foreground. In Section~\ref{sec:smoothness}, we place some requirements on the smoothness of these spectra needed for extracting the HI signal from the instrumental $1/f$ noise and from the brightest foreground, the synchrotron emission. In this way, we assess the robustness of the cleaning methods according to the smoothness of the frequency spectrum of the simulated data. \section{Simulation of a single-dish experiment}\label{sec:simu} In order to explore the foreground cleaning methods we use simulations of the proposed BINGO telescope \citep{Battye2013} as a concrete example of a single-dish instrument. To do this we require simulated maps of the sky at the observation frequencies. We produce a time-ordered data stream with a foreground model which includes Galactic synchrotron plus a background of unresolved point sources, detailed in Section~\ref{subsec:fg_em}, while the way we produce the HI signal is described in Section~\ref{subsec:21cm}. In Section~\ref{subsec:mapmaking}, we introduce the map-making method used to obtain the sky maps of the experiment and describe the instrumental noise model ($1/f$ and thermal noise) in Section~\ref{subsec:instru_noise}. Finally, in Section~\ref{subsec:atm_prediction}, we make some predictions of the amplitude of the atmospheric noise. \subsection{Foreground model}\label{subsec:fg_em} \subsubsection{Galactic synchrotron}\label{subsubsec:synch} To generate a template of the sky emission, we use the reprocessed 408\,MHz Haslam et al. map \citep{Remazeilles2014}, which constitutes a good tracer of the diffuse synchrotron emission. The synchrotron spectrum in terms of brightness temperature can be approximated by $T(\nu) \propto \nu^{\beta+C\ln(\nu/\nu_0)}$ \citep{Kogut2012}, where $\nu$ is the radiation frequency, $C$ the curvature defined with respect to a reference frequency $\nu_0$ and $\beta$ the spectral index at $\nu=\nu_0$. Observations have indicated that there are spatial variations of the spectral index \citep{Reich1988,Platania1998,Davies2006}. We extrapolate this template to the frequencies of interest by using 3 different models for $\beta$, listed below from the simplest to the most complicated: \\ 1. We ignore any variation of the spectral index $\beta$ across the sky and fix it to $\beta=-2.8$, an average value estimated at frequencies near $\sim 1$\,GHz \citep{Platania1998}. \\ 2. We assume a Gaussian spatial distribution of the synchrotron index $\beta$, with $\langle\beta\rangle=-2.8$ and an r.m.s. value of 0.17 \citep{Platania1998}. \\ 3. We use the spectral model of the global sky from 10\,MHz to 100\,GHz developed by \citet{deOliveiraCosta2008}. This final model is the most realistic one.
It includes spatial correlations of the Galactic emission across the sky and a frequency curvature modification of the synchrotron index $\beta$. The mean value of $\beta$ is $-2.5$ and the steepening of this index is 0.03. The models listed above are summarised in Table \ref{tab:synchrmodel}. \begin{table} \caption{Summary of the different models of the Galactic synchrotron emission.} \label{tab:synchrmodel} \begin{center} \leavevmode \setlength{\tabcolsep}{3pt} \begin{tabular}{lccc} \hline \hline \small Synchrotron & Characteristics & Mean $\beta$ & r.m.s. $\beta$ \\ \hline \small Model 1 & \small $\beta$ constant on the sky & $-2.8$ & $\beta$ constant \\ \small Model 2 & \small A Gaussian spatial distribution of $\beta$ & $-2.8$ & 0.17 \\ \small Model 3 & \small \citet{deOliveiraCosta2008} model & $-2.5$ & 0.03 \\ \hline \end{tabular} \end{center} \end{table} A high-resolution template of synchrotron emission is required to make realistic tests of foreground removal methods. However, the reprocessed map has a resolution corresponding to a beam with FWHM equal to 56\,arcmin; since we require a higher resolution map, it cannot be directly used as a template of the synchrotron emission, and there are no other full-sky astronomical data sets with resolution better than $\sim$ 1$^{\circ}$. Hence, to account for small-scale fluctuations, we add to the original map a random Gaussian realisation with a power spectrum $C_{\ell}= \ell^\gamma\exp(-\ell^2\sigma^2_{{\textrm{sim}}})$, where $\gamma=-2.7$, $\sigma_{{\textrm{sim}}}$ is the Gaussian width of the simulation and $\ell$ the multipole. The details are given in \citet{MivilleDeschenes2007} and \citet{Remazeilles2014}. \subsubsection{Extragalactic point sources}\label{subsubsec:ps} We assume that the distribution of such sources is not spatially correlated, that is to say the clustering is weak \citep{Liu2009}, and hence that they are Poisson distributed. The clustering increases the pixel-pixel correlations \citep{Battye2013} and thus can have an impact on the foreground removal method; we will investigate this contribution in subsequent work. Extragalactic point sources can be divided into two populations. The first comprises bright and isolated point sources that can be readily detected by the instrument and removed directly using the data of the experiment. The second population consists of a continuum of unresolved sources. At radio frequencies (GHz), the r.m.s. confusion $\sigma_{\textrm{c}}$ in a telescope beam with full width at half maximum $\theta_{\textrm{FWHM}}$ can be approximated by \citep{Condon1974} \begin{equation} \frac{\sigma_{\textrm{c}}}{\text{\textnormal{mJy}}} \approx 0.2 \left( \frac{\nu}{\text{\textnormal{GHz}}} \right)^{-0.7} \left( \frac{\theta_{\textrm{FWHM}}}{\text{\textnormal{arcmin}}} \right)^{2}. \end{equation} For the BINGO telescope, with $\theta_{\textrm{FWHM}}=40$\,arcmin, this is around 320\,mJy at 1000\,MHz, so BINGO will be subject to confusion noise when considering a continuum detection; this is irrelevant for an HI line signal, however. The brightness of each source is drawn from the differential source counts $\frac{\textrm{d}N}{\textrm{d}S}$, with $N$ the number of sources per steradian and $S$ the flux.
\citet{Battye2013} used data from multiple continuum surveys at 1.4\,GHz \citep{Mitchell1985, White1997,Ciliegi1999, Gruppioni1999, Hopkins1999, Richards2000, Bondi2003, Fomalont2006, Owen2008, Seymour2008,Ibar2010} and fitted a 5th-order polynomial to these data \begin{equation} \textnormal{log}_{10}\left( \frac{S^{2.5}\textrm{d}N/\textrm{d}S}{N_0}\right)=\sum_{i=0}^5a_i\left[\textnormal{log}_{10}\left( \frac{S}{S_0}\right)\right]^i, \end{equation} where $a_0=2.593$, $a_1=9.333\times 10^{-2}$, $a_2=-4.839\times10^{-4}$, $a_3=2.488\times10^{-1}$, $a_4= 8.995\times10^{-2}$ and $a_5=8.506\times10^{-3}$; and $N_0 = 1$\,Jy$^{3/2}$\,sr$^{-1}$ and $S_0 = 1$\,Jy. Each source is assigned a power-law spectrum \begin{equation} S(\nu)=S(1.4 \, \text{\textnormal{GHz}})\left( \frac{\nu}{1.4 \, \text{\textnormal{GHz}}}\right)^{-\alpha}. \end{equation} The spectral index $\alpha$ is randomly chosen from a Gaussian distribution \begin{equation} P(\alpha)=\frac{1}{\sqrt{2\pi \sigma^2}}\exp\left[-\frac{(\alpha-\alpha_0)^2}{2\sigma^2}\right], \end{equation} with mean $\alpha_0=2.5$ and width $\sigma=0.5$ \citep{Tegmark2000}. Assuming that the sources with flux $S > S_{\textnormal{max}}$ can be subtracted from the data, we estimate the mean brightness temperature contributed by the remaining sources as \begin{equation} T_{{\textrm{ps}}}(\nu,\hat{n})=\left( \frac{\textrm{d}B}{\textrm{d}T} \right)^{-1}\Omega_{\textrm{pix}}^{-1}\sum_{i=1}^{N}S_i(\nu), \end{equation} where $S_i(\nu)$ is the flux of point source $i$, extrapolated from 1.4\,GHz, and $\Omega_{\textrm{pix}}$ is the pixel area, equal to 0.22\,arcmin$^2$. The parameter $\textrm{d}B/\textrm{d}T=2k_\textrm{B}/\lambda^2$ is the conversion factor between intensity and brightness temperature units, with $k_{\textrm{B}}$ the Boltzmann constant and $\lambda$ the wavelength of the incoming radiation. We expect that the brightest sources could be removed directly from the BINGO data or could be masked using the NRAO VLA Sky Survey (NVSS) \citep{Condon1998}, which is considered to be 99\% complete at a flux density limit of 3.4\,mJy. For our simulation, to be conservative, we take $S_{\textnormal{max}}=100$\,mJy, which corresponds to $\sim 1$ source per square degree. We expect to either subtract or mask most of the brightest radio sources above this flux density. We will investigate in a following paper the residual contribution due to the variability of radio sources and to calibration issues. In the following, the maps are created using the HEALPix pixelisation scheme \citep{Gorski2005}. A foreground map of this simulation at 1000\,MHz is given in Fig.~\ref{Fig:fgmap}. The colour bar represents the brightness temperature in mK. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{map_fg.png} \end{center} \caption{Mollweide projection of the foreground with a background of unresolved point sources (S\,$<100$\,mJy) and synchrotron emission at 1000\,MHz in celestial (RA/Dec) coordinates with RA=0$^\circ$ at the centre and increasing to the left. The white solid lines delimit the area expected to be observed by the BINGO experiment. } \label{Fig:fgmap} \end{figure} \subsection{HI signal}\label{subsec:21cm} We use the CORA software \citep{Shaw2014} to simulate the HI brightness temperature. We assume the Planck+WP+highL+BAO cosmological model given in \citet{PlanckCollaboration2013}.
The HI brightness temperature can be written as a sum of two parts \begin{equation} T_{\textrm{b}}=\bar{T}_{\textrm{b}}(1+\delta_{{{\textrm{HI}}}}), \end{equation} where $\delta_{{{\textrm{HI}}}}$ is the HI density contrast and $\bar{T}_\textrm{b}$ the mean HI brightness temperature given by \small \begin{align} \bar{T}_\textrm{b}(z)= 0.3\,\textrm{mK}\left( \frac{\Omega_{{\textrm{HI}}}}{10^{-3}} \right) \left( \frac{\Omega_{\textrm{m}}+(1+z)^{-3}\Omega_{\Lambda}}{0.29} \right)^{-1/2} \left( \frac{1+z}{2.5} \right) ^{1/2}. \end{align} \normalsize We assume that the HI density parameter is $\Omega_{{\textrm{HI}}}=5\times 10^{-4}$ \citep{Switzer2013} and that the HI bias is independent of scale and redshift, with $b_{{\textrm{HI}}}=1$. The HI brightness temperature power spectrum can be modelled as \begin{equation} P_{{\textrm{T}}_{\textrm{b}}}(\vec{k},z)=\bar{T}^2_{\textrm{b}}(z) \left[ b_{\textrm{HI}}+f\mu^2\right]^2 D^2(z)P_{\textrm{m}}(k,z), \end{equation} where $\mu \approx k_{\parallel}/k$ in the flat-sky approximation, $P_{\textrm{m}}(k,z)$ is the matter power spectrum, $D(z)$ the linear growth factor normalised by $D(0)=1$, and $f$ the linear growth rate $f=\textrm{d}\ln D/\textrm{d}\ln a$, where $a$ is the cosmological scale factor. The HI angular power spectrum is obtained from Gaussian random fields with the flat-sky angular power spectrum \citep{Datta2007} \begin{equation} C^{\textrm{flat}}_{\ell}(\Delta \nu)=\frac{\bar{T}_{\textrm{b}}^2}{\pi r^2_{\textrm{v}} }\int_0^\infty \textrm{d} k_\parallel \cos(k_\parallel r_{\textrm{v}} \Delta \nu)P_{{\textrm{T}}_{\textrm{b}}}(\textbf{k}), \end{equation} where $r_{\textrm{v}}$ is the comoving distance and $\textbf{k}$ has components $k_\parallel$ and $\ell/r_{\textrm{v}}$ along the line-of-sight and in the plane of the sky, respectively. Using these inputs, we generate the maps of the HI signal, which have r.m.s. fluctuations around $0.1$\,mK. \subsection{Simulation of a single-dish experiment}\label{subsec:mapmaking} We consider a single-dish experiment based on the BINGO concept. BINGO will be a dual mirror Compact Antenna Test Range (CATR) telescope with a 40\,m primary mirror and an offset focus. Apart from the telescope optics, the design of the instrument is similar to that of \citet{Battye2013}. The proposed BINGO experiment will have a receiver array containing between 50 and 60 feed horns. In our simulation, we model the receiver plane with 56 feed horns with a 90\,m focal length. We consider the frequency range from 960\,MHz ($z=0.48$) to 1260\,MHz ($z=0.13$). To reduce the computational cost, we divide the 300\,MHz band into 20 channels, each of 15\,MHz bandwidth, though the actual instrument will have much narrower frequency channels to facilitate RFI excision. The sampling rate is 0.1\,Hz. The instrumental parameters used for our simulation are listed in Table \ref{tab:survp}.
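Before turning to the instrumental set-up, a quick numerical cross-check of the mean brightness temperature over the BINGO band may be useful. The short sketch below is our own illustrative code, not part of the simulation pipeline; it assumes the mK prefactor written above, and the Planck-like values $\Omega_{\textrm{m}}=0.31$ and $\Omega_{\Lambda}=0.69$ are assumptions for the purpose of the example. \begin{verbatim}
import numpy as np

# Hypothetical cross-check of the mean HI brightness temperature.
# Cosmological parameter values below are assumed, not pipeline inputs.
NU_HI = 1420.406  # rest-frame HI frequency [MHz]

def tb_mean_mK(z, omega_hi=5e-4, omega_m=0.31, omega_l=0.69):
    e = (omega_m + (1.0 + z) ** -3 * omega_l) / 0.29
    return 0.3 * (omega_hi / 1e-3) * e ** -0.5 * ((1.0 + z) / 2.5) ** 0.5

for nu in (960.0, 1260.0):  # band edges [MHz]
    z = NU_HI / nu - 1.0
    print("nu = %6.1f MHz, z = %.2f, Tb = %.3f mK" % (nu, z, tb_mean_mK(z)))
\end{verbatim} This gives $\bar{T}_{\textrm{b}}\simeq 0.06$--$0.09$\,mK across the band, comparable to the $0.1$\,mK r.m.s. fluctuations of the simulated maps.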
\begin{table} \caption{Instrumental parameters for the BINGO simulation.} \label{tab:survp} \begin{center} \leavevmode \begin{tabular}{lcr} \hline \hline \small Survey parameters & \\ \hline \small Redshift range [$z_{\textrm{min}}, z_{\textrm{max}}$] & \small [0.13, 0.48] \\ \small Frequency range [$\nu_{\textrm{min}}, \nu_{\textrm{max}}$] (MHz) & \small [960, 1260] \\ \small Channel width $\Delta \nu$ (MHz) & \small 15 \\ \small FWHM (arcmin) at 1 GHz & \small 40 \\ \small Number of feed horns $n_\textrm{f}$ & \small 56 \\ \small Sky coverage $\Omega_{{\textrm{sur}}}$ (deg$^2$) & \small 3000 \\ \small Observation time $t_{{\textrm{obs}}}$ (yr) & \small 1 \\ \small System temperature $T_{{\textrm{sys}}}$ (K) & \small 50 \\ \small Sampling rate (Hz) & \small 0.1 \\ \hline \end{tabular} \end{center} \end{table} We assume that the horns are arranged in a rectangular configuration, spaced 3.3\,m apart, and that the beams are given by a circular Gaussian. The beams are diffraction-limited and, therefore, the full width at half maximum $\theta_{\textrm{FWHM}}$ of the beam can be scaled to any frequency $\nu$ by \begin{equation} \theta_{\textrm{FWHM}}(\nu)=\theta_{\textrm{FWHM}}(\nu_0)\frac{\nu_0}{\nu}, \end{equation} with $\nu_0=1000$\,MHz and $\theta_{\textrm{FWHM}}(\nu_0)=40$\,arcmin. For the following simulations, we will assume that the telescope maps a $15\degr$ declination strip centred at $-5\degr$ as the sky drifts past the telescope. The declination of $-5\degr$ has been chosen to minimise the foreground emission, which is lowest between $+10\degr$ and $-10\degr$ declination. We assume one full year of on-source integration. In practice, this will likely represent about 2 years of real observation time, since we may use only night-time observations and will probably discard some data due to foreseeable technical issues such as radio frequency interference and weather downtime. In order to obtain the simulated BINGO maps, we use a maximum likelihood map-making algorithm \citep{Stompor2002,Hamilton2003}. We model the timelines $\bf{d}$ as $\textbf{d}=A\bf{s}+\bf{n}$, where $\bf{s}$ is the pixelized sky signal which is mapped into the timelines and corrupted by noise $\bf{n}$. The pointing information is represented by the pointing matrix $A$ of size N$_{{\textrm{samples}}}$ $\times$ N$_{{\textrm{pixels}}}$, which connects the time index to the pixel index. The map-making step is given by \begin{equation} \hat{\textbf{s}}=({\transpose{A}} N^{-1}A)^{-1}{\transpose{A}}N^{-1}\textbf{d}, \end{equation} where $N$ is the noise covariance matrix and $\hat{\textbf{s}}$ is the best estimate of $\bf{s}$. The $1/f$ noise induces slow drifts of the receiver gains; if we do not take steps to mitigate it, it will introduce stripes in the maps along the direction of the drift scan. The inversion of $({\transpose{A}} N^{-1}A)$ is performed by using the preconditioned conjugate gradient method. The preconditioner is a pixel-domain diagonal matrix weighting the pixels by the number of times they have been observed. This method is described in detail in \citet{Cantalupo2010}. We set the \texttt{HEALPix} resolution of the map to nside\,$=\,128$, which corresponds to a map pixel size of 27\,arcmin. The focal plane configuration will lead to some gaps in the observed sky band. To correct for this, we rotate the beams of the horns on the sky by an angle of $\sim5\,\degr$. In Fig.~\ref{Fig:rotsignal}, we show the drift scan strips of the sky emission.
In the following, we consider a single frequency channel centred at 997.5\,MHz to display the results. The top panel shows the HI signal and the bottom panel the Galactic synchrotron emission plus a background of unresolved point sources. The amplitude of the foreground emission is higher than the signal of interest by four orders of magnitude (note the difference in colour-scale in the strips). These maps are plotted with a Cartesian projection using the \texttt{HEALPix} software. \begin{figure} \centering { \includegraphics[height=0.4cm, width=\columnwidth]{signal.png} } \quad { \includegraphics[width=5.5cm]{bar.pdf} } \quad { \includegraphics[height=0.4cm, width=\columnwidth]{foreground.png} } \quad { \includegraphics[width=5.5cm]{bar2.png} } \caption{Two drift scan strips of the observed sky with only HI signal in the \textit{top} panel, while the \textit{bottom} panel includes Galactic synchrotron emission plus a background of unresolved point sources. The mask of the Galactic plane at $|b|<20^{\circ}$ is plotted with grey pixels. Note the different linear intensity scales. Colour bars represent the temperature in mK.} \label{Fig:rotsignal} \end{figure} \subsection{Instrumental noise}\label{subsec:instru_noise} The timelines are corrupted by thermal (white) noise. The optimal sensitivity of the BINGO experiment per pixel can be defined as follows \begin{equation} \sigma_{\textrm{t}}=\frac{T_{{\textrm{sys}}}}{\sqrt{ t_{{\textrm{pix}}}\Delta \nu }}, \label{eq:therm} \end{equation} where $\Delta\nu$ is the frequency channel width given in Table \ref{tab:survp}. We assume the same system temperature $T_{{\textrm{sys}}}$ for all receivers. The parameter $t_{{\textrm{pix}}}$ is the integration time per pixel, defined by \begin{equation} t_{{\textrm{pix}}}=n_{\textrm{f}}t_{{\textrm{obs}}}\frac{\Omega_{{\textrm{pix}}}}{\Omega_{{\textrm{sur}}}}, \end{equation} where $n_{\textrm{f}}$ denotes the number of feed horns, $t_{{\textrm{obs}}}$ is the total integration time, $\Omega_{{\textrm{sur}}}$ is the survey area and $\Omega_{{\textrm{pix}}}$ is the beam area. The values of these parameters are given in Table \ref{tab:survp}. We assume $\Omega_{{\textrm{pix}}}=\theta_{\textrm{FWHM}}^2$ and, for an integration time of one year, we obtain $\sigma_{\textrm{t}}=25\,\mu$K. Our simulation also contains $1/f$ noise, produced by gain fluctuations of the amplifiers, which is thus correlated across all frequency channels. The impact of these fluctuations is usually simulated in the frequency domain using a 1/$f$ power spectrum. To generate realistic 1/$f$ noise, we create a time sequence of zero-mean white noise, compute its Fourier transform and weight it according to the power spectral density \begin{equation} P_{{\textrm{sd}}}=\frac{\sigma_{\textrm{t}}^2}{\nu_{{\textrm{samp}}}}\left[1+ \left( \frac{f_{{\textrm{knee}}}}{f} \right) ^{\alpha}\right], \end{equation} where $\nu_{{\textrm{samp}}}$ is the sampling frequency and $f$ the discrete Fourier transform sample frequency, for a length equal to the number of time samples and a sample spacing of 1/$\nu_{\textrm{samp}}$. The knee frequency $f_{{\textrm{knee}}}$ is the frequency at which the thermal and $1/f$ noise make equal contributions to the power spectral density. Finally, we compute the inverse Fourier transform of these data to obtain the time-ordered data of the noise.
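The procedure just described can be condensed into a short illustrative sketch (our own code, not the BINGO pipeline). The parameter values follow Table~\ref{tab:survp}; as an implementation choice, we weight the Fourier amplitudes by the square root of the PSD shape, with the white-noise level set per time sample. \begin{verbatim}
import numpy as np

# Illustrative noise sketch; variable names and structure are ours.
T_SYS, DNU, NU_SAMP = 50.0, 15e6, 0.1       # [K], [Hz], [Hz]
F_KNEE, ALPHA = 1e-3, 1.0                   # knee frequency [Hz], slope

# Thermal noise per map pixel (Table 2 values): sigma_t ~ 25 microK.
omega_pix = (40.0 / 60.0) ** 2              # beam area [deg^2]
t_pix = 56 * 3.156e7 * omega_pix / 3000.0   # integration time per pixel [s]
print("sigma_t = %.0f microK" % (1e6 * T_SYS / np.sqrt(t_pix * DNU)))

def noise_timestream(n, rng=np.random.default_rng(1)):
    """White noise, Fourier-weighted to follow the 1/f + thermal PSD."""
    sigma = T_SYS / np.sqrt(DNU / NU_SAMP)  # thermal noise per 10-s sample
    white = rng.standard_normal(n) * sigma
    f = np.fft.rfftfreq(n, d=1.0 / NU_SAMP)
    w = np.ones_like(f)
    w[1:] = np.sqrt(1.0 + (F_KNEE / f[1:]) ** ALPHA)  # sqrt of PSD shape
    return np.fft.irfft(np.fft.rfft(white) * w, n=n)
\end{verbatim}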
In practice, we will filter the data on timescales of 20\,min during data processing; this is the time the largest structures of interest take to drift through the BINGO field-of-view. For simplicity, we simulate each 20\,min timestream separately and join them together by fixing the first sample of the $n$-th timestream to the last sample of the $(n-1)$-th one. We assume the same value of the knee frequency for each receiver, $f_{{\textrm{knee}}}=10^{-3}$\,Hz for a 15\,MHz channel bandwidth, which corresponds to the value we aim to achieve with the BINGO pseudo-correlation receivers. The 1/$f$ slope index $\alpha$ is assumed to be 1. In this paper, we start by making the assumption that the $1/f$ noise is perfectly correlated between the frequency channels, which is what is expected if we are dealing with simple gain fluctuations, and we assume a flat frequency spectrum ($\beta=0$). In Section~\ref{sec:smoothness}, we investigate what happens if these assumptions are relaxed. In Fig.~\ref{Fig:noisemap}, we show two maps of the noise, with thermal noise only (in the top panel) and with added $1/f$ noise (in the bottom panel). One can notice the stripes along the direction of the scan induced by the instrumental $1/f$ noise, which is larger than the thermal noise by a factor of $\sim$ 100. \begin{figure*} \centering { \includegraphics[width=15cm]{noise_thermal.png} } \quad { \includegraphics[width=5.5cm]{noise_thermal_bar.png} } \quad { \includegraphics[width=15cm]{noise.png} } \quad { \includegraphics[width=5.5cm]{noise_bar.png} } \caption{Maps of the drift scan of the instrumental noise. The \textit{top} panel represents the thermal noise and the \textit{bottom} panel the $1/f$ noise. Note the different intensity scales. Colour bars represent the temperature in mK.} \label{Fig:noisemap} \end{figure*} \subsection{Atmospheric noise}\label{subsec:atm_prediction} The observations of a ground-based telescope are affected by the atmosphere at different levels, depending on the observing frequency and, of course, on the prevailing weather conditions. The incoming signal will be absorbed or scattered by the atmosphere. This effect increases the noise level of the instrument and we quantify it in Appendix~\ref{subsec:totalatm}. Around 1\,GHz, the optical depth is dominated by oxygen, which is constant in time, with a small contribution from water, usually quantified in terms of the precipitable water vapour (PWV). The water vapour content can undergo changes on hourly (or faster) timescales and vary spatially. In Appendix~\ref{subsec:totalatm} we show that the amount of PWV at 1\,GHz does not have a significant impact on the total brightness temperature of the atmosphere. We find an atmospheric contribution of $\sim 1.81$\,K on days with favourable atmospheric conditions (PWV $< 2$\,mm), and $\sim 1.82$\,K on days with unfavourable weather (PWV $> 4$\,mm). Thus, we expect the atmospheric contribution to the system temperature to be at the level of a few percent. However, we are also concerned with the fluctuating part of the emission and absorption, which leads to an additional 1/$f$ noise-like component in the time-ordered data. We quantify this effect in Appendix~\ref{subsec:varatm}. The main source of fluctuations is atmospheric turbulence in the troposphere. Oxygen molecules are uniformly mixed in the atmosphere, but the water vapour molecules show inhomogeneities.
Although water vapour contributes only a small percentage, around 1\%, to the noise level, its variations with time can produce fluctuations that are large compared to the HI signal and the thermal noise. This can generate correlations between the time streams of data from different receivers, and for the same receiver at different times. These fluctuations depend on the weather conditions at the observing site, on the telescope, on the frequency bands and on the observing strategy \citep{Church1995,Lay2000}. The effects of atmospheric noise have been well studied in cosmic microwave background experiments (e.g. \citealp{Davies1996, Sayers2010}), and can be approximated by $1/f$ noise at low frequencies. However, the amplitude of the atmospheric fluctuations in the time streams has not been measured at 1\,GHz. From the back-of-the-envelope calculation detailed in Appendix~\ref{subsec:varatm}, we find that the amplitude of the atmospheric fluctuations in antenna temperature for a single dish is $\Delta T_{{\textrm{atm}}} \sim 0.01$\,mK. Thus, the amplitude of these fluctuations is below the instrumental noise, which is expected to be $\sim$1\,mK for a frequency resolution of 15\,MHz for BINGO. So, the contribution from the atmospheric fluctuations appears not to be a challenge for a single-dish experiment observing around 1\,GHz when the weather is stable. \section{Foreground and instrumental noise subtraction}\label{sec:fg_noise_sep} For an observing frequency around 1\,GHz, the synchrotron emission and the extragalactic point sources are the most relevant foregrounds. The removal of the foregrounds and of the instrumental $1/f$ noise will rely on the smoothness of their frequency spectra. In this section, we want to quantify how well the foregrounds can be subtracted in the presence of thermal and 1/$f$ noise. Our philosophy is to focus on two simple cleaning procedures: parametric fitting and a blind method, principal component analysis (PCA). We describe these methods in Section~\ref{subsec:sep_meth} and present their results in Section~\ref{subsec:res_sep_meth}. We also demonstrate the possibility of using a blind method to remove the instrumental $1/f$ noise in Section~\ref{subsec:noise_sep}. In the following, we assume no systematics and a perfect calibration of the data. \subsection{Methods}\label{subsec:sep_meth} \subsubsection{Parametric fitting} Parametric fitting is a common method to parameterise foregrounds (e.g. \citealp{Brandt1994, Ansari2012}). The approach is to directly fit an explicit parametric model of the foregrounds and noise to each pixel of the maps along the frequency direction. The common foreground model is a modified power-law: to first order, the Galactic synchrotron, the main foreground emission, can be approximated by a power-law with curvature \citep{Kogut2012}. The $i$-th pixel of the simulated map of the sky at the frequency $\nu_j$ can be written as the sum of the intensity of the HI signal $T^i_{\textrm{21cm}}$, the foreground emissions $T^i_{\textrm{fg}}$ and the noise of the instrument $T^i_{\textrm{n}}$ \begin{equation} \hat{T}_j^{i}=\hat{T}^i_{\textrm{21cm},j}+\hat{T}^i_{\textrm{fg},j}+\hat{T}^i_{\textrm{n},j}. \end{equation} The hat symbol denotes a modelled quantity.
We make the assumption that the foreground $\hat{T}_{\textrm{fg}}$ and $1/f$ noise $\hat{T}^i_{\textrm{n}}$ can be modelled by \begin{equation} \hat{T}_{\textrm{fg},j}^{i}+\hat{T}^i_{\textrm{n},j}=A^i\left(\frac{\nu_j}{\nu_0}\right)^{\beta^i}, \label{eq:plfit} \end{equation} where $\beta^i$ is the spectral index and $A^i$ the amplitude in mK. This assumption on the spectral slope of the $1/f$ noise can be justified by the fact that the $1/f$ noise fluctuations are expected to have a spectral form similar to the system temperature, which can be approximated by a power-law over the BINGO frequency range. We fit Eq.~\ref{eq:plfit} to each pixel of the map along the frequency direction, minimising with a least-squares method. \subsubsection{Principal Component Analysis (PCA)} PCA \citep{Murtagh1987} has the advantage of being a non-parametric method, and so requires no specific prior information on the spectra of the foreground and the noise. This method consists of transforming the maps of the individual frequency channels into orthogonal modes according to the covariance between frequencies. \noindent We consider the data to be a matrix $S$, with $N_f \times N_p$ elements, where $N_f$ denotes the number of frequency channels and $N_p$ the number of pixels in the map. We compute the frequency covariance matrix from the simulated data \begin{equation} C_{ij}=\frac{1}{N_p}S\transpose{S}=\frac{1}{N_p}\sum_{p=1}^{N_p}T(\nu_i,\hat{n}_p)T(\nu_j, \hat{n}_p), \end{equation} where $T(\nu_i,\hat{n}_p)$ is the brightness temperature along the direction of the line-of-sight $\hat{n}_p$ and for the frequency channel $\nu_i$. From this, we can compute the entries of the correlation matrix between each pair of frequency channels \begin{equation} R_{jk}=\frac{C_{jk}}{C_{jj}^{1/2}C_{kk}^{1/2}}, \end{equation} where the indices run from 1 to $N_f$. We diagonalise the correlation matrix of the full data set with an eigenvalue decomposition and obtain \begin{equation} \transpose{P}RP=\Lambda \equiv \textrm{diag}\{\lambda_1,...,\lambda_{N_f}\}, \end{equation} where the diagonal elements of the matrix $\Lambda$ are the eigenvalues $\lambda_j$ of the matrix $R$ and the matrix $P$ is an orthogonal matrix which contains the eigenvectors. The variance of each mode is given by the amplitude of the eigenvalue $\lambda_j$, so each eigenvalue measures the contribution of its corresponding eigenvector to the total sky variance. This method characterises the foreground and noise components by independent eigenfunctions, converting the spectral correlations into a small number of high-variance modes. We select the modes with the largest eigenvalues, which correspond to the components most correlated in frequency. We then build a matrix $P_{c}$ with only the corresponding eigenvectors, and use this matrix to decompose the data into eigenfunctions $\mathbf{\phi}$ \begin{equation}\label{eq:pca_loss} \mathbf{\phi}=\transpose{P_c}S. \end{equation} The maps $S_{c}$ of the reconstructed foreground and $1/f$ noise are obtained by transforming back to frequency space \begin{equation} S_{c}=P_{c}\mathbf{\phi}. \end{equation} Finally, we find the maps of the reconstructed HI signal $S_{\textrm{HI}}$ by subtracting the reconstructed foreground and $1/f$ noise from the input maps \begin{equation} S_{\textrm{HI}}=S-S_{c}. \end{equation} \subsection{$1/f$ noise subtraction using PCA}\label{subsec:noise_sep} First, we apply the PCA method to thermal and $1/f$ noise components only, ignoring foregrounds for the moment.
The frequency spectrum of the correlated ($1/f$) noise is also expected to be smooth. Thus, one can use the PCA method to remove the instrumental $1/f$ noise. We show the result in Section~\ref{subsubsec:pcanoiserem} and test the robustness of this noise removal method with different models of the $1/f$ noise in Section~\ref{subsubsec:noisefknee}. \subsubsection{PCA results}\label{subsubsec:pcanoiserem} The instrumental noise is simulated as explained in Section~\ref{subsec:instru_noise}, and we apply the PCA method to the resulting noise maps. The $1/f$ noise is computed with the knee frequency $f_{\textrm{knee}}=1$\,mHz, which is thought to be achievable using balanced correlator receivers \citep{Jarosik2003, Bersanelli2010}. Note that a scanning strategy other than a drift scan could allow the $1/f$ noise to be removed in the map-making, but for a transit telescope such as BINGO we rely on component separation and the smoothness of the frequency spectrum. In the top panel of Fig.~\ref{Fig:ps_noise} we plot the power spectra of the noise maps with different numbers of modes removed: 1, 2 and 3. The spectra of the maps are computed up to $\ell=1000$ using \texttt{PolSpice} \citep{Szapudi2001,Chon2004,Challinor2005}. This code computes correlation functions and estimates the power spectra by integrating the resampled correlation function using Legendre-Gauss integration. The power spectra are corrected for the effect of the cut sky and for the beam and pixel window functions. In order to remove the ringing in the power spectra, we apodize the correlation function with a Gaussian of width 15$^{\circ}$. Subtracting one mode does not remove the $1/f$ noise sufficiently well, but the thermal noise level can be reached by removing 2 modes, as displayed in Fig.~\ref{Fig:ps_noise}, which also shows the residuals between the recovered thermal noise and the input thermal noise. For the case of two removed modes, the residual is significantly lower than the input thermal noise at all scales. This shows that we can recover the thermal noise model sufficiently well using principal component analysis by subtracting at least 2 principal modes. \subsubsection{Results with different noise models}\label{subsubsec:noisefknee} In order to test the efficiency of the PCA method, we compute the instrumental noise with different values of the knee frequency $f_{\textrm{knee}}$ between 1\,mHz and 10\,Hz. Note that a knee frequency of 1\,mHz might be expected for a pseudo-correlation receiver and that 10\,Hz is a worst-case scenario for a single-channel radiometer. We find that the residual noise after removing 2 principal modes is independent of the input knee frequency and, hence, that the PCA method is robust. We emphasise that we have assumed a flat frequency spectrum for the $1/f$ noise (i.e. it affects all frequency channels equally). The efficiency of the noise cleaning method depends on this assumption and on assuming a perfect calibration. We will quantify the success of this method as a function of the smoothness of the $1/f$ noise in Section~\ref{sec:smoothness}. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{ps_noise.pdf} \caption{Power spectra of the simulated noise maps after applying principal component analysis.
We show the input 1/$f$ noise (\textit{red solid line}) and the input thermal noise (\textit{green solid line}), the reconstructed noise after applying principal component analysis with 1 mode removed (\textit{black dashed line}) and with 2 modes removed (\textit{orange dashed line}). We display the power spectrum of the difference between the maps of the input thermal noise and of the reconstructed thermal noise after PCA with 2 modes removed (\textit{blue dotted line}). It is below the input thermal noise at all scales.} \label{Fig:ps_noise} \end{center} \end{figure} \subsection{Inclusion of foreground emission}\label{subsec:res_sep_meth} In this section, we show the results for the detection of the HI signal from the total intensity maps in the presence of instrumental noise and foreground emission, using the parametric fitting and principal component analysis methods. We compare both methods in Section~\ref{subsubsec:pf_res} and focus on the results of the principal component analysis method in Section~\ref{subsubsec:pca_res}. We look at the residuals of the reconstructed cosmological signal in Section~\ref{subsubsec:ps_res}. \subsubsection{Parametric fitting results}\label{subsubsec:pf_res} Here we present the results of the parametric fitting method. The extraction of the HI signal is done using only the frequency information; the fit is made for each pixel. In Fig.~\ref{fig:plotPLF}, we show the measurements as a function of frequency, for a random line-of-sight, with the synchrotron model 3 summarised in Table \ref{tab:synchrmodel} and a background of unresolved point sources ($S< 100$\,mJy) as explained in Section~\ref{subsubsec:ps}. The result is averaged over 20 realisations of the instrumental noise (thermal and $1/f$ noise). The top panel represents the simulated measurements and the reconstructed foreground emission with parametric fitting, highlighting the smooth component of the foreground. The bottom panel represents the cosmological signal recovered with parametric fitting and with principal component analysis after removing 7 modes, compared to the input signal. It shows that the parametric fitting, while superficially in agreement with the input signal, does not provide an accurate fit to the signal of interest compared to the PCA method. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{bin.pdf} \caption{\textit{Top} panel: brightness temperature of one particular line-of-sight as a function of frequency, composed of sky emissions and instrumental noise (\textit{blue solid line}), averaged over 20 realisations, and the fit based on parametric fitting (\textit{red dashed line}). \textit{Bottom} panel: the recovered HI signal from parametric fitting (\textit{red dashed line}) and from principal component analysis (\textit{black dashed line}). We show the input HI signal (\textit{blue solid line}) and the thermal noise (\textit{green solid line}). } \label{fig:plotPLF} \end{center} \end{figure} \subsubsection{Results of PCA applied to sky maps}\label{subsubsec:pca_res} Fig.~\ref{fig:plotPLF} shows that PCA induces a small offset in the reconstructed HI signal: some cosmological signal leaks into the reconstructed foreground and noise components. However, with this method, the HI signal is well recovered, with a relative error of $\sim$7\%. Here we will quantify the impact of the foreground residuals on the reconstructed HI signal after applying the PCA method to the simulated sky maps.
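For concreteness, the two cleaning procedures can be summarised by the following sketch (our own illustrative code, not the analysis pipeline), acting on a data matrix $S$ of shape $N_f \times N_p$. For simplicity, the fit of Eq.~\ref{eq:plfit} is linearised in log space, a slight departure from a direct non-linear least-squares fit; all function names are ours. \begin{verbatim}
import numpy as np

# Schematic versions of the two cleaning methods of Section 3.1.
# S is an (N_f, N_p) array of brightness temperatures in mK.

def pca_clean(S, n_modes):
    """Remove the n_modes eigenvectors of largest variance."""
    C = S @ S.T / S.shape[1]             # frequency covariance matrix
    d = np.sqrt(np.diag(C))
    R = C / np.outer(d, d)               # correlation matrix
    _, P = np.linalg.eigh(R)             # eigenvalues in ascending order
    Pc = P[:, -n_modes:]                 # dominant eigenvectors
    return S - Pc @ (Pc.T @ S)           # data minus reconstructed modes

def powerlaw_clean(S, nu, nu0=1000.0):
    """Per-pixel fit of T = A (nu/nu0)^beta, subtracted from the data."""
    x = np.log(nu / nu0)                 # assumes S > 0 (foreground-dominated)
    beta, logA = np.polyfit(x, np.log(S), 1)   # vectorised over pixels
    return S - np.exp(logA) * (nu[:, None] / nu0) ** beta
\end{verbatim} In this notation, the HI leakage discussed below is obtained by applying the same projection, $P_c(\transpose{P_c}S_{\textrm{HI}})$, to the input HI maps alone.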
In order to check the effectiveness of the foreground removal method, we define the relative error by \begin{equation} |T_{{\textrm{HI}}}(\nu)-\hat{T}_{{\textrm{HI}}}(\nu)|/ \sigma_\textrm{n}, \end{equation} where $T_{{\textrm{HI}}}(\nu)$ is the true HI signal at frequency $\nu$, $\hat{T}_{{\textrm{HI}}}(\nu)$ the recovered signal, and $\sigma_\textrm{n}$ the standard deviation of the thermal noise. Fig.~\ref{fig:residu_pca} represents the relative error as a function of the number of subtracted modes, for a simulation with HI signal, $1/f$ and thermal noise, and the different foreground models. We apply the PCA technique to each of the foreground models discussed in Section~\ref{subsubsec:synch}. Since the foreground and the $1/f$ noise spectra do not contain sharp features, we can expect them to be well described by a small number of eigenvectors, so the eigenvalues are much larger for the first few principal components. This implies that a small number of components contains almost all of the foreground emission and the $1/f$ noise. Indeed, the amplitude of the eigenvalues falls off steeply with the order of the principal component: the spectra are dominated by relatively few components, which are related to the foreground and to the smooth instrumental contamination. Furthermore, Fig.~\ref{fig:residu_pca} shows that the foreground model has an impact on the extraction of the HI signal, as the most complex foreground model (model 3) requires more modes to be removed in order to subtract the same level of foreground contamination. With the same noise model, the first foreground model requires the removal of at least 3 principal modes, the second model 4 modes and the third model 7 modes. We notice that, when the number of removed modes increases beyond a certain threshold, the relative error associated with each foreground model begins to increase. This can be understood in terms of the component separation method inducing a leakage of the cosmological signal into the foreground eigenvectors, which becomes more important as a larger number of principal modes is removed from the initial maps. For the most realistic foreground model, model 3, the cosmological signal is well recovered, with a relative error of $\sim$7\% after subtracting 7 modes; the percentage error increases to $\sim$9.5\% with 11 modes removed. To evaluate the performance of the PCA method, we calculate the amount of leakage of the HI signal into the subtracted modes using Eq.~\ref{eq:pca_loss}. The relation between the maps of the HI signal at each frequency channel $S_{\textrm{HI}}$ and the orthogonal matrix $P$, which contains the principal modes of the maps, is given by the eigenfunctions $\mathbf{\phi}$ \begin{equation} \mathbf{\phi}=\transpose{P}S_{\textrm{HI}}. \end{equation} Finally, we obtain the maps of the leakage of the HI signal $S_{\textrm{HI}}^{'}$ using the orthogonal matrix $P_{c}$, which contains only the eigenvectors removed from the initial maps \begin{equation} S_{\textrm{HI}}^{'}=P_{c}\mathbf{\phi}. \end{equation} Removing 7 principal modes with the PCA method enables one to recover the HI signal; however, this induces a loss of the cosmological signal of $\sim$5\%. In order to determine the foreground component that has the most impact on the foreground separation, we generate maps of the sky emission with only synchrotron emission and HI signal, and maps with a background of unresolved point sources and HI signal.
The maps with synchrotron emission require the removal of 3 modes in order to extract the signal of interest, and we obtain a leakage of the HI signal into the removed principal modes of $\sim$2.4\%. To recover the HI signal from maps with only a background of unresolved point sources, the subtraction of 1 mode is required, leading to a leakage of $\sim$1.6\% of the cosmological signal. Subtracting too many principal modes induces a significant loss of the cosmological signal, while, when an insufficient number of modes is removed, the recovered HI signal is still affected by the foreground contamination. Up to now we have assumed perfect smoothness of the foreground spectra. This assumption could be broken in the presence of instrumental systematic effects: insufficient knowledge of the beams, imperfect polarisation purity and mis-calibration will affect the results of the component separation methods, adding further uncertainties. To measure the BAO wiggles from the HI power spectrum, we require that the statistical error dominates the errors from foreground cleaning methods and calibration. For the present analysis, we neglect calibration systematics, postponing the discussion of the required calibration accuracy to a future paper. We are, however, aware that accurate calibration of bandpasses and beam polar diagrams is essential; quantifying the calibration requirements will be done with an end-to-end pipeline. We are also investigating a more complex foreground cleaning method, which uses combined spatial and spectral filtering techniques based on expertise from the CMB (e.g. \citealp{Leach2008, Remazeilles2011} and Olivari et al. in prep.). \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{residu_pca1.pdf} \caption{The relative error, as a percentage of the thermal noise, as a function of the number of modes removed. We plot the error for a foreground model with a background of unresolved point sources and Galactic synchrotron emission with a constant index $\beta=-2.8$ (\textit{red dashed line}), with the introduction of a random spatial distribution of the index $\beta$ across the sky (\textit{green dashed line}), and with correlated spatial and spectral variations of the index $\beta$ plus a background of unresolved point sources (\textit{black dashed line}). These models correspond respectively to models 1, 2 and 3 of Table \ref{tab:synchrmodel}. The relative error increases significantly above 10 modes due to the leakage of the cosmological signal into the reconstructed foreground and correlated noise components.} \label{fig:residu_pca} \end{center} \end{figure} \subsubsection{Maps of the reconstructed HI signal and power spectra}\label{subsubsec:ps_res} We perform the foreground cleaning using the PCA method with 7 modes subtracted and the parametric fitting. In Fig.~\ref{fig:fitting_res}, we display the maps of the recovered HI signal for foreground model 3. The simulations contain instrumental noise ($1/f$ and thermal noise). To compare with the results of the cleaning methods, we plot in the first strip the true HI signal. The second, third and fourth strips show the reconstructed HI signal after 1, 3 and 7 principal modes are removed, respectively. One can notice the similarity between the reconstructed HI signal shown in the fourth strip and the strip of the true HI signal. These two maps show that we can extract the cosmological signal from a highly contaminated map.
The fifth strip represents the recovered HI signal after applying parametric fitting. Noting that the temperature scales are different, we observe significant differences between the input HI strip and that recovered with parametric fitting: the method does not provide a sufficiently good fit to the foreground and noise components, and the foreground it fails to remove leaks into the reconstructed cosmological signal. To highlight the comparison between parametric fitting and PCA, we show in Fig.~\ref{fig:tt} the pixel-by-pixel scatter of the recovered HI signal against the input HI signal, for parametric fitting and for PCA. This plot shows that parametric fitting is much less effective than the PCA method and induces a bias in the reconstructed HI signal. We can quantify the leakage of the thermal noise and of the cosmological signal into the reconstructed foreground and noise components by calculating their power spectra and comparing them to the input power spectra. In the top panel of Fig.~\ref{fig:ps_foreground}, we display the power spectra of the true and the recovered HI signal after the removal of 7 modes. This figure shows that both the parametric fitting and PCA methods remove several orders of magnitude of foreground contamination and 1/$f$ noise, but PCA gives lower residuals than the parametric fitting method. We plot the cosmological signal leakage into the foreground and $1/f$ noise reconstruction in the bottom panel. The power spectra of the thermal noise leakage and of the HI signal leakage are lower than the input HI signal at all scales; thus, with PCA, it is feasible to extract the HI signal from a highly contaminated foreground map. \begin{figure*} \centering { \includegraphics[width=12.5cm,height=0.5cm]{signal.png} } \quad { \subfigure[Input HI signal]{ \includegraphics[width=5.5cm]{bar.pdf}} } \quad { \includegraphics[width=12.5cm,height=0.5cm]{pca1.png} } \quad { \subfigure[PCA 1 mode removed]{ \includegraphics[width=5.7cm]{bar.pdf} }} \quad { \includegraphics[width=12.5cm,height=0.5cm]{pca3.png} } \quad { \subfigure[PCA 3 modes removed]{ \includegraphics[width=5.7cm]{bar.pdf}} } \quad { \includegraphics[width=12.5cm,height=0.5cm]{pca7.png} } \quad {\subfigure[PCA 7 modes removed]{ \includegraphics[width=5.5cm]{bar.pdf}} } \quad { \includegraphics[width=12.5cm,height=0.5cm]{plf.png} } \quad {\subfigure[Parametric fitting]{ \includegraphics[width=5.5cm]{fit_plf_cbar.png}} } \caption{Five versions of a strip in declination of the HI signal. The input cosmological signal is shown in the first strip, and the reconstructed signal from principal component analysis in strips 2, 3 and 4 (after 1, 3 and 7 modes removed, respectively) and from parametric fitting in strip 5. Notice the different colour bar scales. } \label{fig:fitting_res} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{tt.pdf} \caption{Scatter plot of the recovered HI signal from parametric fitting (\textit{blue markers}) and from principal component analysis (\textit{red markers}) as a function of the input HI signal. The parametric fitting is more noisy than the principal component analysis and also appears to be biased.
The black dashed line represents the perfect correlation between the recovered HI signal and the true signal.} \label{fig:tt} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{ps_foreground2.pdf} \caption{\textit{Top} panel: the power spectra of the simulated maps after applying foreground cleaning. This plot shows the simulated HI signal (\textit{blue solid line}), the thermal noise (\textit{green solid line}) and the sky emission (\textit{cyan solid line}). We plot the results of the foreground cleaning from principal component analysis (\textit{black dashed line}) and from parametric fitting (\textit{red dashed line}). \textit{Bottom} panel: the power spectra of the leakage of the HI signal (\textit{dotted light blue line}) and of the noise (\textit{gold dashed line}) after applying PCA. The principal component analysis method clearly makes it possible to extract the HI signal.} \label{fig:ps_foreground} \end{center} \end{figure} \section{Requirements on foreground and instrumental noise frequency spectra}\label{sec:smoothness} The efficiency of almost all component separation techniques developed for intensity mapping data analysis depends on the spectral smoothness of the foreground and of the $1/f$ noise, and on the bandpass calibration. We are confident that the foregrounds have sufficiently smooth spectra in frequency. This characteristic enables us to remove the components correlated in frequency and, therefore, to recover the HI signal together with the instrumental white noise. However, some structure in the receiver bandpasses is inevitable, caused by, amongst other things, standing waves, frequency variations in the receiver gain and temperature, and spectrally dependent beam patterns. The effect of these can all be reduced by careful calibration, but it is important to know how good this calibration has to be. In this section we define some requirements on the smoothness of the instrumental bandpass. In the following, for the foreground emission, we consider only the brightest component, i.e. the Galactic synchrotron emission. To quantify the effect of a non-smooth bandpass, we add to the measured frequency spectrum a sinusoidal wave defined by \begin{equation} \phi(\nu)=A \, \sin \left( \frac{ \pi \nu}{\Delta \nu} \right), \label{eq:sim} \end{equation} where $A$ is the amplitude, $\nu$ the frequency of observation and $\Delta \nu$ sets the period of the ripple in frequency. We explore a range of $A$ between 1 and 150\,mK and a range of $\Delta \nu$ between 1 and 300\,MHz. We show the modified spectra for different values of $A$ and $\Delta \nu$ in Fig.~\ref{fig:ex_req}. To highlight the impact of the addition of the sinusoidal wave, we divide the resulting spectrum by the original one. We see curvature and/or oscillations in the resulting spectra. A higher value of $\Delta \nu$ leads to a curvature of the spectrum, similar to a standing wave, whereas a smaller value induces a sinusoidal wave that behaves in a similar way to noise when $\Delta \nu$ is smaller than the frequency channel width. \begin{figure} \begin{center} {\includegraphics[width=\columnwidth]{req.pdf} \caption{The corrupted spectrum divided by the original, undistorted, spectrum for different versions of the sinusoidal wave. A small value of $\Delta \nu$ induces a spectrum that fluctuates in frequency in a similar way to random noise, while a large value of $\Delta \nu$ leads to a curvature of the frequency spectrum.
} \label{fig:ex_req} }\end{center} \end{figure} In what follows, we simulate the maps generated by the instrument with the same model of the Galactic synchrotron emission, HI signal and instrumental noise (thermal and 1/$f$ noise), and we add the sinusoidal wave of Eq.~\ref{eq:sim} to the frequency spectrum of the generated data. In order to extract the signal of interest, we apply principal component analysis to the maps. In Fig.~\ref{fig:contour_sky}, we plot the relative error of the recovered HI signal as a function of the amplitude $A$ and of $\Delta \nu$ after applying PCA with 6 modes subtracted. The colour bar represents the amplitude of the relative error between the recovered HI signal and the true signal. The smoothness of the frequency spectrum, i.e. the value of $\Delta \nu$, has a significant impact on the efficiency of the cleaning methods. The relative error increases for small values of $\Delta \nu$, which correspond to a sinusoidal wave with a period shorter than the frequency channel width. For values of $\Delta \nu$ below 100\,MHz, the amplitude $A$ has to be below 45\,mK for the recovery not to be affected by the variation of the bandpass. With the values $A<40$\,mK and $\Delta \nu<100$\,MHz, we find a relative error $<7.3$\% after 6 modes are subtracted with the PCA. In absolute terms, after subtracting 6 principal modes, we obtain residuals lower than 0.1\,mK, which means that the HI signal can be detected. Finally, we perform simulations varying the number of frequency channels used to perform the PCA. We consider 20 frequency channels (15\,MHz channel bandwidth) and 200 frequency channels (1.5\,MHz channel width), and we test different values of $\Delta \nu$. We set the amplitude of the sinusoidal wave to $A=120$\,mK. Fig.~\ref{fig:nbchannel} shows the relative error between the recovered HI signal and the input signal as a function of the smoothness of the frequency spectrum $\Delta \nu$ after removing 6 and 7 principal modes. We find that the PCA method does better with a larger number of channels. The relative error is $<7$\% for 6 removed modes when we have 200 frequency channels, for all values of $\Delta \nu$ between 1 and 400\,MHz. The improvement when more channels are added can be understood from the fact that the frequency band is better sampled. Thus, as long as we have a frequency spectrum with slow oscillations, or enough frequency channels to sample the spectrum with sufficient accuracy, the smoothness of the bandpass does not constitute an issue for the foreground and $1/f$ noise cleaning methods. An amplitude around 40\,mK requires the bandpass to be calibrated to an accuracy of better than 1 part in 1000. However, one would expect to calibrate at least every day, so we will only require a dynamic range of 1 part in 50. \begin{figure} \includegraphics[width=\columnwidth]{cube_pca2.png} \caption{Relative error as a function of the amplitude $A$ and the period $\Delta \nu$ of the sinusoidal wave, after foreground and noise subtraction with principal component analysis (6 modes removed). The values $A$ and $\Delta \nu$ of the sinusoidal waves are indicated on the axes of the plot. The colour bar gives the percentage error relative to the noise.} \label{fig:contour_sky} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{ps_channels.pdf} \caption{The percentage relative error as a function of the period $\Delta \nu$ of the sinusoidal wave after foreground removal.
We show the relative errors for 20 frequency channels with 6 modes removed (\textit{blue markers}), for 20 frequency channels with 7 modes removed (\textit{green markers}) and for 200 frequency channels with 6 and 7 modes removed (\textit{red and black markers}, respectively).} \label{fig:nbchannel} \end{figure} \section{Conclusion} In Section~\ref{sec:simu}, we have simulated the kind of data which would be produced by a single-dish intensity mapping experiment like BINGO. We have adopted a sky model with Galactic synchrotron emission and a background of point sources ($S<100$\,mJy). We have added both white and $1/f$ noise. This simulation can be generalised to any intensity mapping experiment with a drift scan strategy. This work has yielded several results concerning the challenges any future single-dish experiment will face in terms of foreground, atmospheric and instrumental noise. We have made some estimates of the total amplitude of the atmospheric noise and of its fluctuations arising from turbulence in the water vapour emission. At 1\,GHz, we have found that the r.m.s. noise level induced by these fluctuations is $\sim 0.01$\,mK, below the instrumental noise level for the BINGO experiment. Thus, for an observing frequency around 1\,GHz, the atmospheric contamination should not constitute a challenge for a single-dish experiment. We have investigated the problem of cleaning foreground contamination in order to recover the HI signal. We have focused on simple methods based on the spectral information contained in the signal. We have considered two different foreground cleaning methods: a non-blind method, parametric fitting, and a blind method, PCA. The latter does not require any assumption on the physics of the foreground emissions. We have shown that the parametric fitting method does not provide an accurate fit to the data for a simulation with instrumental noise and foreground emissions. In contrast, by subtracting 7 principal modes, with a realistic foreground model, PCA enables us to reach the HI signal level. Thus, on simulated data, the application of the PCA method shows that it is feasible to extract the cosmological signal across a wide range of multipoles and redshifts. However, we have found that PCA induces a small offset in the reconstructed HI signal, of $\sim$5\%. This result is confirmed by other recent works \citep{Alonso2015}, which have shown that it is possible to successfully recover the HI signal from a highly contaminated foreground map with a blind method, but that the foreground removal induces a bias in the HI power spectrum. In spite of this HI leakage into the removed contamination, a modification of the global shape of the power spectrum should not modify the positions of the BAO wiggles \citep{Wolz2014} and therefore will not have a significant impact on our ability to measure the BAO scale. If the PCA method is chosen, we can correct for this small bias by calibrating it with simulations. We are also investigating a more complex cleaning method, which utilises both the spectral and spatial information (Olivari et al., in prep.), and therefore should be more accurate. Given the assumption that the frequency spectrum of the $1/f$ noise is flat, or at least only has slow variations across the band, we have shown that PCA can cope with 1/$f$ noise with relatively high knee frequencies. It is easily removed from the noise maps by subtracting 2 principal modes, and the 1$/f$ noise contamination is damped down to the level of the thermal noise.
This damping of the $1/f$ contamination implies that it might be possible to simplify the design of an experiment such as BINGO by removing the correlation receivers. In subsequent work, we plan to investigate more complex 1/$f$ noise models. The effectiveness of the blind cleaning methods depends on the spectral smoothness of the foregrounds and the instrumental $1/f$ noise. In order to challenge our cleaning methods, we have considered instrumentally distorted frequency spectra of the components we want to remove. We have shown that a sinusoidal distortion of the frequency spectrum can induce higher residuals in the maps. However, for a sinusoidal wave with an amplitude $A<45$\,mK and period $\Delta \nu<100$\,MHz, after subtracting six principal modes from the initial maps, the residuals of the noise and the sky emission are below 0.1\,mK, which could enable the successful extraction of the HI signal. Still, accurate calibration will be critical for the successful measurement of Baryon Acoustic Oscillations and for probing the expansion history of the Universe. How good this calibration needs to be will be investigated in a future paper. The results of this paper show that the principal component analysis method is a promising tool for the extraction of the cosmological signal in any future HI intensity mapping experiment. \section{Acknowledgements} MABS, CD, MR and YZM acknowledge support from an ERC Starting Grant (no. 307209). We acknowledge use of the \texttt{HEALPix} package \citep{Gorski2005} and the Python tools PyOperators and PySimulators \citep{Chanial2012}. We would like to thank the BINGO collaboration for providing the basic concept on which this work was based.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The action of General Relativity (GR) becomes polynomial in any first-order formalism. Thus, one can tame the algebraic complexity of the perturbative expansion of the Einstein--Hilbert action by introducing an auxiliary connection variable. The tradeoff is that this connection becomes an additional field on top of the metric, with non-vanishing 2-point functions with itself and with the metric. On the other hand, the number of different vertices one has to consider becomes finite. In the usual metric formulation, the ultimate power of this perturbative first-order formalism is achieved by taking the inverse densitised metric as the basic variable. The action is then cubic in the fields; see, e.g., \cite{Cheung:2017kzx}. In the metric/affine-connection formalism, there are propagators of all possible types: metric-metric, metric-connection and connection-connection. Moreover, these propagators are typically sums of terms with a complicated Lorentz index structure, which means that even simple Feynman diagrams generate many terms. One can try to simplify things by a shift of the connection field that eliminates the metric-connection propagator (see \cite[Eq.~(25)]{Cheung:2017kzx}), but this results in considerably more complicated interaction vertices. At the same time, the connection-connection propagator of this formalism is algebraic, i.e., the connection is a true auxiliary and non-propagating field, present there just to reduce the algebraic complexity of the action. This suggests that there must exist a first-order formalism in which the connection-connection propagator is totally absent. The purpose of this paper is to develop such a perturbative formalism for four-dimensional GR\@. The formalism we describe is based on two elements. First, we use the two-component spinor technique, as is appropriate for computations that are based on the spinor helicity formalism. Second, the formalism we develop is chiral. It relies on special features of four spacetime dimensions and uses self-dual projections. Being chiral means that one of the graviton helicities is described differently from the other. One can anticipate that, at least in four dimensions, the use of the two-component spinor technique (and the associated availability of the powerful Schouten identity) should allow one to write both propagators and vertices more compactly. It is known that, in four spacetime dimensions, one can impose the Einstein condition by considering only the chiral half of the Levi-Civita connection. The necessary chiral projections are easiest to describe in the spinor notation. In units $8\pi G=1$, the corresponding action (with zero cosmological constant) is \begin{equation}\label{action} S[\theta,\omega] = 2 {\rm i} \int \Sigma^{AB} \wedge F_{AB} \, , \end{equation} where $A,B=1,2$ are unprimed 2-component spinor indices, the self-dual 2-forms are given by \begin{equation}\label{sigma} \Sigma^{AB} = \frac12 \theta^A{}_{C'} \wedge \theta^{BC'} \, , \end{equation} with $\theta^{AA'}$ being the soldering form (tetrad), and the curvature 2-form $F^{AB}$ is given by \begin{equation} F^{AB} = d \omega^{AB} + \omega^{AC} \wedge \omega_C{}^B \, . \end{equation} The object $\omega^{AB}$ is the self-dual part of the spin connection; it is locally a one-form with values in the second symmetric power of the unprimed spinor bundle, $\omega^{AB}=\omega^{(AB)}$.
Integrating out the connection $\omega^{AB}$ by solving its field equations and substituting the result into the action, one obtains the Einstein--Hilbert action $(1/2) \int R \sqrt{-g}$ for the metric \begin{equation} ds^2 = \theta^A{}_{A'} \theta_A{}^{A'} \, , \end{equation} together with the so-called Holst term (see \cite{Holst:1995pc}) with an imaginary coefficient. The Holst term is a total derivative. So, modulo this imaginary boundary term, the action (\ref{action}) gives an equivalent description of GR\@. This action is obtained by applying the chiral self-dual projection to the first-order Einstein--Cartan action in terms of the tetrad $\theta^{AA'}$ and the full spin connection (of which $\omega^{AB}$ is the self-dual part). Alternatively, the full non-chiral Einstein--Cartan action can be written as the real part of (\ref{action}). The action (\ref{action}) is closely analogous to its perhaps better known Yang--Mills (YM) cousin. Thus, modulo (an imaginary) boundary term, the YM action can be written as \begin{equation}\label{action-YM} S[A]= -\frac{1}{2g^2} \int \left( F^a_+ \right)^2 \, , \end{equation} where $F^a_+$ is the self-dual projection of the YM field strength, and $a$ is a Lie algebra index. The usual YM action, a multiple of $\int F^2$, can be written as the sum $\int (F_+)^2+(F_-)^2$. One then notes that, in view of the Pontryagin topological term $\int F\wedge F \sim \int (F_+)^2-(F_-)^2$, the integrals of the squares of the self-dual (SD) and anti-self-dual (ASD) parts of the field strength are equal to each other, modulo a surface term. This explains why (\ref{action-YM}) is a valid YM action. This action can then be written in the first-order form by introducing a self-dual auxiliary field $B_+$: \begin{equation}\label{action-YM-first} S[A,B_+] = \int F^a B_+^a +g^2 \left( B_+^a \right)^2 \, . \end{equation} Integrating out $B_+^a$, one gets back (\ref{action-YM}). The action (\ref{action-YM-first}) has only a simple cubic vertex. Moreover, there is a natural gauge-fixing procedure that leads to the absence of the propagator of the auxiliary field $B_+$ with itself; see, e.g., Section~2 of \cite{Krasnov:2016emc} for details. There exists an effective formalism for this YM perturbation theory. In this formalism, one of the legs in the cubic vertex becomes special in that two such legs should never join to form a propagator. Feynman graphs in which such connections were present would exactly cancel the contributions from the quartic vertex of the usual YM perturbation theory; see, e.g., \cite[Eq.~(55)]{Cofano:2015jva} for more details. So, in effect, the main advantage of the formalism (\ref{action-YM-first}) is that it puts the YM cubic vertex in a form that eliminates the need to consider the quartic vertex. It is easy to pass from the chiral description of YM provided by (\ref{action-YM-first}) to the self-dual YM theory, in which the only solutions are YM field configurations with vanishing self-dual part of the field strength. This is achieved simply by setting $g=0$ in the above action. This eliminates the gauge field to gauge field propagator, leaving only the 2-point function between $B_+$ and $A$ non-zero. The diagrams that can be constructed in the self-dual YM theory are a subset of the diagrams of the full YM theory, which means that a subset of the full YM amplitudes is correctly captured by the self-dual theory; see \cite{Krasnov:2016emc} for more on this. Our gravity formalism based on (\ref{action}) is precisely analogous to the above description.
Firstly, as we already noted, a gauge is available in which the auxiliary field $\omega^{AB}$ has a vanishing 2-point function with itself. This is by no means trivial, and relies on a delicate matching between the number of components in the perturbation of the tetrad, in the connection, and in the Lagrange multipliers added in the process of gauge fixing. We will explain the main idea of our gauge-fixing procedure below. Secondly, there are only cubic and quartic vertices, as is clear from expanding (\ref{action}) perturbatively. The resulting propagators are of two types: tetrad-tetrad and tetrad-connection. The latter has a factor of momentum in the numerator, while the former is the standard $1/k^2$ times a product of the spinor metrics. One then notes that the derivative present in the tetrad-connection propagator can be assigned to the connection legs of the vertices. This leads to an effective formalism in which only the tetrad propagator is present, and all vertices contain two derivatives, as is appropriate for a gravity theory. However, one needs to mark some legs of the vertices (those that come from the connection $\omega$) as special, and to respect the rule that special legs cannot connect to each other. Feynman graphs in which such connections were present would result in contact terms that would need to be cancelled by higher order vertices. Thus, as in the case of chiral YM, the price to pay for the low valency of the vertices is that some vertex legs are marked as special. Finally, there is also a way to pass from (\ref{action}) to a version of the theory where only (anti-)\,self-dual configurations are solutions. This is achieved by removing the $\omega\omega$ term of the curvature $F$ from the action, which eliminates the tetrad-to-tetrad propagator. The corresponding action for self-dual GR is contained in the bosonic part of the action in Section 6 of \cite{Siegel:1992wd}. As we already noted, our main result is a new gauge-fixing procedure that eliminates the connection-connection propagator. It will be useful to describe this gauge-fixing procedure qualitatively already in the Introduction. \bigskip \noindent {\bf Gauge-fixing procedure.} When expanded perturbatively around the Minkowski background, the action (\ref{action}) is a function of the tetrad perturbation $h^{AA'BB'}$ with its 16 components, as well as the chiral spin connection $\omega^{ABCC'}$, which is symmetric in its first two indices, and so has $3\times 4=12$ components. However, one quickly finds that the free Lagrangian (i.e., the collection of terms that are quadratic in the perturbations) is independent of the component $h^{A(A'}{}_{A}{}^{B')}$ of the tetrad perturbation. This component transforms non-trivially and irreducibly with respect to the anti-self-dual chiral half of the Lorentz group, and the gauge in which it is set to zero in the complete action is the most natural one. Setting $h^{A(A'}{}_{A}{}^{B')}$ to zero leaves 13 components in the tetrad perturbation, plus 12 components in the connection. The first step of our gauge-fixing procedure is to add 4 more fields $\lambda^{AA'}$ to the linearised action with the purpose of fixing the diffeomorphism gauge freedom. The most economical way of doing this is to absorb these new fields into the connection field $\omega^{ABCC'}$ as its extra components. This is done by adding to the connection, originally symmetric in $\scriptstyle AB$, a new, $\scriptstyle AB$ anti-symmetric part, which contains precisely 4 new fields.
We show that this can be done in such a way as to produce a de Donder type gauge-fixing term in the Feynman form, schematically $\lambda \partial h + \lambda^2$. Overall, the diffeomorphism gauge freedom is gauge-fixed by extending the connection to contain 4 more components. The new connection field is again $\omega^{ABCC'}$, but without any symmetry property. It remains to fix the gauge freedom of the self-dual chiral part of the Lorentz group. A straightforward way to do it would be to set to zero the component $h^{(A}{}_{A'}{}^{B)A'}$ of the tetrad perturbation. One would then deal with 10 components of the metric perturbation plus 16 components of the connection. However, because of the mismatch between these numbers, there would be a non-trivial connection-connection propagator, as in all previous versions of gravitational perturbation theory. Our main new finding is a different way to fix the self-dual chiral half of the Lorentz gauge freedom. It is done by using a Lorentz-type gauge condition $\partial^{CC'}\omega^{AB}{}_{CC'} = 0$ with a derivative on the connection rather than an algebraic condition on the tetrad. This gauge condition requires the introduction of 3 new Lagrange multipliers forming a symmetric spinor $\lambda^{AB}$. Together with the 13 components already contained in $h^{AA'BB'}$, this gives 16 field components, which is the same number as in the connection field (extended by $\lambda^{AA'}$). Matching the number of components in the (extended) tetrad and connection fields is necessary to get rid of the connection-connection propagator. In our gauge-fixing procedure, the space of fields $h^{AA'BB'}$, $\omega^{ABCC'}$, $\lambda^{AA'}$, $\lambda^{AB}$ is separated into two sets of conjugate pairs that decouple in the free Lagrangian, which greatly facilitates the inversion of the kinetic term. This separation is non-trivial and is described in the main text, but its essential elements are as follows. The part $h^{(AB)(A'B')}$ of the tetrad perturbation, symmetric with respect to both pairs of indices, is combined with $\lambda^{AB} \epsilon^{A'B'}$, where $\epsilon^{A'B'}$ is the spinor metric, into a new field $H^{ABA'B'} = H^{(AB)A'B'}$, which is no longer symmetric in the primed spinor indices, but remains symmetric in the unprimed ones. There is then a certain combination of $\omega^{ABCC'}$ and $\lambda^{AA'}$, which we call $\Omega^{ABCC'} = \Omega^{(AB)CC'}$ and which is symmetric in its first two indices. In the free Lagrangian it couples to the field $H^{ABA'B'}$, both fields having 12 components. The kinetic term for this pair of fields is trivial to invert, as its derivative part is just the chiral Dirac operator; see (\ref{Lkin}). In addition to the pair of fields $(H^{ABA'B'},\, \Omega^{ABCC'})$, there is another pair. The trace part $h^{AA'}{}_{AA'}$ of the tetrad perturbation combines with $h^{(AB)A'}{}_{A'}$ into a four-component field $h^{AB}$ without any symmetry property. In addition, there are four remaining components of the connection, a field we call $\omega^{AA'}$, and the derivative part of the kinetic term for this pair of fields is again a chiral Dirac operator; see the second term in (\ref{Lkin}). All in all, our gauge-fixing procedure has resulted in a matching of the number of (extended) tetrad and connection field components, and in a separation of the free part of the theory into two decoupled sectors. This leads us to very simple propagators, which is the main result of our analysis.
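The role played by this matching of components can be illustrated schematically in a finite-dimensional toy model (a sketch only; the random matrices in the following Python snippet are mere stand-ins for the actual spinor kinetic operator derived in the main text). For a quadratic form with an algebraic connection-connection block $A$, an invertible square off-diagonal block $B$ (representing the $\Omega\,\partial H$ coupling, so $B\sim k$), and a vanishing tetrad-tetrad block, the inverse, i.e., the propagator matrix, automatically has a vanishing connection-connection block:

\begin{verbatim}
import numpy as np

# Toy kinetic matrix M = [[A, B], [B^T, 0]] for fields (Omega, H):
# A is the algebraic Omega-Omega term, B ~ k the Omega-dH coupling,
# and there is no H-H term. With dim(Omega) = dim(H) and B invertible,
# the propagator M^{-1} has a vanishing Omega-Omega block.
rng = np.random.default_rng(1)
n = 12                                   # 12 components in Omega and in H
A = rng.standard_normal((n, n)); A = A + A.T
B = rng.standard_normal((n, n))          # generically invertible
M = np.block([[A, B], [B.T, np.zeros((n, n))]])
P = np.linalg.inv(M)
print(np.allclose(P[:n, :n], 0))         # True: no Omega-Omega propagator
print(np.allclose(P[n:, n:],             # H-H block = -B^{-1} A B^{-T}
                  -np.linalg.inv(B) @ A @ np.linalg.inv(B.T)))
\end{verbatim}

If the two blocks were of mismatched dimensions (a rectangular $B$), the connection-connection block of the inverse would generically fail to vanish, which is the situation in the previous versions of first-order perturbation theory.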
\bigskip We work in the mostly plus signature. The organisation of the rest of the paper is as follows. In Sec.~\ref{sec:expansion}, we describe the expansion of the chiral action in basic fields and introduce the necessary notation. In Sec.~\ref{sec:gauge}, which is the central section of this paper, we describe our non-trivial procedure of gauge fixing, obtain the field variables that diagonalise and simplify the free Lagrangian, and calculate the propagators. In Sec.~\ref{sec:interaction}, we discuss the structure of the interactions and introduce the formalism of effective vertices with marked lines. As a test of our formalism, we apply it to the calculation of three-point amplitudes in Sec.~\ref{sec:amplitudes}, some simple off-shell currents in Sec.~\ref{sec:currents}, and 2-to-2 graviton scattering amplitudes in Sec.~\ref{sec:scattering}. We formulate our conclusions in Sec.~\ref{sec:conclusions}. The Appendix describes an alternative procedure to arrive at our gauge fixing. \section{Expansion of the action} \label{sec:expansion} We expand the fields around the Minkowski space configuration. Since the background connection is zero, $\omega^{AB}$ describes the total connection. From now on we denote by $\theta^{AA'}$ the Minkowski spacetime tetrad, and denote its perturbation by $h^{AA'}$. We then split the action (\ref{action}) into the free and interaction parts: \begin{align}\label{free} S_0 &= {\rm i} \int \theta^A{}_{A'} \wedge \theta^{BA'} \wedge \omega_A{}^C \wedge \omega_{CB} + {\rm i} \int \left( \theta^A{}_{A'} \wedge h^{BA'} + h^A{}_{A'} \wedge \theta^{BA'} \right) \wedge d \omega_{AB} \, , \\ S_{\rm int} &= 2 {\rm i} \int \theta^A{}_{A'} \wedge h^{BA'} \wedge \omega_A{}^C \wedge \omega_{CB} + {\rm i} \int h^A{}_{A'} \wedge h^{BA'} \wedge \left( d \omega_{AB} + \omega_A{}^C \wedge \omega_{CB} \right) \, . \label{inter} \end{align} We expand all forms in the basis of background 1-forms $\theta^{AA'}$. Thus, we have \begin{equation}\label{tetex1} h^{AA'} = h^{AA'}{}_{MM'} \theta^{MM'} \, , \qquad \omega^{AB} = \omega^{AB}{}_{MM'} \theta^{MM'} \, , \end{equation} as well as \begin{equation}\label{tetex2} d\omega^{AB} = \partial_{MM'} \omega^{AB}{}_{NN'} \theta^{MM'}\wedge \theta^{NN'} . \end{equation} Under diffeomorphisms generated by an infinitesimal vector field $\xi^\mu = \xi^{AA'} \theta_{AA'}^\mu$, and local SL(2,C) gauge transformations with the infinitesimal parameters $\phi^{AB},\bar{\phi}^{A'B'}$, our variables transform as \begin{align} \delta_\xi h^{AA'}{}_{BB'} &= \partial_{BB'} \xi^{AA'} \, , &\delta_\xi \omega^{AB}{}_{CC'} &= 0 \, , \label{xitrans} \\ \delta_\phi h^{AA'}{}_{BB'} &= {} - \phi^A{}_B \epsilon^{A'}{}_{B'} - \bar \phi^{A'}{}_{B'} \epsilon^A{}_B \, , &\delta_\phi \omega^{AB}{}_{CC'} &= \partial_{CC'} \phi^{AB} \, . \label{phitrans} \end{align} \subsection{Parametrisation of the tetrad perturbation} To exhibit the structures that arise, let us decompose the tetrad perturbation into its irreducible components \begin{equation} h^{AA'BB'} = h^{(AB)(A'B')} + h^{(AB)}\epsilon^{A'B'} + h^{(A'B')}\epsilon^{AB} + h \epsilon^{AB}\epsilon^{A'B'}, \end{equation} which also defines all the components on the right-hand side. In fact, it will prove to be convenient to combine the second and fourth terms here, and define a new field $h^{AB}=h^{(AB)}+h\epsilon^{AB}$ that is no longer $\scriptstyle AB$ symmetric. Thus, let us instead use the following parametrisation \begin{equation} h^{AA'BB'} = h^{ABA'B'} + h^{AB}\epsilon^{A'B'} + h^{A'B'}\epsilon^{AB} .
\end{equation} It should now be kept in mind that $h^{ABA'B'}$ is symmetric in both of its pairs of indices, $h^{AB}$ does not have any symmetry, and $h^{A'B'}$ is symmetric.\footnote{To minimise the notation, we thus distinguish between $h^{AA'BB'}$ and its complete symmetrisation $h^{ABA'B'} = h^{(AB)(A'B')}$ by the position of indices.} \subsection{Imposing a partial gauge} Using the action of one chiral half of the Lorentz group [see the first equation in (\ref{phitrans})], we can eliminate the $h_{A'B'}$ part of the tetrad perturbation. This part does not enter into the linearised action, and only appears in the interaction terms. We will simplify things considerably by eliminating this part from the start. With this part set to zero, we have the following parametrisation of the tetrad perturbation: \begin{equation}\label{hdeco} h^{AA'BB'} = h^{ABA'B'} + h^{AB}\epsilon^{A'B'} . \end{equation} \section{Gauge fixing} \label{sec:gauge} Our strategy of fixing the diffeomorphism gauge freedom will consist in removing the symmetry of $\omega^{AB}$ and modifying the action in an appropriate way [the antisymmetric part of $\omega^{AB}$ would drop out of the unmodified action (\ref{free}), (\ref{inter})]. The net result of all this will be the generation of suitable gauge-fixing terms for the original action. After that, we fix the other half of the Lorentz gauge freedom by adding new Lagrange multipliers. The whole procedure will greatly simplify the free part of the action, splitting it into two autonomous sectors, and eliminating the connection-connection propagator, as we will see below. The unmodified action is calculated by expanding the forms according to (\ref{tetex1}), (\ref{tetex2}) and using the resulting orientation form to contract the spinor indices in (\ref{free}) and (\ref{inter}). Specifically, one uses the relation \begin{equation} \theta^{MM'}\wedge \theta^{NN'} \wedge \theta^{RR'}\wedge \theta^{SS'}= \epsilon^{MM'NN'RR'SS'} \upsilon \, , \end{equation} where $\upsilon$ is the space-time volume form, and $\epsilon^{MM'NN'RR'SS'}$ is the internal-space orientation. The latter has two known representations, each consisting of two mutually complex-conjugate terms, which we denote appropriately (omitting indices)\footnote{Our choice of orientation differs by sign from the usual one.}: \begin{align} \text{Penrose representation:} \quad &\epsilon = \epsilon_{\rm P} + \bar \epsilon_{\rm P} \, , \quad &\epsilon_{\rm P}^{MM'NN'RR'SS'} = {\rm i}\, \epsilon^{MS} \epsilon^{NR} \epsilon^{M'R'} \epsilon^{N'S'} \, , \label{orP} \\ \text{Wald representation:} \quad &\epsilon = \epsilon_{\rm W} + \bar \epsilon_{\rm W} \, , \quad &\epsilon_{\rm W}^{MM'NN'RR'SS'} = {\rm i}\, \epsilon^{MR} \epsilon^{NS} \epsilon^{M'N'} \epsilon^{R'S'} \, . \label{orW} \end{align} Our idea for modifying the action is to contract the indices by using the two parts of the orientation in (\ref{orP}) or (\ref{orW}) separately, with a different position of the indices of $\omega^{AB}$ in the action for each part (now treating $\omega^{AB}$ as a connection without any symmetry). We proceed to describe this in detail.\footnote{The third possible splitting $\epsilon^{MM'NN'RR'SS'} = {\rm i}\, \epsilon^{MS} \epsilon^{NR} \epsilon^{M'N'} \epsilon^{R'S'} + \text{c.c.}$ does not lead to anything interesting.} \subsection{Kinetic term} Consider first the kinetic term in (\ref{free}).
With respect to this term, we use the following schematic procedure\footnote{This procedure is not unique, e.g., one could transpose the indices of $\omega^{AB}$ in the first term of (\ref{kinsplit}) rather than in the second one. This, however, will lead to a similar final result.}: \begin{equation} \label{kinsplit} \left( \theta^A{}_{A'} \wedge h^{BA'} \wedge d \omega_{AB} \right)_{\textstyle \epsilon_{\rm P}} + \left( \theta^A{}_{A'} \wedge h^{BA'} \wedge d \omega\underline{_{BA}} \right)_{\textstyle \bar \epsilon_{\rm P}} \, , \end{equation} where the subscripts $\epsilon_{\rm P}$ and $\bar \epsilon_{\rm P}$ mean contraction with the corresponding parts in (\ref{orP}), and we have underlined the transposed indices in $\omega^{AB}$. This is equivalent to saying that the term with the symmetric part $\omega^{(AB)}$ of the connection is contracted with the complete orientation spinor $\epsilon_{\rm P} + \bar \epsilon_{\rm P} = \epsilon$, while the term containing the antisymmetric part $\omega^{[AB]}$ is contracted with $ \epsilon_{\rm P} - \bar \epsilon_{\rm P}$. The result is \begin{equation}\label{kin0} L_{\rm kin} = 2 h^{AA'BB'} \left( \partial_{BA'} \omega^C{}_{ACB'} - \partial_{CB'} \omega_A{}^C{}_{BA'} \right) \, . \end{equation} The presence of the antisymmetric part in the connection, $\omega^{[AB]CA'} = \epsilon^{AB} \lambda^{CA'}$, adds to the original action an extra term \begin{equation}\label{lam} 2 \lambda^{AA'} \partial_{BB'} \left( h_A{}^{B'B}{}_{A'} + h^B{}_{A'A}{}^{B'} \right) \, , \end{equation} which can be seen to be precisely the combination that leads to the de~Donder gauge for the tetrad perturbation. Using decomposition (\ref{hdeco}) for the tetrad perturbation, from (\ref{kin0}) we obtain \begin{align}\label{Lkin0} L_{\rm kin} &= 2 h^{ABA'B'} \left( \partial_{BA'}\omega^C{}_{ACB'} - \partial_{CA'} \omega_A{}^C{}_{BB'} \right) \nonumber \\ &\quad {} + 2 h^{AB} \left( \partial_{BA'} \omega^C{}_{AC}{}^{A'} + \partial_{CA'} \omega_A{}^C{}_B{}^{A'} \right) \, . \end{align} This expression can be written identically as \begin{equation} \label{Lkin1} L_{\rm kin} = - 2 \left[ h^{ABA'B'} - h^{(AB)} \epsilon^{A'B'} \right] \partial_{CA'} \left( \omega_A{}^C{}_{BB'} + \epsilon^C{}_B \omega^D{}_{ADB'} \right) + 4 h^{AB} \partial_{BB'} \omega^C{}_{AC}{}^{B'}\, . \end{equation} We note the appearance of two special combinations of the connection, coupling to the tetrad perturbations, for which we introduce the notation \begin{align}\label{om} \omega^{AA'} &= \omega^{CA}{}_C{}^{A'} \, , \\ \Omega^{ABCA'} &= \omega^{ACBA'} + \epsilon^{CB} \omega^{AA'} = \omega^{ABCA'} - \epsilon^{BC} \omega^D{}_D{}^{AA'} \, . \label{Om} \end{align} In deriving the last equality, we have used the identity \begin{equation} \omega^{C}{}_{CAA'} = \omega^C{}_{ACA'} - \omega_A{}^C{}_{CA'} \, . \end{equation} Spinor (\ref{Om}) is symmetric in its first two indices by construction, hence, has 12 independent components, while (\ref{om}) has 4 components. Together, these fields completely describe the connection with its 16 components. Note that the first relation in (\ref{Om}) already expresses the connection field $\omega^{ABCA'}$ in terms of new variables: \begin{equation}\label{w} \omega^{ABCA'} = \Omega^{ACBA'} + \epsilon^{CB} \omega^{AA'} \, . \end{equation} The conjugate fields in (\ref{Lkin1}) are $h^{ABA'B'}$ with 9 components, and $h^{AB}$ with 4 components. It is clear then that the kinetic term is still degenerate. 
On the other hand, only one of the two chiral halves of the Lorentz gauge freedom has been fixed thus far. To fix the other half, we add additional Lagrange multipliers to impose a version of the Lorentz gauge. Specifically, we add the term $\lambda^{AB} \epsilon^{A'B'}$ to the expression in the first bracket of (\ref{Lkin1}). We introduce a new name for the combination that arises in this way: \begin{equation}\label{H} H^{ABA'B'}:= h^{ABA'B'} + \left[\lambda^{AB} - h^{(AB)} \right] \epsilon^{A'B'}\, . \end{equation} This adds 3 additional components to the metric perturbation fields. Given that the field $\lambda^{AB}$ is independent, this procedure makes the fields $H^{ABA'B'}$ and $h^{AB}$ completely independent. The new spinor (\ref{H}) is symmetric in its first two indices, thus having 12 independent components, and is conjugate to $\Omega^{ABCA'}$. Using (\ref{hdeco}) and (\ref{H}), we can express the tetrad perturbation and the new Lagrange multiplier in terms of the new variables: \begin{align}\label{hdeco1} h^{AA'BB'} &= H^{AB(A'B')} + h^{AB} \epsilon^{A'B'} \, , \\ \lambda^{AB} &= h^{(AB)} - \frac12 H^{ABC'}{}_{C'} \, . \label{ldeco} \end{align} This completely gauge fixes the kinetic term, so that the kinetic part of the Lagrangian reads \begin{equation}\label{Lkin} L_{\rm kin} = - 2H^{ABA'B'} \partial_{CA'} \Omega_{AB}{}^C{}_{B'} + 4h^{AB}\partial_{BA'} \omega_{A}{}^{A'} \, . \end{equation} It is decoupled into two sectors $\left( H , \Omega \right)$ and $\left( h , \omega \right)$. \subsection{Potential term} It turns out that the potential term can be diagonalised in the new connection components (\ref{om}) and (\ref{Om}) by using the splitting of the orientation in the Wald form. Specifically, similarly to (\ref{kinsplit}), we modify the first term in (\ref{free}) as \begin{equation} \left( \theta^A{}_{A'} \wedge \theta^{BA'} \wedge \omega_A{}^C \wedge \omega_{CB} \right)_{\textstyle \epsilon_{\rm W}} + \left( \theta^A{}_{A'} \wedge \theta^{BA'} \wedge \omega_A{}^C \wedge \omega\underline{_{BC}} \right)_{\textstyle \bar \epsilon_{\rm W}} \, , \end{equation} in which, again, we have underlined the transposed indices in $\omega^{AB}$. The result is \begin{equation}\label{Lpot} L_{\rm pot} = - \omega^{ABCA'} \omega_{ABCA'} + 2 \omega^{AB}{}_B{}^{A'} \omega^C{}_{ACA'} = - \Omega^{ABCA'} \Omega_{ABCA'} + 2 \omega^{AA'} \omega_{AA'} \, . \end{equation} Here, we have made the substitution (\ref{w}) to obtain the last equality. Remarkably, the potential term also decouples into two sectors, which makes the process of finding propagators trivial; they can be read off from the gauge-fixed linearised action. The antisymmetric part $\omega^{[AB]CA'} = \epsilon^{AB} \lambda^{CA'}$ in (\ref{Lpot}) adds to the original action another term, quadratic in the Lagrange multiplier $\lambda^{AA'}$, which, together with (\ref{lam}) and the insertion of $\lambda^{AB}$ into (\ref{Lkin1}), fixes the diffeomorphism gauge in the Feynman form as follows: \begin{align} L_{\rm g.f.} &= 2 \lambda^{AA'}\left[ \partial_{BB'} \left( h_A{}^{B'B}{}_{A'} + h^B{}_{A'A}{}^{B'} \right) - 2 \partial_{BA'} \lambda_A{}^B \right] - 4 \lambda^{AA'} \lambda_{AA'} \nonumber \\ &= 4 \lambda^{AA'} \left( \partial^{BB'} H_{ABA'B'} - \partial^B{}_{A'} H_{AB}{}^{C'}{}_{C'} + \partial^B{}_{A'} h_{AB} \right) - 4 \lambda^{AA'} \lambda_{AA'} \, . \end{align} \subsection{Propagators} Now that the $(H,\Omega)$ and $(h,\omega)$ sectors are decoupled, it is easy to derive the propagators. To this end, we couple all fields to currents.
The complete free Lagrangian with currents is \begin{align}\label{Lnew} L_{\rm free} = {} - \Omega^{ABCA'} \Omega_{ABCA'} - 2\Omega_{ABCA'} \partial^C{}_{B'} H^{ABB'A'} + 2\omega^{AA'} \omega_{AA'} + 4 \omega_{AA'} \partial_B{}^{A'} h^{AB} \nonumber \\ {} + J_{ABCA'} \Omega^{ABCA'} + J_{AA'} \omega^{AA'} + J_{ABA'B'} H^{ABA'B'} + J_{AB} h^{AB} \, , \end{align} where the currents for our new fields have respective symmetries in indices. From Lagrangian (\ref{Lnew}), we obtain the field equations \begin{align} \frac{\delta}{\delta \Omega} : \quad &\Omega^{ABCA'} = \frac12 J^{ABCA'} - \partial^C{}_{B'} H^{ABB'A'} \, , \label{eq-Om} \\ \frac{\delta}{\delta \omega} : \quad &\omega^{AA'} = \partial^{BA'} h^A{}_B - \frac14 J^{AA'} \, , \label{eq-Phi} \\ \frac{\delta}{\delta H} : \quad &\partial^C{}_{A'} \Omega_{ABCB'} + \frac12 J_{ABA'B'} = 0 \, , \label{eq-H} \\ \frac{\delta}{\delta h} : \quad &\partial_{BA'} \omega_A{}^{A'} + \frac14 J_{AB} = 0 \, . \label{eq-h} \end{align} Substituting (\ref{eq-Om}) and (\ref{eq-Phi}) into (\ref{eq-H}) and (\ref{eq-h}), we obtain \begin{align} &\partial^C{}_{A'} J_{ABCB'} + \Box H_{ABA'B'} + J_{ABA'B'} = 0 \, , \\ &2\Box h_{AB} + \partial_{BA'} J_A{}^{A'} - J_{AB} = 0 \, , \end{align} where $\Box = \partial^A{}_{A'} \partial_A{}^{A'}$. From these and (\ref{eq-Om}), (\ref{eq-Phi}), we get \begin{align} \Box H_{ABA'B'} &= - \partial^C{}_{A'} J_{ABCB'} - J_{ABA'B'} \, , \label{eq1} \\ \Box h_{AB} &= \frac12 \left( J_{AB} - \partial_{BA'} J_A{}^{A'} \right)\, , \\ \Box \Omega_{ABCA'} &= \partial_{CC'} J_{AB}{}^{C'}{}_{A'} \, , \\ \Box \omega^{AA'} &= \frac12 \partial^{BA'} J^A{}_B \, . \label{eq4} \end{align} The generating Lagrangian can be calculated as \begin{equation} L_W = \frac12 \left( J_{ABCA'} \Omega^{ABCA'} + J_{AA'} \omega^{AA'} + J_{ABA'B'} H^{ABA'B'} + J_{AB} h^{AB} \right) \, , \end{equation} where we need to substitute the solutions for the fields in terms of the sources. Using (\ref{eq1})--(\ref{eq4}), we obtain the result: \begin{align}\label{propag} L_W &= - \frac12 J_{ABA'B'} \Box^{-1} J^{ABA'B'} + \frac14 J_{AB} \Box^{-1} J^{AB} \nonumber \\ &\quad {} + J^{ABCA'} \Box^{-1} \partial_{CB'} J_{AB}{}^{B'}{}_{A'} - \frac12 J^{AA'} \Box^{-1} \partial_{BA'} J_A{}^B \, . \end{align} All propagators can be read off directly from here. We will only need the $HH$ and $H\Omega$ propagators for what follows. The $HH$ propagator is given by \begin{equation}\label{prop-HH} \left \langle H_{ABA'B'}(k) H^{MNM'N'}(-k) \right \rangle = \frac{1}{{\rm i} k^2}\, \epsilon_A{}^{(M} \epsilon_B{}^{N)} \epsilon_{A'}{}^{M'} \epsilon_{B'}{}^{N'} \, , \qquad \begin{fmfgraph*}(70,20) \fmfleft{i1} \fmfright{i2} \fmf{dbl_plain}{i1,i2} \end{fmfgraph*} \end{equation} which we depict by a double straight line. The symmetrisation of the propagator in the unprimed spinor indices is necessary because the field $H^{ABA'B'}$ is $\scriptstyle AB$ symmetric. The $\Omega H$ propagator is given by \begin{equation}\label{prop-OH} \left \langle \Omega_{ABCC'}(k) H^{MNM'N'}(-k) \right \rangle = \frac{1}{k^2} \, \epsilon_A{}^{(M}\epsilon_B{}^{N)} k_C{}^{M'} \epsilon_{C'}{}^{N'}\, , \qquad \begin{fmfgraph*}(70,20) \fmfleft{i1} \fmfright{i2} \fmf{dbl_wiggly}{i1,v} \fmf{dbl_plain}{v,i2} \end{fmfgraph*} \end{equation} We note that we can understand the $\Omega H$ propagator as the result of applying a derivative to the $HH$ propagator. Indeed, we see from (\ref{eq-Om}) that, in the absence of the current, \begin{eqnarray}\label{Omega-H*} \Omega_{ABCC'} = - \partial_{CA'} H_{AB}{}^{A'}{}_{C'}. 
\end{eqnarray} This relation also holds for the propagators. Indeed, we see that \begin{eqnarray}\label{prop-relation} \left\langle \Omega_{ABCC'}(k) H^{MNM'N'}(-k) \right\rangle = - {\rm i} k_{CA'} \left \langle H_{AB}{}^{A'}{}_{C'}(k) H^{MNM'N'}(-k) \right \rangle \, . \end{eqnarray} As we shall see below, this means that, for all practical purposes, we can replace $\Omega$ with its expression (\ref{Omega-H*}), keeping in mind, however, that the corresponding copy of $H$ is related to the connection and is special. This will be discussed in more detail below. \section{Interaction} \label{sec:interaction} \subsection{The $h h \partial \omega$ term} For this term, our orientation-splitting procedure gives little simplification. Using a procedure similar to (\ref{kinsplit}), we obtain \begin{equation} \label{hhdw0} {\rm i} h^A{}_{A'} \wedge h^{BA'} \wedge d \omega_{AB} \quad \to \quad 2 h^{AA'BB'} h^C{}_{A'}{}^{DC'} \partial_{DB'} \omega_{ACBC'} \, . \end{equation} In this term, we can first replace $\omega^{ABCA'}$ by $\Omega^{ABCA'}$. Indeed, the difference between them, according to (\ref{Om}), is $\epsilon^{BC} \omega^D{}_D{}^{AA'}$, which is our Lagrange multiplier for fixing the diffeomorphism gauge. With decomposition (\ref{hdeco1}) taken into account, this term then becomes \begin{align}\label{L1} L_1 = - 2H^{AC(B'C')} H^{BD}{}_{(A'C')} \partial_{DB'} \Omega_{ABC}{}^{A'} - 2h^{AB} h^{CD} \partial_{DA'} \Omega_{ACB}{}^{A'} \nonumber \\ {} + 2h^{AB} H_{CB(A'B')} \partial_D{}^{B'} \Omega_A{}^{CDA'} \, . \end{align} Note that the connection component $\omega_{AA'}$ does not appear in this part of the interaction. It will only enter the parts $L_2$ and $L_3$ of the combined type $(h+hh)\omega\omega$. It would be nice to get rid of the symmetrisation in (\ref{L1}). We note that [see (\ref{H})] \begin{equation} H^{AB(A'B')} = H^{ABA'B'} + \left[ h^{(AB)} - \lambda^{AB} \right] \epsilon^{A'B'} \, . \end{equation} Hence, we could shift the $H$'s in (\ref{L1}) by the Lagrange multiplier as follows: \begin{equation}\label{H-lambda-shift} H^{AB(A'B')} \to H^{AB(A'B')} + \lambda^{AB} \epsilon^{A'B'} = H^{ABA'B'} + h^{(AB)} \epsilon^{A'B'} \, . \end{equation} The problem is that this shift will modify the {\em Lorentz\/} constraint by the presence of $\lambda^{AB}$ in it. In other words, there will be new non-trivial terms quadratic in the added Lagrange multipliers, which may be problematic. We postpone the full analysis of possible non-linear gauges to a separate publication, where we plan to return to this problem using the BRST formalism. The only cubic vertex that is important for our later purposes will be the $HH\partial \Omega$ term, which will be written below, and for which the presence of the symmetrisation in (\ref{L1}) is of no importance. \subsection{Other components} In deriving the other components of the interaction, we might also use the procedure of splitting the orientation that we used for the free term. It turns out, however, that this does not simplify the expressions. Hence, we compute the remaining components of the interaction directly using (\ref{inter}): \begin{align} L_2 &= 2h^{AA'BB'} \left(\omega^C{}_{ABA'} \omega^D{}_{CDB'} - \omega^C{}_{ADB'} \omega^D{}_{CBA'} \right) \, , \\ L_3 &= h^{AA'BB'} h^C{}_{A'}{}^{DC'} \left( \omega_A{}^E{}_{DB'} \omega_{ECBC'} - \omega_A{}^E{}_{BC'} \omega_{ECDB'} \right) \, . \end{align} Neither of these parts depends on the antisymmetric part $\omega^{[AB]CA'}$ of the connection.
We would like to use a non-linear gauge-fixing procedure in which the Lagrange multipliers added to gauge-fix the kinetic term are also added at the level of the interaction terms. We would also like to work with the fields $H^{ABA'B'}$, $h^{AB}$, $\Omega^{ABCA'}$, and $\omega^{AA'}$, for which the propagators are known. If we are to avoid terms quadratic in the Lagrange multiplier $\lambda^{AA'}$ that also depend on the fields, we can only add terms linear in $\lambda^{AA'} = \omega^B{}_B{}^{AA'}$ to modify the diffeomorphism constraint by another non-linear term. The option with the simplest such result is obtained if we substitute $\omega^{ABCA'} \to \Omega^{ABCA'}$ for the (1,4) instances of $\omega$ in the products $\omega_1 \omega_2 - \omega_3 \omega_4$, and leave the other two $\omega$'s intact. These components then read \begin{align}\label{L2} L_2 &= 2h^{AA'BB'} \Omega^{CD}{}_{AB'} \Omega_{CDBA'} - 4 h^{AA'BB'} \omega^C{}_{B'} \Omega_{ACBA'} \, , \\ L_3 &= \omega^{AA'} \left(\Omega_{AC}{}^D{}_{C'} h^{CB'BC'} h_{BB'DA'} + \Omega_{BC}{}^D{}_{C'} h_A{}^{B'BC'} h^C{}_{B'DA'} \right) \nonumber \\ &\quad {} + h^{AA'BB'} h^C{}_{A'}{}^{DC'} \left( \Omega_A{}^F{}_{DB'} \Omega_{BFCC'} - \Omega_{AB}{}^F{}_{C'} \Omega_{CFDB'} \right) \, . \label{L3} \end{align} Again, we can replace the $h^{AA'BB'}$ field here with its expression in terms of the $H^{ABA'B'}$ and $h^{AB}$ fields. For our later purposes, we will only need the $H\Omega\Omega$ vertex. \subsection{Effective $HHH$ vertex} As we shall see in what follows, the most important role in the tree level computations is played by the $HH\partial \Omega$ vertex given by \begin{equation}\label{main-vertex-prelim} -2{\rm i} H^{AS}{}_{M'}{}^{R'} H^{BR}{}^{M'S'} \partial_{RR'} \Omega_{ABSS'} \qquad \begin{gathered} \begin{fmfgraph*}(100,100) \fmftop{i1,i2} \fmfbottom{i3} \fmf{dbl_plain}{i1,v1} \fmf{dbl_plain}{i2,v1} \fmf{dbl_wiggly}{i3,v1} \fmflabel{1}{i1} \fmflabel{2}{i2} \fmflabel{3}{i3} \end{fmfgraph*} \end{gathered} \end{equation} \bigskip \noindent where the factor of the imaginary unit in front is the one in the exponent of $e^{{\rm i} S}$ in the path integral. When the $\Omega$ leg of this vertex is internal, one has to insert the $\Omega H$ propagator (\ref{prop-OH}) into this leg. However, this propagator can be obtained by applying a derivative to the $HH$ propagator, as we discussed in (\ref{prop-relation}). When this leg is external, one has to substitute into it the corresponding state (\ref{omega-state}), which is again obtained by applying the derivative to the $H$ state. This means that we can always replace $\Omega$ by its expression (\ref{Omega-H*}), in particular in this vertex, and work with an effective $HHH$ vertex. The only subtlety is that one has to keep track of what used to be the $\Omega$ leg, and remember that there is no $\Omega\Omega$ propagator, which means that two $\Omega$ legs can never contract. This can be taken care of by putting a cilium next to the corresponding leg. The cilium is indicated by a bullet symbol, which we put both as an index on the corresponding copy of $H$ and as a label on the corresponding leg of the vertex.
With these conventions, the new effective $HHH$ vertex reads \fmfcmd{% style_def dotted expr p = draw_double p; filldraw fullcircle scaled 5 shifted point length(p)/4 of p enddef;} \begin{equation}\label{main-vertex} 2{\rm i} H^{AS}{}_{M'}{}^{R'} H^{BR}{}^{M'S'} \partial_{RR'} \partial_{SK'} H^\bullet_{AB}{}^{K'}{}_{S'} \qquad \begin{gathered} \begin{fmfgraph*}(100,100) \fmftop{i1,i2} \fmfbottom{i3} \fmf{dbl_plain}{i1,v1} \fmf{dbl_plain}{i2,v1} \fmf{dotted}{v1,i3} \fmflabel{1}{i1} \fmflabel{2}{i2} \fmflabel{3}{i3} \end{fmfgraph*} \end{gathered} \end{equation} It is second-order in derivatives, as is appropriate for a gravity theory. \subsection{Another effective vertex} The other vertex that is relevant for our tree level computations is $H\Omega\Omega$. This can be read off from (\ref{L2}) and reads \begin{equation}\label{hww} 2 {\rm i} H^{ABA'B'} \Omega^{CD}{}_{AA'} \Omega_{CDBB'} \qquad \begin{gathered} \begin{fmfgraph*}(100,100) \fmftop{i1,i2} \fmfbottom{i3} \fmf{dbl_wiggly}{i1,v1} \fmf{dbl_wiggly}{i2,v1} \fmf{dbl_plain}{i3,v1} \fmflabel{1}{i1} \fmflabel{2}{i2} \fmflabel{3}{i3} \end{fmfgraph*} \end{gathered} \, , \end{equation} \bigskip\noindent where again the factor of the imaginary unit is the one in front of the action. As we already discussed, all copies of $\Omega$ can be replaced by their expressions (\ref{Omega-H*}). This gives rise to another effective $HHH$ vertex, where now there are two $H$ legs that came from $\Omega$ and need to be decorated with two cilia \begin{equation}\label{main-vertex-other} 2 {\rm i} H^{ABA'B'} \partial_{AR'} H^{\bullet\,CDR'}{}_{A'} \partial_{BS'} H^\bullet_{CD}{}^{S'}{}_{B'} \qquad \begin{gathered} \begin{fmfgraph*}(100,100) \fmftop{i1,i2} \fmfbottom{i3} \fmf{dotted}{v1,i1} \fmf{dotted}{v1,i2} \fmf{dbl_plain}{i3,v1} \fmflabel{1}{i1} \fmflabel{2}{i2} \fmflabel{3}{i3} \end{fmfgraph*} \end{gathered} \, . \end{equation} It is symmetric in the $H^\bullet$ placeholders. \section{Amplitudes} \label{sec:amplitudes} \subsection{States} From the fields $H,\Omega,h,\omega$, only the pair $H,\Omega$ can be non-zero on-shell. The fields $h,\omega$ only describe the scalar part of the metric perturbation, and vanish on-shell. For the metric perturbation described by $H$, we take the following usual states \begin{equation} \epsilon_-^{ABA'B'} =\frac{q^A q^B k^{A'} k^{B'}}{\langle q k\rangle^2}, \qquad \epsilon_+^{ABA'B'} =\frac{k^A k^B q^{A'} q^{B'}}{[qk]^2}. \end{equation} Here $k^A, k^{A'}$ are the momentum spinors, with the null momentum of the particle being $ k_{AA'}=k_A k_{A'}$. The spinors $q_A, q_{A'}$ are auxiliary spinors. Here, $\langle q k \rangle = q^A k_A$ and $[q k] = q_{A'} k^{A'}$. To determine the states for the connection field $\Omega$, we note that on-shell the connection is given by (\ref{Omega-H*}). Substituting here the helicity states for $H$, with the momentum space rule for the derivative being $\partial_{AA'}\to {\rm i} k_{A} k_{A'}$, we see that the connection can only support the positive helicity state \begin{equation}\label{omega-state} \epsilon_+^{ABCA'} = {\rm i} \frac{k^A k^B k^C q^{A'}}{[qk]} \, . \end{equation} However, a more efficient version of the Feynman rules consists in replacing $\Omega$ by its expression (\ref{Omega-H*}) everywhere, as we already discussed.
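A small numeric illustration (using simple antisymmetric-bracket conventions for two-component spinors, as an aid only; it is not a statement about this paper's index conventions) confirms that a change of the auxiliary spinor $q$ shifts $\epsilon_-$ only by terms carrying the momentum spinor $k^A$ in an unprimed slot, i.e., by a linearised gauge transformation:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
cplx = lambda: rng.standard_normal(2) + 1j * rng.standard_normal(2)
ang = lambda a, b: a[0] * b[1] - a[1] * b[0]          # <ab>

k, kt, q1, q2 = cplx(), cplx(), cplx(), cplx()        # k^A, k^{A'}, two q's

def eps_minus(q):
    # epsilon_-^{ABA'B'} = q^A q^B k^{A'} k^{B'} / <qk>^2
    return np.einsum('a,b,c,d->abcd', q, q, kt, kt) / ang(q, k) ** 2

# Schouten identity: q1/<q1 k> - q2/<q2 k> is proportional to k
u1, u2 = q1 / ang(q1, k), q2 / ang(q2, k)
c = -ang(q1, q2) / (ang(q1, k) * ang(q2, k))          # u1 - u2 = c k
gauge = c * (np.outer(u1, k) + np.outer(k, u2))       # k^A in one slot
diff = eps_minus(q1) - eps_minus(q2)
print(np.allclose(diff, np.einsum('ab,c,d->abcd', gauge, kt, kt)))
\end{verbatim}

Such shifts of the form $k^{(A} c^{B)} k^{A'} k^{B'}$ are pure gauge and drop out of the amplitudes.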
\subsection{Amplitude $--+$} Three-point amplitudes vanish in the physical Lorentzian signature, but are non-vanishing in, e.g., $(2,2)$ signature. These amplitudes are particularly simple, as they are completely fixed by Lorentz invariance modulo an overall coefficient. It is instructive to reproduce them within our formalism. Let us start with the easier $(--+)$ amplitude, where our convention is that the minus helicity is the one that is preferred by our chiral formalism. This is because the connection can only carry the other, positive helicity. Thus, for the amplitude $(--+)$ only the vertex of the type $HH\partial\Omega$ can contribute, with the two negative helicity states being inserted into the $H$ legs. With this in mind, we take the usual spinor helicity states for the negative helicity gravitons, which we label $1,2$ \begin{equation} \epsilon^{AA'BB'}_1 = \frac{q^A q^B 1^{A'} 1^{B'}}{\langle q 1\rangle^2}, \quad \epsilon^{AA'BB'}_2 = \frac{q^A q^B 2^{A'} 2^{B'}}{\langle q 2\rangle^2}. \end{equation} Here the notation is $k_1^A \equiv 1^A$, and the same for the primed index spinors. If we use the effective version of the Feynman rules with only the $H$ field, the positive helicity state to be inserted in the third leg is \begin{equation} \epsilon_3^{ABA'B'} = \frac{3^A 3^B q^{A'}q^{B'}}{[q3]^2}. \end{equation} We now insert the states $1,2$ into the $H$ legs, and the state $3$ into the $H^\bullet$ leg. The vertex is not explicitly symmetric in the two factors of $H$, and so there will be two contributions. We get \begin{align}\label{mmp-calc} 2 {\rm i} \left( \frac{q^A q^S 1_{M'} 1^{A'}}{\langle q1\rangle^2}\frac{q^B q^R 2^{M'} 2^{S'}}{\langle q2\rangle^2}+ \frac{q^A q^S 2_{M'} 2^{A'}}{\langle q2\rangle^2}\frac{q^B q^R 1^{M'} 1^{S'}}{\langle q1\rangle^2}\right) 3_R 3_{A'} 3_S 3^{R'} \frac{3_A 3_B q_{R'} q_{S'}}{[q3]^2} \nonumber \\ = 2{\rm i} \frac{\langle q3\rangle^4 [12]}{\langle q1\rangle^2\langle q2\rangle^2 [q3]} \left( 1^{A'} 2^{S'}-2^{A'} 1^{S'} \right) 3_{A'} q_{S'}= \frac{2}{{\rm i}} \frac{\langle q3\rangle^4 [12]^2}{\langle q1\rangle^2\langle q2\rangle^2} \, . \end{align} Using now the momentum conservation in the form \begin{equation} \langle q1\rangle 1^{A'} + \langle q2\rangle 2^{A'} +\langle q3\rangle 3^{A'} =0 \, , \end{equation} we have $\langle q3\rangle/\langle q1\rangle = - [21]/[23]$, $\langle q3\rangle/\langle q2\rangle = - [12]/[13]$, and so we have for this amplitude \begin{equation} {\rm i} {\cal M}^{--+}= 2\, \frac{[12]^6}{[13]^2[23]^2} \, , \end{equation} which, after restoring the factors of Newton's constant, becomes the correct answer. \subsection{Amplitude $-++$} Let us now consider the $(-++)$ amplitude. We take the negative helicity state to correspond to momentum $1$, and the two positive helicity states to correspond to momenta $2, 3$. Now the vertex $H\Omega\Omega$ can in principle also contribute, and we will see that it does. Let us start with the contribution from (\ref{main-vertex}). The $H^\bullet$ leg can only take one of the positive helicities. However, there are now in total four different possible ways to insert the states. When inserted into an $H$ leg, the positive helicity states correspond to the following helicity spinors: \begin{equation} \epsilon_2^{AA'BB'} = \frac{2_A 2_B q_{A'} q_{B'}}{[q2]^2} \, , \qquad \epsilon_3^{AA'BB'} = \frac{3_A 3_B q_{A'} q_{B'}}{[q3]^2}\, . \end{equation} Let us start with the two terms that get generated by inserting $1,2$ into the $H$ legs and $3$ into the $H^\bullet$ leg.
This is similar to (\ref{mmp-calc}): \begin{align}\label{mmp-calc-1} 2{\rm i} \left( \frac{q^A q^S 1_{M'} 1^{A'}}{\langle q1\rangle^2}\frac{2^B 2^R q^{M'} q^{S'}}{[q2]^2}+ \frac{2^A 2^S q_{M'} q^{A'}}{[q2]^2}\frac{q^B q^R 1^{M'} 1^{S'}}{\langle q1\rangle^2}\right) 3_R 3_{A'} 3_S 3^{R'} \frac{3_A 3_B q_{R'} q_{S'}}{[q3]^2} \nonumber \\ = \frac{2}{{\rm i}} \frac{[q1]^2 \langle 23\rangle^2 \langle q3\rangle^2}{\langle q1\rangle^2[q2]^2} \, , \end{align} where now only the second term in the brackets contributes. We now add to this the $2,3$ permutation \begin{equation}\label{mpp-1} \frac{2}{{\rm i}} \frac{[q1]^2 \langle 23\rangle^2 }{\langle q1\rangle^2} \left( \frac{\langle q3\rangle^2}{[q2]^2}+ \frac{\langle q2\rangle^2}{[q3]^2}\right) \, . \end{equation} We next evaluate the contribution from the vertex (\ref{main-vertex-other}). In order to be able to recycle the result in later computations of the 4-point amplitudes, let us evaluate this vertex by inserting the states $2,3$ into the $H^\bullet$ legs, and a placeholder field $H_{ABA'B'}$ into the $H$ leg. We have \begin{equation} -2{\rm i} H^{ABA'B'} 2_A 2_{R'} \frac{2^C 2^D q^{R'} q_{A'}}{[q2]^2} 3_B 3_{S'} \frac{3_C 3_D q^{S'} q_{B'}}{[q3]^2} = \frac{2}{{\rm i}} H_{AB}{}^{R'S'} 2^A 3^B q_{R'} q_{S'} \frac{ \langle 23\rangle^2}{ [q2][q3]} \, , \end{equation} where the minus sign in front of the first expression comes from the two derivatives in the vertex. We need to add to this the same expression with $2,3$ interchanged, which gives \begin{equation}\label{mpp-placeholder} \frac{2}{{\rm i}} H_{AB}{}^{R'S'} \left( 2^A 3^B + 3^A 2^B \right) q_{R'} q_{S'} \frac{ \langle 23\rangle^2}{[q2][q3]}\, . \end{equation} We now evaluate this by inserting the negative helicity state of graviton $1$ in place of the placeholder. We get \begin{equation}\label{mpp-2} \frac{4}{{\rm i}} \frac{\langle q2\rangle \langle q3\rangle [q1]^2 \langle 23\rangle^2}{\langle q1\rangle^2 [q2][q3]}\, . \end{equation} Adding (\ref{mpp-1}) and twice (\ref{mpp-2}), we get \begin{align}\label{mpp-3} \frac{2}{{\rm i}} \frac{[q1]^2 \langle 23\rangle^2 }{\langle q1\rangle^2[q2]^2[q3]^2} \left( \langle q2\rangle^2 [q2]^2 + \langle q3\rangle^2 [q3]^2 + 2 \langle q2\rangle \langle q3 \rangle [q2] [q3]\right) \nonumber \\ =\frac{2}{{\rm i}} \frac{[q1]^2 \langle 23\rangle^2 }{\langle q1\rangle^2[q2]^2[q3]^2} \left( \langle q2\rangle [q2] + \langle q3\rangle [q3] \right)^2 = \frac{2}{{\rm i}} \frac{[q1]^4 \langle 23\rangle^2 }{[q2]^2[q3]^2}\, , \end{align} where we have used the momentum conservation in the form \begin{equation} \langle q1\rangle [q1]+ \langle q2\rangle [q2] + \langle q3\rangle [q3]=0 \, . \end{equation} We can now use momentum conservation to convert the square brackets in (\ref{mpp-3}) into angle brackets. Altogether, this gives the correct answer for the amplitude: \begin{equation} {\rm i} {\cal M}^{-++} = 2\, \frac{\langle 23\rangle^6}{\langle 12\rangle^2 \langle 13\rangle^2}\, . \end{equation} This computation shows that the vertex (\ref{hww}) is essential for getting the right answer for the amplitudes. \section{Currents} \label{sec:currents} The technology of currents is very efficient, as it works by computing objects that are later recycled in other calculations. A current is an object obtained as the sum of all Feynman diagrams with all legs but one on shell.
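Before proceeding, we note that the computations of this and the previous section repeatedly use the Schouten identity and momentum conservation contracted with the auxiliary spinors. The short Python sketch below (a numeric aid under assumed conventions, namely a simple antisymmetric two-component bracket; it is not tied to this paper's index conventions) checks them on randomly generated three-point kinematics with all square brackets vanishing:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
cplx = lambda: rng.standard_normal(2) + 1j * rng.standard_normal(2)
ang = lambda a, b: a[0] * b[1] - a[1] * b[0]   # <ab>; also used for [ab]

# Schouten identity for generic 2-component spinors
a, b, c, d = cplx(), cplx(), cplx(), cplx()
print(np.isclose(ang(a, b) * ang(c, d) + ang(a, c) * ang(d, b)
                 + ang(a, d) * ang(b, c), 0))

# Three-point kinematics with all [ij] = 0: lambdatilde_i = <jk> eta
l = [cplx(), cplx(), cplx()]
eta = cplx()
lt = [ang(l[1], l[2]) * eta, ang(l[2], l[0]) * eta, ang(l[0], l[1]) * eta]
P = sum(np.outer(li, ti) for li, ti in zip(l, lt))
print(np.allclose(P, 0))                       # momentum conservation

# The two forms of momentum conservation used in the text:
q, qt = cplx(), cplx()                         # auxiliary q^A and q^{A'}
print(np.allclose(sum(ang(q, li) * ti for li, ti in zip(l, lt)), 0))
print(np.isclose(sum(ang(q, li) * ang(qt, ti) for li, ti in zip(l, lt)), 0))
\end{verbatim}

The last two lines are the identities $\langle q1\rangle 1^{A'} + \langle q2\rangle 2^{A'} + \langle q3\rangle 3^{A'} = 0$ and $\langle q1\rangle [q1] + \langle q2\rangle [q2] + \langle q3\rangle [q3] = 0$ used in the three-point computations above.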
\subsection{$J(1^-, 2^-)$ current} Let us start by computing the simplest negative-negative current, obtained by inserting two negative graviton states into the $H$ legs of the vertex (\ref{main-vertex-prelim}), followed by the $\Omega H$ propagator. Equivalently, we use the effective vertex (\ref{main-vertex}). This gives the current, which we denote $J^{ABA'B'}(1^-,2^-)$. The insertion of two negative helicity gravitons $1,2$ into the vertex (\ref{main-vertex}), followed by the $HH$ propagator, is given by \begin{align*} J^{ABR'S'}(1^-,2^-)=\frac{2}{(1+2)^2} \left( \frac{q^A q^S 1_{M'} 1^{A'}}{\langle q1\rangle^2}\frac{q^B q^R 2^{M'} 2^{S'}}{\langle q2\rangle^2}+ \frac{q^A q^S 2_{M'} 2^{A'}}{\langle q2\rangle^2}\frac{q^B q^R 1^{M'} 1^{S'}}{\langle q1\rangle^2}\right) \\ {} \times (1+2)_{RA'} (1+2)_{S}{}^{R'} = - \frac{2}{(1+2)^2} \frac{q^A q^S q^B q^R [12]^2}{\langle q1\rangle^2 \langle q2\rangle^2} (1+2)_{R}{}^{S'} (1+2)_{S}{}^{R'} \, . \end{align*} We now replace $(1+2)^2=2\langle 12\rangle [12]$ and get \begin{equation}\label{current} J^{ABA'B'}(1^-,2^-)=q^A q^B \langle q| 1+2|^{A'} \langle q| 1+2|^{B'} J(1^-,2^-)\, , \end{equation} where we have introduced the notation \begin{eqnarray} \langle q| 1+2|^{R'} = q^{R} (1+2)_R{}^{R'}, \end{eqnarray} as well as the current with its index structure stripped off: \begin{equation}\label{mm-current} J(1^-,2^-):=- \frac{ [12]}{\langle q1\rangle^2 \langle q2\rangle^2 \langle 12\rangle}\, . \end{equation} We note that the single negative helicity state can also be written in the form (\ref{current}). Indeed, we can write \begin{equation} \epsilon_-^{ABA'B'} \equiv J^{ABA'B'}(1^-) = q^A q^B \langle q| 1|^{A'} \langle q| 1|^{B'} J(1^-)\, , \end{equation} where \begin{equation} J(1^-)=\frac{1}{\langle q1\rangle^4} \, . \end{equation} The currents $J(1^-)$, $J(1^-,2^-)$ are the first members of a sequence of all-negative helicity currents, for which a recursion relation can be written and solved in a closed form; see, e.g., \cite{Krasnov:2013wsa} for the recursion as well as the general expression for this current. \subsection{$J(1^-,2^+)$ current} We now repeat the previous current calculation, but this time consider the coupling of a negative and a positive helicity state. We first consider the contribution of the vertex (\ref{main-vertex}), and compute the insertion of $1^-,2^+$ into the $H$ legs of this vertex. We get \begin{align*} J^{ABR'S'}(1^-,2^+)=\frac{2}{(1+2)^2} \left( \frac{q^{(A} q^S 1_{M'} 1^{A'}}{\langle q1\rangle^2}\frac{2^{B)} 2^R q^{M'} q^{S'}}{[q2]^2}+ \frac{2^{(A} 2^S q_{M'} q^{A'}}{[q2]^2}\frac{q^{B)} q^R 1^{M'} 1^{S'}}{\langle q1\rangle^2}\right) \\ {} \times (1+2)_{RA'} (1+2)_{S}{}^{R'} = \frac{2^{(A} q^{B)} \langle q| 1+2 | q] [q1]}{\langle q1\rangle^2 [q2]^2 [12]} 1^{R'} 1^{S'}. \end{align*} We note that this expression can be put into the form (\ref{current}) if we choose the gauge $q^A=2^A$, in which the auxiliary spinor of the negative helicity gravitons is equal to the momentum spinor of the single positive helicity graviton. Indeed, in this case, the current becomes \begin{equation} J^{ABA'B'}(1^-,2^+) = q^A q^B \langle q| 1+2|^{A'} \langle q| 1+2|^{B'} J(1^-,2^+)\, , \end{equation} where \begin{equation}\label{mp-current} J(1^-,2^+) = - \frac{[q1]^2}{\langle q1\rangle^2 [q2]^2 [12]\langle 12\rangle}\, . \end{equation} Note that this only exhibits the correct scaling property with respect to graviton $1$.
The degree of homogeneity for graviton $2$ is now partially carried by the factors of $q$ in the spinor prefactor. Let us now consider the case where the $2^+$ graviton is inserted into the $H^\bullet$ leg of (\ref{main-vertex}) instead. Inserting the $1^-$ into one of the two $H$ legs, and keeping the other $H$ as a placeholder, we get \begin{equation}\label{mp-diff-insert} - \frac{2}{(1+2)^2} \left(\frac{ q^A q^S 1_{M'} 1^{R'}}{\langle q1\rangle^2} H^{BR}{}^{M'S'} +H^{AS}{}_{M'}{}^{R'} \frac{q^B q^R 1^{M'} 1^{S'}}{\langle q1\rangle^2} \right) 2_R 2_{R'} 2_{S} 2_{K'} \frac{2_A 2_B q^{K'} q_{S'}}{[q2]^2} \, . \end{equation} By using the Schouten identity, this gives the following contribution to the current: \begin{equation} - \frac{2 \langle q2\rangle^2}{(1+2)^2 \langle q1\rangle^2} 2^A 2^B 1^{A'} 1^{B'}\, . \end{equation} This vanishes in our gauge $q^A=2^A$. There is another contribution to this current, obtained by inserting a positive and a negative helicity state into (\ref{main-vertex-other}). We must insert the negative state into the $H$ leg of this vertex, while the positive state can go into either of the two $H^\bullet$ legs. This gives \begin{equation} \frac{4}{(1+2)^2} \frac{ \tbr{q2} \sbr{q1}}{\tbr{q1}^2 \sbr{q2}} 2^A 2^B \left( \tbr{q1} 1^{A'} + \tbr{q2} 2^{A'} \right) 1^{B'} \, , \end{equation} which again vanishes in the gauge $q^A=2^A$, and so this contribution can be ignored in this gauge. \subsection{$J(1^+,2^+)$ current} Although for the computations that follow we do not need this current, let us also consider the current of two positive helicity states, for completeness. Let us start with the contribution from (\ref{main-vertex}). The insertion of both states into the $H$ legs vanishes because the auxiliary spinors $q^{A'}$ get contracted. Thus, we must insert one of the states into the $H^\bullet$ leg. Taking this to be the state $2$, we get a modification of (\ref{mp-diff-insert}): \begin{equation} - \frac{2}{(1+2)^2} \left(\frac{ 1^A 1^S q_{M'} q^{R'}}{[q1]^2} H^{BR}{}^{M'S'} +H^{AS}{}_{M'}{}^{R'} \frac{1^B 1^R q^{M'} q^{S'}}{[q1]^2} \right) 2_R 2_{R'} 2_{S} 2_{K'} \frac{2_A 2_B q^{K'} q_{S'}}{[q2]^2}\, . \end{equation} Only the first term contributes, and we get the following contribution to this current: \begin{equation} - \frac{2 \langle 12\rangle^2}{(1+2)^2 [q1]^2} 2^A 2^B q^{A'} q^{B'}\, . \end{equation} This must be symmetrised with respect to $1\leftrightarrow 2$. Let us now consider the contribution of the vertex (\ref{main-vertex-other}). The only non-vanishing option is to insert both positive states into the $H^\bullet$ legs. We then have the following contribution: \begin{equation} - \frac{2\langle 12\rangle^2}{(1+2)^2 [q1][q2]} (1^A 2^B + 2^A 1^B) q^{A'} q^{B'}\, . \end{equation} Collecting all contributions, we get \begin{equation} J(1^+,2^+)^{ABA'B'} = - \frac{\langle 12\rangle}{[12] [q1]^2[q2]^2} [q|1+2|^A [q|1+2|^B q^{A'} q^{B'}\, , \end{equation} which is just the complex conjugate of (\ref{current}). \subsection{Coupling current to a negative helicity state} We now compute the result of coupling a current of the general form (\ref{current}) to a negative helicity state, inserting both into the vertex (\ref{main-vertex}) and then applying the propagator.
We have \begin{align} J^{ABR'S'}(K,3^-)=\frac{2}{( K+3)^2} \left( J^{(A|S|}{}_{M'}{}^{A'}(K) \frac{q^{B)} q^R 3^{M'} 3^{S'}}{\langle q3\rangle^2}+ \frac{q^{(A} q^{|S|} 3_{M'} 3^{A'}}{\langle q3\rangle^2} J^{B)RM'S'}(K)\right) \nonumber \\ {} \times (K+3)_{RA'} (K+3)_S{}^{R'} \nonumber \\ = \frac{2 J(K)}{( K+3)^2} \left( q^A q^S \langle q|K|_{M'} \langle q| K|^{A'}\frac{q^B q^R 3^{M'} 3^{S'}}{\langle q3\rangle^2}+ \frac{q^A q^S 3_{M'} 3^{A'}}{\langle q3\rangle^2} q^B q^R \langle q| K|^{M'} \langle q|K|^{S'}\right) \nonumber \\ {} \times (K+3)_{RA'} (K+3)_S{}^{R'}\, , \end{align} where $K$ is the sum of all the momenta of particles participating in the current $J^{ABA'B'}(K)$, and $J (K)$ is the scalar part of this current, see (\ref{current}). A computation similar to the one performed for $J(1^-,2^-)$ shows that the current obtained by coupling a smaller current to a negative helicity graviton keeps its form \begin{equation} J^{ABR'S'}(K,3^-) = q^A q^B \langle q| K+3|^{R'} \langle q| K+3|^{S'} J(K,3^-)\, , \end{equation} where \begin{equation} J(K,3^-) = - \frac{2}{(K+3)^2 \langle q3\rangle^2} J(K) \langle q| K|3]^2\, . \end{equation} For $K=1^-$, and $3$ replaced by $2$, this reproduces the previous result (\ref{mm-current}). \subsection{Coupling current to a positive helicity state} We now perform a similar computation, coupling a current of the general form (\ref{current}) to a positive helicity state. We have \begin{align} J^{ABR'S'}(K,3^+)=\frac{2}{( K+3)^2} \left( J^{(A|S|}{}_{M'}{}^{A'}(K) \frac{3^{B)} 3^R q^{M'} q^{S'}}{[q3]^2}+ \frac{3^{(A} 3^{|S|} q_{M'} q^{A'}}{[q3]^2} J^{B)RM'S'}(K)\right) \nonumber \\ {} \times (K+3)_{RA'} (K+3)_S{}^{R'} \nonumber \\ = \frac{2 J(K)}{( K+3)^2} \left( q^{(A} q^{|S|} \langle q|K|_{M'} \langle q| K|^{A'}\frac{3^{B)} 3^R q^{M'} q^{S'}}{[q3]^2}+ \frac{3^{(A} 3^{|S|} q_{M'} q^{A'}}{[q3]^2} q^{B)} q^R \langle q| K|^{M'} \langle q|K|^{S'}\right) \nonumber \\ {} \times (K+3)_{RA'} (K+3)_S{}^{R'}. \end{align} If we now assume that the current inserted is an all-negative current, and assume that the negative helicity auxiliary spinors $q^A$ are equal to the momentum spinor of the positive helicity graviton $q^A=3^A$, then the first term does not contribute. This is because of the combination \begin{equation} \langle q| K|^{A'} \langle 3|K+3|_{A'} = \langle q| K|^{A'} \langle 3|K|_{A'} \sim K^2 \langle q3\rangle\, , \end{equation} which vanishes if $q=3$. The second term gives, in the same gauge, \begin{equation} J^{ABR'S'}(K,3^+) = q^A q^B \langle q| K+3|^{R'} \langle q| K+3|^{S'} J(K,3^+)\, , \end{equation} where \begin{equation}\label{Kp-current} J(K,3^+) = - \frac{2J(K) \langle q| K+3|q]^2}{(K+3)^2 [q3]^2}\, . \end{equation} Again, for $J(K)$ being $J(1^-)$ and $3=2$, this reproduces the previous result (\ref{mp-current}). The contribution to the coupling of a current to a positive helicity graviton $3$ via the vertex (\ref{main-vertex-other}) is also checked to vanish in the gauge $q^A=3^A$, so (\ref{Kp-current}) is the complete answer in this gauge.
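The reductions just stated can be confirmed numerically. The sketch below (same assumed bracket conventions as in the earlier numeric checks; generic off-shell kinematics, so no momentum conservation is imposed) verifies that the negative helicity coupling formula reproduces (\ref{mm-current}) for $K=1^-$, and that (\ref{Kp-current}) reproduces (\ref{mp-current}) in the gauge $q^A=2^A$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
cplx = lambda: rng.standard_normal(2) + 1j * rng.standard_normal(2)
ang = lambda a, b: a[0] * b[1] - a[1] * b[0]      # <ab>; also used for [ab]

l1, lt1, l2, lt2, qt = cplx(), cplx(), cplx(), cplx(), cplx()
s12 = 2 * ang(l1, l2) * ang(lt1, lt2)             # (1+2)^2 = 2<12>[12]

# J(K,3^-) with K = 1^-, "3" = 2^-, for an arbitrary gauge spinor q
q = cplx()
J1 = 1 / ang(q, l1) ** 4                          # J(1^-)
qK2 = ang(q, l1) * ang(lt1, lt2)                  # <q|1|2]
J_coupled = -2 * J1 * qK2 ** 2 / (s12 * ang(q, l2) ** 2)
J_direct = -ang(lt1, lt2) / (ang(q, l1) ** 2 * ang(q, l2) ** 2 * ang(l1, l2))
print(np.isclose(J_coupled, J_direct))            # reproduces J(1^-,2^-)

# J(K,3^+) with K = 1^-, "3" = 2^+, in the gauge q^A = 2^A
q = l2
J1 = 1 / ang(q, l1) ** 4
qKq = ang(q, l1) * ang(lt1, qt) + ang(q, l2) * ang(lt2, qt)   # <q|1+2|q]
J_coupled = -2 * J1 * qKq ** 2 / (s12 * ang(qt, lt2) ** 2)
J_direct = (-ang(qt, lt1) ** 2
            / (ang(q, l1) ** 2 * ang(qt, lt2) ** 2
               * ang(lt1, lt2) * ang(l1, l2)))
print(np.isclose(J_coupled, J_direct))            # reproduces J(1^-,2^+)
\end{verbatim}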
This gives \begin{equation}\label{mmp-current-index} J^{ABA'B'}(1^-,2^-,3^+) = q^A q^B \langle q| 1+2+3|^{A'} \langle q| 1+2+3|^{B'} J(1^-,2^-,3^+) \, , \end{equation} where \begin{equation} J(1^-,2^-,3^+) = \frac{2 [12]^2}{(1+2+3)^2 \langle q1\rangle^2 \langle q2\rangle^2 [q3]^2} \left( \frac{\langle q| 1+2|q]^2}{\langle 12\rangle [12]} + \frac{[q1]^2\langle q1\rangle^2}{[13]\langle 13\rangle} + \frac{[q2]^2\langle q2\rangle^2}{[23]\langle 23\rangle} \right) \, . \end{equation} The expression in the brackets here can be written as \begin{align*} [q1]^2\langle q1\rangle^2 \left( \frac{1}{\langle 12\rangle [12]} + \frac{1}{\langle 13\rangle [13]} \right) + [q2]^2\langle q2\rangle^2 \left( \frac{1}{\langle 12\rangle [12]} + \frac{1}{\langle 23\rangle [23]} \right) + \frac{2 \langle q1\rangle \langle q2\rangle [q1] [q2] }{\langle 12\rangle [12]} \\ = \frac{[q1]^2\langle q1\rangle^2 ( K^2/2- \langle 23\rangle [23])}{\langle 12\rangle [12]\langle 13\rangle [13]} + \frac{[q2]^2\langle q2\rangle^2 ( K^2/2- \langle 13\rangle [13])}{\langle 12\rangle [12]\langle 23\rangle [23]} + \frac{2 \langle q1\rangle \langle q2\rangle [q1] [q2] }{\langle 12\rangle [12]} \, . \end{align*} Here, \begin{equation} \frac{1}{2} K^2 = \frac{1}{2} (1+2+3)^2 = \langle 12\rangle [12]+\langle 13\rangle [13]+\langle 23\rangle [23] \, . \end{equation} The above expression has a part proportional to $K^2$, and the part \begin{align} &\frac{1}{\langle 12\rangle [12]\langle 13\rangle [13] \langle 23\rangle [23]} \left( - [q1]^2\langle q1\rangle^2 \langle 23\rangle^2 [23]^2 - [q2]^2\langle q2\rangle^2 \langle 13\rangle^2 [13]^2 \right. \nonumber \\ &{} + 2 \left. \vphantom{[q1]^2} \langle q1\rangle \langle q2\rangle [q1] [q2] \langle 13\rangle [13] \langle 23\rangle [23] \right) = - \frac{\left( [q1]\langle q1\rangle \langle 23\rangle [23]-[q2]\langle q2\rangle \langle 13\rangle [13] \right)^2}{\langle 12\rangle [12]\langle 13\rangle [13] \langle 23\rangle [23]} \, . \end{align} We now use the fact that $q^A=3^A$ in our gauge, as well as the Schouten identity \begin{equation} -[q1][23]+[q2][13]=[q3][12] \, . \end{equation} This shows that the part not proportional to $K^2$ is given by \begin{equation} - \frac{ \langle 13\rangle \langle 23\rangle [12] [q3]^2}{\langle 12\rangle [13][23]}\, . \end{equation} Thus, overall, we get \begin{equation}\label{mmp-current} J(1^-,2^-,3^+) = \frac{[12]( [q1]^2 \langle 13\rangle [23]+ [q2]^2 \langle 23\rangle [13])}{\langle 12\rangle \langle 13\rangle^2 \langle 23\rangle^2 [13][23] [q3]^2} - \frac{2 [12]^3}{K^2 \langle 12\rangle \langle 13\rangle \langle 23\rangle [13][23]} \, . \end{equation} Note that the term containing $K^2$ in the denominator is gauge invariant, i.e., independent of the auxiliary spinor $q^{A'}$. We will use this result to compute the graviton-graviton scattering amplitude. It can be shown that the pattern visible for the $1^-,2^-,3^+$ current continues, and a general all except one negative helicity current is of the form (\ref{mmp-current-index}), with its scalar part given by a term where $K^2$ has cancelled, as well as a gauge-invariant (i.e., $q^{A'}$ independent) term where $K^2$ remains in the denominator. One can obtain a recursion relation for the gauge-independent part, see \cite{Delfino:2014xea}, but no closed expression for a solution of this recursion is known. \subsection{Contribution of the 4-valent vertex} The computation above was carried out by taking into account contributions from the cubic vertices. 
However, already for the current involving 3 gravitons, we may need to consider the contribution from the 4-valent vertex. It is easy to see that this contribution vanishes in the gauge that we used. To see this, we take the $HH\Omega\Omega$ vertex in the form [see (\ref{L3})] \begin{equation} H^{ABA'B'} H^{CD}{}_{A'}{}^{C'} \left( \partial_{DM'} H^\bullet_A{}^{FM'}{}_{B'} \partial_{CN'} H^\bullet_{BF}{}^{N'}{}_{C'} - \partial^F{}_{M'} H^\bullet_{AB}{}^{M'}{}_{C'} \partial_{DN'} H^\bullet_{CF}{}^{N'}{}_{B'} \right) \,. \end{equation} We then insert two currents of the general form (\ref{mmp-current-index}) into the two $H$ legs, and a positive helicity state into one of the $H^\bullet$ legs. It is easy to see that there are factors of the auxiliary spinor $q^A$ contracting with each other, in the gauge where the $q^A$ of the negative helicity states is equal to the momentum spinor of the positive helicity state. So, the 4-valent vertex does not contribute to the computation of the all-except-one-negative currents, in the gauge used. \section{Graviton-graviton scattering} \label{sec:scattering} \subsection{Vanishing amplitudes} We now compute the graviton-graviton, or 4-point, amplitude. Let us first convince ourselves that only the $--++$ amplitude can be non-zero. Let us consider the $----$ amplitude first. Such states can only be inserted into the field $H$ external legs. However, it is easy to see that one cannot construct a 4-point tree level diagram with four external legs being those of the $H$ field. So, this amplitude must vanish. Let us consider the $---+$ amplitude. We can now construct a diagram with 3 external $H$ legs and one external $\Omega$, for example \begin{equation}\label{diag-mmp} \begin{gathered} \begin{fmfgraph*}(100,150) \fmftop{i1,i2} \fmfbottom{i4,i3} \fmf{dbl_plain}{i1,v1} \fmf{dbl_plain}{i2,v1} \fmf{dbl_plain}{i3,v2} \fmf{dotted}{v2,i4} \fmf{dotted}{v1,v2} \fmflabel{1}{i1} \fmflabel{4}{i4} \fmflabel{2}{i2} \fmflabel{3}{i3} \end{fmfgraph*} \end{gathered} \end{equation} Thus, we must analyse the situation more closely. We will use the effective version of the vertices, where there are always two derivatives in the vertex, and no derivatives in the propagator. The 3 helicity states for $H$ carry six copies of the auxiliary spinor $q^A$. We choose this spinor to be equal to the momentum spinor $k^A$ of the fourth positive helicity state. In total, there are 8 copies of the spinor $q^A$. These must somehow be contracted via the spinor metrics present in the vertices and the propagators. Any contraction of a pair of such spinors gives zero, so they cannot contract among themselves, and can only contract into the unprimed spinor indices of the factors of momenta that come from the derivatives in the vertices. There are 4 such derivatives in this diagram. There are simply not enough factors of momenta to give a non-zero result. Exactly the same logic applies to the amplitudes $-+++$ and $++++$, but this time applied to primed auxiliary spinors. \subsection{The non-vanishing $--++$ amplitude} Let us now compute the non-vanishing $--++$ amplitude. We label the gravitons so that the gravitons $1,2$ are negative helicity, and $3,4$ are positive helicity. We will compute this amplitude by utilising the previous results on the currents, specifically the results (\ref{mmp-current-index}) and (\ref{mmp-current}) on the current of 2 negative helicity gravitons $1,2$ and a third positive helicity graviton $3$. We will then put this current on-shell, which extracts the 4-point amplitude.
Given the result (\ref{mmp-current}), the 4-point amplitude is easily computed. First, we need to amputate the propagator from the off-shell leg of the current, i.e., multiply the result by ${\rm i} K^2$. Given that $K^2=4^2=0$, this kills the first term in (\ref{mmp-current}). Thus, the second term in (\ref{mmp-current}) is essentially the result. We just need to correct it by the prefactor that comes from inserting the positive helicity state $4^A 4^B q^{A'} q^{B'}/[q4]^2$ into the current (\ref{mmp-current-index}). The factors of the auxiliary spinor $q^{A'}$ cancel out, and the amplitude is given by the last term in (\ref{mmp-current}) multiplied by ${\rm i} K^2$, and multiplied by $\langle 34\rangle^4$. This gives \begin{equation} {\cal M}^{--++} = 2{\rm i} \frac{\langle 34\rangle^6}{ \langle 13\rangle \langle 23\rangle \langle 14\rangle \langle 24\rangle } \frac{[12]}{\langle 12\rangle}\, , \end{equation} where we have used the momentum conservation in the form $[12]/[13]=-\langle 34\rangle/\langle 24\rangle$ and $[12]/[23] = \langle 34\rangle /\langle 14\rangle$ to convert square brackets in the denominator into angle brackets. This is the correct result for the graviton-graviton amplitude. \section{Conclusions} \label{sec:conclusions} In this paper, starting from the chiral action (\ref{action}) of the first-order formalism for GR in two-component spinor notation and then fixing the diffeomorphism and Lorentz-group gauge freedom, we finally arrived at a remarkably simple free part (\ref{Lnew}) of the Lagrangian. It describes two sectors of fields $(\Omega^{ABCA'}, H^{ABA'B'})$ and $(\omega^{AA'}, h^{AB})$ that decouple from each other and, moreover, have no $\Omega$--$\Omega$ and $\omega$--$\omega$ propagators [see the structure of propagators in (\ref{propag})]. The pair $(\Omega^{ABCA'}, H^{ABA'B'})$ can describe gravitons in asymptotic states, while the pair $(\omega^{AA'}, h^{AB})$ can only be present in the propagators of internal lines. The fields $\Omega^{ABCA'}$ and $\omega^{AA'}$ are composed of the spin connection one-form $\omega^{AB}$ and the Lagrange multipliers that fix the diffeomorphism gauge freedom. The fields $H^{ABA'B'}$ and $h^{AB}$ are composed of the perturbation $h^{AA'}$ of the tetrad one-form and the Lagrange multipliers that fix the gauge freedom of the chiral half of the Lorentz group. The gauge fixing can be extended to the interaction part of the action as well. However, this can be done in a number of non-equivalent ways, and the simplest and most useful form still remains to be found. Without making this choice, it is already possible to apply the formalism to various computational problems, and we illustrated this by calculating the simplest amplitudes, such as 3- and 4-point graviton scattering amplitudes. For these simplest problems only the fields $(\Omega^{ABCA'}, H^{ABA'B'})$ matter, and moreover, only the vertices (\ref{main-vertex}) and (\ref{main-vertex-other}) contribute. The reader will hopefully appreciate the ease with which the formalism we developed reproduces the known results. The Feynman rules that result from a first-order formalism such as the one we consider in this paper deal with both metric and connection fields. However, in the absence of the connection-connection propagator, there exists an effective version of the Feynman rules in which the derivative present in the numerator of the tetrad-connection propagator is assigned to the vertex.
This is done by replacing the $\Omega$ in the vertices by derivatives of $H$, see (\ref{Omega-H*}). In this effective version of the perturbation theory only the tetrad field propagates, with the propagator given by (\ref{prop-HH}), and all vertices contain two copies of the derivative operator. However, one has to mark the vertex legs that came from the connection by appropriate cilia, and contractions of two legs with cilia are forbidden in view of the absence of the connection-connection propagator. The formalism that arises this way is quite similar to that in the case of the chiral Yang--Mills theory. There are other interesting calculations that can be performed with the formalism we developed. First, it is not hard to generalise our perturbative expansion and gauge-fixing to backgrounds other than Minkowski. The only change in this case is that the partial derivative operator needs to be replaced by the appropriate covariant derivative operator with respect to the background connection. It would be interesting to apply this to a computation of the one-loop effective action on a general Einstein background. Also, given the simplicity of the propagators and vertices in this formalism, it is possible that even a chiral version of the two-loop computation \cite{Goroff:1985th} is within reach. \section*{Acknowledgements} The work of Y.~S. was supported by the National Academy of Sciences of Ukraine (project 0116U003191) and by the scientific program ``Astronomy and Space Physics'' (project 19BF023-01) of the Taras Shevchenko National University of Kiev. \section{Appendix: Alternative derivation} In the main text, we have motivated our gauge-fixing procedure and the introduction of fields $(\Omega^{ABCA'}, H^{ABA'B'})$ and $(\omega^{AA'}, h^{AB})$ using different spinor representations (\ref{orP}), (\ref{orW}) of the totally anti-symmetric $\epsilon$ tensor. We now sketch an alternative path, where most of the manipulations needed to arrive at the final result are done already at the level of the expansion of the $\Sigma^{AB}$ 2-form. \subsection*{Expansion of the SD 2-form} The $\Sigma^{AB}$ 2-form in the action has the following expansion: \begin{align}\label{sigma-pert} \Sigma^{AB} &= \frac12 \left( \theta^A{}_{C'}+ h^A{}_{C' MM'} \theta^{MM'} \right) \wedge \left( \theta^{BC'} +h^{BC'}{}_{NN'} \theta^{NN'} \right) \nonumber \\ &= \frac12 \left( \vphantom{\epsilon^{A'}{}_{M'}} \epsilon^A{}_M \epsilon_{C'M'} + h^A{}_{C' MM'} \right) \left( \epsilon^B{}_N \epsilon^{C'}{}_{N'} +h^{BC'}{}_{NN'} \right) \theta^{MM'}\wedge \theta^{NN'} \nonumber \\ &= \frac12 \left[ \epsilon^A{}_M \epsilon^B{}_N \epsilon_{M'N'} + 2 \epsilon^{(A}{}_M h^{B)}{}_{M'NN'} + h^A{}_{C'MM'} h^{BC'}{}_{NN'} \right] \theta^{MM'}\wedge \theta^{NN'} \, . \end{align} We now compute this in the parametrisation (\ref{hdeco}). We have \begin{align} \Sigma^{AB} &= \frac{1}{2} \left[\vphantom{\epsilon^{A'}{}_{M'}} \epsilon^A{}_M \epsilon^B{}_N \epsilon_{M'N'} + 2 \epsilon^{(A}{}_M h^{B)}{}_{NM'N'} + 2 \epsilon^{(A}{}_M h^{B)}{}_N \epsilon_{M'N'} \right. \nonumber \\ &\left. {} + h^A{}_{MC'M'} h^B{}_N{}^{C'}{}_{N'} + 2 h^{(A}{}_M h^{B)}{}_{NM'N'} + h^A{}_M h^B{}_N \epsilon_{M'N'} \right] \theta^{MM'}\wedge \theta^{NN'} \, .
\end{align} Collecting the terms with $\epsilon_{M'N'}$ we get \begin{align} \label{sigma-pert*} \Sigma^{AB} &= \left( \frac{1}{2} \left[ \epsilon^{(A}{}_M + h^{(A}{}_M \right] \left[\epsilon^{B)}{}_N + h^{B)}{}_N \right] \epsilon_{M'N'} + \left[ \epsilon^{(A}{}_M + h^{(A}{}_M \right] h^{B)}{}_{NM'N'} \right. \nonumber \\ &\left. {} + \frac{1}{2} h^A{}_{MC'M'} h^B{}_N{}^{C'}{}_{N'} \right) \theta^{MM'}\wedge \theta^{NN'}. \end{align} The first term here is SD, the second is ASD, and the third contains both parts. It is interesting that the object $h_{AB}$ always appears in the combination with $\epsilon_{AB}$. \subsection*{Computation} We now establish a convenient form of the 2-form perturbation wedged with two copies of the background tetrad. Thus, we now wedge (\ref{sigma-pert*}) with $\theta^{RR'}\wedge\theta^{SS'}$ and use (\ref{orW}) to get \begin{align}\label{comp-1} \frac{1}{{\rm i}} \Sigma^{AB}\wedge \theta^{RR'}\wedge \theta^{SS'} &= \left[ \epsilon^{(A|R|} + h^{(A|R|} \right] \left[\epsilon^{B)S} + h^{B)S} \right] \epsilon^{R'S'} + \left[ \epsilon^{(A|M|} + h^{(A|M|} \right] h^{B)}{}_{M}{}^{R'S'} \epsilon^{RS} \nonumber \\ &\quad {} + \frac{1}{2} h^{(A|R|}{}_{M'N'} h^{B)S}{}^{M'N'} \epsilon^{R'S'} - \frac{1}{2} h^{(A}{}_{MM'}{}^{|R'|} h^{B)MM'S'} \epsilon^{RS} \, , \end{align} where we have omitted the volume form $\upsilon$ on the right-hand side. Expanding the brackets in the first term here we have \begin{equation} \epsilon^{(A|R|} \epsilon^{B)S} + h^{(A|R|} h^{B)S} + h^{(A|R|} \epsilon^{B)S} + h^{(A|S|} \epsilon^{B)R} \, . \end{equation} Using the Schouten identity, we can rewrite the last two terms as \begin{equation} h^{(A|R|} \epsilon^{B)S} + h^{(A|S|} \epsilon^{B)R} = 2 h^{(A|R|} \epsilon^{B)S} - \epsilon^{RS} h^{(AB)} \, . \end{equation} The first line in (\ref{comp-1}) then becomes \begin{align} &\left[ \epsilon^{(A|R|} \epsilon^{B)S} + h^{(A|R|} h^{B)S} + 2h^{(A|R|} \epsilon^{B)S} \right] \epsilon^{R'S'} \nonumber \\ &{} + \left[ \epsilon^{(A|N|} + h^{(A|N|} \right] h^{B)}{}_{N}{}^{R'S'} \epsilon^{RS} - h^{(AB)} \epsilon^{RS}\epsilon^{R'S'}\, . \end{align} We can rewrite the second line as \begin{equation} \left[ h^{AB}{}^{R'S'} - h^{(AB)} \epsilon^{R'S'} + h^{(A|N|} h^{B)}{}_{N}{}^{R'S'} \right] \epsilon^{RS} \, . \end{equation} Thus, overall, we can rewrite (\ref{comp-1}) as \begin{align}\label{comp-2} &\left[ \epsilon^{(A|R|} \epsilon^{B)S} + 2h^{(A|R|} \epsilon^{B)S} \right] \epsilon^{R'S'} + \left[ h^{AB}{}^{R'S'} - h^{(AB)} \epsilon^{R'S'} \right] \epsilon^{RS} \nonumber \\ &{} + \frac{1}{2} h^{(A|R|}{}_{M'N'} h^{B)S}{}^{M'N'} \epsilon^{R'S'} + h^{(A|R|} h^{B)S} \epsilon^{R'S'} \nonumber \\ &{} - \frac{1}{2} h^{(A}{}_{MM'}{}^{|R'|} h^{B)MM'S'} \epsilon^{RS}+ h^{(A|N|} h^{B)}{}_{N}{}^{R'S'} \epsilon^{RS} \, . \end{align} We note that the first term in the second and third lines can be combined using the Schouten identity \begin{equation} \frac{1}{2} h^{(A|R|}{}_{M'N'} h^{B)S}{}^{M'N'} \epsilon^{R'S'}- \frac{1}{2} h^{(A}{}_{MM'}{}^{|R'|} h^{B)MM'S'} \epsilon^{RS} = h^{(A |S}{}_{M'}{}^{R'|} h^{B)R M'S'} \, . 
\end{equation} Thus, an even more compact form of (\ref{comp-1}) is \begin{align}\label{sigma-expanded} \frac{1}{{\rm i}} \Sigma^{AB}\wedge \theta^{RR'}\wedge \theta^{SS'} = \left[ \epsilon^{(A|R|} \epsilon^{B)S} + 2h^{(A|R|} \epsilon^{B)S} \right] \epsilon^{R'S'} + \left[ h^{AB}{}^{R'S'} - h^{(AB)} \epsilon^{R'S'} \right] \epsilon^{RS} \nonumber \\ {} + h^{(A |S}{}_{M'}{}^{R'|} h^{B)R M'S'} +h^{(A|R|} h^{B)S} \epsilon^{R'S'} + h^{(A|N|} h^{B)}{}_{N}{}^{R'S'} \epsilon^{RS} \, , \end{align} where the first line contains terms of degree zero and one in the tetrad perturbation, and the second line contains the quadratic terms. When contracted with the $\partial\omega$ part of the curvature, the first line gives rise to the kinetic term. The second term in the first line exhibits the combination $h^{AB}{}^{R'S'} - h^{(AB)} \epsilon^{R'S'}$ that motivates the introduction of the new field $H^{ABA'B'}$ in (\ref{H}). \end{fmffile}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Kadison and Kastler started the study of perturbation theory of operator algebras with \cite{KK} in 1972. They equipped the set of operator algebras on a fixed Hilbert space with a metric induced by the Hausdorff distance between the unit balls. Examples of close operator algebras are obtained by conjugating by a unitary near the identity. They conjectured that sufficiently close operator algebras must be unitarily equivalent. For injective von Neumann algebras, this conjecture was settled in \cite{Chris2, RT, Johnson1, Chris5}, with earlier special cases in \cite{Chris1, P1}. Cameron et al. \cite{CCSSWW} and Chan \cite{Chan} gave examples of non-injective von Neumann algebras satisfying the Kadison-Kastler conjecture. In Christensen \cite{Chris3}, this conjecture was solved positively for von Neumann subalgebras of a common finite von Neumann algebra. For C$^*$-algebras, the separable nuclear case was solved positively in Christensen et al. \cite{CSSWW}, building on the earlier special cases in \cite{Chris5, PR1, PR2, Khoshkam1}. In full generality, Choi and Christensen \cite{CC} gave examples of arbitrarily close non-separable nuclear C$^*$-algebras which are not $*$-isomorphic. In \cite{Johnson2}, Johnson gave examples of arbitrarily close pairs of separable nuclear C$^*$-algebras which are conjugate by unitaries, but for which the implementing unitaries cannot be chosen close to the identity. The author and Watatani \cite{IW} showed that for an inclusion of simple C$^*$-algebras $C\subseteq D$ with finite index in the sense of Watatani \cite{Watatani}, sufficiently close intermediate C$^*$-subalgebras are unitarily equivalent. The implementing unitary can be chosen close to the identity and in the relative commutant algebra $C'\cap D$. The estimates there depend on the inclusion $C\subseteq D$, since a finite basis for $C\subseteq D$ is used. Dickson \cite{Dickson} obtained uniform estimates independent of the inclusion. To get this, Dickson showed that the row metric is equivalent to the Kadison-Kastler metric. The author \cite{I} showed that von Neumann subalgebras of a common von Neumann algebra with finite probabilistic index in the sense of Pimsner-Popa \cite{PP} satisfy the Kadison-Kastler conjecture. The implementing unitary can be chosen close to the identity. Compared with the setting of the author and Watatani \cite{IW}, it is not assumed there that the von Neumann subalgebras have a common subalgebra with finite index. In this paper, we study perturbations of crossed product C$^*$-algebras by discrete amenable groups. We introduce crossed product-like inclusions of C$^*$-algebras in Definition \ref{amenable}. For a unital inclusion of $\C^*$-algebras $A\subseteq B$, we say that $A\subseteq B$ is crossed product-like if there exists a discrete group $U$ in the normalizer $\mathcal{N}_B(A)$ of $A$ in $B$ such that $A$ and $U$ generate $B$. An example of a crossed product-like inclusion is $A\subseteq A\rtimes G$, where $G$ is a discrete group. Now suppose that we have a unital inclusion $C\subseteq D$ of $\C^*$-algebras and two close separable intermediate $\C^*$-subalgebras $A,B$ for this inclusion. If there is a conditional expectation $E\colon D\to B$, then, by restricting $E$ to $A$, we get a map from $A$ into $B$ which is uniformly close to the identity map of $A$. Since $C$ is a subalgebra of $A\cap B$, $E|_A\colon A\to B$ is a $C$-fixed map, that is, $E|_A(c)=c$ for any $c\in C$.
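Let us record the standard estimate behind the phrase ``uniformly close'': for $a\in A_1$ and any $b\in B_1$, since $E(b)=b$, \[ \| E(a) - a \| \le \| E(a-b) \| + \| b - a \| \le 2 \| a - b \| , \] and taking the infimum over $b\in B_1$ gives $\| E|_A(a) - a \| \le 2\, d(A,B)$ for all $a\in A_1$. This estimate is used repeatedly below.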
Furthermore, if $C\subseteq A$ is crossed product-like by a discrete amenable group $U$ in $\mathcal{N}_A(C)$, then we can apply the point-norm averaging technique from \cite{CSSWW}, using the amenability of $U$. To apply this technique to $E|_A$, we need $E|_A$ to be a $C$-fixed map. In Lemma \ref{1.4}, we then obtain a $C$-fixed $(X,\varepsilon)$-approximate $*$-homomorphism from $A$ into $B$ for any finite subset $X$ of $A_1$ and $\varepsilon>0$. To show this, we modify \cite[Lemma 3.2]{CSSWW} to a $C$-fixed version. In Lemma \ref{1.5}, we obtain unitaries which conjugate these maps, by modifying \cite[Lemma 3.4]{CSSWW} to a $C$-fixed version. The unitaries can be chosen in the relative commutant $C'\cap D$ of $C$ in $D$. Therefore, if $C\subseteq D$ is irreducible, then the unitaries are scalars. By these lemmas, we show our first main result, Theorem A, which appears as Theorem \ref{irreducible}. \begin{theorem} Let $C\subseteq D$ be a unital irreducible inclusion of $\C^*$-algebras acting on a separable Hilbert space $H$. Let $A$ and $B$ be separable intermediate $\C^*$-subalgebras for $C\subseteq D$ with a conditional expectation from $D$ onto $B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $\dab<140^{-1}$. Then $A = B$. \end{theorem} In Theorem B, we show our second main result. By an intertwining argument, obtained by modifying \cite[Lemma 4.1]{CSSWW} to a $C$-fixed version, we show that $A$ is $*$-isomorphic to $B$. The implementing surjective $*$-isomorphism can be chosen to be $C$-fixed. Theorem B is proved in Section \ref{isomorphism} as Theorem \ref{3.3}. \begin{theorem} Let $C\subseteq D$ be a unital inclusion of $\C^*$-algebras and let $A$ and $B$ be separable intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation from $D$ onto $B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $d(A,B)<10^{-3}$. Then there exists a $C$-fixed surjective $*$-isomorphism $\alpha \colon A\to B$. \end{theorem} In Section \ref{von}, we consider crossed product-like inclusions of von Neumann algebras. Given an inclusion $N\subseteq M$ of von Neumann algebras, we say that $N\subseteq M$ is crossed product-like if there is a discrete group $U$ in $\mathcal{N}_M(N)$ such that $M$ is generated by $N$ and $U$. For a crossed product-like inclusion $A\subseteq B$ of $\C^*$-algebras acting non-degenerately on $H$, the inclusion $\overline{A}^{\w}\subseteq \overline{B}^{\w}$ of von Neumann algebras is crossed product-like. In Theorem C, we consider perturbations of crossed product von Neumann algebras by discrete amenable groups. This result is based on Christensen's work \cite{Chris2} and appears as Theorem \ref{2.3.5}. \begin{theorem} Let $N\subseteq M$ be an inclusion of von Neumann algebras in $\mathbb{B}(H)$ and let $A,B$ be intermediate von Neumann subalgebras for $N \subseteq M$ with a normal conditional expectation from $M$ onto $B$. Suppose that $N\subseteq A$ is crossed product-like by a discrete amenable group and $d(A,B)<\gamma<10^{-2}$. Then there exists a unitary $u\in N ' \cap (A\cup B)''$ such that $u A u^*= B$ and $\| u - I \| \le 2(8+\sqrt{2})\gamma$. \end{theorem} By the theorem above, we can also treat perturbations of the second dual C$^*$-algebras of crossed product algebras by amenable groups in Corollary \ref{2.4}.
Given a unital inclusion $C\subseteq D$ of $\C^*$-algebras and sufficiently close intermediate $\C^*$-subalgebras $A,B$ for this inclusion, if $C\subseteq A$ is a crossed product-like inclusion by a discrete amenable group and there is a conditional expectation $E\colon D\to B$, then $A^{**}$ and $B^{**}$ are unitarily equivalent. To show this, we use a normal conditional expectation $E^{**}\colon D^{**}\to B^{**}$ and identify $A^{**},B^{**},C^{**}$ and $D^{**}$ with $\pi(A)'',\pi(B)'',\pi(C)''$ and $\pi(D)''$, respectively, where $\pi$ is the universal representation of $D$. In Proposition \ref{4.3}, we obtain a unitary which implements a $*$-isomorphism under the assumption $C'\cap C^*(A,B)\subseteq \overline{C'\cap A}^{\w}$. To show Proposition \ref{4.3}, we prepare Lemmas \ref{4.1} and \ref{4.2} by using Lemmas \ref{1.5} and \ref{1.8} and Theorem \ref{3.3}. Combining Proposition \ref{4.3} with Corollary \ref{2.4} gives Theorem D, which appears as Theorem \ref{main}. To show this, we modify the arguments of Section 5 in Christensen et al. \cite{CSSWW}. \begin{theorem} Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras acting on a separable Hilbert space $H$. Let $A$ and $B$ be separable intermediate $\C^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $C'\cap A$ is weakly dense in $C'\cap \overline{A}^{\w}$. If $\dab<10^{-7}$, then there exists a unitary $u\in C'\cap (A\cup B)''$ such that $u A u^* = B$. \end{theorem} \section{Preliminaries} Given a C$^*$-algebra $A$, we denote by $A_1$ and $A^u$ the unit ball of $A$ and the set of unitaries in $A$, respectively. We recall Kadison and Kastler's metric on the set of all C$^*$-subalgebras of a C$^*$-algebra from \cite{KK}. \begin{defn}\upshape Let $A$ and $B$ be C$^*$-subalgebras of a C$^*$-algebra $C$. Then we define a metric between $A$ and $B$ by \[ d(A,B):= \max \left\{ \sup_{a\in A_1} \inf_{b\in B_1} \| a - b\| , \ \sup_{b\in B_1}\inf_{a\in A_1} \| a-b\| \right\}. \] \end{defn} In the definition above, if $d(A,B)<\gamma$, then for any $x$ in either $A_1$ or $B_1$, there exists $y$ in the other unit ball such that $\| x - y\| <\gamma$. \begin{exam}\upshape Let $A$ be a C$^*$-algebra in $\mathbb{B}(H)$ and let $u$ be a unitary in $\mathbb{B}(H)$. Then $d(A, u A u^*)\le 2\| u-I_H\|$. \end{exam} Near inclusions of $\C^*$-algebras were defined by Christensen in \cite{Chris5}. \begin{defn}\upshape Let $A$ and $B$ be C$^*$-subalgebras of a C$^*$-algebra $C$ and let $\gamma>0$. We write $A\subseteq_{\gamma}B$ if for any $x\in A_1$ there exists $y\in B$ such that $\| x-y\| \le \gamma$. If there is $\gamma'<\gamma$ with $A\subseteq_{\gamma'}B$, then we write $A\subset_{\gamma}B$. \end{defn} The next two propositions are folklore. The second can be found as \cite[Proposition 2.10]{CSSWW}. \begin{prop}\label{surjective} Let $A$ and $B$ be $\C ^*$-algebras with $A\subseteq B$. If $B\subset_1 A$, then $A=B$. \end{prop} \begin{prop} Let $A$ and $B$ be $\C^*$-subalgebras of a $\C^*$-algebra $C$. If $B\subset_{1/2} A$ and $A$ is separable, then $B$ is separable. \end{prop} The following lemma appears as \cite[Lemma 5]{KK}. \begin{lemma}\label{weak-closure} Let $A$ and $B$ be $\C^*$-subalgebras acting on a Hilbert space $H$. Then $d(\overline{A}^{\w},\overline{B}^{\w})\le \dab$. \end{lemma} The lemma below collects some standard estimates. \begin{lemma}\label{polar} Let $A$ be a unital $\C^*$-algebra.
\begin{enumerate} \item Given $x\in A$ with $\| I-x \|<1$, let $u\in A$ be the unitary in the polar decomposition $x=u|x|$. Then, \[ \| I-u\| \le \sqrt{2}\| I-x \|. \] \item Let $p\in A$ be a projection and $a\in A$ a self-adjoint operator. Suppose that $\delta:=\| a-p\| <1/2$. Then $q:=\chi_{[1-\delta,1+\delta]}(a)$ is a projection in $C^*(a,I)$ satisfying \[ \| q-p\|\le 2\| a-p\| <1. \] \item Let $p,q\in A$ be projections with $\| p-q\| <1$. Then there exists a unitary $w\in A$ such that \[ w p w^* =q \ \ \text{and} \ \ \| I-w \| \le \sqrt{2}\| p-q\|. \] \end{enumerate} \end{lemma} In this paper, we consider distances between maps restricted to finite sets. The following notions were introduced in \cite{CSSWW}. \begin{defn}\upshape Let $A$ and $B$ be C$^*$-algebras and let $\phi_1,\phi_2\colon A\to B$ be maps. Given a subset $X\subseteq A$ and $\varepsilon>0$, we write $\phi_1\approx_{X,\varepsilon}\phi_2$ if \[ \|\phi_1(x)-\phi_2(x)\|\le \varepsilon, \ \ x\in X. \] \end{defn} \begin{defn}\upshape Let $A$ and $B$ be C$^*$-algebras, $X$ a subset of $A$ and $\varepsilon>0$. Given a completely positive contractive map (cpc map) $\phi\colon A\to B$, we say that $\phi$ is an $(X,\varepsilon)$-{\it approximate} $*$-{\it homomorphism} if it satisfies \[ \| \phi (x) \phi(x^*)-\phi(xx^*)\| \le \varepsilon, \ \ x\in X\cup X^*. \] \end{defn} It suffices to consider pairs of the form $(x,x^*)$ in the previous definition because of the following proposition, which can be found as \cite[Lemma 7.11]{Paulsen}. \begin{prop}\label{1/2} Let $A$ and $B$ be $\mathrm{C}^*$-algebras and $\phi\colon A\to B$ a cpc map. Then for $x,y\in A$, \[ \| \phi(x y) -\phi(x) \phi(y) \| \le \| \phi(x x^*) -\phi(x) \phi(x^*) \|^{1/2} \|y \|. \] \end{prop} \begin{defn}\upshape Let $A$ and $B$ be C$^*$-algebras and let $C$ be a C$^*$-subalgebra of $A\cap B$. A map $\phi:A\to B$ is {\it $C$-fixed} if $\phi|_C=\id_C$. \end{defn} \begin{rem}\upshape Given a cpc map $\phi\colon A\to B$ between $\C^*$-algebras and a $\C^*$-subalgebra $C$ of $A\cap B$, if $\phi$ is $C$-fixed, then $\phi$ is a $C$-bimodule map, that is, for $x,z\in C$ and $y\in A$, \[ \phi(x y z)=x \phi(y) z. \] \end{rem} The following lemma appears in \cite[p.332]{Arveson}. We need it in Lemmas \ref{1.4}, \ref{1.5} and \ref{1.8}. \begin{lemma}\label{Arveson} Let $X\subseteq \mathbb{C}$ be a compact set and $\varepsilon,M>0$. Then given a continuous function $f\in C(X)$, there exists $\eta>0$ such that for any Hilbert space $H$, normal operator $s\in\mathbb{B}(H)$ with $\mathrm{s p}(s)\subseteq X$ and $a\in \mathbb{B}(H)$ with $\| a \| \le M$, the inequality $\| s a - a s \| <\eta$ implies that $\| f(s)a - a f(s) \| <\varepsilon$. \end{lemma} \begin{proof} We may assume that $X$ is contained in the closed unit disc, as is the case in all applications below; then $\|s\|\le 1$. Let $p$ be a polynomial such that $\| f- p\|<\varepsilon/(4M)$, where this norm is the supremum norm of $C(X)$. Let $p$ have the form $p(t)= c_0+c_1 t +\cdots + c_n t^n$. Define \[ \eta:= \frac{\varepsilon}{2} \left( \sum_{k=1}^n k |c_k| \right)^{-1}. \] Let a Hilbert space $H$ be given and let a normal operator $s\in \mathbb{B}(H)$ with $\mathrm{s p}(s)\subseteq X$ and $a\in \mathbb{B}(H)$ with $\|a\|\le M$ satisfy $\| s a - a s \|<\eta$. Let $D$ be the derivation $D(x)=x a - a x$. Since $D(s^{n+1})=s D(s^n) + D(s) s^n$ and $\|s\|\le 1$, we have $\| D(s^{n+1})\| \le \| D(s^n)\| + \| D(s)\|$. Hence, $\| D(s^n) \| \le n \| D(s) \|$. Therefore, \begin{align*} \| f(s) a - a f(s)\| &\le \| f(s) a - p(s) a\| +\| D(p(s))\| + \| a p(s) - a f(s)\| \\ &\le 2 \| f-p\| \| a\| + \sum_{k=1}^n k |c_k| \| D(s) \| <\varepsilon, \end{align*} and the lemma follows.
\end{proof} The next lemma appears in the proof of \cite[Lemma 3.7]{CSSWW}. \begin{lemma}\label{1.7} Let $H$ be a Hilbert space. Then for any $\mu_0>0$, there exists $\mu>0$ with the following property$\colon$ given a finite set $S\subseteq H_1$ and a self-adjoint operator $h\in \mathbb{B}(H)_1$, there exists a finite set $S'\subseteq H_1$ such that for any self-adjoint operator $k\in \mathbb{B}(H)_1$, if \[ \| (h-k) \xi' \| < \mu , \ \ \xi' \in S', \] then \[ \| ( e^{i\pi h} - e^{i\pi k}) \xi \| <\mu_0 \ \ \text{and} \ \ \| ( e^{i\pi h} - e^{i\pi k})^* \xi \| <\mu_0, \ \ \xi\in S. \] \end{lemma} \begin{proof} There exists a polynomial $p(t)=\sum_{j=0}^r \lambda_j t^j$ such that \begin{align}\label{1.7.1} | p(t) - e^{i\pi t} | <\frac{\mu_0}{3}, \ \ -1\le t \le 1. \end{align} Let \begin{align}\label{1.7.2} \mu:= \frac{\mu_0}{3 r \sum_{j=0}^r | \lambda_j | }. \end{align} Given a finite set $S\subseteq H_1$ and a self-adjoint operator $h \in \mathbb{B}(H)_1$, define \begin{align*} S':= \{ h^m \xi : \xi\in S, \, m \le r-1 \}. \end{align*} Let $k \in\mathbb{B}(H)_1$ be a self-adjoint operator with \begin{align}\label{1.7.3} \| (h-k) \xi'\| <\mu, \ \ \xi'\in S'. \end{align} For any $\xi \in S$ and $0\le j\le r$, by (\ref{1.7.3}), \begin{align*} \| (h^j - k^j) \xi \| &\le \| (h^j - k h^{j-1}) \xi \| + \| ( k h^{j-1} - k^2 h^{j-2}) \xi \| + \dots + \| (k^{j-1} h - k^j) \xi \| \\ &\le \| (h - k) h^{j-1} \xi \| + \| k ( h-k) h^{j-2} \xi \| + \dots + \| k^{j-1} (h - k ) \xi\| \\ &\le \sum_{m=0}^{j-1} \| (h - k ) h^m \xi \| < r \mu. \end{align*} Thus, for $\xi\in S$, \begin{align*} \| (p(h) - p(k)) \xi \| \le \sum_{j=0}^r |\lambda_j| \| (h^j- k^j) \xi \| \le \sum_{j=0}^r |\lambda_j| r\mu = \frac{\mu_0}{3}, \end{align*} by (\ref{1.7.2}). Hence, for $\xi\in S$, \begin{align*} \| ( e^{i\pi h} - e^{i\pi k})\xi \| &\le \| (e^{i\pi h} -p(h)) \xi\| + \| (p(h) -p(k) )\xi\| + \| (p(k) - e^{i\pi k}) \xi \| \\ &\le \frac{\mu_0}{3} + \frac{\mu_0}{3} + \frac{\mu_0}{3} = \mu_0, \end{align*} by (\ref{1.7.1}). Similarly, we have $\|(e^{i\pi h}-e^{i\pi k})^* \xi\| <\mu_0$. \end{proof} \section{Crossed product-like inclusions and approximate averaging} In this section, we introduce crossed product-like inclusions of C$^*$-algebras. Moreover, we use the F\o lner condition for discrete amenable groups to modify the averaging results in \cite[Section 3]{CSSWW}. In Theorem \ref{irreducible}, we show our first main result, Theorem A. Given an inclusion $A\subseteq B$ of C$^*$-algebras, we denote by $\mathcal{N}_B(A)$ the normalizer of $A$ in $B$, that is, $\mathcal{N}_B(A)= \{ u \in B^u : u A u^*=A\}$. \begin{defn}\label{amenable}\upshape Let $A\subseteq B$ be a unital inclusion of C$^*$-algebras. Then we say that the inclusion $A\subseteq B$ is {\it crossed product-like} if there exists a discrete group $U$ in $\mathcal{N}_B(A)$ such that $B= C^*(A,U)$. \end{defn} Since $U$ is in $\mathcal{N}_B(A)$, $B=C^*(A,U)$ is the norm closure of $\mathrm{span}\{ a u : a\in A, u\in U\}$. Throughout this paper, we only consider crossed product-like inclusions by discrete {\it amenable} groups. \begin{rem}\upshape For any $x\in B$ and $\varepsilon>0$, there exist $\{ a_1,\dots,a_N \}\subseteq A_1$ and $\{u_1,\dots,u_N\}\subseteq U$ (with repetitions among the $u_i$ allowed) such that $\| x- \sum_{i=1}^N a_i u_i \| <\varepsilon$. Indeed, first choose $a_i\in A$ and $u_i\in U$ with this property, and let $K$ be a positive integer with $K \ge \max\{ \| a_1\|,\dots,\|a_N\| \}$. Define \[ a_{(i-1)K+j}':= \frac{1}{K} a_i, \ \ i=1,2,\dots,N, \ j=1,2,\dots,K. \] Then $a_k'\in A_1$ and \[ \sum_{i=1}^N a_i u_i = \sum_{j=1}^K \sum_{i=1}^N a_{(i-1)K+j}' u_i.
\] \end{rem} \begin{exam}\upshape Let $G$ be a discrete amenable group acting on a $\C^*$-algebra $A$. Then the inclusion $A\subseteq A\rtimes G$ is crossed product-like by $\{\lambda_g\}_{g\in G}$. \end{exam} \begin{exam}\upshape Let $(A,G,\alpha,\sigma)$ be a twisted $\C^*$-dynamical system and let $A\rtimes_{\alpha,r}^{\sigma} G$ be the reduced twisted crossed product. Then the inclusion $A\subseteq A\rtimes_{\alpha,r}^{\sigma} G$ is crossed product-like by $\{\lambda_{\sigma}(g)\}_{g\in G}$. \end{exam} \begin{exam}\upshape Let $A\subseteq B$ be a crossed product-like inclusion of $\C^*$-algebras by $U$. Then for a unital $\C^*$-algebra $C$, $A\otimes C\subseteq B\otimes C$ is a crossed product-like inclusion by $U\otimes I$. \end{exam} \begin{rem}\upshape If $\mathbb{C}I \subseteq A$ is a crossed product-like inclusion of $\C^*$-algebras by a discrete amenable group, then $A$ is strongly amenable. Hence, although the Cuntz algebras $\mathcal{O}_n$ are nuclear, the inclusion $\mathbb{C}I\subseteq \mathcal{O}_n$ is not crossed product-like by a discrete amenable group. \end{rem} In the next lemma, to get a point-norm version of \cite[Lemma 3.3]{Chris2}, we modify the argument of \cite[Lemma 3.2]{CSSWW} for crossed product-like inclusions by amenable groups. \begin{lemma}\label{1.4} Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras and let $A,B$ be intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group $U$ and $d(A,B)<\gamma<1/4$. Then for any finite subset $X\subseteq A_1$ and $\varepsilon>0$, there exists a unital $C$-fixed $(X,\varepsilon)$-approximate $*$-homomorphism $\phi\colon A\to B$ such that \[ \| \phi -\id_A \| \le \left(8 \sqrt{2} + 2 \right)\gamma . \] \end{lemma} \begin{proof} Let a finite set $X\subseteq A_1$ and $0<\varepsilon<1$ be given. Let $D$ act on a Hilbert space $H$. By Stinespring's theorem, since $E\colon D\to B$ is a unital cpc map, we can find a Hilbert space $K \supseteq H$ and a unital $*$-homomorphism $\pi\colon D\to \mathbb{B}(K)$ such that \[ E(d)= P_H \pi(d) |_H, \ \ d\in D. \] Furthermore, $P_H\in \pi(B)'$, since $E$ is a $B$-fixed map. By Lemma \ref{Arveson}, there exists $\eta>0$ such that for any self-adjoint operator $t\in \mathbb{B}(K)$ with $\mathrm{s p}(t)\subseteq [0,2\gamma]\cup[1-2\gamma,1]$ and $x\in\mathbb{B}(K)$ with $\|x\|\le 2$, the inequality $\| x t-t x\| <\eta$ implies $\| x p - p x\| < \varepsilon^2/18$, where $p$ is the spectral projection of $t$ for $[1-2\gamma,1]$. There exist $\{ u_1,\dots,u_N\}\subseteq U$ and $\{c_i^{(x)} : 1\le i \le N, x\in X\}\subseteq C_1$ such that \[ \left\| x - \sum_{i=1}^{N} c_i^{(x)} u_i \right\| <\frac{\varepsilon}{3}, \ \ x\in X. \] Let $\tilde{x}:=\sum_{i=1}^N c_i^{(x)} u_i$ for $x\in X$. Then $\|\tilde{x} \| \le \|x\|+\varepsilon/3<2$. Since $U$ is amenable, we may choose a finite subset $F\subseteq U$ satisfying \[ \frac{| u_i F\bigtriangleup F |}{|F|} <\frac{\eta}{N}, \ \ 1\le i \le N. \] Define \[ t := \frac{1}{|F|}\sum_{v\in F} \pi(v) P_H \pi(v^*) \in \mathbb{B}(K). \] Since $U\subseteq \mathcal{N}_A(C)$ and $P_H\in \pi(C)'$, we have $t \in \pi(C)'$.
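For the reader's convenience, let us spell out this computation; it only uses $v^* C v = C$ for $v\in U$ and $P_H\in\pi(C)'$. For $c\in C$ and $v\in F$, \[ \pi(c)\, \pi(v) P_H \pi(v^*) = \pi(v) \pi(v^* c v) P_H \pi(v^*) = \pi(v) P_H \pi(v^* c v) \pi(v^*) = \pi(v) P_H \pi(v^*)\, \pi(c), \] and averaging over $v\in F$ gives $\pi(c)\, t = t\, \pi(c)$.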
For any $x\in X$, \begin{align*} \pi(\tilde{x}) t &=\sum_{i=1}^{N} \pi( c_i^{(x)} u_i ) \frac{1}{|F|} \sum_{v\in F} \pi(v) P_H \pi(v^*) \\ &=\frac{1}{|F|} \sum_{i=1}^{N} \sum_{v\in F} \pi( c_i^{(x)}u_i v) P_H \pi (v^* ) \\ &=\frac{1}{|F|} \sum_{i=1}^{N} \sum_{\tilde{v}\in u_i F} \pi( c_i^{(x)}\tilde{v}) P_H \pi (\tilde{v}^*u_i ) \end{align*} and \begin{align*} t \pi(\tilde{x}) &= \frac{1}{|F|} \sum_{v\in F} \pi(v) P_H \pi(v^*) \sum_{i=1}^{N} \pi( c_i^{(x)} u_i ) \\ &= \frac{1}{|F|} \sum_{i=1}^{N} \sum_{v\in F} \pi( c_i^{(x)} v) P_H \pi ( v^* u_i ) . \end{align*} Therefore, \begin{equation}\label{eta} \left\| \pi(\tilde{x}) t - t \pi(\tilde{x}) \right\| \le \sum_{i=1}^{N} \frac{| u_i F\bigtriangleup F|}{|F|} <\eta, \ \ x \in X. \end{equation} For $v\in F$, there exists $v'\in B_1$ such that $\| v - v' \| <\gamma$. Since $P_H\in \pi(B)'$, we have \[ \| \pi(v) P_H - P_H \pi(v) \| \le \| \pi(v) P_H - \pi(v') P_H\| + \| P_H \pi(v') - P_H \pi (v) \| \le 2 \gamma. \] Hence, \begin{align*} \| t -P_H\| &=\left\| \frac{1}{|F|}\sum_{v\in F} \pi(v) P_H \pi(v^*) - \frac{1}{|F|}\sum_{v\in F} P_H \pi(v) \pi(v^*) \right\| \\ &\le \frac{1}{|F|} \sum_{v\in F} \| \pi(v) P_H - P_H \pi(v) \| \| \pi(v^*)\| \le 2 \gamma, \end{align*} and since $0\le t \le I_K$, it follows that $\mathrm{s p}(t) \subseteq [0,2\gamma]\cup [1-2\gamma,1]$. Let $q=\chi_{[1-2\gamma,1]}(t) \in C^* ( t, I_K)$. By (\ref{eta}), \begin{equation}\label{ab} \| \pi(\tilde{x}) q- q\pi(\tilde{x})\| < \frac{\varepsilon^2}{18}, \ \ x\in X. \end{equation} Since $\| q- P_H\| \le 2\| t- P_H\| <1$, there exists a unitary $w\in C^*( t, P_H , I_K)$ such that $w P_H w^* =q$ and $\| w-I_K\| \le \sqrt{2} \| q- P_H\|$. Define $\phi\colon A\to \mathbb{B}(K)$ by \[ \phi(a)= P_H w^* \pi(a) w |_H, \ \ a\in A. \] Since $w\in C^*(t, P_H, I_K) \subseteq C^*( \pi(A) ,P_H)$ and $P_H \pi(A)|_H = E(A) \subseteq B$, the range of $\phi$ is contained in $B$. Furthermore, $\phi|_C=\id_C$ because $w\in C^*(t,P_H,I_K) \subseteq \pi(C)'$. For $x \in X\cup X^*$, using $P_H w^* =P_H w^* q$ and (\ref{ab}), \begin{align} \begin{split} \label{cc} \| \phi(\tilde{x} \tilde{x}^*) -\phi(\tilde{x})\phi(\tilde{x}^*)\| &=\| P_H w^* \pi(\tilde{x}\tilde{x}^*) w P_H - P_H w^* \pi(\tilde{x}) w P_H w^* \pi(\tilde{x}^*)w P_H \| \\ &=\| P_H w^* q \pi(\tilde{x} \tilde{x}^*) w P_H - P_H w^* \pi(\tilde{x}) q \pi(\tilde{x}^*) w P_H \| \\ &\le \| q \pi(\tilde{x}) -\pi (\tilde{x}) q \| \| \pi (\tilde{x}^*) \| < \frac{\varepsilon^2}{9} . \end{split} \end{align} Therefore, by (\ref{cc}) and Proposition \ref{1/2} (applied to adjoints, using that $\phi$ is self-adjoint), \begin{align*} &\| \phi(x x^*) -\phi(x)\phi(x^*)\| \\ &\le \| \phi(x x^*) - \phi(x \tilde{x}^*) \| + \| \phi(x \tilde{x}^*) -\phi(x) \phi(\tilde{x}^*)\| + \| \phi(x)\phi(\tilde{x}^*)-\phi(x)\phi(x^*)\| \\ &\le \| x x^* - x \tilde{x}^*\| + \| \phi( \tilde{x}\tilde{x}^*)-\phi(\tilde{x})\phi(\tilde{x}^*)\|^{1/2}\| x\| + \|\phi(x)\| \| \phi(\tilde{x}^*)-\phi(x^*)\| \\ &\le \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon. \end{align*} For $a\in A_1$, we have \begin{align*} \| \phi(a) - a \| &\le \| \phi(a) - E(a) \| + \| E(a) -a \| \\ &\le \| P_H w^* \pi(a) w P_H - P_H \pi(a) P_H\| + 2 d(A,B) \\ &\le 2 \|w-I_K\| +2d(A,B) \le (8\sqrt{2}+2)\gamma, \end{align*} and the lemma follows. \end{proof} The next lemma is a version of \cite[Lemma 3.4]{CSSWW} for crossed product-like inclusions by amenable groups. \begin{lemma}\label{1.5} Let $A,B$ and $C$ be $\mathrm{C}^*$-algebras with a common unit.
Suppose that $C\subseteq A\cap B$ and $C\subseteq A$ is crossed product-like by a discrete amenable group $U$. Then for any finite set $X\subseteq A_1$ and $\varepsilon>0$, there exist a finite set $Y\subseteq A_1$ and $\delta>0$ with the following property$\colon$ Given $\gamma<1/10$ and two unital $C$-fixed $(Y,\delta)$-approximate $*$-homomorphisms $\phi_1,\phi_2\colon A\to B$ with $\phi_1\approx_{Y,\gamma} \phi_2$, there exists a unitary $u\in C'\cap B$ such that \[ \phi_1 \approx_{X,\varepsilon} \mathrm{Ad}(u) \circ \phi_2 \ \ \text{and} \ \ \| u-I\| \le \sqrt{2}(\gamma +\delta). \] \end{lemma} \begin{proof} Let a finite set $X\subseteq A_1$ and $0<\varepsilon<1$ be given. There exist $\{ u_1,\dots,u_N \} \subseteq U$ and $\{ c_i^{(x)} : 1\le i \le N, x\in X \} \subseteq C_1$ such that \[ \left\| x- \sum_{i=1}^N c_i^{(x)} u_i \right\| <\frac{\varepsilon}{3} , \ \ x\in X. \] Let $\tilde{x}:=\sum_{i=1}^{N} c_i^{(x)} u_i$ for $x\in X$. Then $\| \tilde{x} \| \le 1+ \varepsilon/3< 2$. By Lemma \ref{Arveson}, there exists $\eta>0$ such that for any $s\in B_1$ and $a\in B$ with $\| a\|\le 2$, the inequality $\| s^* s\, a- a\, s^* s\| <\eta$ implies $\| |s| a- a |s| \| < \varepsilon/12$. Let \[ 0<\delta < \min \left\{ \left(\frac{\varepsilon}{60}\right)^2, \frac{\eta^2}{100} \right\}. \] There exists a finite set $Y \subseteq U$ such that \[ \frac{ | u Y \bigtriangleup Y | }{ | Y | } <\frac{\delta}{N}, \ \ u\in \left\{ u_i, u_i^* : 1\le i\le N \right\}. \] Let $\gamma<1/10$ and let $\phi_1, \phi_2\colon A\to B$ be unital $C$-fixed $(Y,\delta)$-approximate $*$-homomorphisms with $\phi_1 \approx_{Y,\gamma} \phi_2$. Define \[ s:= \frac{1}{ | Y | } \sum_{v\in Y} \phi_1(v) \phi_2(v^*). \] Since $\phi_1$ and $\phi_2$ are $C$-fixed maps and $v C v^* = C$ for $v\in U$, we have $s\in C'\cap B$; indeed, for $c\in C$, the $C$-bimodule property gives \[ c\, \phi_1(v) \phi_2(v^*) = \phi_1( v (v^* c v)) \phi_2(v^*) = \phi_1(v) \phi_2( (v^* c v) v^*) = \phi_1(v) \phi_2(v^*)\, c. \] By Proposition \ref{1/2}, for $x\in X$ and $v\in Y$, \begin{align} &\| \phi_1 (\tilde{x} v)-\phi_1 (\tilde{x}) \phi_1 (v) \| \le \| \phi_1 (v^* v ) - \phi_1 (v^*) \phi_1 (v) \|^{1/2} \|\tilde{x} \| \le 2 \sqrt{\delta}, \label{ad} \\ &\| \phi_2( v^* \tilde{x} ) -\phi_2(v^*) \phi_2( \tilde{x})\| \le \| \phi_2( v^* v)-\phi_2(v^*)\phi_2(v)\|^{1/2}\| \tilde{x} \| \le 2 \sqrt{\delta}. \label{ae} \end{align} Furthermore, \begin{align} \begin{split}\label{aa} \frac{1}{|Y|} \sum_{v\in Y} \phi_1(\tilde{x} v) \phi_2(v^*) &= \frac{1}{|Y|} \sum_{v\in Y} \sum_{i=1}^{N} \phi_1 \left( c_i^{(x)} u_i v \right) \phi_2(v^*) \\ &= \frac{1}{|Y|} \sum_{i=1}^{N} \sum_{v\in u_i Y} \phi_1 \left( c_i^{(x)} v \right) \phi_2 \left( v^* u_i \right) \end{split} \end{align} and \begin{align} \begin{split}\label{bb} \frac{1}{|Y|} \sum_{v\in Y} \phi_1( v) \phi_2(v^* \tilde{x} ) &=\frac{1}{|Y|} \sum_{v\in Y} \sum_{i=1}^{N} \phi_1(v) \phi_2 \left( v^* c_i^{(x)} u_i \right) \\ &=\frac{1}{|Y|} \sum_{i=1}^{N} \sum_{v\in Y} \phi_1 \left( c_i^{(x)} v \right) \phi_2 \left( v^* u_i \right). \end{split} \end{align} By (\ref{aa}), (\ref{bb}) and the choice of $Y$, for $x\in X$, \begin{align} \begin{split}\label{ac} \left\| \frac{1}{|Y|} \sum_{v\in Y} \phi_1(\tilde{x} v) \phi_2(v^*) - \frac{1}{|Y|} \sum_{v\in Y} \phi_1( v) \phi_2(v^* \tilde{x} ) \right\| < \sum_{i=1}^N \frac{ | u_i Y \bigtriangleup Y | }{ | Y | } <\delta. \end{split} \end{align} Similarly, we have \begin{equation}\label{az} \left\| \frac{1}{|Y|} \sum_{v\in Y} \phi_1(\tilde{x}^* v) \phi_2(v^*) - \frac{1}{|Y|} \sum_{v\in Y} \phi_1( v) \phi_2(v^* \tilde{x}^* ) \right\| <\delta, \ \ x\in X.
\end{equation} By (\ref{ad}), (\ref{ae}), (\ref{ac}) and (\ref{az}), \begin{equation}\label{af} \| \phi_1( \tilde{x} ) s- s \phi_2(\tilde{x}) \| \le \delta + 4 \sqrt{\delta} < 5 \sqrt{\delta}, \ \ x \in X \cup X^*. \end{equation} By taking adjoints, \[ \| s^* \phi_1(\tilde{x}) - \phi_2(\tilde{x}) s^* \| \le 5 \sqrt{\delta} , \ \ x \in X\cup X^*. \] Thus, for $x\in X\cup X^*$, \begin{align*} \| \phi_2(\tilde{x}) s^* s - s^* s \phi_2(\tilde{x}) \| &\le \| \phi_2(\tilde{x}) s^*s - s^* \phi_1( \tilde{x}) s \| + \| s^* \phi_1(\tilde{x} ) s - s^* s \phi_2(\tilde{x}) \| \\ &\le \| \phi_2(\tilde{x}) s^* - s^* \phi_1(\tilde{x}) \| \| s\| + \| s^* \| \| \phi_1(\tilde{x}) s -s \phi_2(\tilde{x}) \| \\ &\le 10 \sqrt{\delta}<\eta. \end{align*} Hence, by the choice of $\eta$, \begin{equation}\label{ah} \| \phi_2(\tilde{x}) |s| - |s| \phi_2(\tilde{x}) \| < \frac{\varepsilon}{12}, \ \ x\in X\cup X^*. \end{equation} Since $\phi_1$ is a $(Y,\delta)$-approximate $*$-homomorphism and $\phi_1\approx_{Y,\gamma} \phi_2$, we have \begin{align} \begin{split}\label{aj} \| s -I \| &= \left\| \frac{1}{ |Y| } \sum_{v\in Y} \phi_1(v) \phi_2(v^*) - \frac{1}{ |Y| } \sum_{v\in Y} \phi_1(v v^*) \right\| \\ &\le \frac{1}{ |Y| } \sum_{v\in Y} \left\| \phi_1(v) \phi_2(v^*) - \phi_1(v) \phi_1(v^*) \right\| + \frac{1}{ |Y| } \sum_{v\in Y} \left\| \phi_1(v) \phi_1(v^*) - \phi_1(v v^*) \right\| \\ &\le \gamma + \delta<1. \end{split} \end{align} Since this inequality gives the invertibility of $s$, the unitary $u$ in the polar decomposition $s=u|s|$ lies in $C^*(s, I)\subseteq C'\cap B$ and, by Lemma \ref{polar}, satisfies $\| u- I \| \le \sqrt{2}(\gamma+\delta)$. Then, by (\ref{aj}), \begin{align*} \| |s | -I \| \le \| u^*s- I \| \le \| s- I\| + \| I -u\| \le (1+ \sqrt{2})(\gamma+ \delta)< \frac{1}{2}. \end{align*} Hence, $\| |s|^{-1} \| \le 2$, so \begin{align}\label{ak} \begin{split} \| \phi_1(\tilde{x}) - u \phi_2(\tilde{x})u^* \| &= \| \phi_1(\tilde{x})u - u \phi_2(\tilde{x}) \| \\ &\le \| \phi_1(\tilde{x}) u|s| - u \phi_2(\tilde{x}) |s| \| \, \| |s|^{-1}\| \\ &\le 2 \| \phi_1(\tilde{x}) u|s| - u \phi_2(\tilde{x}) |s| \| \\ &\le 2 \| \phi_1(\tilde{x}) s- s \phi_2(\tilde{x})\| +2 \| s \phi_2(\tilde{x}) - u \phi_2(\tilde{x}) |s| \| \\ &\le 10 \sqrt{\delta}+ 2\| |s| \phi_2(\tilde{x}) - \phi_2(\tilde{x}) |s| \| \\ &\le 10 \sqrt{\delta} + \frac{\varepsilon}{6} < \frac{\varepsilon}{3}, \end{split} \end{align} for $x\in X$, by (\ref{af}), (\ref{ah}) and (\ref{aj}). For $x\in X$, by (\ref{ak}), \begin{align*} \| \phi_1(x) - u \phi_2(x) u^* \| &\le \| \phi_1(x) - \phi_1(\tilde{x}) \| + \| \phi_1(\tilde{x}) - u \phi_2(\tilde{x}) u^* \| + \| u \phi_2(\tilde{x}) u^* -u \phi_2(x) u^* \| \\ &< \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon. \end{align*} Therefore, $\phi_1 \approx_{X, \varepsilon} \mathrm{Ad}(u) \circ \phi_2$. \end{proof} \begin{rem}\upshape If a pair $(Y,\delta)$ satisfies the conclusion of Lemma \ref{1.5}, then so does the pair $(Y',\delta')$ for any finite set $Y'$ with $Y\subseteq Y'\subseteq A_1$ and any constant $0<\delta'<\delta$. \end{rem} By Lemmas \ref{1.4} and \ref{1.5}, we can now show Theorem A. \begin{thm}\label{irreducible} Let $C\subseteq D$ be a unital irreducible inclusion of $\mathrm{C}^*$-algebras acting on a separable Hilbert space $H$. Let $A$ and $B$ be separable intermediate $\C^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group. If $\dab<140^{-1}$, then $A= B$.
\end{thm} \begin{proof} Let $a\in A_1,\varepsilon>0$ and $\dab<\gamma<140^{-1}$ be given. By Lemma \ref{1.5}, there exist a finite subset $Y\subseteq A_1$ and $\delta>0$ with the following property: Given $\gamma'<1/10$ and two unital $C$-fixed $(Y,\delta)$-approximate $*$-homomorphisms $\phi_1,\phi_2\colon A\to D$ with $\phi_1\approx_{Y,\gamma'}\phi_2$, there exists a unitary $u\in C'\cap D$ such that \[ \| \phi_1(a) - (\Ad(u) \circ \phi_2)(a) \| \le \varepsilon. \] By Lemma \ref{1.4}, there exists a unital $C$-fixed $(Y,\delta)$-approximate $*$-homomorphism $\phi\colon A\to B$ such that $\| \phi - \id_A\| \le \left(8 \sqrt{2} + 2 \right)\gamma$. Since $(8\sqrt{2}+2)\gamma<(8\sqrt{2}+2)/140<1/10$, applying this property to $\phi_1=\phi$ and $\phi_2=\id_A$ gives a unitary $u\in C'\cap D$ such that \[ \| \phi(a) - u a u^* \| \le \varepsilon . \] Since $u\in C'\cap D=\mathbb{C}I$, we have $\| \phi(a) - a\| \le \varepsilon$. Therefore, since $\phi(a)\in B$ and $\varepsilon$ is arbitrary, $a\in B$, that is, $A\subseteq B$. Since $d(A,B)<140^{-1}$ also gives $B\subset_1 A$, Proposition \ref{surjective} shows $A=B$, and the theorem follows. \end{proof} In the following lemma, we modify \cite[Lemma 3.6]{CSSWW}, which is a Kaplansky density style result for approximate commutants. \begin{lemma}\label{1.6} Let $C\subseteq A$ be a unital inclusion of non-degenerate $\mathrm{C}^*$-algebras in $\mathbb{B}(H)$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group $U$. Then for any finite set $X\subseteq A_1$ and $\varepsilon, \mu >0$, there exist a finite set $Y\subseteq A_1$ and $\delta>0$ with the following property$\colon$ Given a finite set $S\subseteq H_1$ and a self-adjoint operator $m\in \overline{C'\cap A_1}^{\w}$ with \begin{equation} \| m y - y m\| \le \delta, \ \ y\in Y, \end{equation} there exists a self-adjoint operator $a\in C'\cap A_1$ such that $\| a\| \le \| m\|$, \begin{align} \| ax- x a\| <\varepsilon, \ \ x\in X, \end{align} and \begin{align} \| (a-m)\xi\| <\mu \ \ \text{and} \ \ \| (a-m)^* \xi\| <\mu , \ \ \xi \in S. \end{align} \end{lemma} \begin{proof} Let a finite set $X\subseteq A_1$ and $\varepsilon,\mu>0$ be given. There exist $\{u_1,\dots,u_{N}\}$\hspace{0pt}$\subseteq U$ and $\{ c_i^{(x)}: 1\le i\le N, x\in X\}\subseteq C_1$ such that \[ \left\| x- \sum_{i=1}^{N} c_i^{(x)} u_i \right\| <\frac{\varepsilon}{3}, \ \ x\in X. \] Let $\tilde{x}:= \sum_{i=1}^{N} c_i^{(x)} u_i$ for $x\in X$. Since $U$ is amenable, there exists a finite set $F\subseteq U$ such that \begin{align}\label{1.6.1} \frac{ | u_i F\bigtriangleup F | }{|F|} <\frac{\varepsilon}{3N}, \ \ 1\le i\le N. \end{align} Define $Y:=F\cup F^*$ and $\delta:=\mu/2$. Let $S$ be a finite set in $H_1$ and let $m \in \overline{C'\cap A_1}^{\w}$ be a self-adjoint operator with \begin{align}\label{1.6.2} \| m y - y m \| < \delta, \ \ y\in Y. \end{align} By Kaplansky's density theorem, there exists a self-adjoint operator $a_0\in C'\cap A_1$ such that $\| a_0\| \le \| m\|$, \begin{align}\label{1.6.3} \| (a_0-m) v^*\xi\| <\frac{\mu}{2} \ \ \text{and} \ \ \| (a_0-m)^* v \xi\| <\frac{\mu}{2} , \ \ v\in Y, \ \xi\in S. \end{align} Define \[ a:= \frac{1}{|F|}\sum_{v\in F} v a_0 v^*. \] Then, $\| a \| \le \| a_0 \| \le \|m \|$.
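Let us also note why $a$ belongs to $C'\cap A_1$, as required: each $v a_0 v^*$ lies in $A_1$, and for $c\in C$ and $v\in F$, using $v^* c v\in C$ and $a_0\in C'$, \[ c\, (v a_0 v^*) = v (v^* c v) a_0 v^* = v a_0 (v^* c v) v^* = (v a_0 v^*)\, c , \] so the average $a$ commutes with $C$. The same commutation is also used in the estimate (\ref{1.6.4}) below.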
For any $x\in X$, \begin{align}\label{1.6.4} \begin{split} \| \tilde{x}a- a\tilde{x}\| &= \left\| \frac{1}{|F|} \sum_{i=1}^{N}\sum_{v\in F} c_i^{(x)} u_i v a_0v^* - \frac{1}{|F|} \sum_{i=1}^{N} \sum_{v\in F} c_i^{(x)} v a_0 v^* u_i \right\| \\ &= \left\| \frac{1}{|F|} \sum_{i=1}^{N} c_i^{(x)} \left( \sum_{\tilde{v}\in u_i F} \tilde{v} a_0 \tilde{v}^* u_i - \sum_{v\in F} v a_0 v^* u_i \right) \right\| \\ &\le \sum_{i=1}^{N} \frac{| u_i F \bigtriangleup F|}{|F|} <\frac{\varepsilon}{3}, \end{split} \end{align} by (\ref{1.6.1}). For $x\in X$, since $\| x- \tilde{x}\| <\varepsilon/3$, \begin{align*} \| x a - a x\| &\le \| x a- \tilde{x} a\| + \| \tilde{x}a- a \tilde{x}\| + \| a\tilde{x}- a x\| \\ &\le \frac{\varepsilon}{3} + \frac{\varepsilon}{3}+ \frac{\varepsilon}{3}=\varepsilon, \end{align*} by (\ref{1.6.4}). For $\xi \in S$, by (\ref{1.6.2}) and (\ref{1.6.3}), \begin{align*} &\| (a-m) \xi\| \\ &\le \left\| \left( \frac{1}{|F|}\sum_{v\in F} v a_0 v^* - \frac{1}{|F|}\sum_{v\in F} v m v^* \right) \xi \right\| + \left\| \left( \frac{1}{|F|}\sum_{v\in F} v m v^* - \frac{1}{|F|}\sum_{v\in F} v v^* m \right) \xi \right\| \\ &\le \max_{v\in F} \|(a_0-m) v^*\xi \| + \max_{v\in F}\| m v^* - v^* m \| \\ &\le \frac{\mu}{2}+ \delta = \mu. \end{align*} Similarly, for $\xi\in S$, \begin{align*} \| (a-m)^*\xi\| &\le \max_{v\in F} \| (a_0 -m )^* v \xi\| + \max_{v\in F} \| m v - v m\| \le \frac{\mu}{2}+ \delta =\mu, \end{align*} and the lemma follows. \end{proof} By Lemmas \ref{1.7} and \ref{1.6}, we obtain the following version of Lemma \ref{1.6} for unitary operators. We need the next lemma in Section \ref{unitary}. \begin{lemma}\label{1.8} Let $C\subseteq A$ be a unital inclusion of non-degenerate $\mathrm{C}^*$-algebras in $\mathbb{B}(H)$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group $U$. Then for any finite set $X\subseteq A_1$, $\varepsilon_0, \mu_0 >0$ and $0<\alpha<2$, there exist a finite set $Y\subseteq A_1$ and $\delta_0>0$ with the following property$\colon$ Given a finite set $S\subseteq H_1$ and a unitary $u \in \overline{C'\cap A}^{\w}$ with $\| u - I_H\| \le \alpha$ and \begin{equation} \| u y - y u \| \le \delta_0 , \ \ y\in Y, \end{equation} there exists a unitary $v\in C'\cap A$ such that $\| v - I_H \| \le \| u - I_H \|$, \begin{align} \| v x- x v \| <\varepsilon_0, \ \ x\in X, \end{align} and \begin{align} \| (v-u)\xi\| <\mu_0 \ \ \text{and} \ \ \| (v-u)^* \xi\| <\mu_0 , \ \ \xi \in S. \end{align} \end{lemma} \begin{proof} Let a finite set $X\subseteq A_1$, $\varepsilon_0, \mu_0 >0$ and $0<\alpha<2$ be given. There exists $0<c<1$ such that $| 1 - e^{i \pi \theta}| \le \alpha$ if and only if $\theta \in [ -c , c]$ modulo $2$. By Lemma \ref{Arveson}, there exists $\varepsilon>0$ such that given a self-adjoint operator $k \in \mathbb{B}(H)_1$ and $a \in \mathbb{B}(H)_1$, if $\| a k - k a \| <\varepsilon$, then $\| a e^{i\pi k} - e^{i\pi k} a\| <\varepsilon_0$. By Lemma \ref{1.7}, there exists $\mu>0$ with the following property: Given a finite set $S\subseteq H_1$ and a self-adjoint operator $h\in \mathbb{B}(H)_1$, there exists a finite set $S'\subseteq H_1$ such that for any self-adjoint operator $k\in \mathbb{B}(H)_1$, if \begin{align*} \| (h-k)\xi'\| <\mu, \ \ \xi'\in S', \end{align*} then \begin{align*} \| (e^{i\pi h} - e^{i\pi k}) \xi\| <\mu_0 \ \ \text{and} \ \ \| (e^{i\pi h} - e^{i\pi k})^* \xi\| <\mu_0, \ \ \xi \in S.
\end{align*} By Lemma \ref{1.6}, there exist a finite set $Y\subseteq A_1$ and $\delta>0$ with the following property: For any finite set $S\subseteq H_1$ and self-adjoint operator $m\in \overline{C'\cap A_1}^{\w}$ with \begin{align*} \| m y - y m \| < \delta, \ \ y\in Y, \end{align*} there exists a self-adjoint operator $a \in C'\cap A_1$ such that $\| a \| \le \| m \|$, \begin{align*} \| a x &- x a\| <\varepsilon, \ \ x\in X, \\ \| (a - m) \xi \| <\mu& \ \ \text{and} \ \ \| (a-m)^* \xi\| <\mu , \ \ \xi \in S. \end{align*} By Lemma \ref{Arveson}, there exists $\delta_0>0$ such that for any $y \in \mathbb{B}(H)_1$ and unitary $u \in \mathbb{B}(H)$ with $\| u - I_H\| \le \alpha$, if $\| u y - y u\| \le \delta_0$, then \begin{align*} \left\| \frac{\log u}{\pi} y - y \frac{\log u}{\pi} \right\| \le \delta. \end{align*} Now let a finite set $S\subseteq H_1$ and a unitary $ u\in \overline{C'\cap A}^{\w}$ with $\| u - I_H\| \le \alpha$ and \begin{align*} \| u y - y u \| \le \delta_0, \ \ y\in Y, \end{align*} be given. Let \[ h:= -i \frac{\log u}{\pi} \in \overline{C'\cap A}^{\w}. \] By the definition of $\delta_0$, \begin{align*} \| h y - y h \| \le \delta, \ \ y\in Y. \end{align*} By the definition of $\mu$, there exists a finite set $S'\subseteq H_1$ such that for any self-adjoint operator $k \in \mathbb{B}(H)_1$, if \begin{align*} \| (h - k) \xi' \| <\mu, \ \ \xi'\in S', \end{align*} then \begin{align*} \| (e^{i\pi h}- e^{i\pi k})\xi \| <\mu_0 \ \ \text{and} \ \ \| (e^{i\pi h} - e^{i\pi k})^* \xi\| <\mu_0, \ \ \xi\in S. \end{align*} By the definitions of $Y$ and $\delta$, there exists a self-adjoint operator $k\in C'\cap A_1$ such that $\|k\| \le \|h\|$, \begin{align*} \| k x &- x k \| <\varepsilon, \ \ x\in X, \\ \| (h - k) \xi' \| <\mu& \ \ \text{and} \ \ \| (h - k)^* \xi' \| <\mu , \ \ \xi' \in S'. \end{align*} Define $v:= e^{i\pi k}$. Then, since $\|k\|\le\|h\|$, we have $\| v- I_H\| \le \| e^{i\pi h} - I_H \| = \| u- I_H\| $. By the definition of $\varepsilon$ and the choice of $S'$, we have \[ \| v x - x v \| <\varepsilon_0 , \ \ x\in X, \] and \[ \| (v - u) \xi \| <\mu_0 \ \ \text{and} \ \ \| (v - u )^* \xi \| <\mu_0 , \ \ \xi \in S. \] Hence the lemma is proved. \end{proof} \section{Isomorphisms}\label{isomorphism} In this section, we show Theorem B. Given a unital inclusion $C\subseteq D$ of C$^*$-algebras and intermediate C$^*$-subalgebras $A,B$ for this inclusion with a conditional expectation from $D$ onto $B$, if $C\subseteq A$ is crossed product-like by a discrete amenable group (for instance, if $A=C\rtimes G$ for a discrete amenable group $G$), and if $A$ and $B$ are sufficiently close, then $A$ must be $*$-isomorphic to $B$. To do this, we modify \cite[Lemma 4.1]{CSSWW} in the next lemma. The approximation approach of \cite[Lemma 4.1]{CSSWW} is inspired by the intertwining arguments of \cite[Theorem 6.1]{Chris5}. \begin{lemma}\label{3.1} Let $C\subseteq D$ be a unital inclusion of $\C^*$-algebras and let $A,B$ be separable intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E \colon D\to B$. Let $\{a_n\}_{n=0}^{\infty}$ be a dense subset of $A_1$ with $a_0=0$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $d(A,B)<\gamma<10^{-3}$. Put $\eta:=(8\sqrt{2}+2)\gamma$.
Then for any finite set $X \subseteq A_1$, there exist finite subsets $\{X_n\}_{n=0}^{\infty}, \{ Y_n\}_{n=0}^{\infty}\subseteq A_1$, positive constants $\{ \delta_n \}_{n=0}^{\infty}$, $C$-fixed cpc maps $\{ \theta_n\colon A\to B\}_{n=0}^{\infty}$ and unitaries $\{u_n\}_{n=1}^{\infty}\subseteq C'\cap B$ with the following conditions$\colon$ \begin{enumerate} \item[\upshape{(a)}] For $n\ge 0$, $\delta_n < \min\{ 2^{-n}, \gamma \}$, $a_n\in X_n\subseteq X_{n+1}$ and $X \subseteq X_1;$ \item[\upshape{(b)}] For $n\ge0$ and two unital $C$-fixed $(Y_n,\delta_n)$-approximation $*$-\hspace{0pt}homomorphisms $\phi_1,\phi_2 \colon A\to B$ with $\phi_1\approx_{Y_n,2\eta}\phi_2$, there exists a unitary $u\in C'\cap B$ such that $\mathrm{Ad}(u)\circ \phi_1\approx_{X_n,\gamma/2^n}\phi_2$ and $\| u- I\| \le \sqrt{2}(2\eta+\delta_n);$ \item[\upshape{(c)}] For $n\ge 0$, $X_n\subseteq Y_n;$ \item[\upshape{(d)}] For $n\ge 0$, $\theta_n$ is a $(Y_n,\delta_n)$-approximation $*$-homomorphism with $\| \theta_n-\id_A\| \le \eta;$ \item[\upshape{(e)}] For $n\ge 1$, $\mathrm{Ad}(u_n)\circ \theta_n\approx_{X_{n-1},\gamma/2^{n-1}}\theta_{n-1}$ and $\| u_n- I\| \le \sqrt{2}(2\eta+\delta_{n-1})$. \end{enumerate} \end{lemma} \begin{proof} We prove this lemma by induction. Let a finite subset $X\subseteq A_1$ be given. Let $X_0=\{ 0 \}=\{ a_0\}=Y_0$, $\delta_0=1$ and $\theta_0:=E|_A \colon A\to B$. Suppose that we can construct completely up to the $n$-th stage. We write condition (a) for $n$ as (a)$_n$. Let $X_{n+1}:= X_n\cup X \cup \{a_{n+1} \} \cup Y_n$. By Lemma \ref{1.5}, there exist a finite set $Y_{n+1}\subseteq A_1$ and $0<\delta_{n+1} < \min\{ \delta_n, 2^{-(n+1)}, \gamma \}$ satisfying condition (b)$_{n+1}$ and $X_{n+1}\subseteq Y_{n+1}$. By Lemma \ref{1.4}, there exists a unital $C$-fixed $(Y_{n+1},\delta_{n+1})$-approximation $*$-\hspace{0pt}homomorphism $\theta_{n+1}\colon A\to B$ such that $\| \theta_{n+1}- \id_A\| \le \eta$. Therefore, $X_{n+1}, Y_{n+1}, \delta_{n+1}$ and $\theta_{n+1}$ satisfy (a)$_{n+1}$, (b)$_{n+1}$, (c)$_{n+1}$ and (d)$_{n+1}$. Since $Y_n\subseteq Y_{n+1}$ and $\delta_{n+1}<\delta_n$, $\theta_n$ and $\theta_{n+1}$ are unital $C$-fixed $(Y_n,\delta_n)$-approximation $*$-homomorphisms with $\| \theta_n- \theta_{n+1} \| \le 2\eta$. Thus, by (b)$_n$, there exists a unitary $u_{n+1}\in C'\cap B$ such that $\mathrm{Ad}(u_{n+1})\circ \theta_{n+1}\approx_{X_n, \gamma/2^n}\theta_n$ and $\| u_{n+1}- I\|\le \sqrt{2}(2\eta +\delta_n)$. Then (e)$_{n+1}$ follows. \end{proof} \begin{prop}\label{3.2} Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras and let $A$ and $B$ be separable intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $d(A,B)<\gamma<10^{-3}$. Then for any finite subset $X\subseteq A_1$, there exists a $C$-fixed surjective $*$-isomorphism $\alpha\colon A\to B$ such that \[ \alpha\approx_{X, 15\gamma} \id_A. \] \end{prop} \begin{proof} Let $\{a_n\}_{n=0}^{\infty}$ be a dense subset of $A_1$ with $a_0=0$. Put $\eta:=(8\sqrt{2}+2)\gamma$. By Lemma \ref{3.1}, we can construct $\{X_n\}_{n=0}^{\infty}, \{Y_n\}_{n=0}^{\infty}\subseteq A_1$, $\{\delta_n \}_{n=0}^{\infty}$, $\{ \theta_n\colon A\to B\}_{n=0}^{\infty}$ and $\{u_n\}_{n=1}^{\infty}\subseteq C'\cap B$ which satisfy conditions (a)--(e) of that lemma. For any $n\ge 1$, define \[ \alpha_n:= \mathrm{Ad}(u_1\cdots u_n)\circ \theta_n. \] Fix $k\in \mathbb{N}$ and $x\in X_k$.
For any $n\ge k$, \begin{align}\label{3.2.1} \begin{split} \| \alpha_{n+1}(x)-\alpha_n(x) \| &=\| \left( \mathrm{Ad}(u_1\cdots u_n)\circ \mathrm{Ad} (u_{n+1}) \circ \theta_{n+1} - \mathrm{Ad}(u_1\cdots u_n) \circ \theta_n \right)(x) \| \\ &= \| \left( \mathrm{Ad}(u_{n+1}) \circ \theta_{n+1} - \theta_n \right) (x) \| \le \frac{\gamma}{2^n}. \end{split} \end{align} For any $\varepsilon>0$, there exists $N\ge k$ such that $\gamma/2^{N-1} < \varepsilon$. For any two natural numbers $m \ge n\ge N$, by (\ref{3.2.1}), \[ \| \alpha_m(x) - \alpha_n(x) \| \le \sum_{j=n}^{m-1} \| \alpha_{j+1}(x) -\alpha_j(x) \| \le \sum_{j=n}^{m-1} \frac{\gamma}{2^j} <\frac{\gamma}{2^{N-1}} <\varepsilon. \] Thus, $\{ \alpha_n(x) \}$ is a Cauchy sequence. Since $\bigcup_{n=0}^{\infty} X_n$ is dense in $A_1$, the sequence $\{ \alpha_n \}$ converges in the point-norm topology to a $C$-fixed cpc map $\alpha\colon A\to B$. Moreover, $\alpha$ is a $*$-homomorphism, since $\lim_{n\to \infty}\delta_n=0$ and $\bigcup_{n=0}^{\infty} Y_n$ is dense in $A_1$. For any $n\in \mathbb{N}$ and $x\in X_n$, \begin{align}\label{3.2.2} \begin{split} \| \alpha(x) - \alpha_n(x) \| \le \sum_{j=n}^{\infty} \| \alpha_{j+1}(x) - \alpha_j(x) \| \le \sum_{j=n}^{\infty} \frac{\gamma}{2^j} = \frac{\gamma}{2^{n-1}}. \end{split} \end{align} Hence, by (d) in Lemma \ref{3.1}, \begin{align}\label{3.2.3} \begin{split} \| \alpha(x) \| &\ge \left| \| \alpha(x) - \alpha_n(x) \| - \| \alpha_n(x) \| \right| \\ &\ge \| \alpha_n(x)\| - \frac{\gamma}{2^{n-1}} \\ &= \| \theta_n(x) \| - \frac{\gamma}{2^{n-1}} \\ &\ge | \| \theta_n(x) - x \| - \| x \| | - \frac{\gamma}{2^{n-1}} \\ &\ge \| x \| - \eta - \frac{\gamma}{2^{n-1}}. \end{split} \end{align} Let $n\to \infty$ in (\ref{3.2.3}). Then, for any $x$ in the unit sphere of $A$, we have \begin{align} \| \alpha(x) \| \ge 1-\eta >0, \end{align} by the density of $\bigcup_{n=0}^{\infty}X_n$ in $A_1$. Therefore, $\alpha$ is an injective map. For any $b\in B_1$ and $n\in \mathbb{N}$, there exists $x\in A_1$ such that $\| x- u_n^* \cdots u_1^* b u_1 \cdots u_n \| <\gamma$. Then, \begin{align*} \| \alpha_n(x) - b \| &= \| u_1\cdots u_n \theta_n(x) u_n^* \cdots u_1^* - b \| \\ &\le \| \theta_n(x) -x \| + \| x - u_n^* \cdots u_1^* b u_1 \cdots u_n \| \\ &< \eta + \gamma <1. \end{align*} Thus, $d(\alpha(A),B)<1$, that is, $\alpha$ is a surjective map by Proposition \ref{surjective}. For any $x\in X$, by (\ref{3.2.2}) and (d) in Lemma \ref{3.1}, \begin{align*} \| \alpha(x) - x \| \le \| \alpha(x) - \alpha_1(x) \| + \| \theta_1(x) - x \| \le \gamma +\eta <15 \gamma. \end{align*} \end{proof} \begin{thm}\label{3.3} Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras and let $A$ and $B$ be separable intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $d(A,B)<\gamma<10^{-3}$. Then for any finite subset $X\subseteq A_1$ and finite set $Y\subseteq B_1$, there exists a $C$-fixed surjective $*$-isomorphism $\alpha \colon A\to B$ such that \[ \alpha\approx_{X, 15\gamma} \id_A \ \ \text{and} \ \ \alpha^{-1} \approx_{Y, 17\gamma} \id_B. \] \end{thm} \begin{proof} There exists a finite set $\tilde{X}\subseteq A_1$ such that $Y\subseteq_{\gamma} \tilde{X}$. By Proposition \ref{3.2}, there exists a $C$-fixed surjective $*$-isomorphism $\alpha\colon A\to B$ such that \[ \alpha \approx_{X\cup \tilde{X}, 15\gamma} \id_A. \] Fix $y\in Y$.
Since $Y\subseteq_{\gamma}\tilde{X}$, there exists $\tilde{x}\in \tilde{X}$ such that $\| \tilde{x} - y \| <\gamma$. Then, we have \begin{align*} \| \alpha^{-1}(y) - y\| &\le \| \alpha^{-1}(y) - \tilde{x}\| + \| \tilde{x} - y\| \\ &\le \| y- \alpha(\tilde{x}) \| + \gamma \\ &\le \| y - \tilde{x} \| + \| \tilde{x} - \alpha(\tilde{x}) \| +\gamma \\ &\le \gamma + 15\gamma + \gamma = 17 \gamma. \end{align*} Therefore, $\alpha^{-1} \approx_{Y,17\gamma} \id_B$. \end{proof} \section{Crossed product von Neumann algebras by amenable groups}\label{von} In Theorem \ref{2.3.5}, we now show that given a unital inclusion $N\subseteq M$ of von Neumann algebras and intermediate von Neumann subalgebras $A,B$ for this inclusion with a normal conditional expectation from $M$ onto $B$, if $A=N\rtimes G$, where $G$ is a discrete amenable group, and if $A$ and $B$ are sufficiently close, then there exists a unitary $u \in N' \cap M$ such that $u A u^* = B$. This unitary can be chosen to be close to the identity. \begin{defn}\upshape Let $N\subseteq M$ be a unital inclusion of von Neumann algebras in $\mathbb{B}(H)$. Then we say that the inclusion $N\subseteq M$ is {\it crossed product-like} if there exists a discrete group $U$ in $\mathcal{N}_M(N)$ such that $M= (N \cup U)''$. \end{defn} \begin{exam}\upshape Let $G$ be a discrete amenable group acting on a von Neumann algebra $N$. Then the inclusion $N\subseteq N\rtimes G$ is crossed product-like by $\{\lambda_g\}_{g\in G}$. \end{exam} \begin{exam}\upshape Let $A\subseteq B$ be a crossed product-like inclusion of $\C^*$-algebras acting non-degenerately on $H$. Then the inclusion $\bar{A}^{\w}\subseteq \bar{B}^{\w}$ of von Neumann algebras is crossed product-like. \end{exam} \begin{rem}\upshape Let $N\subseteq M$ be a crossed product-like inclusion of von Neumann algebras in $\mathbb{B}(H)$ by a discrete amenable group $U\subseteq \mathcal{N}_M(N)$. Then there is a left-invariant mean $m\colon \ell^{\infty}(U)\to \mathbb{C}$ with a net of finite subsets $\{ F_{\mu} \} \subseteq U$ such that \[ \lim_{\mu} \frac{1}{|F_{\mu}| } \sum_{g\in F_{\mu}} f(g) = m(f) , \ \ f\in \ell^{\infty}(U). \] Given a bounded map $\phi\colon U\to \mathbb{B}(H)$, for $\xi,\eta\in H$ define $\phi_{\xi,\eta}\in \ell^{\infty}(U)$ by \[ \phi_{\xi,\eta}(u)=\langle \phi(u) \xi, \eta \rangle, \ \ u\in U. \] Then there is an operator $T_{\phi}\in \mathbb{B}(H)$ which we will often write in the form \[ T_{\phi}= \int_{u\in U} \phi(u) d m \] such that \[ \langle T_{\phi}\xi,\eta \rangle = m( \phi_{\xi,\eta} )= \int_{u\in U} \langle \phi(u) \xi,\eta \rangle d m , \ \ \xi,\eta\in H. \] By the construction of $m$, we have \[ \langle T_{\phi} \xi, \eta \rangle = \lim_{\mu} \frac{1}{|F_{\mu}|} \sum_{u\in F_{\mu}} \left\langle\phi(u) \xi,\eta \right\rangle, \ \ \xi,\eta\in H, \] that is, $T_{\phi}\in \overline{\mathrm{conv}}^{\mathrm{w}} \{ \phi(u) : u \in U \}$. Furthermore, \begin{align*} \| T_{\phi} \| &= \sup_{\xi,\eta\in H_1} \left| \int_{u\in U} \langle \phi(u) \xi,\eta \rangle d m \right| \\ &\le \int_{u\in U} \sup_{\xi,\eta\in H_1} \left| \langle \phi(u) \xi,\eta \rangle \right| d m =\int_{u\in U} \| \phi(u) \| d m. \end{align*} \end{rem}
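The following elementary example, which is ours and is not needed in the sequel, illustrates the averaging operator $T_{\phi}$. \begin{exam}\upshape Let $U=\mathbb{Z}$ be generated by the bilateral shift $\lambda$ on $H=\ell^2(\mathbb{Z})$, let $P_0$ be the projection onto $\mathbb{C}\delta_0$ and put $\phi(n):=\lambda^n P_0 \lambda^{-n}$, the projection onto $\mathbb{C}\delta_n$. Then $\phi_{\xi,\eta}(n)=\xi(n)\overline{\eta(n)}$ lies in $\ell^1(\mathbb{Z})\subseteq c_0(\mathbb{Z})$ for all $\xi,\eta\in H$, and every left-invariant mean vanishes on $c_0(\mathbb{Z})$, so $T_{\phi}=0$, which indeed commutes with $\lambda$. Concretely, along the F{\o}lner sets $F_N=\{0,\dots,N-1\}$ the averages $\frac{1}{N}\sum_{n=0}^{N-1}\lambda^n P_0\lambda^{-n}$ are positive diagonal operators of norm $1/N$, which converge to $T_{\phi}=0$ even in norm. \end{exam} In the next lemma, we shall find a unital normal $*$-homomorphism between von Neumann algebras. This lemma originates in Christensen's work \cite[Lemma 3.3]{Chris2}, which discusses perturbation theory for injective von Neumann algebras.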
\begin{lemma}\label{2.2} Let $N\subseteq M$ be an inclusion of von Neumann algebras in $\mathbb{B}(H)$ and let $A,B$ be intermediate von Neumann subalgebras for $N\subseteq M$ with a normal conditional expectation $E \colon M\to B$. Suppose that $N\subseteq A$ is crossed product-like by a discrete amenable group $U$ and $d(A,B)<\gamma<1/4$. Then there exists a unital $N$-fixed normal $*$-homomorphism $\Phi \colon A \to B$ such that \[ \| \Phi -\id_A \| \le ( 8 \sqrt{2}+2)\gamma . \] \end{lemma} \begin{proof} Let $A_0:= \mathrm{span} \{ x u : x\in N, u\in U\}$. By Stinespring's theorem, there exist a Hilbert space $\tilde{H} \supseteq H$ and a unital normal $*$-homomorphism $\pi \colon M \to \mathbb{B}(\tilde{H})$ such that \begin{align*} E (x)= P_H \pi (x) |_H, \ \ x\in M . \end{align*} Let $m\colon \ell^{\infty}(U)\to \mathbb{C}$ be a left-invariant mean with a net of finite subsets $\{ F_{\mu} \} \subseteq U$ such that \[ \lim_{\mu} \frac{1}{|F_{\mu}| } \sum_{g\in F_{\mu}} f(g) = m(f) , \ \ f\in \ell^{\infty}(U). \] Define \[ t:= \int_{u\in U} \pi(u)P_H \pi(u^*) d m. \] Since $P_H\in \pi( N )'$, we have $t\in \pi(N )'$. Fix $x\in A_0$. Then there exist $\{ u_1,\dots, u_N\} \subseteq U$ and $\{ x_1,\dots,x_N\}\subseteq N$ such that $x=\sum_{i=1}^N x_i u_i $. For any $\xi,\eta\in \tilde{H}$, \begin{align*} \left\langle \pi(x) t \xi, \eta \right\rangle &=\int_{u\in U} \left\langle \pi(x) \pi(u) P_H \pi(u^*) \xi, \eta \right\rangle d m \\ &=\int_{u\in U} \sum_{i=1}^N \left\langle \pi( x_i u_i u)P_H\pi(u^*) \xi, \eta \right\rangle d m \\ &=\sum_{i=1}^N \int_{v\in U} \left\langle \pi(x_i v)P_H\pi(v^* u_i ) \xi, \eta \right\rangle d (u_i^* m) \\ &=\sum_{i=1}^N \int_{v\in U} \left\langle \pi( x_i v)P_H\pi(v^* u_i ) \xi, \eta \right\rangle d m \\ &=\sum_{i=1}^N \int_{v\in U} \left\langle \pi(v)P_H\pi(v^* x_i u_i ) \xi, \eta \right\rangle d m \\ &=\int_{u\in U} \left\langle \pi(u) P_H \pi(u^*) \pi(x) \xi, \eta \right\rangle d m = \left\langle t \pi(x) \xi, \eta \right\rangle. \end{align*} Therefore, $t\in \pi( A)'$ by the normality of $\pi$. Furthermore, for $u\in U$, there is $v\in B_1$ such that $\| u- v\| <\gamma$. Then since $P_H\in \pi(B)'$, we have \[ \| \pi (u)P_H - P_H\pi(u)\| \le \| \pi(u) P_H- \pi(v)P_H\| + \| P_H\pi(v) - P_H\pi(u)\| <2\gamma. \] Therefore, \begin{align*} \| t - P_H\| &\le \int_{u\in U} \| \pi(u)P_H \pi(u^*) - P_H\| d m \\ &=\int_{u\in U} \| \pi(u) P_H - P_H \pi(u) \| d m \le 2 \gamma <\frac{1}{2}. \end{align*} Define $\delta:=\| t - P_H\|$ and $q:=\chi_{[1-\delta,1]}(t)$. Since $\|q-P_H\|\le 2\delta<1$, there exists a unitary $w\in C^*(t, P_H, I_{\tilde{H}})$ such that $w P_H w^*=q$ and $\| w- I_{\tilde{H}} \| \le 2 \sqrt{2}\delta$ by Lemma \ref{polar}\,(3). Define a map $\Phi\colon A \to \mathbb{B}(H)$ by \[ \Phi(x):= P_H w^* \pi (x) w |_H, \ \ x\in A . \] Since $t\in \overline{\mathrm{conv}}^{\w} \{ \pi(u) P_H \pi(u^*) : u\in U\}$ and $P_H \pi( A ) |_H= E( A )\subseteq B$, we have $\Phi( A )\subseteq B$. For any $x,y\in A$, \begin{align*} \Phi(x)\Phi(y) &=P_H w^*\pi(x) w P_H w^*\pi(y)w P_H \\ &=P_H w^* \pi(x) q \pi(y) w P_H \\ &=P_H w^* q \pi(x y)w P_H \\ &=P_H w^* \pi (x y) w P_H =\Phi(x y). \end{align*} Therefore, $\Phi$ is a $*$-homomorphism. Furthermore, for any $x\in A_1$, \begin{align*} \| \Phi(x) - x \| &\le \| \Phi(x) - E(x)\| + \| E (x) - x\| \\ &\le \| P_H w^* \pi(x) w P_H - P_H \pi(x) P_H \| + 2d(A , B) \\ &\le 2\| w - I_{\tilde{H}}\| + 2 d(A , B) \\ &\le (8 \sqrt{2}+2) \gamma.
\end{align*} Since $w\in C^*(t, P_H, I_{\tilde{H}})\subseteq \pi(N)'$, $\Phi$ is an $N$-fixed map. \end{proof} We base the next lemma on Christensen's work \cite[Propositions 4.2 and 4.4]{Chris2}, which shows similar results for injective von Neumann algebras. \begin{lemma}\label{2.3} Let $A,B$ and $N$ be von Neumann algebras in $\mathbb{B}(H)$ with $N\subseteq A\cap B$. Suppose that $N\subseteq A$ is crossed product-like by a discrete amenable group $U$. Then given two unital $N$-fixed normal $*$-homomorphisms $\Phi_1,\Phi_2 \colon A \to B$ with $\| \Phi_1-\Phi_2\| <1$, there exists a unitary $u\in N' \cap B$ such that $\Phi_1=\mathrm{Ad}(u)\circ \Phi_2$ and $\| u - I \| \le \sqrt{2}\| \Phi_1- \Phi_2\|$. \end{lemma} \begin{proof} Let $A_0:= \mathrm{span}\{ x u : x\in N ,u\in U\}$ and let $m\colon\ell^{\infty}(U)\to \mathbb{C}$ be a left-invariant mean for which there is a net of finite subsets $\{ F_{\mu} \} \subseteq U$ such that $m_{\mu}\to m$ in the weak-$*$ topology, where \[ m_{\mu}(f)= \frac{1}{|F_{\mu}| } \sum_{g\in F_{\mu}} f(g), \ \ f\in \ell^{\infty}(U). \] Define \[ s := \int_{u\in U} \Phi_1(u) \Phi_2(u^*) d m . \] Since $\Phi_1$ and $\Phi_2$ are $N$-fixed maps and $U\subseteq \mathcal{N}_A(N)$, we have $s \in N ' \cap B$. For $x\in A_0$, there exist $\{ u_1,\dots,u_N\} \subseteq U$ and $\{ x_1,\dots ,x_N\} \subseteq N$ such that $x=\sum_{i=1}^N x_i u_i $. For any $\xi ,\eta\in H$, \begin{align*} \langle \Phi_1(x)s\xi,\eta \rangle &=\int_{u\in U} \left\langle \Phi_1(x) \Phi_1(u)\Phi_2(u^*) \xi, \eta \right\rangle d m \\ &=\int_{u\in U} \sum_{i=1}^N \left\langle \Phi_1(x_i u_i u) \Phi_2(u^* ) \xi, \eta\right\rangle d m \\ &=\sum_{i=1}^N \int_{v\in U} \left\langle \Phi_1(x_i v) \Phi_2(v^* u_i ) \xi, \eta\right\rangle d (u_i^* m) \\ &=\sum_{i=1}^N \int_{v\in U} \left\langle \Phi_1(x_i v) \Phi_2(v^* u_i ) \xi, \eta\right\rangle d m \\ &=\sum_{i=1}^N \int_{v\in U} \left\langle \Phi_1( v) \Phi_2(v^* x_i u_i ) \xi, \eta\right\rangle d m \\ &= \int_{v\in U} \left\langle \Phi_1(v) \Phi_2(v^* x) \xi, \eta\right\rangle d m =\langle s \Phi_2(x) \xi,\eta \rangle. \end{align*} Therefore, by the normality of $\Phi_1$ and $\Phi_2$, \begin{align}\label{2.3.1} \Phi_1(x)s= s\Phi_2(x), \ \ x\in A. \end{align} By taking adjoints, \begin{align}\label{2.3.2} s^*\Phi_1(x)=\Phi_2(x)s^*, \ \ x\in A. \end{align} By (\ref{2.3.1}) and (\ref{2.3.2}), for $x\in A$, $s^* s \Phi_2(x)= s^* \Phi_1(x) s= \Phi_2(x) s^* s$. Thus, since $s$ is invertible (note that $\| s- I_H\| <1$, as shown below), \begin{align}\label{2.3.3} |s|^{-1} \Phi_2(x)= \Phi_2(x)|s|^{-1}, \ \ x\in A. \end{align} Furthermore, \begin{align*} \|s- I_H\| \le \int_{u\in U} \| \Phi_1(u)\Phi_2(u^*)- \Phi_1(u)\Phi_1(u^*) \| d m \le \| \Phi_1-\Phi_2\| <1. \end{align*} Hence by Lemma \ref{polar}\,(1), we can choose the unitary $u\in C^*(s,I )\subseteq N'\cap B$ in the polar decomposition of $s$ with $\| u- I \| \le \sqrt{2}\| s- I \|$. By (\ref{2.3.1}) and (\ref{2.3.3}), \begin{align*} \Phi_1(x)u=\Phi_1(x)s |s|^{-1}= s\Phi_2(x) |s|^{-1}= s |s|^{-1} \Phi_2(x) =u \Phi_2(x), \ \ x\in A. \end{align*} Therefore, $\Phi_1=\mathrm{Ad}(u)\circ \Phi_2$. \end{proof} Combining Lemmas \ref{2.2} and \ref{2.3}, we obtain Theorem C. \begin{thm}\label{2.3.5} Let $N\subseteq M$ be an inclusion of von Neumann algebras in $\mathbb{B}(H)$ and let $A,B$ be intermediate von Neumann subalgebras for $N \subseteq M$ with a normal conditional expectation from $M$ onto $B$. Suppose that $N\subseteq A$ is crossed product-like by a discrete amenable group and $d(A,B)<\gamma<10^{-2}$.
Then there exists a unitary $u\in N ' \cap (A\cup B)''$ such that $u A u^*= B$ and $\| u - I \| \le 2(8+\sqrt{2})\gamma$. \end{thm} \begin{proof} By Lemma \ref{2.2}, there exists a unital $N$-fixed normal $*$-\hspace{0pt}homomorphism $\Phi\colon A \to B$ such that \[ \| \Phi- \id_A \| \le (8 \sqrt{2}+2)\gamma. \] Since $(8 \sqrt{2}+2)\gamma <1$, there exists a unitary $u\in N' \cap (A\cup B)''$ such that $\Phi=\mathrm{Ad}(u)$ and $\| u - I \| \le \sqrt{2}\| \Phi- \id_A \| $ by Lemma \ref{2.3}. Thus, \[ u A u^* = \Phi(A) \subseteq B. \] Fix $x\in B_1$. There exists $y \in A_1$ such that $\| x -y \|\le \gamma$. Then, \begin{align*} \| y - u x u^*\| \le \| y- x\| + \| x- u x u^*\| \le \gamma + 2\| u- I\| <1. \end{align*} Thus, $d(u A u^*, B )<1$, that is, $u A u^*= B $ by Proposition \ref{surjective}. \end{proof} \begin{cor}\label{2.4} Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras and let $A,B$ be intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $d(A,B)<\gamma<10^{-2}$. Then there exists a unitary $u\in (C^{**})' \cap W^*(A^{**}, B^{**})$ such that $u A^{**} u^*= B^{**}$ and $\| u - I \| \le 2(8+\sqrt{2})\gamma$. \end{cor} \begin{proof} By a general construction, there exists a normal conditional expectation $E ^{**}\colon D^{**}\to B^{**}$. Let $(\pi, H)$ be the universal representation of $D$ and identify $A^{**}$, $B^{**}$, $C^{**}$ and $D^{**}$ with $\pi(A)''$, $\pi(B)''$, $\pi(C)''$ and $\pi(D)''$, respectively. Then by Theorem \ref{2.3.5} and Lemma \ref{weak-closure}, the corollary follows. \end{proof} \section{Unitary equivalence}\label{unitary} In this section, we show the fourth main result: Theorem D. For a unital inclusion $C\subseteq D$ of C$^*$-algebras acting on a separable Hilbert space $H$ and sufficiently close separable intermediate C$^*$-subalgebras $A$, $B$ for $C\subseteq D$ with a conditional expectation of $D$ onto $B$, if $A=C \rtimes G$ with $G$ discrete amenable and if $C'\cap A$ is weakly dense in $C'\cap \overline{A}^{\w}$, then $A$ and $B$ are unitarily equivalent. The unitary can be chosen in the relative commutant $C'\cap (A\cup B)''$. To show this, we modify the arguments of Section 5 in Christensen et al. \cite{CSSWW}. \begin{lemma}\label{4.1} Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras acting non-degenerately on a separable Hilbert space $H$. Let $A$ and $B$ be separable intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $C'\cap C^*(A,B)\subseteq \overline{C'\cap A}^{\w}$.
If $d(A,B)<\gamma<10^{-4}$, then for any finite subsets $X\subseteq B_1$, $Z_A\subseteq A_1$ and $\varepsilon,\mu>0$, there exist finite subsets $Y\subseteq B_1$, $Z\subseteq A_1$, a constant $\delta>0$, a unitary $u\in C'\cap C^*(A,B)$ and a $C$-fixed surjective $*$-isomorphism $\theta\colon B\to A$ with the following conditions$\colon$ \begin{enumerate} \item[\upshape{(i)}] $\delta<\varepsilon;$ \item[\upshape{(ii)}] $X\subseteq_{\varepsilon}Y;$ \item[\upshape{(iii)}] $\| u - I\| \le 75\gamma;$ \item[\upshape{(iv)}] $\theta\approx_{Y,\delta} \mathrm{Ad}(u);$ \item[\upshape{(v)}] $\theta \approx_{X,117\gamma} \id_B$ and $\theta^{-1} \approx_{Z_A, 115\gamma} \id_A;$ \item[\upshape{(vi)}] For any $C$-fixed surjective $*$-isomorphism $\phi\colon B\to A$ with $\phi^{-1}\approx_{Z, 365\gamma} \id_A$, there exists a unitary $w\in C'\cap A$ such that \[ \mathrm{Ad}(w) \circ \phi \approx_{Y, \delta/2} \theta \ \ \text{and} \ \ \| w - u\| \le 665\gamma; \] \item[\upshape{(vii)}] For any finite subset $S\subseteq H_1$ and unitary $v\in C'\cap C^*(A,B)$ with $\mathrm{Ad} (v) \approx_{Y,\delta}\theta$ and $\| v -u\| \le 740\gamma$, there exists a unitary $\tilde{v} \in C'\cap A$ such that $\mathrm{Ad}(\tilde{v} v) \approx_{X,\varepsilon} \theta$, $\| \tilde{v}- I\| \le 740\gamma$ and \[ \| (\tilde{v}v-u)\xi \| <\mu \ \ \text{and} \ \ \| (\tilde{v}v-u)^* \xi\| <\mu, \ \ \xi \in S. \] \end{enumerate} \end{lemma} \begin{proof} Let $X, Z_A, \varepsilon$ and $\mu$ be given. By Lemma \ref{1.5}, there exists a finite subset $Z_1\subseteq B_1$ satisfying the following condition: given two unital $C$-fixed $*$-homomorphisms $\phi_1,\phi_2\colon B\to B$ with $\phi_1\approx_{Z_1,32\gamma}\phi_2$, there exists a unitary $w_1\in C'\cap B$ such that $\phi_1\approx_{X,\varepsilon/3}\mathrm{Ad} (w_1) \circ \phi_2$ and $\| w_1- I_H\| \le 32\sqrt{2}\gamma$. By Proposition \ref{3.2}, there exists a $C$-fixed surjective $*$-isomorphism $\beta\colon B\to A$ such that \begin{align}\label{4.1.1} \beta\approx_{Z_1, 17\gamma} \id_B. \end{align} Let $X_0:=\beta(X)$. By Lemma \ref{1.8}, there exist a finite set $Y_0\subseteq A_1$ and $\delta>0$ with the following properties: $\delta<\varepsilon/6$, $X_0\subseteq Y_0$ and given a finite set $S_0\subseteq H_1$ and a unitary $u\in C'\cap C^*(A,B)$ with $\| u - I\| \le 740\gamma$ and \begin{align*} \| u y_0 - y_0 u \| \le 3\delta, \ \ y_0\in Y_0, \end{align*} there exists a unitary $v\in C'\cap A$ such that $\| v -I\| \le 740\gamma$, \[ \| v x_0 - x_0 v \| \le \frac{\varepsilon}{6}, \ \ x_0\in X_0 \] and \[ \| (v-u)\xi_0\| <\mu \ \ \text{and} \ \ \|(v-u)^*\xi_0\|<\mu, \ \ \xi_0\in S_0, \] since $C'\cap C^*(A,B)\subseteq \overline{C'\cap A}^{\w}$. By Lemma \ref{1.5}, there exists a finite set $Z\subseteq A_1$ with the following properties: $\beta(Z_1)\cup Z_A\subseteq Z$ and given $\gamma_0<1/10$ and two unital $C$-fixed $*$-homomorphisms $\phi_1,\phi_2\colon A\to C^*(A,B)$ with $\phi_1\approx_{Z,\gamma_0}\phi_2$, there exists a unitary $u_0\in C'\cap C^*(A,B)$ such that $\mathrm{Ad} (u_0) \circ \phi_1\approx_{Y_0,\delta/2}\phi_2$ and $\| u_0 - I\| \le \sqrt{2}\gamma_0$. By Theorem \ref{3.3}, there exists a $C$-fixed surjective $*$-isomorphism $\sigma\colon A\to B$ such that \begin{align}\label{4.1.2} \sigma\approx_{Z,15\gamma} \id_A \ \ \text{and} \ \ \sigma^{-1}\approx_{X,17\gamma} \id_B.
\end{align} Hence, by the choice of $Z$, there exists a unitary $u_0\in C'\cap C^*(A,B)$ such that \begin{align}\label{4.1.3} \sigma\approx_{Y_0,\delta/2} \mathrm{Ad} (u_0) \end{align} and $\| u_0 - I\| \le 15\sqrt{2}\gamma <25\gamma$. By $\beta(Z_1)\subseteq Z$, (\ref{4.1.1}), and (\ref{4.1.2}), we have \begin{align}\label{4.1.4} \sigma\circ\beta\approx_{Z_1,32\gamma} \id_B. \end{align} By the definition of $Z_1$, there exists a unitary $w_1\in C'\cap B$ such that \begin{align}\label{4.1.5} \sigma\circ \beta \approx_{X,\varepsilon/3} \Ad (w_1) \end{align} and $\| w_1 - I\| \le 32\sqrt{2}\gamma < 50\gamma$. Now define $\theta:=\sigma^{-1}\circ \Ad (w_1)$, $Y:=\theta^{-1}(Y_0)$ and $u:= u_0^*w_1$. Fix $y\in Y$. Let $y_0:=\theta(y)\in Y_0$. Then, \begin{align*} \| \theta(y) - \Ad (u) (y)\| &=\| y_0 - (\Ad (u) \circ\theta^{-1})(y_0) \| \\ &=\| y_0- (\Ad (u_0^*) \circ \sigma)(y_0) \| \\ &=\| \Ad (u_0) (y_0) - \sigma(y_0) \| \le \frac{\delta}{2}, \end{align*} since $\theta^{-1}=\Ad (w_1^*) \circ \sigma$ and (\ref{4.1.3}) holds. Thus, $\theta\approx_{Y,\delta/2}\Ad (u)$, so that condition (iv) holds. By the definition of $u$, we have \begin{align*} \| u- I\| = \| w_1- u_0\| \le \| w_1- I\| + \| I - u_0 \| <75 \gamma. \end{align*} Hence, condition (iii) holds. For any $x\in X$, \begin{align}\label{4.1.6} \begin{split} \| \theta(x) - x \| &\le \| (\sigma^{-1}\circ\Ad (w_1))(x) - \sigma^{-1}(x) \| + \| \sigma^{-1}(x) - x\| \\ &\le 2\| w_1- I_H\| +17\gamma \\ &\le 100\gamma +17\gamma= 117\gamma. \end{split} \end{align} For any $z\in Z$, \begin{align*} \| \theta^{-1}(z) - z \| &\le \| (\Ad (w_1^*) \circ \sigma)(z) - \Ad (w_1^*)(z) \| + \| \Ad (w_1^*)(z) - z\| \\ &\le \| \sigma(z) - z\| + 2\| w_1-I\| \\ &\le 15\gamma +100\gamma=115\gamma. \end{align*} Therefore, \begin{align}\label{4.1.7} \theta^{-1}\approx_{Z,115\gamma} \id_A. \end{align} Since $Z_A\subseteq Z$, we have $\theta^{-1}\approx_{Z_A,115\gamma} \id_A$, so that condition (v) holds. By (\ref{4.1.5}), \begin{align}\label{4.1.8} \theta=\sigma^{-1}\circ \Ad(w_1) \approx_{X,\varepsilon/3} \sigma^{-1}\circ \sigma\circ\beta=\beta. \end{align} Fix $x_0\in X_0$. Let $x:= \beta^{-1}(x_0)\in X$. Then, by (\ref{4.1.8}), \begin{align*} \| \theta^{-1}(x_0)-\beta^{-1}(x_0)\| = \| (\theta^{-1}\circ\beta)(x) - x \| =\| \beta(x) -\theta(x)\| \le \frac{\varepsilon}{3}. \end{align*} Therefore, \begin{align}\label{4.1.9} \theta^{-1}\approx_{X_0, \varepsilon/3} \beta^{-1}. \end{align} Hence, \[ X=\beta^{-1}(X_0)\subseteq_{\varepsilon/3} \theta^{-1}(X_0) \subseteq \theta^{-1}(Y_0) =Y, \] so that condition (ii) holds. We now verify condition (vi). Let $\phi\colon B\to A$ be a $C$-fixed surjective $*$-isomorphism with $\phi^{-1}\approx_{Z,365\gamma} \id_A$. By (\ref{4.1.2}), \[ \phi^{-1}\approx_{Z,380\gamma}\sigma. \] Thus, by the definition of $Z$, there exists a unitary $w_0\in C'\cap B$ such that \begin{align}\label{4.1.10} \Ad (w_0)\circ \phi^{-1}\approx_{Y_0,\delta/2}\sigma \end{align} and $\| w_0 - I\| \le 380\sqrt{2}\gamma<540\gamma$. Fix $y\in Y$. Let $y_0:= \theta(y)\in Y_0$. Then, since $w_0^*w_1\in B$, we have \begin{align*} \| \theta(y) - (\Ad(\phi(w_0^*w_1))\circ \phi)(y) \| &= \| \theta(y)- (\phi\circ\Ad(w_0^*w_1))(y) \| \\ &= \| y_0 - (\phi\circ\Ad (w_0^*) \circ\sigma)(y_0)\| \\ &=\| (\Ad (w_0) \circ\phi^{-1})(y_0)-\sigma(y_0)\| \le \frac{\delta}{2} \end{align*} by (\ref{4.1.10}). Define $w:=\phi(w_0^*w_1)$, so that $\theta\approx_{Y,\delta/2}\Ad (w) \circ\phi$. Since $\phi$ is a $C$-fixed map and $w_0,w_1\in C'$, $w$ is in $C'\cap A$.
Moreover, \begin{align*} \| w-u\| &\le \| w -I\|+ \| I-u\| \le \| w_0^*w_1-I\| + 75\gamma \\ &\le \|w_0-I\|+\|I-w_1\|+75\gamma \le (540+50+75)\gamma \\ &=665\gamma. \end{align*} Therefore, condition (vi) is proved. It only remains to prove condition (vii). Let $S\subseteq H_1$ be a finite set and $v\in C'\cap C^*(A,B)$ be a unitary with $\| v-u\| \le 740\gamma$ and \begin{align}\label{4.1.11} \Ad (v)\approx_{Y,\delta}\theta. \end{align} Fix $y_0\in Y_0$. Let $y:=\theta^{-1}(y_0)\in Y$. Then, \begin{align}\label{4.1.99} \begin{split} \| \sigma(y_0)-\Ad(w_1v^*)(y_0)\| &=\| (\Ad (w_1^*) \circ \sigma)(y_0)- \Ad (v^*) (y_0) \| \\ &=\| y - (\Ad (v^*) \circ\theta)(y) \| \\ &=\| \Ad (v) (y) - \theta(y) \| \le\delta . \end{split} \end{align} This and (\ref{4.1.3}) give $\Ad (u_0) \approx_{Y_0,3\delta/2}\Ad(w_1v^*)$. Therefore, for any $y_0\in Y_0$, \begin{align}\label{4.1.12} \| (v w_1^* u_0) y_0 - y_0(v w_1^* u_0) \| =\|u_0 y_0u_0^*- (w_1 v^*) y_0 (w_1 v^*)^* \| \le \frac{3}{2}\delta. \end{align} Furthermore, \begin{align}\label{4.1.13} \| v w_1^* u_0 - I\| =\| w_1^* u_0- v^*\| = \| u^* - v^*\| \le 740\gamma. \end{align} Let $S_0:=S\cup w_1 S\cup v S$. Applying the defining property of $Y_0$ and $\delta$ to the unitary $v w_1^* u_0$ and the finite set $S_0$, there exists a unitary $v_0\in C'\cap A$ such that $\| v_0-I\|\le 740\gamma$, \begin{align}\label{4.1.14} \| v_0 x_0- x_0 v_0 \| \le \frac{\varepsilon}{6}, \ \ x_0\in X_0 \end{align} and \begin{align}\label{4.1.15} \| (v_0- v w_1^*u_0)\xi_0\| <\mu \ \ \text{and} \ \ \| (v_0- v w_1^*u_0)^*\xi_0\| <\mu, \ \ \xi_0\in S_0. \end{align} Let $\tilde{v}:=v_0^*$. Then, $\| \tilde{v}-I\| \le \| v_0-I\| \le 740\gamma $. For any $\xi\in S$, \begin{align*} \| (\tilde{v}v -u)\xi\| &=\| ( v_0^* v - u_0^* w_1)\xi \| =\| ( v_0^* - u_0^* w_1 v^*) v \xi\| \\ &=\| (v_0- v w_1^* u_0)^* v \xi \| <\mu \end{align*} by (\ref{4.1.15}) and $v \xi \in S_0$. Moreover, \[ \| (\tilde{v} v -u)^*\xi\| =\| (v_0^*v- u_0^*w_1)^*\xi\| = \| v^*(v_0 - v w_1^* u_0)\xi\| <\mu \] by (\ref{4.1.15}). For any $x_0\in X_0$, by (\ref{4.1.99}) and (\ref{4.1.14}), \begin{align}\label{4.1.16} \begin{split} \| \theta^{-1}(x_0) - \Ad(v^* v_0)(x_0)\| &\le \| \theta^{-1}(x_0) - \Ad(v^*)(x_0) \| + \| \Ad(v^*)(x_0) - \Ad(v^* v_0)(x_0) \| \\ &= \| (\Ad(w_1^*) \circ \sigma)(x_0) - \Ad(v^*)(x_0) \| + \| x_0 - \Ad(v_0)(x_0) \| \\ &= \| \sigma(x_0) - \Ad(w_1v^*)(x_0) \| + \| v_0 x_0 - x_0 v_0 \| \\ &< \delta + \frac{\varepsilon}{6} \le \frac{\varepsilon}{3}. \end{split} \end{align} Let $x\in X$ and $x_0:=\beta(x)\in X_0$. By (\ref{4.1.8}), (\ref{4.1.9}) and (\ref{4.1.16}), \begin{align*} \| \Ad(\tilde{v}v)(x) - \theta(x) \| &\le \| \Ad(\tilde{v}v)(x) - \beta(x) \| + \| \beta(x)- \theta(x)\| \\ &\le \| (\Ad(\tilde{v}v)\circ \beta^{-1})(x_0) - x_0 \| + \frac{\varepsilon}{3} \\ &= \| \beta^{-1}(x_0) - \Ad(v^* v_0)(x_0) \| + \frac{\varepsilon}{3} \\ &\le \| \beta^{-1}(x_0) - \theta^{-1}(x_0) \| + \| \theta^{-1}(x_0) - \Ad(v^* v_0)(x_0)\| + \frac{\varepsilon}{3} \\ &\le \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon. \end{align*} Therefore, condition (vii) holds. \end{proof} \begin{lemma}\label{4.2} Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras acting non-degenerately on a separable Hilbert space $H$. Let $A$ and $B$ be separable intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Let $\{ a_n\}_{n=1}^{\infty}$, $\{ b_n\}_{n=1}^{\infty}$ and $\{ \xi_n\}_{n=0}^{\infty}$ be dense subsets of $A_1$, $B_1$ and $H_1$, respectively.
Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $C'\cap C^*(A,B)\subseteq \overline{C'\cap A}^{\w}$. If $d(A,B)<\gamma<10^{-5}$, then there exist finite subsets $\{ X_n\}_{n=0}^{\infty}, \{Y_n\}_{n=0}^{\infty}\subseteq B_1$, $\{Z_n\}_{n=0}^{\infty}\subseteq A_1$, positive constants $\{\delta_n\}_{n=0}^{\infty}$, unitaries $\{u_n\}_{n=0}^{\infty}\subseteq C'\cap C^*(A,B)$ and $C$-fixed surjective $*$-isomorphisms $\{\theta_n\colon B\to A\}_{n=0}^{\infty}$ with the following conditions$\colon$ \begin{enumerate} \item[\upshape{(1)}] For $n\ge 1$, $b_1,\dots,b_n\in X_n;$ \item[\upshape{(2)}] For $n\ge 0$, $X_n\subseteq_{2^{-n}/3}Y_n$ and $\delta_n<2^{-n};$ \item[\upshape{(3)}] For $n\ge 1$, $\theta_n\approx_{X_{n-1},2^{-(n-1)}}\theta_{n-1};$ \item[\upshape{(4)}] For $n\ge 0$, $\theta_n\approx_{Y_n,\delta_n}\Ad (u_n);$ \item[\upshape{(5)}] For $1\le j \le n$, $\|(u_n- u_{n-1})\xi_j\|<2^{-n}$ and $\| (u_n- u_{n-1})^* \xi_j \| <2^{-n};$ \item[\upshape{(6)}] For $1\le j\le n$, there exists $x\in X_n$ such that $\| \theta_n(x)-a_j\| \le 9/10;$ \item[\upshape{(7)}] For $n\ge 0$ and a $C$-fixed surjective $*$-isomorphism $\phi\colon B\to A$ with $\phi^{-1}\approx_{Z_n, 365\gamma} \id_A$, there exists a unitary $w\in C'\cap A$ such that $\mathrm{Ad} (w) \circ \phi \approx_{Y_n, \delta_n/2} \theta_n$ and $\| w - u_n\| \le 665\gamma;$ \item[\upshape{(8)}] For $n\ge 0$, a finite subset $S\subseteq H_1$ and a unitary $v\in C'\cap C^*(A,B)$ with $\mathrm{Ad} (v) \approx_{Y_n,\delta_n}\theta_n$ and $\| v -u_n\| \le 740\gamma$, there exists a unitary $\tilde{v} \in C'\cap A$ such that $\mathrm{Ad}(\tilde{v} v) \approx_{X_n, 2^{-(n+1)}} \theta_n$, $\| \tilde{v}- I\| \le 740\gamma$ and \[ \| (\tilde{v}v-u_n)\xi \| < \frac{1}{2^{n+1}} \ \ \text{and} \ \ \| (\tilde{v}v-u_n)^* \xi\| <\frac{1}{2^{n+1}}, \ \ \xi \in S; \] \item[\upshape{(9)}] For $n\ge 0$, there is a unitary $z\in A$ such that $\| z - u_n\| \le 75\gamma$. \end{enumerate} \end{lemma} \begin{proof} We prove this lemma by induction. Denote by (a)$_n$ condition (a) for $n$. Let $X_0=Y_0=Z_0=\emptyset$, $\delta_0=1/2$, $u_0=I$ and let $\theta_0\colon B\to A$ be any $C$-fixed surjective $*$-isomorphism, which exists by Proposition \ref{3.2}. Conditions (1)$_0$, (3)$_0$, (5)$_0$ and (6)$_0$ are vacuous. Conditions (2)$_0$ and (4)$_0$ are clear, since $X_0=Y_0=\emptyset$. Conditions (7)$_0$, (8)$_0$ and (9)$_0$ are satisfied by taking $w=I$, $\tilde{v}=v^*$ and $z=I$, respectively. Assume the statement holds for $n$; we will prove it for $n+1$. By (9)$_n$, there exists a unitary $z\in A$ such that $\| z - u_n\| \le 75\gamma$. For $1\le j\le n+1$, there exists $x_j\in B_1$ such that $\| x_j- z^* a_j z\| \le \gamma$. Define $X_{n+1}:=X_n\cup Y_n\cup \{b_{n+1}\}\cup \{x_1,\ldots,x_{n+1}\}$. Apply Lemma \ref{4.1} with $X=X_{n+1}$, $Z_A=Z_n$, $\varepsilon=\delta_n/6$ and $\mu=2^{-(n+2)}$; then there exist $Y_{n+1}\subseteq B_1$, $Z_{n+1}\subseteq A_1$, $\delta_{n+1}>0$, $u\in C'\cap C^*(A,B)$ and $\theta\colon B\to A$ with conditions (i)--(vii) of that lemma. By Lemma \ref{4.1}\,(i), $\delta_{n+1}<\varepsilon=\delta_n/6<2^{-(n+1)}/3$. By Lemma \ref{4.1}\,(ii), $X_{n+1}\subseteq_{2^{-(n+1)}/3} Y_{n+1}$. Thus, condition (2)$_{n+1}$ holds. Since $\theta^{-1}\approx_{Z_n,115\gamma}\id_A$ by Lemma \ref{4.1}\,(v) and $Z_A=Z_n$, we may apply condition (7)$_n$ to $\theta$ and find a unitary $w\in C'\cap A$ such that \begin{align}\label{4.2.1} \Ad (w) \circ \theta\approx_{Y_n,\delta_n/2}\theta_n \end{align} and $\| w-u_n\|\le665\gamma$. Fix $y\in Y_n$.
Since $Y_n\subseteq X_{n+1}\subseteq_{\delta_n/6}Y_{n+1}$, there exists $\tilde{y}\in Y_{n+1}$ such that $\| y-\tilde{y}\|\le \delta_n/6$. Then, by Lemma \ref{4.1}\,(iv), \begin{align*} \| \Ad (u)(y)- \theta(y)\| &\le \| \Ad (u) (y)-\Ad (u) (\tilde{y})\|+\|\Ad (u) (\tilde{y})-\theta(\tilde{y})\|+\|\theta(\tilde{y})-\theta(y)\| \\ &\le \frac{\delta_n}{6}+\delta_{n+1}+\frac{\delta_n}{6}\le \frac{\delta_n}{2}. \end{align*} This and (\ref{4.2.1}) give \begin{align}\label{4.2.2} \Ad(w u)\approx_{Y_n,\delta_n} \theta_n. \end{align} Moreover, \begin{align}\label{4.2.3} \begin{split} \| w u - u_n\| \le \| w (u - I) \| + \| w - u_n \| \le 75\gamma+665\gamma=740\gamma. \end{split} \end{align} By (\ref{4.2.2}) and (\ref{4.2.3}), we can apply condition (8)$_n$ to $w u$ and $\{\xi_1,\ldots,\xi_{n+1}\}$. Hence, there exists a unitary $\tilde{v}\in C'\cap A$ such that \begin{align}\label{4.2.4} \mathrm{Ad}(\tilde{v} w u) \approx_{X_n, 2^{-(n+1)}} \theta_n, \end{align} $\| \tilde{v}- I\| \le 740\gamma$ and \begin{align}\label{4.2.5} \| (\tilde{v}w u-u_n)\xi_j \| < \frac{1}{2^{n+1}} \ \ \text{and} \ \ \| (\tilde{v}w u-u_n)^* \xi_j\| <\frac{1}{2^{n+1}}, \ \ 1\le j\le n+1. \end{align} Define $\theta_{n+1}:=\Ad(\tilde{v}w)\circ \theta$ and $u_{n+1}:= \tilde{v}w u$. By (\ref{4.2.5}), condition (5)$_{n+1}$ holds immediately. Since $\tilde{v}w\in A$ and \[ \| \tilde{v}w-u_{n+1}\|=\|\tilde{v}w-\tilde{v}w u\|=\|I-u\|\le 75\gamma, \] condition (9)$_{n+1}$ holds. By Lemma \ref{4.1}\,(iv), $\theta_{n+1}=\Ad(\tilde{v}w)\circ\theta\approx_{Y_{n+1},\delta_{n+1}}\Ad(\tilde{v}w u)=\Ad (u_{n+1})$. Thus, condition (4)$_{n+1}$ is satisfied. Fix $x\in X_n$. Let $y\in Y_{n+1}$ satisfy $\| x- y\|\le 2^{-(n+1)}/3$. Then, by (4)$_{n+1}$, \begin{align*} &\| \theta_{n+1}(x)- \Ad (u_{n+1}) (x) \| \\ &\le \| \theta_{n+1}(x) -\theta_{n+1}(y)\|+\|\theta_{n+1}(y)-\Ad (u_{n+1}) (y)\| +\| \Ad (u_{n+1}) (y)- \Ad (u_{n+1}) (x) \| \\ &\le \frac{1}{3\cdot 2^{n+1}}+\delta_{n+1}+\frac{1}{3\cdot 2^{n+1}} < \frac{1}{2^{n+1}}. \end{align*} Therefore, \begin{align*} \theta_{n+1}\approx_{X_n,2^{-(n+1)}} \Ad (u_{n+1}). \end{align*} This and (\ref{4.2.4}) give $\theta_{n+1}\approx_{X_n,2^{-n}}\theta_n$. Hence, condition (3)$_{n+1}$ holds. For any $x\in A_1$, \begin{align}\label{4.2.6} \begin{split} &\| \Ad(\tilde{v}w)(x)- \Ad (z)(x) \| \\ &\le \| \Ad(\tilde{v}w)(x)- \Ad (w) (x)\| + \| \Ad (w) (x)- \Ad (u_n) (x)\| + \| \Ad (u_n) (x)- \Ad (z) (x)\| \\ &\le 2\|\tilde{v}-I\|+ 2\| w-u_n\|+ 2\|u_n-z\| \\ &\le (1480+ 1330+ 150)\gamma=2960\gamma. \end{split} \end{align} For $1\le j\le n+1$, there exists $x_j\in X_{n+1}$ such that $\| x_j- z^* a_j z\| \le \gamma$ by the definition of $X_{n+1}$. Then (\ref{4.2.6}) and Lemma \ref{4.1}\,(v) give \begin{align*} &\| \theta_{n+1}(x_j) - a_j\| \\ &\le \| \theta_{n+1}(x_j)-\Ad(\tilde{v}w)(x_j)\|+\|\Ad(\tilde{v}w)(x_j)-\Ad (z) (x_j)\| +\|\Ad (z) (x_j)-a_j\| \\ &\le \| (\Ad(\tilde{v}w)\circ\theta)(x_j)-\Ad(\tilde{v}w)(x_j)\| + 2960\gamma + \| x_j - z^* a_j z\| \\ &\le \| \theta(x_j) - x_j\| + 2960\gamma+\gamma \\ &\le 3078\gamma <\frac{9}{10}. \end{align*} Therefore, condition (6)$_{n+1}$ is proved. Let $\phi\colon B\to A$ be a $C$-fixed surjective $*$-isomorphism with $\phi^{-1}\approx_{Z_{n+1},365\gamma}\id_A$. By Lemma \ref{4.1}\,(vi), there exists a unitary $\tilde{w}\in C'\cap A$ such that \begin{align}\label{4.2.7} \Ad (\tilde{w}) \circ \phi \approx_{Y_{n+1},\delta_{n+1}/2} \theta \end{align} and $\| \tilde{w}- u \| \le 665\gamma$.
For any $y\in Y_{n+1}$, by (\ref{4.2.7}), \begin{align*} \| (\Ad(\tilde{v}w \tilde{w})\circ \phi)(y) - \theta_{n+1}(y) \| =\| (\Ad (\tilde{w}) \circ \phi)(y) - \theta(y) \| \le \frac{\delta_{n+1}}{2}. \end{align*} Furthermore, we have \begin{align*} \| \tilde{v}w\tilde{w}-u_{n+1}\| =\| \tilde{v}w\tilde{w} - \tilde{v}w u \| =\| \tilde{w}-u\| \le 665 \gamma. \end{align*} Thus, $\tilde{v}w\tilde{w}$ satisfies (7)$_{n+1}$. It remains to prove condition (8)$_{n+1}$. Let $S\subseteq H_1$ be a finite set and $v\in C'\cap C^*(A,B)$ be a unitary with $\| v-u_{n+1}\|\le 740\gamma$ and $\Ad (v) \approx_{Y_{n+1},\delta_{n+1}}\theta_{n+1}$. Then, we have \[ \| w^*\tilde{v}^*v - u\|=\| v-\tilde{v}w u\| =\| v- u_{n+1}\|\le 740\gamma \] and $\Ad(w^*\tilde{v}^*v)\approx_{Y_{n+1},\delta_{n+1}} \Ad(w^*\tilde{v}^*)\circ\theta_{n+1} =\theta$. Hence, by applying Lemma \ref{4.1}\,(vii) to $w^*\tilde{v}^*v$ and $S':=S\cup \{ w^*\tilde{v}^*\xi : \xi\in S\}$, there exists a unitary $v'\in C'\cap A$ such that $\Ad(v' w^*\tilde{v}^*v)\approx_{X_{n+1},\delta_n/6}\theta$, $\| v'-I\|\le 740\gamma$ and \begin{align*} \| (v' w^*\tilde{v}^*v- u) \xi'\|<\frac{1}{2^{n+2}} \ \ \text{and} \ \ \| (v' w^*\tilde{v}^*v- u)^* \xi'\|<\frac{1}{2^{n+2}}, \ \ \xi'\in S'. \end{align*} For any $x\in X_{n+1}$, we have \begin{align*} \| \Ad(\tilde{v}w v' w^* \tilde{v}^* v)(x)- \theta_{n+1}(x) \| =\| \Ad( v' w^* \tilde{v}^* v)(x) - \theta(x) \|\le \frac{\delta_n}{6}<\frac{1}{2^{n+2}} \end{align*} and \begin{align*} \| \tilde{v}w v' w^* \tilde{v}^* -I\| =\| v'-I \| \le 740\gamma. \end{align*} For $\xi\in S$, we have \begin{align*} \| ( \tilde{v}w v' w^* \tilde{v}^* v - u_{n+1})\xi \| = \| (v' w^* \tilde{v}^* v - u)\xi\| < \frac{1}{2^{n+2}} \end{align*} and \begin{align*} \| (\tilde{v} w v' w^* \tilde{v}^* v - u_{n+1})^*\xi \| =\| (v' w^*\tilde{v}^* v - u )^* w^* \tilde{v}^* \xi\| <\frac{1}{2^{n+2}}. \end{align*} Therefore, $\tilde{v}w v' w^*\tilde{v}^*$ satisfies $(8)_{n+1}$, and the lemma follows. \end{proof} \begin{prop}\label{4.3} Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras acting non-degenerately on a separable Hilbert space $H$. Let $A$ and $B$ be separable intermediate $\C^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $C'\cap C^*(A,B)\subseteq \overline{C'\cap A}^{\w}$. If $\dab<10^{-5}$, then there exists a unitary $u\in C'\cap (A\cup B)''$ such that $u A u^* = B$. \end{prop} \begin{proof} Let $\{ a_n\}_{n=1}^{\infty}$, $\{ b_n\}_{n=1}^{\infty}$ and $\{ \xi_n\}_{n=0}^{\infty}$ be dense subsets of $A_1$, $B_1$ and $H_1$, respectively. By Lemma \ref{4.2}, we may choose $\{X_n\}_{n=0}^{\infty}$, $\{Y_n\}_{n=0}^{\infty}$, $\{Z_n\}_{n=0}^{\infty}$, $\{\delta_n\}_{n=0}^{\infty}$, $\{u_n\}_{n=0}^{\infty}$ and $\{\theta_n\}_{n=0}^{\infty}$ satisfying conditions (1)--(9) of that lemma. For any $b_k$ and $\varepsilon>0$, there is $N\in\mathbb{N}$ such that $2^{-(N-1)}<\varepsilon$ and $k< N$. For $m \ge n \ge N$, \[ \| \theta_m(b_k)-\theta_n(b_k)\| \le \sum_{j=n}^{m-1} \| \theta_{j+1}(b_k) - \theta_j(b_k) \| \le\sum_{j=n}^{m-1} \frac{1}{2^j} <\frac{1}{2^{N-1}} <\varepsilon. \] Thus, for any $b_k$, $\{\theta_n(b_k)\}_{n=0}^{\infty}$ is a Cauchy sequence. Since $\| \theta_n\| \le 1$, the sequence $\{\theta_n\}$ converges to a $C$-fixed $*$-homomorphism $\theta\colon B\to A$ in the point-norm topology. For any $a_j$ and $n\ge j$, there is $x\in X_n$ such that $\| \theta_n(x) - a_j\| \le 9/10$.
\begin{align*} \| a_j - \theta(x) \| &\le \| a_j - \theta_n(x) \| + \sum_{m=n}^{\infty} \| \theta_{m+1}(x) - \theta_m(x) \| \\ &\le \frac{9}{10}+ \sum_{m=n}^{\infty}\frac{1}{2^m} \le \frac{9}{10}+\frac{1}{2^{n-1}}. \end{align*} Since $n\ge j$ was arbitrary and $\{a_n\}$ is a dense subset of $A_1$, we have $d(A,\theta(B))<1$. Therefore, $\theta$ is surjective by Proposition \ref{surjective}. By Lemma \ref{4.2}\,(5), $\{u_n\}$ converges to a unitary $u\in C'\cap (A\cup B)''$ in the $*$-strong topology. Moreover, by Lemma \ref{4.2}\,(4), we have $\theta=\Ad (u)$. Therefore, since $\theta$ is surjective, $A=u B u^*$, that is, $u^* A u= B$. \end{proof} Finally, we show Theorem D by using Proposition \ref{4.3} and Corollary \ref{2.4}. \begin{thm}\label{main} Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras acting on a separable Hilbert space $H$. Let $A$ and $B$ be separable intermediate $\C^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $C'\cap A$ is weakly dense in $C'\cap \overline{A}^{\w}$. If $\dab<10^{-7}$, then there exists a unitary $u\in C'\cap (A\cup B)''$ such that $u A u^* = B$. \end{thm} \begin{proof} Let $\dab<\gamma<10^{-7}$. By Corollary \ref{2.4}, there exists a unitary $u_0\in (C^{**})'\cap W^*(A^{**}, B^{**})$ such that $u_0 A^{**} u_0^* = B^{**}$ and $\| u_0-I\| \le 19\gamma$. Let $e_D$ be the support projection of $D$ and define $K:=\mathrm{ran}(e_D) \subseteq H$. Now restrict $A,B,C$ and $D$ to $K$. By the universal property, there exists a unique normal representation $\pi\colon D^{**}\to \mathbb{B}(K)$ such that $\pi|_D=\id_D$ and $\pi(D^{**})=D''$. Define $\tilde{A}:= \pi(u_0) A \pi(u_0^*)\subseteq \mathbb{B}(K)$. Then $d(\tilde{A},B)\le 2\|u_0-I\| + \dab<39\gamma<10^{-5}$. Since $\tilde{A}'' = \pi(u_0) \pi(A^{**}) \pi(u_0^*)=\pi(B^{**}) = B''$ and $C'\cap A$ is weakly dense in $C'\cap \overline{A}^{\w}$, \[ C'\cap C^*(\tilde{A},B)\subseteq C'\cap \tilde{A}'' = \pi(u_0) ( C'\cap A'')\pi(u_0)^* = \pi(u_0) (\overline{C'\cap A}^{\w}) \pi(u_0)^* =\overline{C'\cap \tilde{A}}^{\w}. \] Therefore, there exists a unitary $u_1\in C'\cap B''\subseteq \mathbb{B}(K)$ such that $u_1 \tilde{A} u_1^* = B$ by Proposition \ref{4.3}. Hence, the unitary $u$ is given by \[ u=u_1\pi(u_0)+(I_H-e_D) \in C'\cap (A\cup B)''\subseteq \mathbb{B}(H), \] so that $u A u^*=B$. \end{proof} \begin{exam}\upshape Let $C=C(\mathbb{T})$ and $A=C(\mathbb{T})\rtimes \mathbb{Z}$ act on $H=\mathcal{L}^2(\mathbb{T})\otimes \ell^2(\mathbb{Z})$. Then we have $C'\cap A=C$ and $C'\cap \overline{A}^{\w}=\mathcal{L}^{\infty}(\mathbb{T})$, that is, $C'\cap A$ is weakly dense in $C'\cap \overline{A}^{\w}$. \end{exam} Note, however, that $C'\cap \overline{A}^{\w}$ need not coincide with the weak closure of $C'\cap A$ in general. \begin{exam}\upshape Let $\alpha$ be a free action of a group $G$ on a simple C$^*$-algebra $C$ and let $A=C\rtimes_{\alpha}G$ act irreducibly on a Hilbert space $H$. Then $C'\cap A=\mathbb{C}$ but $C'\cap \overline{A}^{\w}=C'\cap \mathbb{B}(H)$. \end{exam} \renewcommand{\defn}{{\bf Acknowledgment.}} \begin{defn} The author would like to thank Professor Yasuo Watatani for his encouragement and advice. \end{defn}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{} For the purposes of the J-PARC E36 experiment \cite{cite1} by the TREK Collaboration \cite{cite2}, we are upgrading the E246 apparatus \cite{cite3,cite4}, which was based on a 12-sector superconducting iron-core toroidal spectrometer (Fig. \ref{fig:fig1}) \cite{cite5} previously used at the High Energy Accelerator Research Organization (KEK) in Tsukuba, Japan, into a new TREK/E36 detector system. The primary goal of the TREK/E36 experiment is to test lepton universality using the decay channel $K^+ \to l^+\nu_l$ (known as $K_{l2}$ decay), where $l = e$ or $\mu $. To search for new physics beyond the Standard Model, we focus on precisely measuring the ratio between the positive kaon decay widths, $R_K = \Gamma (K^+ \to e^+\nu)/\Gamma (K^+ \to \mu ^+\nu )$, using a stopped kaon beam \cite{cite6}. The E36 experiment is scheduled to obtain physics data at the K1.1BR beam line at the Hadron Experimental Facility of the Japan Proton Accelerator Research Complex (J-PARC) in Tokai, Japan. \begin{figure}[h] \centering \includegraphics[width=0.40\textwidth,keepaspectratio]{nima2015_fig1.eps} \caption{A 12-sector superconducting iron-core toroidal spectrometer installed in the K1.1BR beam line of J-PARC in November 2014. This spectrometer has 12 identical gaps and a rotational symmetry of 30$^\circ $.} \label{fig:fig1} \end{figure} We attach special importance to particle identification (PID) in conducting this high-precision measurement, which depends on efficiently detecting charged particles (i.e., positrons and positive muons) from kaon decays. The ratio of $K_{e2}$ to $K_{\mu 2}$ events is expected to be approximately 10$^{-5}$. For robust analysis, PID is performed by three independent detectors: time-of-flight (TOF) scintillation counters, threshold-type Cherenkov counters using silica aerogel as a radiator, and lead (Pb) glass Cherenkov counters \cite{cite7}. The use of three independent devices allows us to calibrate the PID capability of each device using the results from the other two. The aerogel Cherenkov (AC) counter was newly designed as a dedicated device for use in the TREK/E36 detector system. Silica aerogel is an amorphous, porous substance composed of silica (SiO$_2$) particles and open, air-filled pores on the order of tens of nanometers in size. Recent aerogel production techniques make it possible to tune the refractive index ($n$) over a wide range, from 1.0026 to 1.26 \cite{cite8}. Because of its intermediate refractive index and optical transparency, silica aerogel has been widely used as a Cherenkov radiator (see, for example, Ref. \cite{cite9} for a review). The refractive index of aerogel is approximately related to its density ($\rho $) by $n(\lambda ) - 1 = k(\lambda )\rho $, where $k$ is a constant that depends on the wavelength of light ($\lambda $) \cite{cite10}. In this study, we have developed an aerogel radiator at Chiba University. This aerogel is hydrophobic, and hence requires no maintenance during the experimental period. \section{Requirements for an E36 aerogel Cherenkov radiator} \label{} The space (i.e., counter height) allotted to the AC counter is limited to approximately 7 cm between the upstream TOF counters and the CsI(Tl) calorimeter in the central barrel region of the TREK/E36 detector system.
To effectively reject events in which the positive muons decay in flight, we decided to locate the AC counter close to the kaon stopping active target, which is made of plastic scintillating fibers \cite{cite11} and is installed in the central gap of the spectrometer (see Fig. \ref{fig:fig1}). One module of the AC counter is shown in Fig. \ref{fig:fig2}. Fig. \ref{fig:fig3} shows a cross-sectional drawing of the central detector perpendicular to the kaon beam axis. It comprises the target, the scintillating-fiber-based spiral fiber tracker \cite{cite12}, the upstream TOF counters, and the AC counter. In keeping with the 12 acceptance gaps of the spectrometer, the AC counter is divided into 12 identical modules. Considering the acceptance of the whole detector system, the longitudinal length of the aerogel box of the AC counters is designed to be 180 mm (with an interior length of 179 mm) along the kaon beam axis. Two photomultiplier tubes (PMTs) are attached, one at each longitudinal end. To cover the full solid angle around the target, the cross-sectional shape of the aerogel radiator needs to be trapezoidal. Considering a clearance of 0.5 mm at each side, the dimensions of the aerogel blocks to be fabricated were 178 mm in longitudinal length, 46.2 mm in upper-base length, and 24.8 mm in lower-base length, assuming a radiator height (thickness, $t$) of 40 mm. \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth,keepaspectratio]{nima2015_fig2.eps} \caption{Assembled AC counter module. Two PMTs are oriented along the kaon beam axis. The length of the counter housing (aerogel box) is 18 cm.} \label{fig:fig2} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.40\textwidth,keepaspectratio]{nima2015_fig3.eps} \caption{A cross-sectional drawing of the central detector perpendicular to the kaon beam axis. The kaon stopping target holder and one AC counter module (red, 12 o'clock direction) with a PMT support are shown. The aerogel radiator block is blue shaded. The spiral fiber tracker (SFT) is directly wound around the target holder. The upstream TOF counter is located over the tracker. The AC counters should be installed between the TOF counter and the inner wall of the CsI(Tl) calorimeter.} \label{fig:fig3} \end{figure} The refractive index ($n$) of the aerogel radiator needs to be less than 1.095 in order to reject muons with a momentum ($P$) of 236 MeV/$c$ from the $K_{\mu 2}$ decays. In the actual E36 experiment, the $K_{e2}$ ($P_{e^+}$ = 247 MeV/$c$) and $K_{\mu 2}$ events will be accepted by analyzing the charged-particle momenta using the spectrometer and charged-particle tracking devices, i.e., a spiral fiber tracker and three layers of multiwire proportional chambers. It is desirable to keep a high $n$ value because aerogel radiators with higher $n$ values produce more Cherenkov photons. Conversely, the transparency of the aerogel decreases with increasing $n$ value, resulting in degradation of the Cherenkov light collection. Moreover, misidentification of muons as positrons due to Cherenkov radiation caused by knock-on electrons ($\delta $-rays) may increase in a dense aerogel. Our requirements for the AC counter include a positron detection efficiency greater than 98\% and a positive muon misidentification rate lower than 3\%.
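These momentum and index values can be checked directly from the Cherenkov threshold condition $n_{\mathrm{thr}}=1/\beta$. The short Python sketch below is ours and is purely illustrative; the lepton masses (in MeV/$c^2$) are the standard values assumed here, not numbers taken from the text.
\begin{verbatim}
# Cherenkov threshold cross-check for the E36 AC radiator.
# A particle of momentum p [MeV/c] and mass m [MeV/c^2] radiates
# in a medium of refractive index n only if n > n_thr = 1/beta.
import math

M_MU = 105.658  # mu+ mass, assumed standard value
M_E  = 0.511    # e+ mass, assumed standard value

def threshold_index(p, m):
    beta = p / math.hypot(p, m)  # beta = p/E with E = sqrt(p^2 + m^2)
    return 1.0 / beta

print(threshold_index(236.0, M_MU))  # ~1.0956: n < 1.095 keeps K_mu2 muons dark
print(threshold_index(247.0, M_E))   # ~1.0000021: K_e2 positrons always radiate
\end{verbatim}
Any index between these two thresholds separates the two species in principle; within this window, the actual choice of $n$ is driven by the photon-yield, transparency, and $\delta$-ray trade-offs described above.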
During 2010--2013, in search of the best aerogel specification (e.g., $n$ = 1.037, 1.05, and 1.08) as well as the best counter configuration (e.g., specular or diffusive reflective sheets on the inner wall and a mirror shape on the inside of the outer wall), we performed a series of test beam experiments using prototype counter modules at the Research Center for Electron Photon Science at Tohoku University in Japan, the National Laboratory for Particle and Nuclear Physics (TRIUMF) in Canada, and J-PARC. From the results of the test experiments \cite{cite1,cite13,cite14}, the design of the AC counter module was finalized to use a specular reflective sheet (aluminized Mylar), and the specification of the aerogel radiator was proposed to be $n$ = 1.08 and $t$ = 40 mm. A deviation from $n$ = 1.08 will not have a significant impact on the detector performance; e.g., the final spread of $\pm $0.004 (5\% in $n-1$) in the produced refractive index for each counter is acceptable. \section{Aerogel fabrication} \label{} Our method for producing the silica aerogel blocks to be used in the E36 experiment was based on a modified conventional technique described in Ref. \cite{cite10}. First, a wet gel was synthesized by means of the sol--gel method in an appropriate mold, as described in Section 3.1. To obtain highly transparent aerogel, the classic KEK method \cite{cite15} (which uses ethanol or methanol as a solvent) was modified by introducing the solvent $N$,$N$-dimethylformamide (DMF) into the wet-gel synthesis step \cite{cite16}. To attain aerogel with $n \sim $ 1.08, we used only DMF as the solvent, whereas we generally used a mixture of DMF and methanol for $n \sim $ 1.05. After aging, the wet gel was detached from the mold in an ethanol bath, and we performed a hydrophobic treatment by adding hexamethyldisilazane to the ethanol bath \cite{cite17}. After removing impurities from the wet gel by repeatedly replacing the ethanol, we finally obtained the aerogel using the supercritical carbon dioxide drying method. \subsection{Molding} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth,keepaspectratio]{nima2015_fig4.eps} \caption{Custom-made molds for the (left) upstream layer and (center) downstream layer, made of polypropylene. Both molds measure approximately 182 mm in longitudinal inner length and 21.5 mm in inner depth. Prior to the final production, the right mold was used for a test production of 4 cm-thick blocks (see Appendix A).} \label{fig:fig4} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth,keepaspectratio]{nima2015_fig5.eps} \caption{A wet gel bar detached from the mold. When the aerogel density was more than $\sim $0.1 g/cm$^3$ ($n$ = 1.08 corresponds to $\rho $ = 0.27 g/cm$^3$), direct handling of the wet gel was possible. The wet gel needed to be returned to the ethanol bath within several tens of seconds to avoid cracking due to drying.} \label{fig:fig5} \end{figure} Aerogel blocks with a trapezoidal cross-section can be produced in two ways: cutting or molding. The cutting method uses a water jet cutter on square aerogel tiles with dimensions of approximately 11 $\times $ 11 $\times $ 2 cm$^3$ to produce trapezoidal prism aerogel segments, making full use of the hydrophobic nature of our aerogel. In this case, one block comprises a minimum of four trapezoidal prism segments. Commercially available square molds are economical. Conversely, the molding method uses a trapezoidal mold for synthesizing a wet gel.
In the latter case, to suppress cracking of the aerogel blocks during supercritical drying, the thickness of one aerogel bar should be reduced to 2 cm by dividing one trapezoidal aerogel block with a thickness of 4 cm into two layers (see Appendix A). This method requires custom-made molds. The cost of manufacturing the custom-made molds is almost equivalent to that of machining the aerogel tiles with water jets. Considering the total cost, including manpower and time, molding is more efficient for a large production. When we manufactured aerogel in-house, supercritical drying was the most manpower-intensive and time-consuming process; for it, we used our supercritical carbon dioxide drying apparatus with a 7.6 l autoclave. Cracks often appeared in the aerogel during this process. We assumed that the crack-free yield of aerogel blocks with $n \sim $ 1.08 would be 70\%, independent of the wet gel shape. The cutting and molding methods require four and three supercritical drying operations, respectively, to obtain the necessary aerogel blocks as well as several spares. More specifically, for 12 whole blocks and 2 spares, we had to produce 24 aerogel tiles without cracking (14 tiles for the larger downstream segments and 10 tiles for the smaller upstream segments near the kaon stopping target) by cutting, or 28 bars by molding. Considering the crack-free yield, we had to synthesize 35 wet gel tiles or 40 wet gel bars. Our autoclave for supercritical drying could store 10 wet gel tiles or 14 wet gel bars at a time. Molding had the important advantage of enabling us to obtain clear aerogel surfaces. With cutting, the water jet-machined aerogel surface became significantly rough and scattered laser light, so that it was no longer possible to measure the refractive index. The aerogel surface could not be polished. Molding allowed us to create a very clear aerogel surface by keeping the inner surface of the mold flat. Our counter works best if all aerogel surfaces are clear, and it requires crack-free aerogel blocks because it was designed to be lined with aluminized Mylar to induce specular reflection of Cherenkov light at the inside wall. Another advantage of molding is that an entire trapezoidal block could be made from only two aerogel parts. With cutting, four segments (i.e., upstream and downstream layers, each segmented into two parts) were needed to form a whole trapezoidal prism block for one module. That was because the dimensions of the synthesized square wet gel were limited by the size of the commercially available square mold and our autoclave. With molding, one trapezoidal block module could be made from two-layer semi-monolithic aerogel bars. We placed the long wet gel bars vertically in the autoclave because the autoclave was of sufficient depth (30 cm) (see Appendix A). Semi-monolithic aerogel blocks are also easier to fix to the counter housing. To secure an air light guide gap between the aerogel blocks and the counter roof (see Fig. \ref{fig:fig3}), the radiator is held to the bottom of the housing using fixtures. Because the radiator can then be held with a small number of fixtures, a semi-monolithic aerogel block is important, especially for the counter module in the six o'clock direction, where it is mounted roof-side down around the kaon stopping target. Considering also the results of the pilot production described in Appendix A, we finally opted for the molding technique for producing the aerogel blocks for use in the actual counters.
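The production-planning arithmetic of the preceding paragraphs can be reproduced in a few lines. The sketch below is ours and merely re-derives the quoted counts from the assumed 70\% crack-free yield and the autoclave capacities.
\begin{verbatim}
# Wet gel counts and supercritical-drying runs for the two options;
# all inputs are the figures quoted in the text.

def ceil_div(a, b):
    return -(-a // b)  # integer ceiling; avoids floating-point surprises

def plan(crack_free_needed, per_autoclave_load, yield_percent=70):
    wet_gels = ceil_div(crack_free_needed * 100, yield_percent)
    runs = ceil_div(wet_gels, per_autoclave_load)
    return wet_gels, runs

print(plan(24, 10))  # cutting: (35, 4) -> 35 wet gel tiles, 4 drying runs
print(plan(28, 14))  # molding: (40, 3) -> 40 wet gel bars, 3 drying runs
\end{verbatim}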
To produce trapezoidal prism aerogel bars, we devised custom-made molds. The mold was an open-topped box into which we could pour the prepared chemical solution. It was manufactured by welding smooth 10 mm-thick polypropylene plates using welding bars (Tokiwa Co., Ltd., Japan). To form the whole trapezoidal block from two-layer aerogel bars, we prepared ten copies each of two different sizes of mold: one for the upstream (small) and one for the downstream (large) layers (Fig. \ref{fig:fig4}). Both molds had the same length, but different widths of their trapezoidal sides. We expected the longitudinal shrinkage ratio of the aerogel bars to be 0.975 in the production process and designed the dimensions of the mold accordingly. The molds were manufactured with a dimensional accuracy of better than 1 mm. \subsection{Chemical preparation recipe} \begin{table*}[ht] \centering \caption{Chemical solutions used in wet gel synthesis for each aerogel bar.} \label{table:table1} \begin{tabular}{ll} \hline Chemicals & Dose [g] for upstream (downstream) layer \\ \hline Polymethoxy siloxane$^a$ & 39.05 (52.85) \\ Distilled water & 22.52 (30.48) \\ $N$,$N$-Dimethylformamide & 57.23 (77.46) \\ 28\% Ammonia solution & 0.24 (0.33) \\ \hline \multicolumn{2}{l} {$^a$Methyl silicate 51 (Fuso Chemical Co., Ltd., Japan).} \\ \end{tabular} \end{table*} Table \ref{table:table1} lists the preparation recipe of the raw chemicals for producing aerogel bars with $n$ = 1.08. This recipe allowed us to obtain samples with $n$ = 1.076 in the experimental production of square tiles (see Appendix A). The use of DMF as the solvent was helpful in attaining highly transparent aerogel with a high refractive index. Based on the performance of prototype aerogel samples as Cherenkov radiators measured with test beams \cite{cite14}, we chose this recipe for fabricating the aerogel blocks for the actual detector. Wet gels shrank slightly in the aging process following their synthesis and, depending on the refractive index (i.e., the density of the silica matrix), also during the supercritical drying process \cite{cite10}. The refractive index of the final products therefore depended slightly on their volume and shape. Apart from this shrinkage factor, the refractive index of the aerogel blocks was basically determined by the preparation recipe of the raw chemicals. Namely, the density of aerogel depends on the volume ratio of the solvent used in the wet gel synthesis. \subsection{Final production} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth,keepaspectratio]{nima2015_fig6.eps} \caption{Wet gel in the hydrophobic treatment process. The wet gel bars were placed in the punched trays and soaked in a solution for the hydrophobic treatment.} \label{fig:fig6} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth,keepaspectratio]{nima2015_fig7.eps} \caption{Punched trays placed in the autoclave of the supercritical carbon dioxide drying apparatus. Each tray contained two wet gel bars. The autoclave was filled with ethanol at the beginning of the drying operation.} \label{fig:fig7} \end{figure} From April to June 2014, we produced aerogel bars with $n$ = 1.08 for use in the actual detector. The whole production was divided into three lots, in which 25 and 20 wet gel bars were synthesized for the upstream and downstream layers, respectively. The molds were reused up to three times, being cleaned after each use.
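Because the recipe scales linearly with the wet-gel volume, the downstream doses in Table \ref{table:table1} follow from the upstream ones by a single volume factor. A minimal sketch (ours; dictionary keys and function name are hypothetical) illustrates this, using the solution volumes of 116 and 157 ml given below:

\begin{verbatim}
UPSTREAM_ML = 116.0
UPSTREAM_DOSES_G = {            # upstream column of Table 1
    "polymethoxy siloxane": 39.05,
    "distilled water":      22.52,
    "DMF":                  57.23,
    "28% ammonia solution":  0.24,
}

def scale_recipe(target_ml):
    f = target_ml / UPSTREAM_ML
    return {k: round(v * f, 2) for k, v in UPSTREAM_DOSES_G.items()}

print(scale_recipe(157.0))
# {'polymethoxy siloxane': 52.85, 'distilled water': 30.48,
#  'DMF': 77.46, '28% ammonia solution': 0.32}
\end{verbatim}

The first three entries reproduce the downstream column of Table \ref{table:table1} exactly; only the catalyst dose (0.32 g versus 0.33 g) deviates from pure volume scaling, consistent with the ammonia amount being tuned to the gelation time rather than to the volume.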
Immediately after mixing the chemical solutions shown in Table \ref{table:table1} in a beaker, the prepared solution was strongly stirred for 30 s, immediately poured into a mold, and covered with a lid. From the specific gravity of each chemical, the volumes of the solution were calculated to be 116 and 157 ml for the upstream and downstream layers, respectively, corresponding to a 20.5 mm wet gel thickness. At room temperature (22--26$^\circ $C), we predetermined the amount of ammonia solution used as a catalyst, so that the solution gelled approximately two minutes after the beginning of mixing. After a further two minutes, the surface of the wet gel synthesized in the mold was covered with 4--6 ml of methanol to prevent it from drying; it was then covered with a 0.3 mm-thick aluminum plate and aged in a sealed tank for one week. The detachment of the wet gel from the molds was the key process in wet gel molding. To facilitate the detachment, the wet gel in the mold was aged in the sealed tank filled with ethanol for an additional day. Soaking the wet gel in ethanol promoted its shrinkage. Moreover, by placing the mold upside down in the ethanol, the wet gel detached spontaneously from the mold under its own weight (Fig. \ref{fig:fig5}). \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth,keepaspectratio]{nima2015_fig8.eps} \caption{Crack-free aerogel bars for (left) downstream and (right) upstream layers obtained in the final production. Both aerogel bars had a longitudinal length of approximately 18 cm and a thickness of 2 cm.} \label{fig:fig8} \end{figure} After removing the mold, the wet gel was subjected to the hydrophobic treatment. The wet gel was transferred into a stainless steel punched tray specially manufactured for this study and soaked temporarily in another ethanol bath to prevent it from cracking due to drying. By adding hexamethyldisilazane into the ethanol bath (which was used for additional aging of the wet gel) and stirring, the solution for the hydrophobic treatment was prepared, with the volume ratio of the hydrophobic reagent to ethanol being approximately 1:9 \cite{cite10}. The wet gel was soaked in the solution for the hydrophobic treatment for 3--4 days, as shown in Fig. \ref{fig:fig6}. To reduce impurities other than ethanol in the wet gel, the hydrophobic reagent/ethanol solution filling the tank was replaced three times with new ethanol. Three operations of the supercritical carbon dioxide drying apparatus yielded 42 aerogel bars. The autoclave in the apparatus was filled with new ethanol, and the wet gel bars were placed there standing vertically on the punched trays, as shown in Fig. \ref{fig:fig7}. The punched trays were designed so that seven of them could be installed in the autoclave. Two wet gel bars (basically a combination of an upstream and a downstream bar) could be placed in each tray, i.e., 14 wet gel bars could be dried at once. The operation procedure was based on Ref. \cite{cite10}, except that the rate of temperature rise from 40 to 80$^\circ $C was reduced from 10$^\circ $C/h to 5$^\circ $C/h to suppress cracking of the aerogel bars. We emphasize that a slow pressure reduction rate (below 1 MPa/h) was also adopted at the end of the operation for the same purpose. \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth,keepaspectratio]{nima2015_fig9.eps} \caption{Setup for refractive index measurement. An aerogel bar was arranged in the laser beam line on the rotating table using the aluminum support.
The minimum deviation of the laser beam was measured on a screen approximately 1.8 m downstream of the rotating table.} \label{fig:fig9} \end{figure} \section{Optical characterization} \label{} \subsection{Cracking} We obtained 30 crack-free aerogel bars out of the 42 bars produced after the three supercritical drying operations. More specifically, 16 out of 24 aerogel bars and 14 out of 18 bars for the upstream (small) and downstream (large) layers, respectively, had no cracking. This secured the required number of aerogel blocks for the 12 counter modules and the two spare sets. The obtained aerogel bars had an impressive appearance, as shown in Fig. \ref{fig:fig8}. The number of aerogel bars with cracking from the first, second, and third supercritical-drying batches (not synthesis lots) was 5, 5, and 2 out of the 14 bars per batch, respectively. This slight batch dependence of cracking may be attributable to the manual operation (i.e., manual pressure control) of the drying apparatus. Cracking did not depend significantly on the aerogel size. \begin{figure}[ht] \centering \includegraphics[width=0.50\textwidth,keepaspectratio]{nima2015_fig10.eps} \caption{Distribution of transmission length at $\lambda $ = 400 nm (see Section 4.3) as a function of the refractive index. The refractive index was measured at $\lambda $ = 405 nm. A total of 30 aerogel bars for upstream and downstream layers are represented by circles and squares, respectively. For reference, the square aerogel tile is shown by a triangle.} \label{fig:fig10} \end{figure} \subsection{Refractive index} We measured the refractive index of the final production of aerogel bars using the laser Fraunhofer method described in Ref. \cite{cite10}. The method allowed us to determine the refractive index at the corner of the aerogel blocks by measuring the deviation of the laser path. A blue--violet semiconductor laser with $\lambda $ = 405 nm was used. When measuring the refractive index of square aerogel tiles, each corner of the tiles was generally irradiated with the laser beam. For the trapezoidal prism aerogel bars produced here, we exposed the right angle between the bottom surface and the trapezoidal side surface to the laser beam, as shown in Fig. \ref{fig:fig9}. The minimum distance between the laser path in the aerogel and the edge, defined as the side between the bottom surface and the trapezoidal side surface of the aerogel, was set to be 5 mm. The refractive indices measured at both ends of an aerogel bar were then averaged. We successfully obtained aerogel bars with the desired refractive index. The measured refractive index was distributed in a range between 1.0772 and 1.0825 for the 30 crack-free aerogel bars (Fig. \ref{fig:fig10}). As a reference, we fabricated a square tile with dimensions of approximately 9 $\times $ 9 $\times $ 2 cm$^3$ at the same time as the first lot. The refractive index of this reference aerogel was 1.0757, which was smaller than that of the trapezoidal prism aerogel. This suggests that the measured refractive index depended partially on the volume and the shape of the aerogel blocks, where the synthesis volumes of the upstream (downstream) trapezoidal wet gel bars and the square one were approximately 116 (157) and 187 ml, respectively. In general, the refractive index decreased with increasing synthesis volume of the wet gel blocks. The macroscopic shape and size of the wet gel could have an influence on the nanostructure formation and wet gel shrinkage in the synthesis and aging processes.
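To make the measurement geometry concrete, the conversion from the measured spot displacement to $n$ can be sketched as follows. This assumes the standard prism minimum-deviation relation $n = \sin[(\alpha+\delta_{\min})/2]/\sin(\alpha/2)$ applied to the right-angle corner ($\alpha = 90^{\circ}$), with the $\sim$1.8 m screen distance of Fig. \ref{fig:fig9}; the code is our illustration, not the analysis actually used:

\begin{verbatim}
import math

def refractive_index(deflection_mm, arm_m=1.8, apex_deg=90.0):
    # deviation angle from the spot displacement on the screen
    delta = math.atan2(deflection_mm / 1000.0, arm_m)
    a = math.radians(apex_deg)
    return math.sin((a + delta) / 2.0) / math.sin(a / 2.0)

# n = 1.08 corresponds to a minimum deviation of ~9.6 degrees,
# i.e. a spot displacement of ~0.30 m at 1.8 m:
print(round(refractive_index(304.0), 4))   # ~1.08
\end{verbatim}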
\begin{figure}[th] \centering \includegraphics[width=0.45\textwidth,keepaspectratio]{nima2015_fig11.eps} \caption{Measurement setup in the light-shielded chamber of the Hitachi U-4100 spectrophotometer. Light transmission along the aerogel thickness direction was measured. The bottom surface of the aerogel (see Fig. \ref{fig:fig8}), corresponding to the upstream side in the AC counter, was placed at the upstream side of the spectrophotometer. The distance between the aerogel's downstream surface and the entrance of a light-integrating sphere was set to be 10 cm.} \label{fig:fig11} \end{figure} \subsection{Transparency} We measured the transmittance of the produced aerogel bars at wavelengths ranging from 200 to 800 nm using a spectrophotometer U-4100 (Hitachi, Ltd., Japan). Fig. \ref{fig:fig11} shows the measurement setup in the light-shielded chamber of the spectrophotometer. To detect as little of the light scattered in the aerogel as possible, the distance between the aerogel's downstream surface and the entrance of a light-integrating sphere was set to be 10 cm \cite{cite10}. Fig. \ref{fig:fig12} shows the measured transmittance of a typical aerogel bar from the final production as a function of wavelength. The mean transmittance at $\lambda $ = 400 nm, through a thickness of approximately 20 mm, was 38.4\% for the 30 crack-free aerogel bars. When a combination of upstream and downstream aerogel bars with a total thickness of approximately 40 mm was arranged, the transmittance was 13.1\%. \begin{figure}[hb] \centering \includegraphics[width=0.50\textwidth,keepaspectratio]{nima2015_fig12.eps} \caption{Transmittance curve for a typical aerogel ($t$ = 19.9 mm) as a function of wavelength. Circles show the transmittance measured every 10 nm, and the solid curve shows the fit with $T=A\exp(-Ct/\lambda ^4)$. The parameters obtained from the fitting were $A$ = 0.988 $\pm $ 0.001 and $C$ = 0.01220 $\pm $ 0.00005 $\mu $m$^4$/cm.} \label{fig:fig12} \end{figure} Light transmission in aerogel is known to be dominated by Rayleigh scattering: \[ T(\lambda , t)=A\exp(-Ct/\lambda ^4), \] where $T$ is the transmittance, $A$ is the amplitude, and $C$ is the ``clarity coefficient,'' usually measured in units of $\mu $m$^4$/cm. The clarity coefficient obtained from the fitting was $C$ = 0.01220 $\pm $ 0.00005 $\mu $m$^4$/cm for the above typical aerogel sample (Fig. \ref{fig:fig12}). The transmission length, defined as $\Lambda _{\rm T}(\lambda ) = -t/\ln T(\lambda )$, of the aerogel bars at $\lambda $ = 400 nm was reasonable considering the refractive index ($n \sim $ 1.08). (For $A \approx 1$, $\Lambda _{\rm T} \approx \lambda ^{4}/C$; the fit above gives $\Lambda _{\rm T}(400~\text{nm}) \approx 0.0256/0.0122 \approx 2.1$ cm, consistent with the values quoted below.) The mean transmission length of the 30 aerogel bars at $\lambda $ = 400 nm was 20.8 mm (see Fig. \ref{fig:fig10}). This value is consistent with the transmission length--refractive index scatter graph shown in Ref. \cite{cite10}. The reference square aerogel tile had $\Lambda _{\rm T}$ = 23.5 mm. \subsection{Dimensions} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth,keepaspectratio]{nima2015_fig13.eps} \caption{Upstream aerogel bar stacked on top of a downstream one. The longitudinal length and total thickness are approximately 18 and 4 cm, respectively.} \label{fig:fig13} \end{figure} The dimensions of the produced aerogel bars met our requirements. The longitudinal length of the bars ranged from 176.3 to 177.8 mm for the 30 crack-free aerogel bars, which was consistent with our expectation (i.e., 178 mm).
The lower- (upper-) base length of the cross-sectional trapezoid of the upstream (downstream) layer was 24.0--24.5 (45.5--46.4) mm, which was in good agreement with the requirements (i.e., 24.8 (46.2) mm). As shown in Fig. \ref{fig:fig13}, the upstream aerogel bar was flush with the downstream one; thus, they form a radiator unit in the counter module. The mean longitudinal shrinkage ratio was 0.972, close to our expectation of 0.975, where the longitudinal length of the mold was taken to be 182.25 mm based on the actual measurement. Fig. \ref{fig:fig14} shows the refractive index as a function of the longitudinal shrinkage ratio. There is a tendency for the refractive index to increase with decreasing longitudinal shrinkage ratio. In addition, the refractive index depended on the wet gel lot in which a bar was synthesized, especially between the first and third lots. This could be due to the difference in room temperature during the production process: the third lot was fabricated in a slightly higher-temperature environment (24--26$^\circ $C) than the first lot (21--24$^\circ $C). \begin{figure}[t] \centering \includegraphics[width=0.50\textwidth,keepaspectratio]{nima2015_fig14.eps} \caption{Refractive index measured at $\lambda $ = 405 nm as a function of longitudinal shrinkage ratio for each of the 30 crack-free aerogel bars and one reference square tile. The aerogel bars are classified based on their wet-gel synthesis lot, indicated by different symbols.} \label{fig:fig14} \end{figure} \section{Conclusion} \label{} We have developed hydrophobic silica aerogel with $n$ = 1.08 to be used as a radiator in threshold-type Cherenkov counters. These counters are meant to separate positrons from positive muons with a momentum of approximately 240 MeV/$c$ produced by kaon decays in the J-PARC TREK/E36 experiment. The requirements for the Cherenkov radiator were determined by the results of test beam experiments and the design of the counter configuration. We have described a method for producing aerogel bars with a trapezoidal cross-section and a length of 18 cm to fit the barrel region surrounding the kaon stopping target of the TREK/E36 detector system. Production of the aerogel bars for the actual detector, made up of 12 counter modules, was successfully performed by dividing each radiator volume into two layers with a total thickness of 4 cm. The block dimensions and optical parameters, including a transmission length at 400 nm wavelength of approximately 20 mm, have been measured and found suitable for use in the actual detector. \section*{Acknowledgments} \label{} The authors are grateful to the members of the J-PARC TREK/E36 Collaboration for fruitful discussions on aerogel development. We are also grateful to Dr. H. Nanjo of Kyoto University for his assistance in designing the aerogel mold. We performed the optical measurements of aerogel at KEK; we are thankful to Prof. I. Adachi for his support. We are also thankful to the Venture Business Laboratory at Chiba University for offering room to manufacture the aerogel. This work was supported by a Grant-in-Aid for Scientific Research (B) (No. 25287064) from the Japan Society for the Promotion of Science (JSPS). M. Tabata was supported in part by the Space Plasma Laboratory at the Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency (JAXA).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{intro} Quantum quenches have proven to be an interesting tool to understand how non-equilibrium systems thermalize. In a quantum quench, we start with a ground state of some Hamiltonian $H_{0}$ and at time $t=0$ we change the Hamiltonian from $H_{0}$ to $H$. The state for $t > 0$ evolves according to the new Hamiltonian, $H$, and will have a non-trivial time dependence. An interesting example of a quantum quench is when $H_{0}$ is the Hamiltonian of some gapped theory whereas $H$ is the Hamiltonian of a CFT. In this case, it was argued in \cite{calabrese-cardy,gq-2} that we can model the quantum quench by replacing the state at $t=0$ by \begin{align} \ket{\Psi(t=0)} \, = \, e^{-\frac{\beta}{4}H} \, \ket{\mathcal{B}} \, ,\label{eq-bdy-quench} \end{align} where $\ket{\mathcal{B}}$ is a conformal boundary state and $1/\beta$ corresponds to the mass gap of the original theory. The state at time $t \ge 0$ is then given by \begin{align} \ket{\Psi(t)} \, = \, e^{-\left(it + \frac{\beta}{4}\right)H} \, \ket{\mathcal{B}} \, . \end{align} Even though the state of the whole system remains pure (under unitary evolution), we expect the reduced state of some small subsystem to thermalize at late times. This is exactly what was observed in the behavior of the correlation functions in \cite{calabrese-cardy,gq-2}. Further evidence of (local) thermalization comes from the time evolution of the entanglement entropy of a subregion $A$ of size $L$, which is defined as \begin{align} S_{A}(t) \, = \, - \, \text{tr} \, \rho_{A}(t) \, \log \rho_{A}(t) \, \quad\quad \text{where} \quad\quad \, \rho_{A}(t) \, = \, \text{tr}_{\bar{A}} \ket{\Psi(t)}\bra{\Psi(t)} \, . \end{align} In the scaling limit \begin{align} t \, , L \, \gg \beta \, ,\label{eq-scaling} \end{align} it was found in \cite{calabrese-cardy,gq-2} that the time evolution of the entanglement entropy, for all CFTs, depends only on the central charge, $c$, of the CFT and the parameter $\beta$ of the initial state. In particular, it was found that the entanglement entropy of region $A$ as a function of time is given by \begin{align} S_{A}(t) \, = \, S_{A}^{\text{vac}} \, + \, 2 s_{eq} \times \label{eq-sa} \begin{cases} \, t \quad&\text{for $t <\frac{L}{2}$} \, ,\\[0ex] \, \frac{L}{2} \quad&\text{for $t>\frac{L}{2}$} \, , \end{cases} \end{align} where $S_{A}^{\text{vac}}$ is the vacuum entanglement entropy at $t=0$ which contains the usual ultraviolet divergence \cite{Holzhey:1994we}, and \begin{align} s_{eq} \, \equiv \, \frac{\pi c}{3\beta} \label{eq-seq} \end{align} is the thermal entropy density at temperature $1/\beta$. A simpler model for studying time dependence and (local) thermalization, called the `thermal double model', was introduced in \cite{hartman-maldacena}. In this model, we take two copies of our CFT, \textit{i.e.} CFT$_{1} \, \otimes $ CFT$_{2} \, $, and consider the following entangled state: \begin{align} \ket{\Psi_{\beta}} \, = \, \frac{1}{\mathcal{N}_{\beta}} \, \sum_{n} \, e^{-\beta E_{n}/2} \, \ket{n}_{1}\otimes\ket{n^{*}}_{2} \, , \label{eq-tfd} \end{align} where $\ket{n}$ are the energy eigenstates of the original CFT, $\ket{n^{*}}$ is the image of $\ket{n}$ under the antiunitary CPT operator, and $E_{n}$ are the corresponding energy eigenvalues. Furthermore, we demand that the time evolution is generated by $H_{1} + H_{2}$. As a result, the state in Eq.~\eqref{eq-tfd} evolves in time. Now suppose that the subregion $A$ consists of two identical intervals of size $L$, one in each copy of the CFT.
The time dependence of the entanglement entropy for region $A$ in this model was studied in \cite{hartman-maldacena}. It was found that this time dependence, up to a factor of $2$, is the same as the time dependence in Eq.~\eqref{eq-sa}. The quantitative behavior of $S_{A}(t)$ in these two models, that is, the linear growth for $t < L/2$ and the saturation for $t>L/2$, can be described in terms of the propagation of entangled pairs of quasi-particles \cite{calabrese-cardy,gq-2}. Assume that EPR pairs of entangled quasi-particles are produced uniformly everywhere at $t=0$. Each quasi-particle and its entangled partner move in opposite directions with (instantaneous) speed $v=1$. The entanglement entropy of region $A$ at any time $t$ is proportional to the number of EPR pairs for which one entangled partner is inside region $A$ at time $t$ whereas the other is outside $A$. Now consider two disconnected subregions, $A$ and $B$. The entanglement entropy of disconnected regions is not completely fixed by conformal symmetry and hence depends on the details of the CFT \cite{gq-11,gq-12,hartman-2}. Nevertheless, it was shown in \cite{hartman-2} that the quasi-particle picture correctly captures the evolution of the entanglement entropy of disconnected regions for a certain class of theories. In these theories, the asymptotic number of conserved currents is approximately equal to the total number of states. In other words, the central charge of these theories is $c \, = \, c_{\text{current}}$, where $c_{\text{current}}$ is an effective central charge of the chiral sector of the theory. For this reason, such theories were called `current dominated' in \cite{hartman-2}. Examples of current dominated theories include all rational CFTs and some non-rational CFTs \cite{hartman-2}. Another class of CFTs that we will be interested in is holographic theories. These theories have $c \gg c_{\text{current}} \sim 1$ and hence are not current dominated. Indeed, the quasi-particle picture is known to be incorrect for these theories \cite{gq-11,gq-12,hartman-2}. The time dependence of entanglement entropy for these theories has been studied in \cite{gq-3,Albash:2010mv,hartman-2,gq-4,gq-5,gq-6,gq-7,gq-9,gq-10,gq-11,gq-12,gq-13,gq-14,gq-15,gq-16,gq-17,gq-18,gq-20,gq-22}. The quantitative behavior of the entanglement entropy in holographic theories can be described in terms of the spread of an `entanglement tsunami wave' \cite{gq-9,gq-10,gq-12} or in terms of a `minimal membrane' \cite{Mezei-1,Mezei-3}. It is also interesting to study how the entanglement or correlation between two disconnected regions, $A$ and $B$, changes following a quantum quench. The entanglement entropy $S_{A\cup B}(t)$ is not a useful quantity for this purpose. This is because $S_{A\cup B}(t)$ measures the entanglement of $A$ and $B$ with the rest of the system instead of measuring the entanglement between $A$ and $B$. One possible quantity that captures the correlation between two disconnected regions is the mutual information, which is defined as \begin{align} I(A|B) \, \equiv \, S_{A} + S_{B} - S_{A\cup B} \, . \end{align} For current dominated theories, the time evolution of the mutual information can also be described in terms of propagating quasi-particles \cite{10.21468/SciPostPhys.4.3.017}. In particular, the mutual information at any time is proportional to the number of EPR pairs for which one entangled partner is in region $A$ whereas the other is in region $B$.
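This counting is simple enough to implement numerically. The sketch below is a minimal illustration of the quasi-particle picture (ours, with hypothetical function names; units chosen so that $v = 1$): it estimates $I(A|B)(t)$ for two intervals of size $L$ separated by $\ell$ in the thermal double model and reproduces the piecewise-linear profile of Eq.~\eqref{eq-mi} given below.

\begin{verbatim}
import numpy as np

def mutual_information(t, L, ell, s_eq=1.0, n=200001):
    # pairs created uniformly at t = 0; partners at x - t and x + t (v = 1)
    A = (-L - ell/2, -ell/2)
    B = (ell/2, L + ell/2)
    span = 2*L + ell + 2*t
    x, dx = np.linspace(-span, span, n, retstep=True)
    in_A = lambda y: (A[0] <= y) & (y <= A[1])
    in_B = lambda y: (B[0] <= y) & (y <= B[1])
    # count pairs with one partner in A and the other in B
    hits = (in_A(x - t) & in_B(x + t)) | (in_B(x - t) & in_A(x + t))
    # factor 2: identical pairs in each of the two CFT copies
    return 2 * s_eq * hits.sum() * dx

# peak value 2*s_eq*L at t = (L + ell)/2, cf. Eq. (eq-mi):
print(mutual_information(t=1.5, L=2.0, ell=1.0))   # ~4.0
\end{verbatim}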
For concreteness, suppose that regions $A$ and $B$ are of equal size $L$ and are separated by a distance $\ell$. The mutual information in the thermal double model, according to the quasi-particle picture, is \cite{10.21468/SciPostPhys.4.3.017} \begin{align} I(A|B)(t) \, = \, 4 s_{eq} \, \times \label{eq-mi} \begin{cases} \, 0 \quad&\text{for $\quad t <\frac{\ell}{2}$} \, ,\\[0ex] \, t - \frac{\ell}{2} \quad&\text{for $\quad \frac{\ell}{2} < t < \frac{L+\ell}{2}$} \, ,\\ \, L + \frac{\ell}{2} - t \quad&\text{for $\quad \frac{L+\ell}{2} < t < \frac{2L+\ell}{2}$} \, , \\ \, 0 \quad&\text{for $\quad t >\frac{2L+\ell}{2}$} \, . \end{cases} \end{align} This result is true irrespective of whether $L > \ell$ or $L<\ell$. The time evolution of the mutual information of two disconnected regions has also been studied for holographic CFTs. Unlike the mutual information in current dominated theories, the mutual information for holographic theories in the scaling limit ($\beta \ll t,L,\ell$) depends on whether $L>\ell$ or $L<\ell$. For $L<\ell$, the mutual information vanishes for all time, whereas for $L>\ell$, the mutual information in the thermal double model is given by \cite{gq-6} \begin{align} I(A|B)(t) \, = \, 4 s_{eq} \, \times \label{eq-mi-holo} \begin{cases} \, 0 \quad&\text{for $\quad t <\frac{\ell}{2}$} \, ,\\[0ex] \, t - \frac{\ell}{2} \quad&\text{for $\quad \frac{\ell}{2} < t < \frac{L}{2}$} \, ,\\ \, L - \frac{\ell}{2} - t \quad&\text{for $\quad \frac{L}{2} < t < \frac{2L-\ell}{2}$} \, , \\ \, 0 \quad&\text{for $\quad t >\frac{2L-\ell}{2}$} \, . \end{cases} \end{align} Another quantity that captures the entanglement between two disconnected regions is the logarithmic negativity, which is defined as \cite{Vidal:2002zz} \begin{align} \mathcal{E}(A|B) \, \equiv \, \log \, \text{tr} \, |\rho_{AB}^{T_{B}}| \, , \end{align} where $\rho_{AB}^{T_{B}}$ denotes the partial transpose with respect to region $B$. The time evolution of the negativity after a quench was studied in \cite{Coser:2014gsa} for theories for which we expect the quasi-particle picture to be valid. It was found that the logarithmic negativity at any instant of time is proportional to the mutual information. More precisely, the negativity is given by \begin{align} \mathcal{E}(A|B)(t) \, = \, \frac{3}{4} \, I(A|B)(t) \, . \end{align} Recently, a new measure of entanglement between two disconnected regions, called the reflected entropy, was introduced in \cite{faulkner}. This involves finding the `canonical' purification of a mixed state $\rho$. Consider a mixed state $\rho$ on $\mathcal{H}$, written in its eigenbasis, \begin{equation} \rho \, = \, \sum_{a} \rho_{a} \ket{\rho_{a}}\bra{\rho_{a}} \, . \end{equation} The canonical purification of this state is denoted by $\ket{\sqrt{\rho}} \in \mathcal{H}\otimes\mathcal{H}'$ and is given by \begin{align} \ket{\sqrt{\rho}} \, = \, \sum_{a} \sqrt{\rho_{a}} \ket{\rho_{a}}\otimes\ket{\rho_{a}} \, . \end{align} For example, the canonical purification of a thermal state is the thermofield double state. Given a density matrix $\rho_{AB}$ on $\mathcal{H}_{A}\otimes\mathcal{H}_{B}$ and its canonical purification $\ket{\sqrt{\rho_{AB}}} \, \in \, \mathcal{H}_{A}\otimes\mathcal{H}'_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}'_{B} $, the reflected entropy is defined as \begin{align} S_{R}(A|B) \, \equiv \, - \, \text{tr} \, \rho_{AA'} \, \log \rho_{AA'} \, \quad\quad \text{where} \quad\quad \, \rho_{AA'} \, = \, \text{tr}_{BB'} \ket{\sqrt{\rho_{AB}}}\bra{\sqrt{\rho_{AB}}} \, .
\end{align} Our goal in this paper is to study the time evolution of the reflected entropy in rational and holographic CFTs\footnote{Time evolution of the reflected entropy after a local quench was studied in \cite{sr-local-1,sr-local-2}.}. Owing to its simplicity, we use the thermal double model to study this time evolution. For rational\footnote{Though, as we will discuss in Sec.~(\ref{sec-case-2}), our results for rational CFTs are valid for any current dominated CFT.} CFTs, we find that the time dependence of the reflected entropy of two disconnected regions (with an arbitrary choice of $L$ and $\ell$) in the scaling limit is the same as the time dependence of the mutual information. That is, \begin{align} S_{R}(A|B)(t) \, = \, I(A|B)(t) \, .\label{eq-res} \end{align} For holographic theories, we focus only on the limit $L\to\infty$ with finite $\ell$. In this case, we find that the time evolution of the reflected entropy is \begin{align} S_{R}(A|B) \, = \, 4 s_{eq} \, \times \label{eq-sr-holo-fin-intro} \begin{cases} \, 0 \quad&\text{for $\quad t <\frac{\ell}{2}$} \, ,\\[0ex] \, t - \frac{\ell}{2} \quad&\text{for $\quad t > \frac{\ell}{2}$} \, . \end{cases} \end{align} Various properties of the reflected entropy were derived in \cite{faulkner}. One such property is that the reflected entropy can never be less than the mutual information. That is, \begin{align} S_{R}(A|B) \, \ge \, I(A|B) \, . \label{eq-sr-bound} \end{align} It was recently shown in \cite{Akers:2019gcv} that if a pure state $\ket{\psi_{ABC}} \in \mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{C}$ has only bipartite entanglement, then the bound in Eq.~\eqref{eq-sr-bound} is saturated. If we take the quasi-particle picture for the evolution of entanglement seriously, then it suggests that the time-dependent state, in the scaling limit, has a bipartite entanglement structure. Our result in Eq.~\eqref{eq-res} for rational theories provides some more evidence for this bipartite entanglement structure. (Note that our result does not prove the bipartite entanglement structure, as GHZ states are also known to saturate the bound in Eq.~\eqref{eq-sr-bound}.) The rest of this paper is organized as follows. In Sec.~(\ref{sec-sr-cft}), we review the tools that we will use in this paper to calculate the time dependence of the reflected entropy. In particular, we review the replica trick for computing the reflected entropy in Sec.~(\ref{sec-rep-trick}) and the holographic dual of the reflected entropy in Sec.~(\ref{sec-sr-ent-wedge}). We perform the main calculations in Sec.~(\ref{sec-quench-cft}) and in Sec.~(\ref{sec-holo-tdm}). In Sec.~(\ref{sec-quench-cft}), we mostly focus on rational CFTs and use the replica trick to calculate the time dependence of the reflected entropy in a thermal double model. In Sec.~(\ref{sec-holo-tdm}), we focus on holographic CFTs and use the holographic formula for the reflected entropy to calculate its time dependence. We end with a summary and some possible extensions of our work in Sec.~(\ref{sec-disc}). \section{Reflected entropy in CFTs} \label{sec-sr-cft} In this section, we briefly review the reflected entropy. In Sec.~(\ref{sec-rep-trick}), we review the replica trick approach to computing the reflected entropy. Then in Sec.~(\ref{sec-sr-ent-wedge}), we discuss the holographic dual of the reflected entropy. \subsection{Replica trick in $(1+1)$-dimensions} \label{sec-rep-trick} A replica trick for computing the reflected entropy was developed in \cite{faulkner}.
This involves writing the reflected entropy in terms of correlation functions of certain codimension-$2$ twist operators inserted at the boundaries of regions $A$ and $B$. This trick is especially powerful in $(1+1)$ dimensions, where the twist operators become local operators inserted at the end points of regions $A$ and $B$. In this section, we merely summarize this method of computing the reflected entropy in $(1+1)$-dimensional CFTs and refer the reader to \cite{faulkner} for more details. Using the replica trick, the reflected entropy is given by\footnote{The order of limits may not commute. The correct order, as argued in \cite{sr-local-2}, is to first take $n\to 1$ and then take $m\to 1$.} \begin{align} S_{R}(A|B) \, = \, \lim_{m\to 1} \, \lim_{n\to 1} \, \frac{1}{1-n} \, \log \left(\frac{Z_{n,m}}{(Z_{1,m})^{n}}\right) \, , \label{eq-rep-sr} \end{align} where $Z_{n,m}$ is a correlation function of twist operators on CFT$^{\otimes mn}$. In particular, if we denote the end points of region $A$ by $a_{1}$ and $a_{2}$ and those of region $B$ by $b_{1}$ and $b_{2}$, then \begin{align} Z_{n,m} \, = \, \big\langle \, \sigma_{A}(a_{1}) \, \bar{\sigma}_{A}(a_{2}) \, \sigma_{B}(b_{1}) \, \bar{\sigma}_{B}(b_{2}) \, \big\rangle_{CFT^{\otimes mn}} \, . \label{eq-znm} \end{align} The conformal dimensions of these twist operators are \cite{faulkner} \begin{align} h_{A} \, = \, h_{B} \, = \, n \, h_{m} \, , \end{align} where \begin{align} h_{m} \, = \, \frac{c}{24} \, \frac{m^{2}-1}{m} \label{eq-twist} \end{align} is the conformal dimension of the usual twist operators used in the calculation of the entanglement entropy \cite{Calabrese:2009qy}. We will use Eq.~\eqref{eq-rep-sr} in Sec.~(\ref{sec-quench-cft}) to study the time evolution of the reflected entropy in rational CFTs. We will find that the time-dependent reflected entropy, in the scaling limit, is governed by various operator product expansion (OPE) limits of the twist operators in Eq.~\eqref{eq-znm}. Therefore, we now review the OPEs of the twist operators in Eq.~\eqref{eq-znm}. The OPE of these operators, as discussed in \cite{faulkner}, is given by the following fusion rules: \begin{align} \sigma_{A} \, \bar{\sigma}_{A} \, \to \, \mathbf{1} \, \quad\quad\quad\quad \sigma_{B} \, \bar{\sigma}_{B} \, \to \, \mathbf{1} \, \quad\quad\quad\quad \sigma_{A} \, \bar{\sigma}_{B} \, \to \, \sigma_{AB} \, . \label{eq-ope} \end{align} The conformal dimension of $\sigma_{AB}$ is given by \begin{align} h_{AB} \, = \, 2 \, h_{n} \, , \end{align} where $h_{n}$ is defined as in Eq.~\eqref{eq-twist}. Moreover, the OPE coefficient for the last fusion rule in Eq.~\eqref{eq-ope} is \begin{align} C_{n,m} \, = \, (2m)^{-4h_{n}} \, . \end{align} This finishes our brief review of the replica trick method of computing the reflected entropy. Before we apply this method in Sec.~(\ref{sec-quench-cft}), we discuss the bulk dual of the reflected entropy for holographic CFTs. \subsection{Holographic dual of reflected entropy} \label{sec-sr-ent-wedge} In the AdS/CFT correspondence, the bulk dual of a boundary subregion is the entanglement wedge. The entanglement wedge corresponding to a boundary subregion is the bulk domain of dependence of a spacelike slice between that boundary subregion and its corresponding Hubeny-Rangamani-Takayanagi (HRT) surface. When the boundary subregion is the union of two disconnected subregions, the entanglement wedge can either be `connected' or `disconnected'.
The connectedness of the entanglement wedge can be quantified using a bulk quantity, called the `entanglement wedge cross-section', which was defined in \cite{eop-1,eop-2}. In the following, we review this bulk quantity and its relation to the reflected entropy. The entanglement wedge cross-section for boundary regions $A$ and $B$, $E_{W}(A|B)$, can be defined as follows \cite{eop-1,eop-2}: Let us denote the HRT surface corresponding to the boundary region $A\cup B$ by $m_{AB}$ and the restriction of the entanglement wedge to some time slice by $M_{AB}$. Then the region $M_{AB}$ is such that \begin{align} \partial M_{AB} \, = \, A \, \cup \, B \, \cup \, m_{AB} \, . \end{align} Now let us divide $m_{AB}$ into two parts as \begin{align} m_{AB} \, = \, m_{AB}^{(A)} \, \cup \, m_{AB}^{(B)} \, .\label{eq-div} \end{align} With this division, we define the entanglement wedge cross-section, $E_{W}(A|B)$, as \begin{align} E_{W}(A|B) \, = \, \min_{m_{AB}^{(A)}} \, \frac{ \, \text{Area}\Big( \Sigma_{AB} \Big) \,}{4G} \, ,\label{eq-ent-wc} \end{align} where the minimization is over all possible divisions in Eq.~\eqref{eq-div} and where $\Sigma_{AB} \subset M_{AB}$ is such that \begin{align} \partial \Sigma_{AB} \, = \, \partial \left(A \cup m_{AB}^{(A)}\right) \, = \, \partial \left(B \cup m_{AB}^{(B)}\right) \, , \end{align} and it is homologous to $A\cup m_{AB}^{(A)}$ and $B\cup m_{AB}^{(B)}$. By construction, $E_{W}(A|B)$ trivially vanishes when the entanglement wedge of $A \cup B$ is disconnected. In this sense, it is a measure of how connected the entanglement wedge is. It was argued and derived using the holographic replica trick in \cite{faulkner} that the boundary dual of the entanglement wedge cross-section is the reflected entropy. The precise relation between these quantities is \cite{faulkner} \begin{align} S_{R}(A|B) \, = \, 2 \, E_{W}(A|B) \, .\label{eq-ref-ent-holo} \end{align} This relation is valid in any dimension and for any holographic state. This provides a useful tool to compute the reflected entropy for holographic states. We will use this formula in Sec.~(\ref{sec-holo-tdm}) to study the time-dependent reflected entropy in a holographic thermal double model. \section{Time dependence of reflected entropy in rational CFTs} \label{sec-quench-cft} Consider a doubled copy of a $(1+1)$-d CFT in the thermofield double state given in Eq.~\eqref{eq-tfd}. Let $A_{1}$ and $B_{1}$ be two disconnected regions in CFT$_{1}$, and let $A_{2}$ and $B_{2}$ be their identical counterparts in CFT$_{2}$. We take regions $A$ and $B$ to be the unions $A_{1}\cup A_{2}$ and $B_{1}\cup B_{2}$, respectively. The reduced density matrix of regions $A$ and $B$ can be constructed as a Euclidean path-integral over an infinite cylinder of circumference $\beta$ with open cuts above and below regions $A$ and $B$ \cite{hartman-maldacena}. Now according to Eq.~\eqref{eq-rep-sr} and Eq.~\eqref{eq-znm}, the reflected entropy in the thermal double model is given in terms of the correlation function of twist operators on a cylinder. We follow \cite{hartman-maldacena,hartman-2} and insert the operators at the end points of regions $A$ and $B$ at arbitrary Euclidean times. Then we analytically continue to Lorentzian time to get the time dependence of the reflected entropy.
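As a concrete check of the definition reviewed above, it may help to see $S_{R}$ at work in a finite-dimensional setting. The following sketch is ours and is not specific to any CFT: it builds the canonical purification $\ket{\sqrt{\rho}}$ from the eigendecomposition of $\rho_{AB}$ and evaluates $S_{R}(A|B)$. For a Bell pair it returns $2\ln 2$, saturating the bound $S_{R} \ge I$, as expected for purely bipartite entanglement.

\begin{verbatim}
import numpy as np

def reflected_entropy(rho, dA, dB):
    # matrix square root of rho via its eigendecomposition
    w, v = np.linalg.eigh(rho)
    sqrt_rho = (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T
    # |sqrt(rho)> lives in H x H'; reshape to indices (iA, iB, jA, jB)
    M = sqrt_rho.reshape(dA, dB, dA, dB)
    # rho_{AA'} = Tr_{BB'} |sqrt(rho)><sqrt(rho)|
    rho_AAp = np.einsum('ibjc,kbmc->ijkm', M, M.conj())
    lam = np.linalg.eigvalsh(rho_AAp.reshape(dA*dA, dA*dA))
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

bell = np.zeros((4, 4))
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
print(reflected_entropy(bell, 2, 2))   # 2*ln(2) ~ 1.3863
\end{verbatim}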
If we take regions $A_{1}$ and $A_{2}$ to be intervals $[x_{1},x_{2}]$ and regions $B_{1}$ and $B_{2}$ to be intervals $[x_{3},x_{4}]$, then the reflected entropy at a given time is given in terms of the following correlation function: \begin{align} Z_{n,m}^{\text{cyl}} \, = \, \big\langle \, \sigma_{A}(z_{1},\bar{z}_{1}) \bar{\sigma}_{A}(z_{2},\bar{z}_{2}) \sigma_{B}(z_{3},\bar{z}_{3}) \bar{\sigma}_{B}(z_{4},\bar{z}_{4}) \sigma_{B}(z_{5},\bar{z}_{5}) \bar{\sigma}_{B}(z_{6},\bar{z}_{6}) \sigma_{A}(z_{7},\bar{z}_{7}) \bar{\sigma}_{A}(z_{8},\bar{z}_{8}) \, \big\rangle^{\text{cyl}}_{CFT^{\otimes mn}} \, . \label{eq-znm-cyl} \end{align} In this correlation function, \begin{align} z_{i} \, = \, x_{i} - t - i {\beta}/{4} \, , \quad\quad\quad\quad \bar{z}_{i} \, = \, x_{i} + t + i {\beta}/{4} \, ,\label{eq-zi} \end{align} for $i \, = \, \{1,2,3,4\}$, and \begin{align} z_{i} \, = \, \bar{z}_{9-i} \, \quad\quad\quad\quad \bar{z}_{i} \, = \, z_{9-i} \, ,\label{eq-zi-2} \end{align} for $i \, = \, \{5,6,7,8\}$. Note that $z^{*}_{i} \ne \bar{z}_{i}$ because of the analytic continuation to Lorentzian time discussed above. An infinite cylinder can be mapped to the complex plane using the following conformal transformation: \begin{align} w = \exp\left(2\pi z/\beta\right) \, \quad\quad\quad \bar{w}= \exp\left(2\pi \bar{z}/\beta\right) \, . \end{align} Using this conformal transformation, we write the correlation function in Eq.~\eqref{eq-znm-cyl} as a correlation function on the plane. This yields \begin{align} Z_{n,m}^{\text{cyl}} \, = \, \left(\frac{2\pi}{\beta}\right)^{16 n h_{m}} \, \big|w_{1}w_{2}w_{3}w_{4}w_{5}w_{6}w_{7}w_{8}\big|^{2nh_{m}} \, Z_{n,m}^{\text{plane}} \, , \label{eq-znm-cyl-2} \end{align} where $Z_{n,m}^{\text{plane}}$ is the following correlation function on the plane: \begin{align} \big\langle \, \sigma_{A}(w_{1},\bar{w}_{1}) \bar{\sigma}_{A}(w_{2},\bar{w}_{2}) \sigma_{B}(w_{3},\bar{w}_{3}) \bar{\sigma}_{B}(w_{4},\bar{w}_{4}) \sigma_{B}(w_{5},\bar{w}_{5}) \bar{\sigma}_{B}(w_{6},\bar{w}_{6}) \sigma_{A}(w_{7},\bar{w}_{7}) \bar{\sigma}_{A}(w_{8},\bar{w}_{8}) \, \big\rangle^{\text{plane}}_{CFT^{\otimes mn}} \, . \label{eq-znm-plane} \end{align} Now recall from Eq.~\eqref{eq-rep-sr} that the reflected entropy is given in terms of the ratio $Z_{n,m}^{\text{cyl}}/\big(Z_{1,m}^{\text{cyl}}\big)^{n}$. We find that the conformal factor in Eq.~\eqref{eq-znm-cyl-2} drops out from this ratio, and we get \begin{align} \frac{Z_{n,m}^{\text{cyl}}}{\big(Z_{1,m}^{\text{cyl}}\big)^{n}} \, = \, \frac{Z_{n,m}^{\text{plane}}}{\big(Z_{1,m}^{\text{plane}}\big)^{n}} \, . \end{align} This is an interesting observation as it implies that the conformal factor in Eq.~\eqref{eq-znm-cyl-2} does not contribute to the reflected entropy\footnote{In fact, the conformal factor drops out from Eq.~\eqref{eq-rep-sr} even before taking the replica limit. This means that the conformal factor does not contribute to the R\'enyi generalization of the reflected entropy either.}. Moreover, the reflected entropy in the thermal double model is given by \begin{align} S_{R}(A|B)(t) \, = \, \lim_{m\to 1} \, \lim_{n\to 1} \, \frac{1}{1-n} \, \log \left(\frac{Z^{\text{plane}}_{n,m}}{(Z^{\text{plane}}_{1,m})^{n}}\right) \, . \label{eq-sr-tdm} \end{align} In the following, we compute the time dependence of $Z_{n,m}^{\text{plane}}$ and then combine it with Eq.~\eqref{eq-sr-tdm} to get the time dependence of the reflected entropy.
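For completeness, the cancellation of the conformal factor noted above can be made explicit: setting $n = 1$ in Eq.~\eqref{eq-znm-cyl-2} and raising the result to the $n$-th power produces exactly the prefactor of $Z^{\text{cyl}}_{n,m}$, so that
\begin{align}
\frac{Z^{\text{cyl}}_{n,m}}{\big(Z^{\text{cyl}}_{1,m}\big)^{n}} \, = \, \frac{\left(\frac{2\pi}{\beta}\right)^{16nh_{m}} \, \big|\prod_{i=1}^{8}w_{i}\big|^{2nh_{m}} \, Z^{\text{plane}}_{n,m}}{\left[\left(\frac{2\pi}{\beta}\right)^{16h_{m}} \, \big|\prod_{i=1}^{8}w_{i}\big|^{2h_{m}} \, Z^{\text{plane}}_{1,m}\right]^{n}} \, = \, \frac{Z^{\text{plane}}_{n,m}}{\big(Z^{\text{plane}}_{1,m}\big)^{n}} \, .
\end{align}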
\subsection{Setup} \label{sec-tdm-calc} The discussion in the previous subsection was for arbitrary regions $A$ and $B$. From now on, for concreteness, we take regions $A_{1}$, $B_{1}$, $A_{2}$, and $B_{2}$ to be of equal size $L$. Furthermore, we denote the separation between $A_{1}$ ($A_{2}$) and $B_{1}$ ($B_{2}$) by $\ell$. More precisely, we choose $x_{1}$, $x_{2}$, $x_{3}$, and $x_{4}$ in Eq.~\eqref{eq-zi} to be \begin{align} x_{1} \, = \, -L -\ell/2 \, , \quad\quad x_{2} \, = \, -\ell/2 \, , \quad\quad x_{3} \, = \, \ell/2 \, , \quad\quad x_{4} \, = \, L+\ell/2 \, .\label{eq-xi} \end{align} Having specified regions $A$ and $B$, we now compute the time-dependent reflected entropy. We consider the following three cases separately: \begin{itemize} \item Case $1$: $L\to \infty$. \item Case $2$: $L > \ell$. \item Case $3$: $L < \ell$. \end{itemize} \subsection{Case $1$: $L \to \infty$} \label{sec-case-1} This case is a simplified version of case $2$. Nevertheless, it is worth considering separately, because this simpler case sheds light on aspects of the calculation that will help us in studying case $2$ and case $3$. More importantly, as we will see in this section, the time dependence of the reflected entropy in this case is completely fixed by the conformal symmetry. Therefore, the results of this section are valid for \textit{all} CFTs. In this case, $Z_{n,m}^{\text{plane}}$ is given by a four-point function on the plane \begin{align} Z^{\text{plane}}_{n,m} \, = \, \big\langle \, \sigma_{A}(w_{1},\bar{w}_{1}) \bar{\sigma}_{B}(w_{2},\bar{w}_{2}) \sigma_{B}(w_{3},\bar{w}_{3}) \bar{\sigma}_{A}(w_{4},\bar{w}_{4}) \, \big\rangle^{\text{plane}}_{CFT^{\otimes mn}} \, , \label{eq-znm-case1} \end{align} where \begin{align} w_{1} \, = \, -i e^{-\frac{2\pi}{\beta}(t+\ell/2)} \, , \quad\quad\quad\quad \bar{w}_{1} \, = \, i e^{\frac{2\pi}{\beta}(t-\ell/2)} \, , \label{eq-w1-c1}\\ w_{2} \, = \, -i e^{-\frac{2\pi}{\beta}(t-\ell/2)} \, , \quad\quad\quad\quad \bar{w}_{2} \, = \, i e^{\frac{2\pi}{\beta}(t+\ell/2)} \, , \label{eq-w2-c1} \end{align} and $w_{3} \, = \, \bar{w}_{2} \, $, $\bar{w}_{3} \, = \, {w}_{2} \, $, ${w}_{4} \, = \, \bar{w}_{1} \, $, and $\bar{w}_{4} \, = \, {w}_{1} \, $. Recall that conformal symmetry fixes the four-point function on a plane up to an unknown function of the cross-ratio. Let us consider the following cross-ratio: \begin{align} \eta \, = \, \bar{\eta} \, \equiv \, \frac{(w_{1}-\bar{w}_{1}) (w_{2}-\bar{w}_{2}) }{(w_{1}-\bar{w}_{2}) (w_{2}-\bar{w}_{1}) } \, .\label{eq-eta} \end{align} Now using Eqs.~\eqref{eq-w1-c1}-\eqref{eq-w2-c1}, we get \begin{align} \eta \, = \, \frac{2 \, \sinh^{2}(2\pi t/\beta)}{\cosh(4\pi t/\beta) \, + \, \cosh(2\pi \ell/\beta)} \, . \end{align} In the scaling limit, that is, the $\beta \to 0$ limit, this expression simplifies to \begin{align} \eta \, = \, \frac{1}{1 \, + \, \exp\left(-\frac{2\pi}{\beta} (2t-\ell)\right)} \, = \, \begin{cases} \, 0 \quad&\text{for $\quad t <\frac{\ell}{2}$} \, ,\\[0ex] \, 1 \quad&\text{for $\quad t>\frac{\ell}{2}$} \, . \end{cases}
\label{eq-eta-scaling} \end{align} Note that $\eta \to 0$ corresponds to the OPE limit \begin{align} (w_{1},\bar{w}_{1}) \leftrightarrow (w_{4},\bar{w}_{4}) \quad\quad \text{and} \quad\quad (w_{2},\bar{w}_{2}) \leftrightarrow (w_{3},\bar{w}_{3}) \, ,\label{eq-ope-c1-1} \end{align} whereas $\eta \to 1$ corresponds to the OPE limit \begin{align} (w_{1},\bar{w}_{1}) \leftrightarrow (w_{2},\bar{w}_{2}) \quad\quad \text{and} \quad\quad (w_{3},\bar{w}_{3}) \leftrightarrow (w_{4},\bar{w}_{4}) \, .\label{eq-ope-c1-2} \end{align} This means that the time dependence of the reflected entropy is governed by one of the OPEs in Eqs.~\eqref{eq-ope}. We now use the above observation to compute $Z^{\text{plane}}_{n,m}$ in Eq.~\eqref{eq-znm-case1} as a function of time. For $t<\ell/2$, we take the OPE limit in Eq.~\eqref{eq-ope-c1-1} to get \begin{align} Z^{\text{plane}}_{n,m} \, = \, |w_{1}-w_{4}|^{-4nh_{m}} \, |w_{2}-w_{3}|^{-4nh_{m}} \, . \end{align} This implies \begin{align} \frac{Z^{\text{plane}}_{n,m}}{(Z^{\text{plane}}_{1,m})^{n}} \, = \, 1 \, , \end{align} and hence, by virtue of Eq.~\eqref{eq-sr-tdm}, \begin{align} S_{R}(A|B) \, = \, 0 \, . \end{align} For $t>\ell/2$, on the other hand, we take the OPE limit in Eq.~\eqref{eq-ope-c1-2} to get \begin{align} Z^{\text{plane}}_{n,m} \, =& \, (2m)^{-8 \, h_{n}} \, |w_{1}-w_{2}|^{-4nh_{m} + 4h_{n}} \, |w_{3}-w_{4}|^{-4nh_{m}+4h_{n}} \, |w_{1}-w_{4}|^{-4h_{n}} \, |w_{2}-w_{3}|^{-4h_{n}} \, ,\\ =& \, (2m)^{-8 \, h_{n}} \, |w_{1}-w_{2}|^{-4nh_{m}} \, |w_{3}-w_{4}|^{-4nh_{m}} \, \left( \frac{1-\eta}{\eta} \right)^{4h_{n}} \, , \end{align} where we have used Eq.~\eqref{eq-eta}. This implies \begin{align} \log \, \frac{Z^{\text{plane}}_{n,m}}{(Z^{\text{plane}}_{1,m})^{n}} \, = \, -8 h_{n} \log (2m) + 4h_{n} \, \log \left( \frac{1-\eta}{\eta} \right) \, . \end{align} In the scaling limit, this becomes \begin{align} \log \, \frac{Z^{\text{plane}}_{n,m}}{(Z^{\text{plane}}_{1,m})^{n}} \, = \, - \, \frac{16\pi h_{n}}{\beta} \, \left( t - \ell/2 \right) \, . \end{align} Combining this result with Eq.~\eqref{eq-sr-tdm}, we get \begin{align} S_{R}(A|B) \, = \, 4 \, s_{eq} \, \left( t - \ell/2\right) \, , \end{align} where $s_{eq}$ is given in Eq.~\eqref{eq-seq}. To summarize, the time dependence of the reflected entropy in the $L\to\infty$ limit is given by \begin{align} S_{R}(A|B)(t) = 4 s_{eq} \times \label{eq-fin-case-1-gen} \begin{cases} \, 0 \quad&\text{for $\quad t <\frac{\ell}{2}$} \, ,\\[0ex] \, t - \frac{\ell}{2} \quad&\text{for $\quad t>\frac{\ell}{2}$} \, . \end{cases} \end{align} This is the main result of this section. We reiterate that this result was fixed by the OPEs of the twist operators; we did not have to assume anything about the spectrum of the CFT. Hence, this result is valid for \textit{all} CFTs, as advertised at the beginning of this section. \subsection{Case $2$: $L > \ell$} \label{sec-case-2} Recall from Eq.~\eqref{eq-znm-plane} that $Z_{n,m}^{\text{plane}}$ is an eight-point function on the plane. The operators in this correlation function are inserted at \begin{align} w_{i} \, = \, \bar{w}_{9-i} \, = \, -i e^{-\frac{2\pi}{\beta}(t-x_{i})} \, , \quad\quad\quad\quad \bar{w}_{i} \, = \, w_{9-i} \, = \, i e^{\frac{2\pi}{\beta}(t+x_{i})} \, , \end{align} for $i \, = \, \{1,2,3,4\}$, where the $x_{i}$ are given in Eq.~\eqref{eq-xi}. The calculation of an eight-point function, in general, is quite difficult. However, as we saw in the last subsection, the operators approach each other in the scaling limit.
To decide which two operators approach each other, we follow \cite{hartman-2} and consider the following cross-ratios: \begin{align} \eta_{ij} \, = \, \bar{\eta}_{ij} \, \equiv \, \frac{(w_{i}-\bar{w}_{i}) (w_{j}-\bar{w}_{j}) }{(w_{i}-\bar{w}_{j}) (w_{j}-\bar{w}_{i}) } \, .\label{eq-eta-ij} \end{align} In the scaling limit, these cross-ratios become \begin{align} \eta_{ij} \, = \, \frac{1}{1 \, + \, \exp\left(-\frac{2\pi}{\beta} (2t-|x_{i}-x_{j}|)\right)} \, = \, \begin{cases} \, 0 \quad&\text{for $\quad t <\frac{|x_{i}-x_{j}|}{2}$} \, ,\\[0ex] \, 1 \quad&\text{for $\quad t>\frac{|x_{i}-x_{j}|}{2}$} \, . \end{cases} \label{eq-eta-ij-scaling} \end{align} These cross-ratios are sufficient to determine the correct OPE limit in the Euclidean signature. However, this is not the case in the Lorentzian signature, as was discussed in \cite{hartman-2}. This is because an operator can approach the light cone of another operator. To remedy this, we follow \cite{hartman-2} and consider two more cross-ratios \begin{align} \xi \, \equiv& \, \frac{(w_{1}-w_{2}) (w_{5}-w_{6}) }{(w_{1}-w_{5}) (w_{2}-w_{6}) } \, = \, \frac{(\bar{w}_{8}-\bar{w}_{7}) (\bar{w}_{4}-\bar{w}_{3}) }{(\bar{w}_{8}-\bar{w}_{4}) (\bar{w}_{7}-\bar{w}_{3}) } \, ,\\ \bar{\xi} \equiv& \, \frac{(w_{8}-w_{7}) (w_{4}-w_{3}) }{(w_{8}-w_{4}) (w_{7}-w_{3}) } \, = \, \frac{(\bar{w}_{1}-\bar{w}_{2}) (\bar{w}_{5}-\bar{w}_{6}) }{(\bar{w}_{1}-\bar{w}_{5}) (\bar{w}_{2}-\bar{w}_{6}) } \, . \end{align} In the scaling limit, these cross-ratios become \begin{align} \xi \, = \, \exp\left(-\frac{2\pi}{\beta} (2t+\ell)\right) \, \to \, 0 \, , \label{eq-xi-sc} \end{align} and \begin{align} \bar{\xi} \, = \, \frac{1}{1 \, + \, \exp\left(\frac{2\pi}{\beta} (|2t-L-\ell| - L)\right)} \, = \, \begin{cases} \, 0 \quad&\text{for $\quad t <\frac{\ell}{2}$} \, ,\\[0ex] \, 1 \quad&\text{for $\quad \frac{\ell}{2} < t < \frac{2L+\ell}{2}$} \, ,\\ \, 0 \quad&\text{for $\quad t >\frac{2L+\ell}{2}$} \, . \end{cases} \label{eq-xi-bar-sc} \end{align} Now we calculate the time dependence of $Z_{n,m}^{\text{plane}}$ and that of $S_{R}(A|B)$. For $t < \ell/2$, all the cross-ratios vanish. This suggests that the following points approach each other \begin{align} w_{1} \leftrightarrow& \, w_{8} \, \quad\quad w_{2} \leftrightarrow w_{7} \, \quad\quad \, w_{3} \leftrightarrow w_{6} \, \quad\quad w_{4} \leftrightarrow w_{5} \, ,\label{eq-conf-1-1}\\ \bar{w}_{1} \leftrightarrow& \, \bar{w}_{8} \, \quad\quad \bar{w}_{2} \leftrightarrow \bar{w}_{7} \, \quad\quad \, \bar{w}_{3} \leftrightarrow \bar{w}_{6} \, \quad\quad \bar{w}_{4} \leftrightarrow \bar{w}_{5} \, . \label{eq-conf-1-2} \end{align} In this limit, we can use the OPEs in Eq.~\eqref{eq-ope} to write $Z_{n,m}^{\text{plane}}$ as \begin{align} Z_{n,m}^{\text{plane}} \, = \, |w_{1}-w_{8}|^{-2nh_{m}} \, |w_{2}-w_{7}|^{-2nh_{m}} \, |w_{3}-w_{6}|^{-2nh_{m}} \, |w_{4}-w_{5}|^{-2nh_{m}} \, . \end{align} Now we insert this in Eq.~\eqref{eq-sr-tdm} and find \begin{align} S_{R}(A|B) \, = \, 0 \, . \label{eq-sr-c2-1} \end{align} Now let us consider $\ell/2 < t < L/2$. In this case, $\eta_{23}$ and $\bar{\xi}$ approach $1$ while all other cross-ratios vanish.
This corresponds to the following configuration \begin{align} w_{1} \leftrightarrow& \, w_{8} \, \quad\quad w_{2} \leftrightarrow w_{3} \, \quad\quad \, w_{4} \leftrightarrow w_{5} \, \quad\quad w_{6} \leftrightarrow w_{7} \, ,\label{eq-conf-2-1}\\ \bar{w}_{1} \leftrightarrow& \, \bar{w}_{8} \, \quad\quad \bar{w}_{2} \leftrightarrow \bar{w}_{3} \, \quad\quad \, \bar{w}_{4} \leftrightarrow \bar{w}_{5} \, \quad\quad \bar{w}_{6} \leftrightarrow \bar{w}_{7} \, .\label{eq-conf-2-2} \end{align} In this limit, $Z_{n,m}^{\text{plane}}$ is again fixed by the OPEs in Eq.~\eqref{eq-ope}. By using these OPE relations, we get \begin{align} Z_{n,m}^{\text{plane}} \, = \, &(2m)^{-8h_{n}} \, |w_{1}-w_{8}|^{-2nh_{m}} \, |w_{2}-w_{3}|^{-2nh_{m}} \, |w_{4}-w_{5}|^{-2nh_{m}} \, |w_{6}-w_{7}|^{-2nh_{m}} \nonumber\\ &\, \times \left(\frac{|w_{2}-w_{3}|}{|w_{2}-w_{7}||\bar{w}_{3}-\bar{w}_{6}|} \right)^{4h_{n}} \, . \end{align} This implies \begin{align} \frac{Z^{\text{plane}}_{n,m}}{(Z^{\text{plane}}_{1,m})^{n}} \, = \, (2m)^{-8h_{n}} \, \left( \frac{1 - \eta_{23}}{\eta_{23}} \right)^{4h_{n}} \, . \end{align} Now using Eq.~\eqref{eq-sr-tdm} and taking the scaling limit, we get \begin{align} S_{R}(A|B) \, = \, 4 \, s_{eq} \, \left( t - \ell/2\right) \, . \end{align} Now we consider $L/2 < t < (L+\ell)/2$. In this case, the cross-ratios satisfy \begin{align} \eta_{12} \, , \, \eta_{34} \, , \, \eta_{23} \, , \, \bar{\xi} \, \to \, 1 \, , \label{eq-cr-dip-1} \end{align} and \begin{align} \eta_{13} \, , \, \eta_{24} \, , \, \eta_{14} \, , \, {\xi} \, \to \, 0 \, . \label{eq-cr-dip-2} \end{align} This implies the following configuration: \begin{align} w_{1} \leftrightarrow& \, w_{2} \, \quad\quad w_{3} \leftrightarrow w_{8} \, \quad\quad \, w_{4} \leftrightarrow w_{7} \, \quad\quad w_{5} \leftrightarrow w_{6} \, ,\label{eq-conf-3-1}\\ \bar{w}_{1} \leftrightarrow& \, \bar{w}_{6} \, \quad\quad \bar{w}_{2} \leftrightarrow \bar{w}_{5} \, \quad\quad \, \bar{w}_{3} \leftrightarrow \bar{w}_{4} \, \quad\quad \bar{w}_{7} \leftrightarrow \bar{w}_{8} \, .\label{eq-conf-3-2} \end{align} In this configuration, the $w$'s and $\bar{w}$'s are in different channels, and hence this configuration does not correspond to any OPE limit. Therefore, the OPEs do not fix the eight-point function in general. Nevertheless, for current dominated theories such as rational theories, we can treat the `left-movers' and `right-movers' separately \cite{hartman-2}. This allows us to choose different OPE channels for the $w$'s and $\bar{w}$'s. Using this observation, we get \begin{align} Z_{n,m}^{\text{plane}} \, = \, (2m)^{-8h_{n}} \, &(w_{1}-w_{2})^{-2nh_{m}} \, (w_{3}-w_{8})^{-2nh_{m}+2h_{n}} \, (w_{4}-w_{7})^{-2nh_{m}+2h_{n}} \, (w_{5}-w_{6})^{-2nh_{m}} \nonumber\\ \times & (\bar{w}_{1}-\bar{w}_{6})^{-2nh_{m}+2h_{n}} \, (\bar{w}_{2}-\bar{w}_{5})^{-2nh_{m}+2h_{n}} \, (\bar{w}_{3}-\bar{w}_{4})^{-2nh_{m}} \, (\bar{w}_{7}-\bar{w}_{8})^{-2nh_{m}} \nonumber\\ \times& (w_{3}-w_{4})^{-2h_{n}} \, (w_{8}-w_{7})^{-2h_{n}} \, (\bar{w}_{1}-\bar{w}_{2})^{-2h_{n}} \, (\bar{w}_{6}-\bar{w}_{5})^{-2h_{n}} \, . \end{align} This implies \begin{align} \frac{Z^{\text{plane}}_{n,m}}{(Z^{\text{plane}}_{1,m})^{n}} \, = \, (2m)^{-8h_{n}} \, \left( \frac{1 - \bar{\xi}}{\bar{\xi}} \right)^{4h_{n}} \, .
\label{eq-znm-grow} \end{align} Using Eq.~\eqref{eq-sr-tdm} together with \begin{align} \log \left(\frac{1-\bar{\xi}}{\bar{\xi}}\right) \, = \, \frac{2\pi}{\beta} \, \left( |2t-L-\ell| \, - \, L \right) \, , \end{align} we get \begin{align} S_{R}(A|B) \, = \, 4 \, s_{eq} \, \left( t - \ell/2\right) \, .\label{eq-sr-dip-1} \end{align} Now we focus on $(L+\ell)/2 < t < (2L+\ell)/2$. In this case, $\eta_{14}$ and $\xi$ vanish whereas all other cross-ratios approach $1$. This corresponds to the configuration \begin{align} w_{1} \leftrightarrow& \, w_{2} \, \quad\quad w_{3} \leftrightarrow w_{8} \, \quad\quad \, w_{4} \leftrightarrow w_{7} \, \quad\quad w_{5} \leftrightarrow w_{6} \, ,\label{eq-cr-grow-1}\\ \bar{w}_{1} \leftrightarrow& \, \bar{w}_{6} \, \quad\quad \bar{w}_{2} \leftrightarrow \bar{w}_{5} \, \quad\quad \, \bar{w}_{3} \leftrightarrow \bar{w}_{4} \, \quad\quad \bar{w}_{7} \leftrightarrow \bar{w}_{8} \, .\label{eq-cr-grow-2} \end{align} Note that this configuration is the same as that in Eqs.~\eqref{eq-conf-3-1}-\eqref{eq-conf-3-2}. This means that $Z^{\text{plane}}_{n,m}$ for rational theories still satisfies Eq.~\eqref{eq-znm-grow}, and hence the reflected entropy is given by \begin{align} S_{R}(A|B) \, = \, 4 \, s_{eq} \, \left( L + \ell/2 - t \right) \, . \label{eq-sr-grow} \end{align} Finally, we consider $t> (2L+\ell)/2$. In this case, all $\eta_{ij} \to 1$ but $\xi = \bar{\xi} \, = \, 0 \, $. This corresponds to the configuration: \begin{align} w_{1} \leftrightarrow& w_{2} \, \quad\quad w_{3} \leftrightarrow w_{4} \, \quad\quad \, w_{5} \leftrightarrow w_{6} \, \quad\quad w_{7} \leftrightarrow w_{8} \, ,\label{eq-conf-4-1}\\ \bar{w}_{1} \leftrightarrow& \bar{w}_{2} \, \quad\quad \bar{w}_{3} \leftrightarrow \bar{w}_{4} \, \quad\quad \, \bar{w}_{5} \leftrightarrow \bar{w}_{6} \, \quad\quad \bar{w}_{7} \leftrightarrow \bar{w}_{8} \, .\label{eq-conf-4-2} \end{align} In this limit, $Z_{n,m}^{\text{plane}}$ is once again fixed by the OPEs in Eq.~\eqref{eq-ope}. Using these OPEs, we get \begin{align} Z_{n,m}^{\text{plane}} \, = \, |w_{1}-w_{2}|^{-2nh_{m}} \, |w_{3}-w_{4}|^{-2nh_{m}} \, |w_{5}-w_{6}|^{-2nh_{m}} \, |w_{7}-w_{8}|^{-2nh_{m}} \, . \end{align} Now using Eq.~\eqref{eq-sr-tdm}, we get \begin{align} S_{R}(A|B) \, = \, 0 \, . \label{eq-sr-late} \end{align} To summarize, we find that the reflected entropy is given by \begin{align} S_{R}(A|B) \, = \, 4 s_{eq} \, \times \label{eq-sr-2-fin} \begin{cases} \, 0 \quad&\text{for $\quad t <\frac{\ell}{2}$} \, ,\\[0ex] \, t - \frac{\ell}{2} \quad&\text{for $\quad \frac{\ell}{2} < t < \frac{L+\ell}{2}$} \, ,\\ \, L + \frac{\ell}{2} - t \quad&\text{for $\quad \frac{L+\ell}{2} < t < \frac{2L+\ell}{2}$} \, , \\ \, 0 \quad&\text{for $\quad t >\frac{2L+\ell}{2}$} \, . \end{cases} \end{align} Note that this precisely matches the time evolution of the mutual information in Eq.~\eqref{eq-mi}. That is, \begin{align} S_{R}(A|B)(t) \, = \, I(A|B)(t) \, \label{eq-sr-mi} \end{align} in a thermal double model. In the next subsection, we will see that this interesting result remains valid even if $L < \ell$. \subsection{Case $3$: $L < \ell$} Here we repeat the calculation of the eight-point function, $Z_{n,m}^{\text{plane}}$, and the reflected entropy, but for $L < \ell$. Note that the time dependence of the cross-ratios that we derived in the last subsection, that is, Eq.~\eqref{eq-eta-ij-scaling}, Eq.~\eqref{eq-xi-sc}, and Eq.~\eqref{eq-xi-bar-sc}, is still valid in this case.
In the next subsection, we will see that this interesting result is valid even if $L < \ell$. \subsection{Case $3$: $L < \ell$} Here we repeat the calculation of the eight-point function, $Z_{n,m}^{\text{plane}}$, and the reflected entropy, but now for $L < \ell$. Note that the time-dependence of the cross-ratios that we derived in the last subsection, that is, Eq.~\eqref{eq-eta-ij-scaling}, Eq.~\eqref{eq-xi-sc}, and Eq.~\eqref{eq-xi-bar-sc}, is still valid in this case. Therefore, we can use these results to decide which two points are approaching each other in the scaling limit. For $t< L/2$, all the cross-ratios vanish. This corresponds to the same configurations as in Eqs.~\eqref{eq-conf-1-1}-\eqref{eq-conf-1-2}. This means that the reflected entropy is fixed by the OPEs and is the same as in Eq.~\eqref{eq-sr-c2-1}: \begin{align} S_{R}(A|B) \, = \, 0 \, . \end{align} Now we consider $L/2 < t <\ell/2$. In this case, $\eta_{12}$ and $\eta_{34}$ approach $1$ while all other cross-ratios vanish. This corresponds to the following configuration \begin{align} w_{1} \leftrightarrow& w_{2} \, \quad\quad w_{3} \leftrightarrow w_{4} \, \quad\quad \, w_{5} \leftrightarrow w_{6} \, \quad\quad w_{7} \leftrightarrow w_{8} \, ,\\ \bar{w}_{1} \leftrightarrow& \bar{w}_{2} \, \quad\quad \bar{w}_{3} \leftrightarrow \bar{w}_{4} \, \quad\quad \, \bar{w}_{5} \leftrightarrow \bar{w}_{6} \, \quad\quad \bar{w}_{7} \leftrightarrow \bar{w}_{8} \, . \end{align} Note that this configuration is the same as in Eqs.~\eqref{eq-conf-4-1}-\eqref{eq-conf-4-2}. This means that the reflected entropy vanishes, as in Eq.~\eqref{eq-sr-late}: \begin{align} S_{R}(A|B) \, = \, 0 \, . \end{align} Now we focus on $\ell/2 < t < (L+\ell)/2$. In this case, the cross-ratios have the same limits as in Eqs.~\eqref{eq-cr-dip-1}-\eqref{eq-cr-dip-2}, and hence we have the same configuration as in Eqs.~\eqref{eq-conf-3-1}-\eqref{eq-conf-3-2}. Therefore, we deduce that the reflected entropy is the same as in Eq.~\eqref{eq-sr-dip-1}: \begin{align} S_{R}(A|B) \, = \, 4 \, s_{eq} \, \left( t - \ell/2\right) \, . \end{align} Now we assume that $(L+\ell)/2 < t < (2L+\ell)/2$. In this case, $\eta_{14}$ and $\xi$ vanish whereas all other cross-ratios approach $1$. This corresponds to the same configuration as in Eqs.~\eqref{eq-cr-grow-1}-\eqref{eq-cr-grow-2}. Therefore, we deduce that the reflected entropy is the same as in Eq.~\eqref{eq-sr-grow}: \begin{align} S_{R}(A|B) \, = \, 4 \, s_{eq} \, \left( L + \ell/2 - t \right) \, . \end{align} Finally, we consider $t>(2L+\ell)/2$. This again corresponds to the configurations in Eqs.~\eqref{eq-conf-4-1}-\eqref{eq-conf-4-2}. Therefore, we deduce that the reflected entropy vanishes: \begin{align} S_{R}(A|B) \, = \, 0 \, . \end{align} Combining the above results, we find that the time-dependent reflected entropy, even for $L < \ell$, is given by Eq.~\eqref{eq-sr-2-fin}. Hence, in a thermal double model, \begin{align} S_{R}(A|B)(t) \, = \, I(A|B)(t) \, . \end{align} As we discussed in Sec.~(\ref{intro}), our result that the reflected entropy equals the mutual information in a thermal double model provides further evidence for the quasi-particle picture of the spread of entanglement. \section{Time dependence of reflected entropy in holographic CFTs} \label{sec-holo-tdm} In this section, we compute the time evolution of the reflected entropy in the thermal double model using the AdS/CFT correspondence. The holographic dual of the entangled state in Eq.~\eqref{eq-tfd} is a two-sided black brane \cite{Maldacena:2001kr}. This bulk geometry has two exterior regions, each corresponding to a single copy of the CFT. Therefore, the holographic dual of the thermal double model is a two-sided black brane where time is taken to run forwards on both of the exterior regions \cite{hartman-maldacena}. Since our focus in this work is only on $(1+1)$-dimensional CFTs, we consider the BTZ black brane in this section.
The metric of the BTZ black brane is \begin{align} ds^{2} \, = \, - \frac{4\pi^{2}}{\beta^{2}} \, \sinh^{2}\rho \, dt^{2} \, + \, d\rho^{2} \, + \, \frac{4\pi^{2}}{\beta^{2}} \, \cosh^{2}\rho \, dx^{2} \, . \end{align} Note that the two exterior regions are related to each other by the continuation $t \to t + i\beta/2$. The BTZ black brane is locally equivalent to the Poincare AdS$_{3}$ spacetime \cite{Banados:1992wn} \begin{align} ds^{2} \, = \, \frac{1}{z^{2}} \, \left( dz^{2} - dx_{0}^{2} + dx_{1}^{2} \right) \, .\label{eq-met-poin} \end{align} Note that the two asymptotic boundaries of an eternal BTZ black brane correspond to two Rindler wedges of the boundary of the Poincare AdS$_{3}$ \cite{Maldacena:1998bw,Parikh:2012kg}. Moreover, a point on an exterior region of the BTZ black brane can be mapped to a point on the Poincare AdS$_{3}$ using \cite{hartman-maldacena} \begin{align} z \, =& \, \frac{1}{\cosh\rho} \, e^{2\pi x/\beta} \, ,\label{eq-z-1}\\ x_{1} \, =& \, \tanh\rho \, \cosh\left(2\pi t/\beta\right) \, e^{2\pi x/\beta} \, ,\label{eq-x1-1}\\ x_{0} \, =& \, \tanh\rho \, \sinh\left(2\pi t/\beta\right) \, e^{2\pi x/\beta} \, \label{eq-x0-1}. \end{align} Since the two exterior regions of the BTZ black brane are related by the continuation $t \to t + i\beta/2$, we deduce that the map between the other exterior region and the Poincare AdS$_{3}$ is \begin{align} z \, =& \, \frac{1}{\cosh\rho} \, e^{2\pi x/\beta} \, ,\label{eq-z-2}\\ x_{1} \, =& \, - \, \tanh\rho \, \cosh\left(2\pi t/\beta\right) \, e^{2\pi x/\beta} \, ,\label{eq-x1-2}\\ x_{0} \, =& \, - \, \tanh\rho \, \sinh\left(2\pi t/\beta\right) \, e^{2\pi x/\beta} \, .\label{eq-x0-2} \end{align} One can check directly that both maps satisfy $x_{1}^{2}-x_{0}^{2}+z^{2} \, = \, e^{4\pi x/\beta}$. In the following, we will use these maps to relate the calculation of the entanglement wedge cross-section to the length of a geodesic in the Poincare AdS$_{3}$ geometry. Before we discuss the entanglement wedge cross-section, we need to discuss how the entanglement wedge of the boundary regions $A$ and $B$ changes as a function of time. This is the topic of the next subsection. \subsection{Time dependence of the entanglement wedge} Here we consider the same setup as in Sec.~(\ref{sec-case-1}). That is, both regions $A$ and $B$ consist of identical semi-infinite intervals\footnote{An error was pointed out to us by Jonah Kudler-Flam and Yuya Kusuki in our analysis of finite size regions in an earlier version of this paper, and hence, the analysis has been removed from this version.} on each of the asymptotic boundaries of the BTZ black brane, and that they are separated by an interval of size $\ell$. To understand what the entanglement wedge is for these boundary regions, we first need to understand the HRT surface for these boundary regions. Boundary anchored surfaces in the two-sided black brane were studied in \cite{hartman-maldacena}. It was found that a boundary anchored extremal surface can either go through the black hole from one asymptotic region to another, or it can remain entirely in the exterior region. The areas of the former surfaces grow linearly in time, whereas the areas of the latter surfaces scale as the size of the boundary regions where the surfaces are anchored. This implies that the correct HRT surface at any instant of time is given by one of the two possible configurations of boundary anchored surfaces that we now discuss.
\begin{figure} \centering \includegraphics[scale=.65]{fig-conf-1-v2.png} \caption{The pictorial representation of one of the two possibilities for the HRT surfaces corresponding to the region $A\cup B$. We refer to these surfaces as configuration-$1$. The HRT surface (shown as red curves) in this configuration is the union of two surfaces, each of which goes through the black brane from one asymptotic region to another. The shaded region denotes the entanglement wedge, which is disconnected in this configuration. } \label{fig-conf-1} \end{figure} \begin{figure} \centering \includegraphics[scale=.65]{fig-conf-2-v2.png} \caption{The pictorial representation of the second possibility for the HRT surfaces corresponding to the region $A\cup B$. We refer to these surfaces as configuration-$2$. The HRT surface (shown as red curves) in this configuration is the union of two surfaces, both of which remain in the asymptotic regions. As a result, the entanglement wedge (shaded region) in this configuration is connected. The blue surface is the cross-section of the entanglement wedge. } \label{fig-conf-2} \end{figure} One possible HRT surface is the union of two surfaces, each of which goes through the black brane. The pictorial representation of this configuration is shown in Fig.~(\ref{fig-conf-1}). The other possibility consists of two surfaces that lie entirely in each of the exterior regions. This is shown in Fig.~(\ref{fig-conf-2}). The HRT surface at any instant of time is the configuration with the smallest area. The total areas of the surfaces in the two possible configurations are \begin{align} \text{Configuration-$1$:} \quad\quad &\text{Area} \, = \, \frac{4\pi}{\beta } \, \times \, 2t \, , \label{eq-conf-1}\\ \text{Configuration-$2$:} \quad\quad &\text{Area} \, = \, \frac{4\pi}{\beta } \, \times \, \ell \, . \label{eq-conf-2} \end{align} Using these results, we deduce that the HRT surface is given by configuration-$1$ for $t < \ell/2$, whereas it is given by configuration-$2$ for $t > \ell/2$. Note that the entanglement wedge in configuration-$1$ is disconnected, as shown in Fig.~(\ref{fig-conf-1}). As we discussed in Sec.~(\ref{sec-sr-ent-wedge}), the reflected entropy is zero if the entanglement wedge is disconnected. Therefore, the reflected entropy is non-zero only for $t > \ell/2$. In the next subsection, we explicitly calculate the entanglement wedge cross-section and the reflected entropy for this range of time. \subsection{Time dependence of the entanglement wedge cross-section} The entanglement wedge cross-section that we are interested in is shown as a blue curve in Fig.~(\ref{fig-conf-2}), and its end-points are denoted by $P_{1}$ and $P_{2}$. Note that the point $P_{1}$ is the bulk turning point of the minimal area surface anchored on boundary $1$ at the points $(\rho \, , \, t \, , \, x) \, = \, (\infty \, , t \, , \, \ell/2 )$ and $(\rho \, , \, t \, , \, x) \, = \, (\infty \, , t \, , \, - \ell/2 )$.
Owing to symmetry, the coordinates of the point $P_{1}$ are \begin{align} P_{1} \, : \, (\rho \, , \, t \, , \, x) \, = \, (\rho_{*} \, , t \, , \, 0 ) \, , \end{align} and it was found in \cite{Hubeny:2007xt} that $\rho_{*}$ is given by \begin{align} \cosh\rho_{*} \, = \, \coth \left(\pi\ell/\beta\right) \, .\label{eq-rho-star} \end{align} Now using Eqs.~\eqref{eq-z-1}-\eqref{eq-x0-1}, we find that the point $P_{1}$ in Poincare coordinates is \begin{align} P_{1} \, : \, (z \, , \, x_{0} \, , \, x_{1}) \, = \, (1/\cosh\rho_{*} \, , \tanh\rho_{*} \, \cosh\left(2\pi t/\beta\right) \, , \, \tanh\rho_{*} \, \sinh\left(2\pi t/\beta\right) ) \, . \end{align} Similarly, the point $P_{2}$ in Poincare coordinates is \begin{align} P_{2} \, : \, (z \, , \, x_{0} \, , \, x_{1}) \, = \, (1/\cosh\rho_{*} \, , \tanh\rho_{*} \, \cosh\left(2\pi t/\beta\right) \, , \, -\tanh\rho_{*} \, \sinh\left(2\pi t/\beta\right) ) \, . \end{align} In Poincare coordinates, the geodesic connecting $P_{1}$ and $P_{2}$ is a segment of a semi-circle in an equal-$x_{0}$ slice. That is, this geodesic satisfies \begin{align} z \, = \, \sqrt{R^{2}-x_{1}^{2} \, } \, , \quad\quad\quad \text{and} \quad\quad\quad \, x_{0} \, = \, \tanh\rho_{*} \, \cosh\left(2\pi t/\beta\right) \, , \end{align} where, evaluating $z^{2}+x_{1}^{2}$ at the end-points, \begin{align} R^{2} \, = \, \frac{1}{\cosh^{2}\rho_{*}} \, + \, \tanh^{2}\rho_{*} \, \sinh^{2}\left(2\pi t/\beta\right) \, .\label{eq-Rad} \end{align} The length of this geodesic between the points $P_{1}$ and $P_{2} \, $, computed using the metric in Eq.~\eqref{eq-met-poin}, is \begin{align} L_{12} \, = \, 2\, \log \left( \cosh\rho_{*} \, R \, + \sqrt{\cosh^{2}\rho_{*} \, R^{2} \, - \, 1 \,} \right) \, . \end{align} Using Eq.~\eqref{eq-rho-star}, which implies $\sinh\rho_{*} = 1/\sinh\left(\pi\ell/\beta\right)$, this can be written in the closed form \begin{align} L_{12} \, = \, 2\, \sinh^{-1} \left( \frac{\sinh\left(2\pi t/\beta\right)}{\sinh\left(\pi\ell/\beta\right)} \right) \, . \end{align} In the scaling limit, this becomes \begin{align} L_{12} \, = \, \frac{4\pi}{\beta} \left(t \, - \, \ell/2\right) \, . \end{align} Now using Eq.~\eqref{eq-ent-wc}, the entanglement wedge cross-section is then given by \begin{align} E_{W}(A|B) \, = \, \frac{\pi}{G\beta} \, \left(t-\ell/2\right) \, . \end{align} Combining the above result with the holographic formula for the reflected entropy, that is, Eq.~\eqref{eq-ref-ent-holo}, and using the standard formula of the AdS$_{3}$-CFT$_{2}$ correspondence, \begin{align} c \, = \, \frac{3}{2G} \, , \end{align} we get \begin{align} S_{R}(A|B) \, = \, 4 \, s_{eq} \, \left(t-\ell/2\right) \, , \end{align} where we have also used Eq.~\eqref{eq-seq}. To summarize, we find that the time-dependent reflected entropy in the limit $L\to\infty$ and finite $\ell$ is given by \begin{align} S_{R}(A|B) \, = \, 4 s_{eq} \, \times \label{eq-sr-holo-fin} \begin{cases} \, 0 \quad&\text{for $\quad t <\frac{\ell}{2}$} \, ,\\[0ex] \, t - \frac{\ell}{2} \quad&\text{for $\quad t > \frac{\ell}{2}$} \, . \end{cases} \end{align} This finishes our discussion of the time dependence of the holographic reflected entropy when $A$ and $B$ are semi-infinite regions. We find that our result matches the result in Eq.~\eqref{eq-fin-case-1-gen}. This should not be surprising: as we discussed in Sec.~(\ref{sec-case-1}), the time dependence of the reflected entropy for two semi-infinite regions is completely fixed by conformal symmetry and should be the same for all CFTs.
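As a numerical sanity check of the above (the following snippet is our illustration and not part of the derivation), one can verify that the closed-form length $L_{12}=2\sinh^{-1}\{\sinh(2\pi t/\beta)/\sinh(\pi\ell/\beta)\}$ approaches the linear profile $(4\pi/\beta)(t-\ell/2)$ as $\beta\to0$, the difference tending to the subleading constant $2\log2$: \begin{verbatim}
# Sanity check (illustration only): in the scaling limit (small beta),
# the exact geodesic length approaches the linear profile
# (4*pi/beta)*(t - l/2), up to a constant 2*log(2) that is
# subleading compared to the O(1/beta) growth.
import math

beta, ell = 0.05, 1.0
for t in [0.6, 0.8, 1.0]:          # times after the transition t = ell/2
    exact = 2.0 * math.asinh(math.sinh(2.0 * math.pi * t / beta)
                             / math.sinh(math.pi * ell / beta))
    linear = (4.0 * math.pi / beta) * (t - ell / 2.0)
    print(t, exact - linear)       # -> approximately 2*log(2) = 1.3863
\end{verbatim}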
\section{Discussion} \label{sec-disc} In this paper, we have studied the time dependence of the reflected entropy in the thermal double model of \cite{hartman-maldacena}. We have focused on $(1+1)$-dimensional rational theories and holographic theories. For rational CFTs, we used the replica trick to calculate the reflected entropy. We found that the time dependence of the reflected entropy is the same as that of the mutual information. For holographic theories, we used the relation between the reflected entropy and the entanglement wedge cross-section to calculate the reflected entropy. As a future direction, it would be interesting to study the time evolution of the reflected entropy in CFTs which are neither rational nor holographic. There are many possible directions in which our work can be extended. For example, it has been argued that the dynamics of the entanglement entropy for holographic states can be described in terms of a minimal membrane \cite{Mezei-1,Mezei-3}. Since the reflected entropy in holographic theories is also given by an extremization process, it is natural to expect that a similar membrane description holds for the dynamics of the reflected entropy\footnote{We thank M. Mezei for a discussion on this point.}. It would be interesting to understand this membrane description in detail. \textit{Note:} Similar calculations of the time dependence of the reflected entropy are performed in the independent work \cite{Kudler-Flam:2020url}, which appeared on arXiv simultaneously with the first version of this paper. \vskip .3cm {\bf Acknowledgments} It is a pleasure to thank C. Akers, N. Bao, T. Hartman, and P. Rath for helpful discussions, and T. Hartman for useful feedback on a draft of this manuscript. This work was supported by the US Department of Energy under grant number DE-SC$0014123$. \bibliographystyle{utcaps} \section{Reply to referee report on JHEP-$189$P-$0220$} We would like to thank the referee for their invaluable suggestions for improving the quality of this manuscript. Based on their suggestions, we have made the following changes. \begin{enumerate} \item \textit{The preprint studies rational CFTs in the limit of high temperature; and holographic CFTs in a further special limit. But the title is very general. We suggest that the title be modified to faithfully reflect the content of the preprint (in particular, to reflect the regime in which calculations are performed).} We have changed the title. The new title is now ``Time dependence of reflected entropy in rational and holographic conformal field theories." \item \textit{The abstract is not complete. We request the author to make the following changes in the abstract ...} We have changed the abstract accordingly. \item \textit{In the sentence below equation (1.2), please qualify the size of the subsystem. For subsystems with size more than half of the total system size, the reduced state is not expected to be thermal.} We agree with the referee and have added the adjective `small' appropriately. \item \textit{In the sentence below equation (1.4), what does "fixed by conformal symmetry" mean? Is $\beta$ fixed by conformal symmetry of the post-quench theory? No. So, equation (1.5) is not quite "fixed" by conformal symmetry. Please explain this nuance.} We agree with the referee that this wording could be confusing. We have changed the wording after equation $(1.4)$. \item \textit{In equation (1.5), please clarify what $S_A(t)$ is. Does it include the usual UV divergences in entanglement entropy? Is it renormalized in some way?} The referee is correct here. The entropy in this equation was the `vacuum-subtracted' entropy. We have corrected this mistake in equation $(1.5)$. \item \textit{In equation (1.7), the definition of the TFD state is not correct....} We have corrected this mistake in equation $(1.7)$.
\item \textit{In the second last paragraph on Page 2, please specify if "speed" is average speed or instantaneous speed.} We have added the word `instantaneous'. \item \textit{Please give a reference for...} We have added the suggested references. \item \textit{In the last paragraph on Page 3, state explicitly the definition of the scaling limit in the presence of the parameter $\ell$.} We have stated the scaling limit explicitly, as suggested by the referee. \item \textit{After equation (1.16), we suggest that the author state their result for holographic CFTs (along with details about which regime it is calculated in).} We have included our final result for holographic CFTs, including the limit in which we have calculated it. See equation $(1.17)$. \item \textit{Please expound on Footnote 2. Why is the statement given there true. } We have referred the readers to Sec.~$3.3$, where this is discussed in detail. \item \textit{We suggest that the author draw a figure to explain the discussion of entanglement wedges on Page 7. However, this is not necessary.} Since we already have an example of an entanglement wedge in Fig.~$2$, and since this is not necessary according to the referee, we have not included another figure on page $7$. \item \textit{On Page 16, Footnote 5 is missing. It is instead printed on Page 18. Please correct the layout here.} Footnote $5$ is supposed to be on page $17$, and we have checked that it is there. Moreover, we understand that \LaTeX{} takes care of the layout. \item \textit{In the captions on Page 17, correct the spelling of "shown". It is written wrongly as "shwon".} We thank the referee for spotting this typo. We have corrected it. \item \textit{In the first paragraph on Page 20, state explicitly the regime in which the calculation was done for holographic CFTs.} We have stated explicitly the limit in which we have done the calculation. \quad \quad We once again thank the referee for their valuable time. We hope that, based on the changes that we have made, this manuscript can be published in JHEP. \end{enumerate} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In General Relativity one usually studies asymptotically flat 3-manifolds. They can be considered as initial data sets of the Einstein equations. Studying the geometry of such manifolds is also interesting and useful. In 1996 Huisken and Yau proved in \cite{HY} that in an asymptotically Schwarzschild manifold with positive mass, there exists a foliation by strictly stable constant mean curvature (CMC) spheres. They also used this foliation to define the center of mass. The uniqueness of such a foliation is a harder problem. Huisken and Yau proved that for $1/2<q\leq1$, the stable CMC sphere outside $B_{H^{-q}}(0)$ is unique, where $H$ is the constant mean curvature of the surface. In 2002, Jie Qing and Gang Tian removed this radius condition and proved a sharper uniqueness theorem in \cite{QT}. They found a scaling-invariant integral to detect the positive mass. To calculate this integral they blow down the constant mean curvature spheres at three different scales and use techniques from harmonic maps to deal with the intermediate part. Later, Lan-Hsuan Huang considered in \cite{Huang center} general asymptotically flat manifolds satisfying the Regge-Teitelboim condition. She proved a result similar to that of Huisken and Yau. Her uniqueness result also needs a radius condition of the form $r_{1}\leq C_{1}r_{0}^{\frac{1}{a}}$ for some $a$ satisfying $\frac{5-q}{2(2+q)}<a\leq1$. In the recent papers \cite{MICHAEL EICHMAIR AND JAN METZGER1,MICHAEL EICHMAIR AND JAN METZGER2}, Eichmair and Metzger considered the existence and uniqueness of isoperimetric surfaces in a class of asymptotically flat manifolds which are $C^{0}$-asymptotic to Schwarzschild (for uniqueness they require more smoothness). In \cite{Shiguang Ma} I studied the uniqueness problem in $(m,k,\varepsilon)$-AF-RT manifolds, which requires the manifold to be close to an asymptotically Schwarzschild manifold in some weak sense, and under the weaker radius condition $\log(r_{1})\leq Cr_{0}^{1/4}$ I proved the uniqueness of the stable CMC spheres outside a sufficiently large compact set. In this article, we remove both the radius condition and the condition that the manifold be close to Schwarzschild. First we give the main definitions. A three-manifold $M$ with a Riemannian metric $g$ and a two-tensor $K$ is called an initial data set $(M,g,K)$ if $g$ and $K$ satisfy the constraint equations \begin{eqnarray} & & R_{g}-|K|_{g}^{2}+(tr_{g}(K))^{2}=16\pi\rho\nonumber \\ & & div_{g}(K)-d(tr_{g}(K))=8\pi J \end{eqnarray} where $R_{g}$ is the scalar curvature of the metric $g$, $tr_{g}(K)$ denotes $g^{ij}K_{ij}$, $\rho$ is the observed energy density, and $J$ is the observed momentum density. In this paper we consider asymptotically flat manifolds of the following kind: \begin{Def} We say $(M,g,K)$ is asymptotically flat (AF) if it is an initial data set and there is a compact subset $\widetilde{K}\subset M$ such that $M\setminus\widetilde{K}$ is diffeomorphic to $\mathbb{R}^{3}\setminus B_{1}(0)$ and there exist coordinates $\{x^{i}\}$ such that \begin{equation} g_{ij}(x)=\delta_{ij}+h_{ij}(x) \end{equation} \begin{eqnarray} h_{ij}(x)=O_{5}(|x|^{-1}) & & K_{ij}(x)=O_{1}(|x|^{-2}) \end{eqnarray} Also, $\rho$ and $J$ satisfy \begin{eqnarray} \rho(x)=O(|x|^{-4}) & & J(x)=O(|x|^{-4}) \end{eqnarray} Here, $f=O_{k}(|x|^{-q})$ means $\partial^{l}f=O(|x|^{-l-q})$ for $l=0,\cdots,k$. \end{Def} $M\setminus\widetilde{K}$ is called an end of this asymptotically flat manifold.
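A standard example (included here for illustration) is the time-symmetric slice of the Schwarzschild spacetime, \[ g_{ij}(x)=\Big(1+\frac{m}{2|x|}\Big)^{4}\delta_{ij},\qquad K_{ij}\equiv0, \] for which \[ h_{ij}(x)=\Big\{\Big(1+\frac{m}{2|x|}\Big)^{4}-1\Big\}\delta_{ij}=\frac{2m}{|x|}\delta_{ij}+O(|x|^{-2})=O_{5}(|x|^{-1}), \] and the constraint equations hold with $\rho=J=0$, since this slice is scalar flat and $K\equiv0$. Moreover, $h_{ij}$ is an even function of $x$, so the Regge-Teitelboim condition introduced below is satisfied trivially.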
Here we only consider asymptotically flat manifolds with one end. We can define the mass of this end as \[ m=\lim_{r\rightarrow\infty}\frac{1}{16\pi}\int_{|x|=r}(h_{ij,j}-h_{jj,i})v_{g}^{i}d\mu_{g}, \] where $v_{g}$ and $d\mu_{g}$ are the unit normal vector and the volume form with respect to the metric $g$. From \cite{Bartink} we know that the mass is well defined if the scalar curvature $R_{g}$ is $L^{1}$-integrable. From the constraint equations, $R_{g}$ decays like $r^{-4}$, which is in $L^{1}$, so the mass is well defined. For the definition of the center of mass we introduce the Regge-Teitelboim (RT) condition \cite{RT}: \begin{Def}We say $(M,g,K)$ is asymptotically flat satisfying the Regge-Teitelboim condition (AF-RT) if it is AF and $g$, $K$ satisfy the asymptotically even/odd conditions \begin{eqnarray} h_{ij}^{odd}(x)=O_{2}(|x|^{-2}) & & K_{ij}^{even}(x)=O_{1}(|x|^{-3}) \end{eqnarray} Also, $\rho$ and $J$ satisfy \begin{eqnarray} \rho^{odd}(x)=O(|x|^{-5}) & & J^{odd}(x)=O(|x|^{-5}) \end{eqnarray} where $f^{odd}(x)=f(x)-f(-x)$ and $f^{even}(x)=f(x)+f(-x).$\end{Def} For (AF-RT) manifolds, the center of mass $C$ is defined as \begin{equation} C^{\alpha}=\frac{1}{16\pi m}\lim_{r\rightarrow\infty}(\int_{|x|=r}x^{\alpha}(h_{ij,i}-h_{ii,j})v_{g}^{j}d\mu_{g}-\int_{|x|=r}(h_{i\alpha}v_{g}^{i}-h_{ii}v_{g}^{\alpha})d\mu_{g}). \end{equation} From \cite{Huang center}, we know that it is well defined. Let $\Sigma$ be a constant mean curvature (CMC) surface. We say it is stable if the second variation operator has only non-negative eigenvalues when restricted to functions with zero mean value, i.e. \begin{equation} \int_{\Sigma}(|A|^{2}+Ric(v_{g},v_{g}))f^{2}d\mu\leq\int_{\Sigma}|\nabla f|^{2}d\mu \end{equation} for functions $f$ with $\int_{\Sigma}fd\mu=0$, where $A$ is the second fundamental form and $Ric(v_{g},v_{g})$ is the Ricci curvature in the normal direction with respect to the metric $g$. In this paper we prove the following uniqueness theorem: \begin{thm}Suppose $(M,g,K)$ is an AF-RT manifold with positive mass. Then there exists a compact set $\widetilde{K}$ such that for any sufficiently small $H>0$, there is only one stable sphere with constant mean curvature $H$ that separates infinity from $\widetilde{K}$.\end{thm} This theorem follows from the key lemma below and Huang's uniqueness theorem: \begin{lem}\label{key lemma} Suppose $(M,g)$ is asymptotically flat in the following sense: \[ g_{ij}=\delta_{ij}+h_{ij}(x), \] where $h_{ij}(x)=O_{5}(|x|^{-1})$ and $h_{ij}^{odd}(x)=O_{2}(|x|^{-2})$, and the scalar curvature $R_{g}$ is $L^{1}$-integrable. If the mass is positive, then for any sequence of stable CMC spheres $\Sigma_{n}$ which separate infinity from the compact part and satisfy \[ \lim_{n\rightarrow\infty}r_{0}(\Sigma_{n})=\infty, \] there exist a constant $C$ and a sufficiently large compact set $T$ such that for any $\Sigma_{n}$ outside $T$ we have $r_{1}(\Sigma_{n})/r_{0}(\Sigma_{n})\leq C$, where \begin{align*} r_{0}(\Sigma_{n}) & =\inf\{|x|:x\in\Sigma_{n}\},\\ r_{1}(\Sigma_{n}) & =\sup\{|x|:x\in\Sigma_{n}\}. \end{align*} \end{lem} Now we state the main idea of this article. To detect the positive mass, we follow the idea of Qing and Tian \cite{QT} and use the integral \[ \int_{\Sigma}(H-H_{e})<v_{e},b>d\mu_{e}, \] where $H$ is the constant mean curvature of the sphere $\Sigma$, $H_{e}$ is the mean curvature with respect to the Euclidean metric, $v_{e}$ is the outward unit normal vector of $\Sigma$, and $b$ is a constant vector to be chosen later.
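Before turning to the proof, we record a consistency check of the mass definition above (a sketch of a standard computation): for the Schwarzschild example, $h_{ij}=\frac{2m}{|x|}\delta_{ij}+O(|x|^{-2})$, so \[ h_{ij,j}-h_{jj,i}=-\frac{2mx_{i}}{|x|^{3}}+\frac{6mx_{i}}{|x|^{3}}+O(|x|^{-3})=\frac{4mx_{i}}{|x|^{3}}+O(|x|^{-3}), \] and using $v_{g}^{i}=x^{i}/r+O(r^{-1})$ and $d\mu_{g}=(1+O(r^{-1}))d\mu_{e}$ on $\{|x|=r\}$, \[ \frac{1}{16\pi}\int_{|x|=r}(h_{ij,j}-h_{jj,i})v_{g}^{i}d\mu_{g}=\frac{1}{16\pi}\cdot\frac{4m}{r^{2}}\cdot4\pi r^{2}+O(r^{-1})\rightarrow m, \] so the definition recovers the Schwarzschild mass parameter, as expected.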
In \cite{Shiguang Ma} I used harmonic coordinates to calculate the integral $\int_{\Sigma}(H-H_{e})<v_{e},b>d\mu_{e}$, which requires the original metric to be close to the Schwarzschild metric. Under the radius condition $\log(r_{1})\leq Cr_{0}^{1/4}$, I proved that the center of the sphere cannot drift far away after a suitable blow-down, which is sufficient to prove uniqueness. In this article, we find a direct way to calculate this integral in Section 5, which does not require the manifold to be close to Schwarzschild. We also find a better estimate on the second fundamental form, Lemma \ref{The new estimate for the second fundamental form}, which removes the radius condition. \begin{description} \item [{Acknowledgement}] I would like to thank my advisor Professor Gang Tian for his long-time help and encouragement. I give special thanks to Professor Jie Qing for helpful discussions. I also thank Yalong Shi for discussions on harmonic maps. \end{description} \section{Curvature estimate} All the curvature estimates in my original paper \cite{Shiguang Ma} remain valid in this case if we take $q=1$ in that paper. We state the results directly. \begin{lem} Suppose $\Sigma$ is a stable constant mean curvature sphere in the asymptotically flat manifold. Then for $r_{0}$ sufficiently large we have \[ \int_{\Sigma}|\AA|^{2}d\mu\leq Cr_{0}^{-1} \] \begin{equation} H^{2}|\Sigma|\leq C \end{equation} \begin{equation} \int_{\Sigma}H^{2}d\mu=16\pi+O(r_{0}^{-1}) \end{equation} \end{lem} \begin{lem}Suppose $\Sigma$ is a CMC sphere in an asymptotically flat end $(\mathbb{R}^{3}\setminus B_{1}(0))$. Then we have: \[ \int_{\Sigma}H_{e}^{2}d\mu=16\pi+O(r_{0}^{-1}), \] where $H_{e}$ denotes the mean curvature with respect to the background Euclidean metric. \end{lem} \begin{proof} From the explicit expression \begin{eqnarray} & & H-H_{e}=-f^{ik}h_{kl}f^{lj}A_{ij}+\frac{1}{2}Hv^{i}v^{j}h_{ij}-f^{ij}v^{l}\overline{\nabla}_{i}h_{jl}\label{H-H_e}\\ & & +\frac{1}{2}f^{ij}v^{l}\overline{\nabla}_{l}h_{ij}\pm C|h||\overline{\nabla}h|\pm C|h|^{2}|A| \end{eqnarray} and the lemma above, we deduce the result. \end{proof} \begin{lem}\label{Sobolev}Suppose $\Sigma$ is a CMC sphere in the asymptotically flat end with $r_{0}$ sufficiently large and $\int_{\Sigma}H^{2}d\mu\leq C$. Then: \[ (\int_{\Sigma}f^{2}d\mu)^{1/2}\leq C(\int_{\Sigma}|\nabla f|d\mu+\int_{\Sigma}H|f|d\mu). \] \end{lem} \begin{lem}Suppose $\Sigma$ is a CMC sphere in an asymptotically flat end with $r_{0}(\Sigma)$ sufficiently large. Then: \[ C_{1}H^{-1}\leq diam(\Sigma)\leq C_{2}H^{-1}, \] where $diam(\Sigma)$ denotes the diameter of $\Sigma$ in the Euclidean space $\mathbb{R}^{3}$. In particular, if the surface $\Sigma$ separates infinity from the compact part, then: \[ C_{1}H^{-1}\leq r_{1}(\Sigma)\leq C_{2}H^{-1}. \] \end{lem} Then from the Simons identity and the Sobolev inequality, Lemma \ref{Sobolev}, we have the following basic curvature estimate: \begin{thm}Suppose that $(\mathbb{R}^{3}\setminus B_{1}(0),g)$ is an asymptotically flat end. Then there exist positive numbers $\sigma_{0}$, $\delta_{0}$ such that for any CMC surface in the end which separates infinity from the compact part, we have: \[ |\AA|^{2}(x)\leq C|x|^{-2}\int_{B_{\delta_{0}|x|}(x)}|\AA|^{2}d\mu+C|x|^{-4}\leq C|x|^{-2}r_{0}^{-1} \] \[ |\nabla\AA|^{2}(x)\leq C|x|^{-2}\int_{B_{\delta_{0}|x|}(x)}|\nabla\AA|^{2}d\mu+C|x|^{-6}\leq C|x|^{-4}r_{0}^{-1/2} \] \end{thm} \section{Blow down analysis} Now we have the three blow-downs as usual.
First we consider \[ \widetilde{\Sigma}=\frac{1}{2}H\Sigma=\{\frac{1}{2}Hx:x\in\Sigma\}. \] Suppose that there is a sequence of constant mean curvature surfaces $\{\Sigma_{i}\}$ such that \[ \lim_{i\rightarrow\infty}r_{0}(\Sigma_{i})=\infty; \] we know that \[ \lim_{i\rightarrow\infty}\int_{\Sigma_{i}}H_{e}^{2}d\mu_{e}=16\pi. \] Then from Theorem 3.1 of L. Simon \cite{L.Simon}, we have \begin{lem}\label{Blow down by H/2}Suppose that $\{\Sigma_{i}\}$ is a sequence of constant mean curvature surfaces in a given asymptotically flat end $(\mathbb{R}^{3}\setminus B_{1}(0),g)$ such that \begin{equation} \lim_{i\rightarrow\infty}r_{0}(\Sigma_{i})=\infty, \end{equation} and suppose that each $\Sigma_{i}$ separates infinity from the compact part. Then there is a subsequence of $\{\widetilde{\Sigma}_{i}\}$ which converges in Gromov-Hausdorff distance to a round sphere $S_{1}^{2}(a)$ of radius $1$ centered at some $a\in\mathbb{R}^{3}$. Moreover, the convergence is in the $C^{2,\alpha}$ sense away from the origin.\end{lem} Next, we use the smaller scale $r_{0}$ to blow down the surface: \begin{equation} \widehat{\Sigma}=r_{0}(\Sigma)^{-1}\Sigma=\{r_{0}^{-1}x:x\in\Sigma\}. \end{equation} \begin{lem}\label{Blow down by r_0^{-1}}Suppose that $\{\Sigma_{i}\}$ is a sequence of constant mean curvature surfaces in a given asymptotically flat end $(\mathbb{R}^{3}\setminus B_{1}(0),g)$ such that \begin{equation} \lim_{i\rightarrow\infty}r_{0}(\Sigma_{i})=\infty, \end{equation} and suppose that \begin{equation} \lim_{i\rightarrow\infty}r_{0}(\Sigma_{i})H(\Sigma_{i})=0. \end{equation} Then there is a subsequence of $\{\widehat{\Sigma}_{i}\}$ which converges to a 2-plane at distance $1$ from the origin. Moreover, the convergence is in $C^{2,\alpha}$ on any compact subset of $\mathbb{R}^{3}$.\end{lem} We must also understand the behavior of the surfaces $\Sigma_{i}$ at the scales between $r_{0}(\Sigma_{i})$ and $H^{-1}(\Sigma_{i})$. We consider scales $r_{i}$ such that \begin{eqnarray} \lim_{i\rightarrow\infty}\frac{r_{0}(\Sigma_{i})}{r_{i}}=0 & & \lim_{i\rightarrow\infty}r_{i}H(\Sigma_{i})=0 \end{eqnarray} and blow down the surfaces \begin{equation} \overline{\Sigma}_{i}=r_{i}^{-1}\Sigma_{i}=\{r_{i}^{-1}x:x\in\Sigma_{i}\}. \end{equation} \begin{lem}Suppose that $\{\Sigma_{i}\}$ is a sequence of constant mean curvature surfaces in a given asymptotically flat end $(\mathbb{R}^{3}\setminus B_{1}(0),g)$ such that \begin{equation} \lim_{i\rightarrow\infty}r_{0}(\Sigma_{i})=\infty, \end{equation} and suppose that the $r_{i}$ are such that \begin{eqnarray} \lim_{i\rightarrow\infty}\frac{r_{0}(\Sigma_{i})}{r_{i}}=0 & & \lim_{i\rightarrow\infty}r_{i}H(\Sigma_{i})=0. \end{eqnarray} Then there is a subsequence of $\{\overline{\Sigma}_{i}\}$ which converges to a 2-plane through the origin in Gromov-Hausdorff distance. Moreover, the convergence is in $C^{2,\alpha}$ on any compact subset away from the origin. \end{lem} \section{Asymptotic analysis} In this section, we mainly follow the ideas of \cite{QT} and \cite{Shiguang Ma}. However, at the end we derive a new estimate on the second fundamental form, Lemma \ref{The new estimate for the second fundamental form}, which makes the uniqueness possible. First let us review the properties of harmonic functions on a cylinder. Denote \[ \|u\|_{1,i}^{2}=\int_{[(i-1)L,iL]\times S^{1}}|u|^{2}+|\nabla u|^{2}dtd\theta, \] where $(t,\theta)$ are the standard cylindrical coordinates.
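To motivate the lemma below, consider first the model case $A=B=h=0$ (this remark is ours): a harmonic function on the flat cylinder decomposes into Fourier modes as \[ u(t,\theta)=a_{0}+b_{0}t+\sum_{k\neq0}(a_{k}e^{|k|t}+b_{k}e^{-|k|t})e^{ik\theta}, \] so every oscillating mode grows or decays at least like $e^{\pm t}$ along the cylinder, while the circle average is affine in $t$. The lemma states that this growth-or-decay alternative survives small perturbation terms $A$, $B$ and a small inhomogeneity $h$.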
\begin{lem}\label{three element lemma}Suppose $u\in W^{1,2}(N,\mathbb{R}^{k})$ satisfies \begin{equation} \Delta u+A\cdot\nabla u+B\cdot u=h \end{equation} in $N$, where $N=[0,3L]\times S^{1}$, and suppose that $L$ is fixed and large. Then there exists a positive number $\delta_{0}$ such that if \begin{equation} \|h\|_{L^{2}(N)}\leq\delta_{0}\max_{1\leq i\leq3}\|u\|_{1,i} \end{equation} and \begin{eqnarray} \|A\|_{L^{\infty}(N)}\leq\delta_{0} & & \|B\|_{L^{\infty}(N)}\leq\delta_{0}, \end{eqnarray} then: (a) $\|u\|_{1,3}\leq e^{-\frac{1}{2}L}\|u\|_{1,2}$ implies $\|u\|_{1,2}<e^{-\frac{1}{2}L}\|u\|_{1,1}$; (b) $\|u\|_{1,1}\leq e^{-\frac{1}{2}L}\|u\|_{1,2}$ implies $\|u\|_{1,2}<e^{-\frac{1}{2}L}\|u\|_{1,3}$; (c) if both $|\int_{L\times S^{1}}ud\theta|$ and $|\int_{2L\times S^{1}}ud\theta|$ are bounded by $\delta_{0}\max_{1\leq i\leq3}\|u\|_{1,i}$, then either $\|u\|_{1,2}<e^{-\frac{1}{2}L}\|u\|_{1,1}$ or $\|u\|_{1,2}<e^{-\frac{1}{2}L}\|u\|_{1,3}$.\end{lem} Given a surface $\Sigma$ in $\mathbb{R}^{3}$, recall that \[ \Delta_{e}v+|\nabla_{e}v|^{2}v=\nabla_{e}H_{e}, \] where $v:\Sigma\rightarrow S^{2}$ is the Gauss map. For constant mean curvature spheres in the asymptotically flat end $(\mathbb{R}^{3}\setminus B_{1}(0),g)$, we have \begin{lem} \[ |\nabla_{e}H_{e}|(x)\leq C|x|^{-3} \] \end{lem} \begin{proof}Because of the uniform equivalence of the metric $g$ and the Euclidean metric, we can prove \[ |\nabla H_{e}|(x)\leq C|x|^{-3} \] instead. From the expression for $H-H_{e}$ in (\ref{H-H_e}), we have \begin{eqnarray} & & |\nabla H_{e}|\leq|\overline{\nabla}h_{ij}||A|+|h_{ij}||A|^{2}+|h_{ij}||\nabla{\AA}_{ij}|+H|A||h_{ij}|+H|\overline{\nabla}h_{ij}|\nonumber \\ & & +|A||\overline{\nabla}h_{ij}|+|\overline{\nabla}^{2}h|\nonumber \\ & & \leq C|x|^{-3} \end{eqnarray} \end{proof} Suppose $\Sigma$ is a CMC surface in the asymptotically flat end. Set \[ A_{r_{1},r_{2}}=\{x\in\Sigma:r_{1}\leq|x|\leq r_{2}\} \] and let $A_{r_{1},r_{2}}^{0}$ stand for the standard annulus in $\mathbb{R}^{2}$. We consider the behavior of $v$ on the part $A_{Kr_{0}(\Sigma),sH^{-1}(\Sigma)}$ of $\Sigma$, where $K$ will be fixed large and $s$ will be fixed small. The lemma below gives us a good coordinate on the surface. \begin{lem}Suppose $\Sigma$ is a constant mean curvature surface in a given asymptotically flat end $(\mathbb{R}^{3}\setminus B_{1}(0),g)$. Then, for any $\varepsilon>0$ and $L$ fixed and large, there are $M$, $s$ and $K$ such that, if $r_{0}\geq M$ and $Kr_{0}(\Sigma)<r<sH^{-1}(\Sigma)$, then $(r^{-1}A_{r,e^{L}r},r^{-2}g_{e})$ may be represented as $(A_{1,e^{L}}^{0},\overline{g})$ with \begin{equation} \|\overline{g}-|dx|^{2}\|_{C^{1}(A_{1,e^{L}}^{0})}\leq\varepsilon. \end{equation} In other words, in the cylindrical coordinates $(S^{1}\times[\log r,L+\log r],\overline{g}_{c})$, \begin{equation} \|\overline{g}_{c}-(dt^{2}+d\theta^{2})\|_{C^{1}(S^{1}\times[\log r,L+\log r])}\leq\varepsilon. \end{equation} \end{lem} Now consider the cylindrical coordinates $(t,\theta)$ on $S^{1}\times[\log Kr_{0},\log sH^{-1}]$. Then the tension field satisfies \begin{equation} |\tau(v)|=r^{2}|\nabla_{e}H_{e}|\leq Cr^{-1} \end{equation} for $t\in[\log Kr_{0},\log sH^{-1}]$. Thus, \begin{equation} \int_{S^{1}\times[t,t+L]}|\tau(v)|^{2}dtd\theta\leq Cr^{-2}. \end{equation} Let $I_{i}$ stand for $S^{1}\times[\log Kr_{0}+(i-1)L,\log Kr_{0}+iL]$, and let $N_{i}$ stand for $I_{i-1}\cup I_{i}\cup I_{i+1}$. On $\Sigma_{n}$ we assume $\log(sH^{-1})-\log(Kr_{0})=l_{n}L$.
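The way Lemma \ref{three element lemma} will be used can be sketched as follows (a heuristic outline): applying the lemma on each triple $N_{i}$ and propagating the alternatives (a) and (b) from block to block, with (c) controlling the circle averages, one finds that $\|u\|_{1,i}$ must decay exponentially away from the two ends of the cylinder, roughly \[ \|u\|_{1,i}\leq C(e^{-\frac{1}{2}iL}+e^{-\frac{1}{2}(l_{n}-i)L})\max_{1\leq j\leq l_{n}}\|u\|_{1,j}, \] which is precisely the two-sided exponential profile appearing in the energy decay below.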
Now we obtain the energy decay by an argument a little different from that of \cite{QT}. Suppose $f_{ij}$ is the metric on the surface $\Sigma_{n}$, i.e., the restriction of $g_{ij}$ to $\Sigma_{n}$. For sufficiently large $K$, we consider $(\Sigma_{n}\cap B_{Kr_{0}}^{c}(0),f_{ij}|x|^{-4}(Kr_{0})^{2})$, which is close to the unit ball of $\mathbb{R}^{2}$. Now the Gauss map $v_{n}:\Sigma_{n}\rightarrow S^{2}$ induces a map $\tilde{v}_{n}:B_{1}(0)\rightarrow S^{2}$. Note that the energy of $\tilde{v}_{n}$ will concentrate at the origin of $B_{1}(0)$, and the tension field $\tilde{\tau}$ of the map $\tilde{v}_{n}$ satisfies \[ |\tilde{\tau}|\leq C|x|^{-3}|x|^{4}(Kr_{0})^{-2}=C|x|(Kr_{0})^{-2}\leq\frac{C(Kr_{0})^{-1}}{\sqrt{4s^{2}e^{-2l_{n}L}+\tilde{r}^{2}}}, \] where $\tilde{r}$ denotes the radius function on the unit ball. We note that the tension field $\tilde{\tau}$ is not uniformly bounded in $L^{2}(B_{1}(0))$, but for any $p\in(1,2)$ it is uniformly bounded in $L^{p}(B_{1}(0))$. To use the $L^{p}$ theory of harmonic maps, we first find the weak limit of the maps $\tilde{v}_{n}$ as $n$ tends to infinity. By Lemma \ref{Blow down by r_0^{-1}}, we can find a subsequence of $\tilde{v}_{n}$ (also denoted by $\tilde{v}_{n}$) which converges weakly in $W^{1,2}(B_{1}(0))$ to a constant map $\tilde{v}_{0}$, namely the unit normal vector of the limit plane in Lemma \ref{Blow down by r_0^{-1}}. Now we introduce Theorem 6.5.1 of \cite{Lin and Wang}: \begin{thm}(Theorem 6.5.1 of \cite{Lin and Wang}) Let $M$ be a Riemann surface without boundary. For any $p>1$, assume that $\{u_{i}\}\subset W^{1,2}(M,S^{L-1})$ are such that the tension fields \[ \tau(u_{i})\equiv\Delta u_{i}+|\nabla u_{i}|^{2}u_{i} \] are bounded in $L^{p}(M)$. If $u_{i}$ converges to $u$ weakly in $W^{1,2}(M,S^{L-1})$, then there exist finitely many harmonic $S^{2}$'s, $\{\omega_{j}\}_{j=1}^{l}$, $\{a_{i}^{j}\}_{j=1}^{l}\subset M$, $\{\lambda_{i}^{j}\}_{j=1}^{l}\subset\mathbb{R}_{+}$ such that \[ \lim_{i\rightarrow\infty}\|u_{i}-u-\sum_{j=1}^{l}\omega_{i}^{j}\|_{L^{\infty}(M)}=0 \] and hence \[ \lim_{i\rightarrow\infty}\|u_{i}-u-\sum_{j=1}^{l}\omega_{i}^{j}\|_{W^{1,2}(M)}=0, \] where \[ \omega_{i}^{j}(\cdot)=\omega_{j}(\frac{\cdot-a_{i}^{j}}{\lambda_{i}^{j}})-\omega_{j}(\infty). \] \end{thm} From the proof of this theorem we find that the theorem also holds for $M=B_{1}(0)$, which is a Riemann surface with boundary. In the case we consider, there is only one bubble $\omega$, blown up at the origin. So from the theorem above we have \[ \lim_{n\rightarrow\infty}\|\tilde{v}_{n}-\tilde{v}_{0}-(\omega(\frac{\cdot}{s\cdot e^{-l_{n}L}})-\omega(\infty))\|_{L^{\infty}(B_{1}(0))}=0. \] So from Lemma \ref{Blow down by H/2}, for $s$ sufficiently small, \[ OSC_{B_{1}(0)\backslash B_{e^{-l_{n}L}}(0)}\tilde{v}_{n}\leq OSC_{B_{1}(0)\backslash B_{e^{-l_{n}L}}(0)}\omega(\frac{\cdot}{s\cdot e^{-l_{n}L}})+o(1) \] can be made arbitrarily small. So we have \begin{lem}\label{no neck lemma}For any $\varepsilon>0$, there are some $\delta>0$ and $M>0$ such that if $0<s<\delta$ and $n>M$, we have \[ OSC_{\Sigma_{n}\cap B_{sH^{-1}}(0)}v\leq\varepsilon. \] \end{lem} Now, in the cylindrical coordinates $(t,\theta)$ on $S^{1}\times[\log Kr_{0},\log sH^{-1}]$, we consider the equation satisfied by $v_{n}-v_{0}$, where $v_{0}=\tilde{v}_{0}$.
If we denote the Laplacian and gradient in these coordinates by $\hat{\Delta}=\frac{\partial^{2}}{\partial t^{2}}+\frac{\partial^{2}}{\partial\theta^{2}}$ and $\hat{\nabla}=(\frac{\partial}{\partial t},\frac{\partial}{\partial\theta})$, then \[ \hat{\Delta}(v_{n}-v_{0})+v_{n}\hat{\nabla}v_{n}\cdot\hat{\nabla}(v_{n}-v_{0})=\tau, \] where $|\tau|\leq Ce^{-t}$, and \[ |v_{n}\hat{\nabla}v_{n}|\leq|\hat{\nabla}v_{n}|\leq C(s+r_{0}^{-1/2}), \] which can be made very small. Finally, from the lemma above, for any $1\leq i\leq l_{n}$ we have \[ |\int_{iL\times S^{1}}(v_{n}-v_{0})d\theta|\leq\varepsilon, \] so we can use Lemma \ref{three element lemma} to get the energy decay: \begin{lem}For each $i\in[3,l_{n}-2]$, we have \begin{equation} \int_{I_{i}}|\hat{\nabla}v_{n}|^{2}dtd\theta\leq C(e^{-iL}+e^{-(l_{n}-i)L})(s^{2}+r_{0}^{-1}).\label{22-1} \end{equation} \end{lem} \begin{lem}\label{estimate on the normal vector in the neck}Suppose that $\{\Sigma_{n}\}$ is a sequence of constant mean curvature surfaces in a given asymptotically flat end $(\mathbb{R}^{3}\setminus B_{1}(0),g)$ such that \begin{equation} \lim_{n\rightarrow\infty}r_{0}(\Sigma_{n})=\infty, \end{equation} and suppose that \begin{equation} \lim_{n\rightarrow\infty}r_{0}(\Sigma_{n})H(\Sigma_{n})=0. \end{equation} Then there exist a large number $K$, a small number $s$ and $n_{0}$ such that, when $n\geq n_{0}$, \begin{equation} \max_{I_{i}}|\hat{\nabla}v|\leq C(e^{-\frac{i}{2}L}+e^{-\frac{(l_{n}-i)}{2}L})(s+r_{0}^{-\frac{1}{2}}), \end{equation} where \begin{equation} I_{i}=S^{1}\times[\log(Kr_{0}(\Sigma_{n}))+(i-1)L,\log(Kr_{0}(\Sigma_{n}))+iL] \end{equation} and \begin{eqnarray} i\in[0,l_{n}], & & \log(Kr_{0}(\Sigma_{n}))+l_{n}L=\log(sH^{-1}(\Sigma_{n})). \end{eqnarray} \end{lem} From the lemma above, we get the new estimate for the second fundamental form: \begin{lem}\label{The new estimate for the second fundamental form}If $\Sigma$ is a stable CMC sphere in the asymptotically flat end, then the second fundamental form of $\Sigma$ satisfies the following estimate: for a point $x\in(B_{Kr_{0}e^{(i+1)L}}\setminus B_{Kr_{0}e^{iL}})\cap\Sigma$, \[ |A(x)|\leq C|x|^{-1}(e^{-\frac{i}{2}L}+e^{-\frac{(l_{n}-i)}{2}L})(s+r_{0}^{-\frac{1}{2}}), \] where $sH^{-1}=Kr_{0}\cdot e^{l_{n}L}$. \end{lem} \begin{proof}Note that \[ |A(x)|\leq C|\nabla v(x)|\leq C|x|^{-1}\sup_{I_{i}}|\hat{\nabla}v|\leq C|x|^{-1}(e^{-\frac{i}{2}L}+e^{-\frac{(l_{n}-i)}{2}L})(s+r_{0}^{-\frac{1}{2}}). \] \end{proof} \begin{cor}\label{Choose a mean value of the normal vector}Assume the same conditions as in Lemma \ref{estimate on the normal vector in the neck}. Let $v_{n}=v(p_{n})$ for some $p_{n}\in I_{\frac{l_{n}}{2}}$. Then \begin{equation} \sup_{I_{i}}|v-v_{n}|\leq C(e^{-\frac{1}{2}iL}+e^{-\frac{1}{4}l_{n}L})(s+r_{0}^{-\frac{1}{2}}) \end{equation} for $i\in[0,\frac{1}{2}l_{n}]$, and \begin{equation} \sup_{I_{i}}|v-v_{n}|\leq C(e^{-\frac{1}{4}l_{n}L}+e^{-\frac{1}{2}(l_{n}-i)L})(s+r_{0}^{-\frac{1}{2}}) \end{equation} for $i\in[\frac{1}{2}l_{n},l_{n}]$.\end{cor} \section{Mass integral} In this section we treat the integral in a rather different way from \cite{Shiguang Ma}. In that paper, we used harmonic coordinates to reduce the integral to an explicit form; here we calculate it directly. We now have the new estimate on the second fundamental form, Lemma \ref{The new estimate for the second fundamental form}, so we can deal with the bad term in the integral. Arguing by contradiction, we assume that Lemma \ref{key lemma} were false.
Then we could find a subsequence of stable CMC spheres $\{\Sigma_{n}\}$ such that $\tilde{\Sigma}_{n}=\frac{1}{2}H\Sigma_{n}=\{\frac{1}{2}Hx:x\in\Sigma_{n}\}$ converges to some sphere $S_{1}(a)$ for some unit vector $a$, and the origin lies on $S_{1}(a)$. For $b=-a$, we consider the integral: \begin{eqnarray} & & \int_{\Sigma}(H-H_{e})<v_{e},b>_{e}d\mu_{e}=\int_{\Sigma}(-f^{ik}h_{kl}f^{lj}A_{ij}+\frac{1}{2}Hv^{i}v^{j}h_{ij}-f^{ij}v^{l}\overline{\nabla}_{i}h_{jl}\nonumber \\ & & +\frac{1}{2}f^{ij}v^{l}\overline{\nabla}_{l}h_{ij}\pm C|h||\overline{\nabla}h|\pm C|h|^{2}|A|)<v_{e},b>_{e}d\mu_{e}+O(r_{0}^{-1}), \end{eqnarray} where $i,j$ run from $1$ to $3$ and $f_{ij}$ is the restriction of $g_{ij}$. From \begin{eqnarray} & & \int_{\Sigma_{n}}-f^{ij}v^{l}(\overline{\nabla}_{i}h_{jl})v^{m}b^{m}d\mu_{e}\nonumber \\ & = & \frac{1}{2}\int_{\Sigma_{n}}(f^{ij}h_{jk}f^{kl}A_{li}-Hv^{j}v^{l}h_{jl})v^{m}b^{m}d\mu_{e}+\frac{1}{2}\int_{\Sigma_{n}}f^{ij}v^{l}h_{jl}A_{ik}f^{km}b^{m}d\mu_{e}\nonumber \\ & & -\frac{1}{2}\int_{\Sigma_{n}}f^{ij}v^{l}(\overline{\nabla}_{i}h_{jl})v^{m}b^{m}d\mu_{e}, \end{eqnarray} we change the integral into: \begin{eqnarray} \int_{\Sigma_{n}}(H-H_{e})<v_{e},b>_{e}d\mu_{e}=\int_{\Sigma_{n}}-\frac{1}{2}f^{ik}h_{kl}f^{lj}A_{ij}v^{m}b^{m}+\frac{1}{2}f^{ij}v^{l}h_{jl}A_{ik}f^{km}b^{m}\nonumber \\ -\frac{1}{2}f^{ij}v^{l}\overline{\nabla}_{i}h_{jl}v^{m}b^{m}+\frac{1}{2}f^{ij}v^{l}\overline{\nabla}_{l}h_{ij}v^{m}b^{m}d\mu_{e}+O(r_{0}^{-1}).\nonumber \\ \end{eqnarray} Now, for fixed $s$ sufficiently small and $K$ sufficiently large, we divide the integral into three parts: $\int_{\Sigma_{n}\cap B_{Kr_{0}}}$, $\int_{\Sigma_{n}\cap B_{sH^{-1}}^{c}}$, $\int_{\Sigma_{n}\cap(B_{sH^{-1}}\setminus B_{Kr_{0}})}$. For $\int_{\Sigma_{n}\cap B_{sH^{-1}}^{c}}$, we blow down $\Sigma_{n}$ by $H/2$: denoting $\tilde{\Sigma}_{n}=H\Sigma_{n}/2$, we have \begin{align*} & \int_{\Sigma_{n}\cap B_{sH^{-1}}^{c}}-\frac{1}{2}h_{ij}A_{ij}v^{m}b^{m}+\frac{1}{2}h_{il}A_{im}v^{l}b^{m}-\frac{1}{2}v^{l}\partial_{\alpha}h_{\alpha l}v^{m}b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{\alpha\alpha}v^{m}b^{m}d\mu_{e}\\ & =\int_{\tilde{\Sigma}_{n}\cap B_{s/2}^{c}}-\frac{1}{2}h_{ij}^{2/H}A_{ij}(\tilde{\Sigma}_{n})v^{m}b^{m}+\frac{1}{2}h_{il}^{2/H}A_{im}(\tilde{\Sigma}_{n})v^{l}b^{m}\\ & -\frac{1}{2}v^{l}\partial_{\alpha}h_{\alpha l}^{2/H}v^{m}b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{\alpha\alpha}^{2/H}v^{m}b^{m}d\mu_{e}(\tilde{\Sigma}_{n}), \end{align*} where $h_{ij}^{r}(x)=r\cdot h_{ij}(rx)$; here we used that under the rescaling $x=(2/H)\tilde{x}$ one has $d\mu_{e}(\Sigma_{n})=(2/H)^{2}d\mu_{e}(\tilde{\Sigma}_{n})$, $A_{ij}(\Sigma_{n})=(H/2)A_{ij}(\tilde{\Sigma}_{n})$ (in an orthonormal frame), and $(\partial h)(x)=(H/2)^{2}(\partial h^{2/H})(\tilde{x})$. Now from Lemma \ref{Blow down by H/2} and the estimate of the second fundamental form, for fixed small $s$, as $n\rightarrow\infty$, $\tilde{\Sigma}_{n}\cap B_{s/2}^{c}$ converges in the $C^{2,\alpha}$ sense to $S_{1}(a)\cap B_{s/2}^{c}$. So we have $A_{ij}(\tilde{\Sigma}_{n})\rightarrow f_{ij}(\tilde{\Sigma}_{n})$ and $v^{i}\rightarrow x^{i}-a^{i}$. We know there is some constant $C$ such that \[ |h_{ij}^{r}(x)|_{C^{0}}\leq C|x|^{-1},\quad|h_{ij,k}^{r}(x)|_{C^{0}}\leq C|x|^{-2}.
\] Now, for $\varepsilon>0$, we can choose $n$ sufficiently large such that \begin{align*} \Big|&\int_{\tilde{\Sigma}_{n}\cap B_{s/2}^{c}}-\frac{1}{2}h_{ij}^{2/H}A_{ij}(\tilde{\Sigma}_{n})v^{m}b^{m}+\frac{1}{2}h_{il}^{2/H}A_{im}(\tilde{\Sigma}_{n})v^{l}b^{m}\\ &-\frac{1}{2}v^{l}\partial_{\alpha}h_{\alpha l}^{2/H}v^{m}b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{\alpha\alpha}^{2/H}v^{m}b^{m}d\mu_{e}(\tilde{\Sigma}_{n})\\ &-\int_{\tilde{\Sigma}_{n}\cap B_{s/2}^{c}}-\frac{1}{2}h_{\alpha\alpha}^{2/H}v^{m}b^{m}+\frac{1}{2}h_{\alpha l}^{2/H}v^{l}b^{\alpha}\\ &-\frac{1}{2}v^{l}\partial_{\alpha}h_{\alpha l}^{2/H}(x^{m}-a^{m})b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{\alpha\alpha}^{2/H}(x^{m}-a^{m})b^{m}d\mu_{e}(\tilde{\Sigma}_{n})\Big|\\ &\leq\varepsilon/8, \end{align*} where the Greek indices run from $1$ to $2$. By a simple argument we have: \begin{align*} & \int_{\tilde{\Sigma}_{n}\cap B_{s/2}^{c}}-\frac{1}{2}h_{\alpha\alpha}^{2/H}v^{m}b^{m}+\frac{1}{2}h_{\alpha l}^{2/H}v^{l}b^{\alpha}-\frac{1}{2}v^{l}\partial_{\alpha}h_{\alpha l}^{2/H}(x^{m}-a^{m})b^{m}\\ & +\frac{1}{2}v^{l}\partial_{l}h_{\alpha\alpha}^{2/H}(x^{m}-a^{m})b^{m}d\mu_{e}(\tilde{\Sigma}_{n})\\ & =\int_{\tilde{\Sigma}_{n}\cap B_{s/2}^{c}}-\frac{1}{2}h_{ii}^{2/H}v^{m}b^{m}+\frac{1}{2}h_{il}^{2/H}v^{l}b^{i}-\frac{1}{2}v^{l}\partial_{i}h_{il}^{2/H}(x^{m}-a^{m})b^{m}\\ & +\frac{1}{2}v^{l}\partial_{l}h_{ii}^{2/H}(x^{m}-a^{m})b^{m}d\mu_{e}(\tilde{\Sigma}_{n})\\ & =\int_{\tilde{\Sigma}_{n}\cap B_{s/2}^{c}}-\frac{1}{2}h_{ii}^{2/H}v^{m}b^{m}+\frac{1}{2}h_{il}^{2/H}v^{l}b^{i}-\frac{1}{2}v^{l}\partial_{i}h_{il}^{2/H}x^{m}b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{ii}^{2/H}x^{m}b^{m}d\mu_{e}(\tilde{\Sigma}_{n})\\ & +\int_{\tilde{\Sigma}_{n}\cap B_{s/2}^{c}}-\frac{1}{2}v^{l}\partial_{i}h_{il}^{2/H}+\frac{1}{2}v^{l}\partial_{l}h_{ii}^{2/H}d\mu_{e}(\tilde{\Sigma}_{n}). \end{align*} We denote the inside of $\tilde{\Sigma}_{n}$ by $int(\tilde{\Sigma}_{n})$. Then by the divergence formula we have \begin{align*} & \int_{\tilde{\Sigma}_{n}\cap B_{s/2}^{c}}-\frac{1}{2}h_{ii}^{2/H}v^{m}b^{m}+\frac{1}{2}h_{il}^{2/H}v^{l}b^{i}\\ & -\frac{1}{2}v^{l}\partial_{i}h_{il}^{2/H}x^{m}b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{ii}^{2/H}x^{m}b^{m}d\mu_{e}(\tilde{\Sigma}_{n})\\ & =\int_{int(\tilde{\Sigma}_{n})\cap\partial B_{s/2}(0)}-\frac{1}{2}h_{ii}^{2/H}v^{m}b^{m}+\frac{1}{2}h_{il}^{2/H}v^{l}b^{i}\\ & -\frac{1}{2}v^{l}\partial_{i}h_{il}^{2/H}x^{m}b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{ii}^{2/H}x^{m}b^{m}d\mu_{e}\\ & -\frac{1}{2}\int_{int(\tilde{\Sigma}_{n})\cap B_{s/2}^{c}(0)}(h_{il,il}^{2/H}-h_{ii,ll}^{2/H})(x^{m}b^{m})dv. \end{align*} Note that the scalar curvature $R_{g}$ is $L^{1}$-integrable and $R_{g}=h_{ij,ij}-h_{ii,jj}+O(|x|^{-4})$, so $h_{ij,ij}-h_{ii,jj}$ is $L^{1}$-integrable. Define \[ F(r)=\int_{M\cap B_{r}^{c}(0)}|h_{ij,ij}-h_{ii,jj}|d\mu_{e}. \] We have \[ \lim_{r\rightarrow\infty}F(r)=0. \] So we have \begin{align*} & |\int_{int(\tilde{\Sigma}_{n})\cap B_{s/2}^{c}(0)}(h_{il,il}^{2/H}-h_{ii,ll}^{2/H})(x^{m}b^{m})dv|\leq C\int_{int(\tilde{\Sigma}_{n})\cap B_{s/2}^{c}(0)}|h_{il,il}^{2/H}-h_{ii,ll}^{2/H}|dv\\ & =C\int_{int(\Sigma_{n})\cap B_{sH^{-1}}^{c}(0)}|h_{il,il}-h_{ii,ll}|dv\\ & \leq CF(sH^{-1}). \end{align*} So \[ \lim_{n\rightarrow\infty}\int_{int(\tilde{\Sigma}_{n})\cap B_{s/2}^{c}(0)}(h_{il,il}^{2/H}-h_{ii,ll}^{2/H})(x^{m}b^{m})dv=0.
\] And \begin{align*} \Big|&\int_{int(\tilde{\Sigma}_{n})\cap\partial B_{s/2}(0)}-\frac{1}{2}h_{ii}^{2/H}v^{m}b^{m}+\frac{1}{2}h_{il}^{2/H}v^{l}b^{i}\\ &-\frac{1}{2}v^{l}\partial_{i}h_{il}^{2/H}x^{m}b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{ii}^{2/H}x^{m}b^{m}d\mu_{e}\Big|\\ &\leq Cs, \end{align*} which is small when $s$ is small. For the integral \[ \int_{\tilde{\Sigma}_{n}\cap B_{s/2}^{c}}-\frac{1}{2}v^{l}\partial_{i}h_{il}^{2/H}+\frac{1}{2}v^{l}\partial_{l}h_{ii}^{2/H}d\mu_{e}(\tilde{\Sigma}_{n}), \] by the divergence formula \begin{align*} & \int_{\tilde{\Sigma}_{n}\cap B_{s/2}^{c}(0)}-\frac{1}{2}v^{l}\partial_{i}h_{il}^{2/H}+\frac{1}{2}v^{l}\partial_{l}h_{ii}^{2/H}d\mu_{e}(\tilde{\Sigma}_{n})\\ & =\int_{int(\tilde{\Sigma}_{n})\cap\partial B_{s/2}(0)}-\frac{1}{2}v^{l}\partial_{i}h_{il}^{2/H}+\frac{1}{2}v^{l}\partial_{l}h_{ii}^{2/H}d\mu_{e}(\tilde{\Sigma}_{n})\\ & -\frac{1}{2}\int_{int(\tilde{\Sigma}_{n})\cap B_{s/2}^{c}(0)}(h_{il,il}^{2/H}-h_{ii,ll}^{2/H})dv. \end{align*} The second integral on the right-hand side converges to $0$ as before. The first integral on the right-hand side is close to the mass integral over a half sphere when $s$ is small; from the RT condition it is half of the mass integral, and it converges to $-4\pi m$, where $m$ is the mass of the end. (Note that the limit is taken first as $n\rightarrow\infty$, then $s\rightarrow0$.) Now we deal with the integral on $\Sigma_{n}\cap B_{Kr_{0}}$: \begin{align*} & \int_{\Sigma_{n}\cap B_{Kr_{0}}}-\frac{1}{2}h_{ij}A_{ij}v^{m}b^{m}+\frac{1}{2}h_{il}A_{im}v^{l}b^{m}-\frac{1}{2}v^{l}\partial_{i}h_{il}v^{m}b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{ii}v^{m}b^{m}d\mu_{e}\\ & =\int_{\hat{\Sigma}_{n}\cap B_{K}}-\frac{1}{2}h_{ij}^{r_{0}}A_{ij}(\hat{\Sigma}_{n})v^{m}b^{m}+\frac{1}{2}h_{il}^{r_{0}}A_{im}(\hat{\Sigma}_{n})v^{l}b^{m}\\ & -\frac{1}{2}v^{l}\partial_{i}h_{il}^{r_{0}}v^{m}b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{ii}^{r_{0}}v^{m}b^{m}d\mu_{e}(\hat{\Sigma}_{n}), \end{align*} where $\hat{\Sigma}_{n}=r_{0}^{-1}\Sigma_{n}$. From Lemma \ref{Blow down by r_0^{-1}} and Lemma \ref{no neck lemma}, $A_{ij}(\hat{\Sigma}_{n})\rightarrow0$ and $v^{m}\rightarrow b^{m}$, while $h_{ij}^{r_{0}}$ and $h_{ij,k}^{r_{0}}$ are bounded. So, up to errors that vanish as $n\rightarrow\infty$, the integral above equals \[ \int_{\hat{\Sigma}_{n}\cap B_{K}}-\frac{1}{2}v^{l}\partial_{i}h_{il}^{r_{0}}+\frac{1}{2}v^{l}\partial_{l}h_{ii}^{r_{0}}d\mu_{e}(\hat{\Sigma}_{n}). \] Again by the divergence formula, \begin{align*} & \int_{\hat{\Sigma}_{n}\cap B_{K}}-\frac{1}{2}v^{l}\partial_{i}h_{il}^{r_{0}}+\frac{1}{2}v^{l}\partial_{l}h_{ii}^{r_{0}}d\mu_{e}(\hat{\Sigma}_{n})\\ & =\int_{\partial B_{K}\backslash int(\hat{\Sigma}_{n})}-\frac{1}{2}v^{l}\partial_{i}h_{il}^{r_{0}}+\frac{1}{2}v^{l}\partial_{l}h_{ii}^{r_{0}}d\mu_{e}-\frac{1}{2}\int_{B_{K}\backslash int(\hat{\Sigma}_{n})}(h_{il,il}^{r_{0}}-h_{ii,ll}^{r_{0}})d\mu_{e}. \end{align*} The second integral on the right-hand side converges to $0$. The first integral on the right-hand side is close to the mass integral over a half sphere when $K$ is large. So the integral converges to $-4\pi m$. (Note that the limit is taken first as $n\rightarrow\infty$, then $K\rightarrow\infty$.) At last we deal with the intermediate part \[ \int_{\Sigma_{n}\cap(B_{sH^{-1}}\setminus B_{Kr_{0}})}-\frac{1}{2}h_{ij}A_{ij}v^{m}b^{m}+\frac{1}{2}h_{il}A_{im}v^{l}b^{m}-\frac{1}{2}v^{l}\partial_{i}h_{il}v^{m}b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{ii}v^{m}b^{m}d\mu_{e}.
\] First, from the new estimate for the second fundamental form, Lemma \ref{The new estimate for the second fundamental form}, we have \begin{align*} \lefteqn{|\int_{\Sigma_{n}\cap(B_{sH^{-1}}\setminus B_{Kr_{0}})}-\frac{1}{2}h_{ij}A_{ij}v^{m}b^{m}+\frac{1}{2}h_{il}A_{im}v^{l}b^{m}|}\\ & \leq\sum_{i=1}^{l_{n}}\int_{\Sigma_{n}\cap(B_{Kr_{0}e^{iL}}\backslash B_{Kr_{0}e^{(i-1)L}})}C|x|^{-2}(e^{-\frac{i}{2}L}+e^{-\frac{l_{n}-i}{2}L})(r_{0}^{-\frac{1}{2}}+s)d\mu_{e}\\ & \leq C(r_{0}^{-\frac{1}{2}}+s). \end{align*} For the second part, \[ \int_{\Sigma_{n}\cap(B_{sH^{-1}}\setminus B_{Kr_{0}})}-\frac{1}{2}v^{l}\partial_{i}h_{il}v^{m}b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{ii}v^{m}b^{m}d\mu_{e}, \] for each $n$ we can choose $p_{n}\in\Sigma_{n}\cap B_{Kr_{0}e^{(\frac{l_{n}}{2}+1)L}}\backslash B_{Kr_{0}e^{\frac{l_{n}}{2}L}}$ such that Corollary \ref{Choose a mean value of the normal vector} holds for $v_{n}=v(p_{n})$. So we have \begin{align*} \lefteqn{\int_{\Sigma_{n}\cap(B_{sH^{-1}}\setminus B_{Kr_{0}})}-\frac{1}{2}v^{l}\partial_{i}h_{il}v^{m}b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{ii}v^{m}b^{m}d\mu_{e}}\\ & =\int_{\Sigma_{n}\cap(B_{sH^{-1}}\setminus B_{Kr_{0}})}-\frac{1}{2}v^{l}\partial_{i}h_{il}(v^{m}-v_{n}^{m})b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{ii}(v^{m}-v_{n}^{m})b^{m}d\mu_{e}\\ & +(v_{n}^{m}b^{m})\int_{\Sigma_{n}\cap(B_{sH^{-1}}\setminus B_{Kr_{0}})}(-\frac{1}{2}v^{l}\partial_{i}h_{il}+\frac{1}{2}v^{l}\partial_{l}h_{ii})d\mu_{e}. \end{align*} For the first term on the right-hand side, we have: \begin{align*} \lefteqn{|\int_{\Sigma_{n}\cap(B_{sH^{-1}}\setminus B_{Kr_{0}})}-\frac{1}{2}v^{l}\partial_{i}h_{il}(v^{m}-v_{n}^{m})b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{ii}(v^{m}-v_{n}^{m})b^{m}d\mu_{e}|}\\ & \leq\sum_{i=1}^{l_{n}}|\int_{\Sigma_{n}\cap(B_{Kr_{0}e^{iL}}\setminus B_{Kr_{0}e^{(i-1)L}})}-\frac{1}{2}v^{l}\partial_{i}h_{il}(v^{m}-v_{n}^{m})b^{m}+\frac{1}{2}v^{l}\partial_{l}h_{ii}(v^{m}-v_{n}^{m})b^{m}d\mu_{e}|\\ & \leq\sum_{i=1}^{l_{n}/2}C(e^{-\frac{1}{2}iL}+e^{-\frac{1}{4}l_{n}L})(s+r_{0}^{-\frac{1}{2}})+\sum_{i=l_{n}/2+1}^{l_{n}}C(e^{-\frac{1}{4}l_{n}L}+e^{-\frac{1}{2}(l_{n}-i)L})(s+r_{0}^{-\frac{1}{2}})\\ & \leq C(s+r_{0}^{-\frac{1}{2}}). \end{align*} At last we prove \[ \lim_{s\rightarrow0,K\rightarrow\infty}\lim_{n\rightarrow\infty}\int_{\Sigma_{n}\cap(B_{sH^{-1}}\setminus B_{Kr_{0}})}(-\frac{1}{2}v^{l}\partial_{i}h_{il}+\frac{1}{2}v^{l}\partial_{l}h_{ii})d\mu_{e}=0. \] First, for sufficiently large $r$, \begin{align*} & \int_{\Sigma_{n}}(-\frac{1}{2}v^{l}\partial_{i}h_{il}+\frac{1}{2}v^{l}\partial_{l}h_{ii})d\mu_{e}-\frac{1}{2}\int_{B_{r}(0)\backslash int(\Sigma_{n})}(h_{il,il}-h_{ii,ll})d\mu_{e}\\ & =\int_{\partial B_{r}(0)}(-\frac{1}{2}v^{l}\partial_{i}h_{il}+\frac{1}{2}v^{l}\partial_{l}h_{ii})d\mu_{e} \end{align*} by the divergence formula. So \[ |-8\pi m-\int_{\Sigma_{n}}(-\frac{1}{2}v^{l}\partial_{i}h_{il}+\frac{1}{2}v^{l}\partial_{l}h_{ii})d\mu_{e}|\leq CF(r_{0}). \] And from the RT condition, \[ \lim_{s\rightarrow0}\lim_{n\rightarrow\infty}\int_{\Sigma_{n}\cap(B_{sH^{-1}}^{c})}(-\frac{1}{2}v^{l}\partial_{i}h_{il}+\frac{1}{2}v^{l}\partial_{l}h_{ii})d\mu_{e}=-4\pi m, \] \[ \lim_{K\rightarrow\infty}\lim_{n\rightarrow\infty}\int_{\Sigma_{n}\cap(B_{Kr_{0}})}(-\frac{1}{2}v^{l}\partial_{i}h_{il}+\frac{1}{2}v^{l}\partial_{l}h_{ii})d\mu_{e}=-4\pi m. \] So we know \[ \lim_{s\rightarrow0,K\rightarrow\infty}\lim_{n\rightarrow\infty}\int_{\Sigma_{n}\cap(B_{sH^{-1}}\setminus B_{Kr_{0}})}(-\frac{1}{2}v^{l}\partial_{i}h_{il}+\frac{1}{2}v^{l}\partial_{l}h_{ii})d\mu_{e}=0.
\] Now, combining all the terms, we get \begin{align*} \lim_{n\rightarrow\infty}\int_{\Sigma_{n}}(H-H_{e})<v_{e},b>_{e}d\mu_{e} & =-4\pi m-4\pi m\\ & =-8\pi m, \end{align*} where $m>0$ is the mass of the manifold. On the other hand, since $H$ is constant and since for any closed surface one has $\int_{\Sigma_{n}}v_{e}d\mu_{e}=0$ and $\int_{\Sigma_{n}}H_{e}v_{e}d\mu_{e}=0$ (see the sketch below), the integral $\int_{\Sigma_{n}}(H-H_{e})<v_{e},b>_{e}d\mu_{e}$ vanishes for every $n$. This is a contradiction. So we have proved Lemma \ref{key lemma} and the main theorem.
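For completeness, we sketch the standard argument behind the two vanishing identities used in the last step. For any closed surface $\Sigma\subset\mathbb{R}^{3}$ enclosing a region $\Omega$ and any constant vector $b$, the divergence theorem gives \[ \int_{\Sigma}<v_{e},b>d\mu_{e}=\int_{\Omega}div(b)dv=0, \] and integrating the identity $\Delta_{\Sigma}<x,b>=-H_{e}<v_{e},b>$ (with our orientation convention) over the closed surface $\Sigma$ gives \[ \int_{\Sigma}H_{e}<v_{e},b>d\mu_{e}=-\int_{\Sigma}\Delta_{\Sigma}<x,b>d\mu_{e}=0. \] Since $H$ is constant on $\Sigma_{n}$, both terms of $\int_{\Sigma_{n}}(H-H_{e})<v_{e},b>_{e}d\mu_{e}$ vanish, as claimed.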
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{Sec1} Prediction and variable selection are two important goals in many contemporary large-scale problems. Many regularization methods in the context of penalized empirical risk minimization have been proposed to select important covariates. See, for example, Fan \& Lv (2010) for a review of some recent developments in high-dimensional variable selection. Penalized empirical risk minimization has two components: empirical risk for a chosen loss function for prediction, and a penalty function on the magnitude of parameters for reducing model complexity. The loss function is often chosen to be convex. The inclusion of the regularization term helps prevent overfitting when the number of covariates $p$ is comparable to or exceeds the number of observations $n$. Generally speaking, two classes of penalty functions have been proposed in the literature: convex ones and concave ones. When a convex penalty such as the lasso penalty \citep{Tibshirani96} is used, the resulting estimator is a well-defined global optimizer. For the properties of $L_1$-regularization methods, see, for example, \cite{CDS99}, \cite{EHJT04}, \cite{Zou06}, \cite{CT07}, \cite{RZ07}, and \cite{BRT09}. In particular, \cite{BRT09} proved that using the $L_1$-penalty leads to estimators satisfying the oracle inequalities under the prediction loss and $L_q$-loss, with $1 \leq q \leq 2$, in high-dimensional nonparametric regression models. An oracle inequality means that with an overwhelming probability, the loss of the regularized estimator is within a logarithmic factor, a power of $\log p$, of that of the oracle estimator, with the power depending on the chosen estimation loss. Despite these nice properties, the $L_1$-penalty tends to yield a larger model than the true one for optimizing predictions, and many of the selected variables may be insignificant, showing that the resulting method may not be ideal for variable selection. The relatively large model size also reduces the interpretability of the selected model. Concave penalties, on the other hand, have been shown to lead to nice variable selection properties. The oracle property was introduced in \cite{FL01} to characterize the performance of concave regularization methods, in relation to the oracle procedure knowing the true sparse model in advance. In fixed dimensions, concave regularization has been shown to have the oracle property, recovering the true model with asymptotic probability one. This work has been extended to higher dimensions in different contexts, and the key message is the same. See, for example, \cite{LF09}, \cite{Zhang10}, and \cite{FLv11}. In particular, the weak oracle property, a surrogate of the oracle property, was introduced in \cite{LF09}. When $p > n$, it is generally difficult to study the properties of the global optimizer for concave regularization methods. Thus, most studies have focused on some local optimizer that has appealing properties in high-dimensional settings. The sampling properties of the global optimizers for these methods are less well-understood in high dimensions. In this article, we characterize theoretically the global optimizer of the regularization method with the combined $L_1$ and concave penalty, in the setting of the high-dimensional linear model. We prove that the resulting estimator combines the prediction power of the $L_1$-penalty and the variable selection power of the concave penalty. 
On the practical side, the $L_1$-penalty contributes the minimum amount of regularization necessary to remove noise variables for achieving oracle prediction risk, while the concave penalty incorporates additional regularization to control model sparsity. On the theoretical side, the use of an $L_1$-penalty helps us to study the various properties of the global optimizer. Specifically, we prove that the global optimizer enjoys the oracle inequalities under the prediction loss and $L_q$-loss, with $1 \leq q \leq 2$, as well as an asymptotically vanishing bound on false sign rate. We also establish its oracle risk inequalities under various losses, as well as the sampling properties of computable solutions. In addition, we show that the refitted least-squares estimator can enjoy the oracle property, in the context of \cite{FL01}. These results are also closely related to those in \cite{ZZ12}. Our work complements theirs in three important respects. First, the bound on the number of false positives in \cite{ZZ12} is generally of the same order as the true model size, while our bound on the stronger measure of the rate of false signs can be asymptotically vanishing. Second, our estimation and prediction bounds depend only on the universal regularization parameter for the $L_1$-component and are free of the regularization parameter $\lambda$ for the concave component, whereas the bounds in \cite{ZZ12} generally depend on $\lambda$ alone. Third, our oracle risk inequalities are new and stronger than those for losses, since the risks involve the expectations of losses and thus provide a more complete view of the stability of the method. It is unclear whether the concave method alone may enjoy similar risk bounds. Our proposal shares a similar spirit to that in \cite{LW07}, who proposed a combination of $L_0$- and $L_1$-penalties for variable selection and studied its properties in linear regression with fixed dimensionality. Their new penalty yields more stable variable selection results than the $L_0$-penalty, and outperforms both $L_0$- and $L_1$-penalties in terms of variable selection, while maintaining good prediction accuracy. Our theoretical results and numerical study reveal that this advantage still exists in high dimensions and for more general concave penalties. Our work differs from theirs in two main respects: we provide more complete and unified theory in ultra-high dimensional settings, and we consider a large class of concave penalties with only mild conditions on their shape. The idea of combining strengths of different penalties has also been exploited in, for example, \cite{ZZ09}. \section{Model setting} \label{Sec2} Consider the linear regression model \begin{equation} \label{e001} y = X \beta + \varepsilon, \end{equation} where $y = (Y_1, \ldots, Y_n)^T$ is an $n$-dimensional vector of responses, $X = (x_1, \ldots, x_p)$ is an $n \times p$ design matrix, $\beta = (\beta_1, \ldots, \beta_p)^T$ is an unknown $p$-dimensional vector of regression coefficients, and $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_n)^T$ is an $n$-dimensional vector of noises. We are interested in variable selection when the true regression coefficient vector $\beta_0 = (\beta_{0,1}, \ldots, \beta_{0,p})^T$ has many zero components. 
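To fix ideas, the following is a minimal NumPy sketch of this data-generating process; the dimensions, covariance structure and coefficient vector follow the design later used in the simulation study of \S\ref{Sec4}, and the function name is ours.
\begin{verbatim}
import numpy as np

def simulate_sparse_linear_model(n=80, p=1000, sigma=0.25, seed=0):
    """Draw (y, X, beta0) from the linear model y = X beta0 + eps."""
    rng = np.random.default_rng(seed)
    # Rows of X are i.i.d. N(0, Sigma0) with Sigma0 = (0.5^{|i-j|}).
    idx = np.arange(p)
    Sigma0 = 0.5 ** np.abs(idx[:, None] - idx[None, :])
    X = rng.multivariate_normal(np.zeros(p), Sigma0, size=n)
    # Only the first s = 7 coefficients are nonzero.
    beta0 = np.zeros(p)
    beta0[:7] = [1.0, -0.5, 0.7, -1.2, -0.9, 0.3, 0.55]
    y = X @ beta0 + sigma * rng.standard_normal(n)
    return y, X, beta0
\end{verbatim}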
The main goal is to effectively identify the true underlying sparse model, that is, the support $\mathrm{supp}(\beta_0) = \{j = 1, \ldots, p: \beta_{0, j} \neq 0\}$, with asymptotic probability one, and to efficiently estimate the nonzero regression coefficients $\beta_{0,j}$'s. A popular approach to estimating sparse $\beta_0$ is penalized least squares, which regularizes the conventional least-squares estimation by penalizing the magnitude of parameters $|\beta_j|$. A zero component of the resulting estimate indicates that the corresponding covariate $x_j$ is screened from the model. Penalized least-squares estimation minimizes the objective function \[ (2n)^{-1} \|y - X \beta\|_2^2 + \|p_\lambda(\beta)\|_1 \] over $\beta \in\mathbb{R}^p$, where we use the compact notation $p_\lambda(\beta) = p_\lambda(|\beta|) = (p_\lambda(|\beta_1|), \ldots, p_\lambda(|\beta_p|))^T$ with $|\beta| = (|\beta_1|, \ldots, |\beta_p|)^T$, and $p_\lambda(t)$, $t \in [0, \infty)$, is a penalty function indexed by the regularization parameter $\lambda \geq 0$. The lasso (Tibshirani, 1996) corresponds to the $L_1$-penalty $p_{\lambda}(t) = \lambda t$. As shown in Bickel et al. (2009), the lasso enjoys the oracle inequalities for prediction and estimation, but it tends to yield large models. Concave penalties have received much attention due to their oracle properties. Yet, as discussed in \S\ref{Sec1}, the sampling properties of the global optimizer for concave regularization methods are relatively less well-understood in high dimensions. To overcome these difficulties, we suggest combining the $L_1$-penalty $\lambda_0 t$ with a concave penalty $p_\lambda(t)$, and study the resulting regularization problem \begin{equation} \label{e006} \min_{\beta\in \mathbb{R}^p} \Big\{(2n)^{-1} \|y - X \beta\|_2^2 + \lambda_0\|\beta\|_1 + \|p_\lambda(\beta)\|_1\Big\}, \end{equation} where $\lambda_0 = c \{(\log p)/n\}^{1/2}$ for some positive constant $c$. Throughout the paper, we fix such a choice of the universal regularization parameter for the $L_1$-penalty, and the minimizer of (\ref{e006}) is implicitly referred to as the global minimizer. The $L_1$-component $\lambda_0 \|\beta\|_1$ helps study the global minimizer of (\ref{e006}), and reflects the minimum amount of regularization for removing the noise in prediction. The concave component $\|p_\lambda(\beta)\|_1$ serves to adapt the model sparsity for variable selection. \section{Main results} \label{Sec3} \subsection{Hard-thresholding property} \label{Sec3.1} To understand why the combination of $L_1$- and concave penalties can yield better variable selection than can the $L_1$-penalty alone, we consider the hard-thresholding penalty $p_{\text{H}, \lambda}(t) = 2^{-1}\{\lambda^2 - (\lambda - t)_+^2\}$, $t \geq 0$. Assume that each covariate $x_j$ is rescaled to have $L_2$-norm $n^{1/2}$. Let $\widehat\bbeta = (\widehat\beta_1, \ldots, \widehat\beta_p)^T$ be the global minimizer of (\ref{e006}) with $p_{\lambda}(t) = p_{\text{H}, \lambda}(t)$. The global optimality of $\widehat\bbeta$ entails that each $\widehat\beta_j$ is the global minimizer of the corresponding univariate penalized least-squares problem along the $j$th coordinate. All these univariate problems share a common form, with generally different scalar $z$'s, \begin{equation} \label{e010} \widehat\beta(z) = \mbox{argmin}_{\beta \in \mathbb{R}} \left\{2^{-1} (z - \beta)^2 + \lambda_0 |\beta| + p_{\text{H}, \lambda}(|\beta|)\right\}, \end{equation} since all covariates have $L_2$-norm $n^{1/2}$. 
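Before deriving the closed-form solution, one can verify the thresholding behaviour numerically; the sketch below minimises the univariate problem (\ref{e010}) over a fine grid, with purely illustrative parameter values.
\begin{verbatim}
import numpy as np

def hard_penalty(t, lam):
    # p_{H,lam}(t) = {lam^2 - (lam - t)_+^2} / 2 for t >= 0
    return 0.5 * (lam ** 2 - np.maximum(lam - t, 0.0) ** 2)

def univariate_objective(beta, z, lam0, lam):
    return (0.5 * (z - beta) ** 2 + lam0 * np.abs(beta)
            + hard_penalty(np.abs(beta), lam))

lam0, lam = 0.1, 0.5
grid = np.linspace(-3.0, 3.0, 60001)
for z in [0.3, 0.55, 0.61, 1.0]:
    beta_hat = grid[np.argmin(univariate_objective(grid, z, lam0, lam))]
    print(z, round(beta_hat, 3))
# The minimiser jumps from 0 to about z - lam0 once |z| exceeds
# lam + lam0 = 0.6, matching the closed form derived next.
\end{verbatim}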
Simple calculus shows that the solution in (\ref{e010}) is \begin{equation} \label{e013} \widehat\beta(z) = \mathrm{sgn}(z) (|z| - \lambda_0) 1_{\{|z| > \lambda + \lambda_0\}}, \end{equation} so the resulting estimator has the same feature as the hard-thresholded estimator: each component is either zero or of magnitude larger than $\lambda$. This provides an appealing distinction between insignificant covariates, whose coefficients are zero and should be estimated as such, and significant covariates, whose coefficients are significantly nonzero and should be estimated as nonzero, improving the variable selection performance of soft-thresholding by $L_1$-penalty. The hard-thresholding feature is shared by many other penalty functions, as now shown. \begin{proposition} \label{Prop1} Assume that $p_{\lambda}(t)$, $t \geq 0$, is increasing and concave with $p_\lambda(t) \geq p_{\text{H}, \lambda}(t)$ on $[0, \lambda]$, $p_\lambda'\{(1 - c_1) \lambda\} \leq c_1 \lambda$ for some $c_1 \in [0, 1)$, and $-p''_\lambda(t)$ decreasing on $[0, (1 - c_1) \lambda]$. Then any local minimizer of (\ref{e006}) that is a global minimizer in each coordinate has the hard-thresholding feature that each component is either zero or of magnitude larger than $(1 - c_1) \lambda$. \end{proposition} Although we used the derivatives $p'_{\lambda}(t)$ and $p''_{\lambda}(t)$ in the above proposition, the results continue to hold if we replace $-p'_{\lambda}(t)$ with the subdifferential of $-p_{\lambda}(t)$, and $-p_{\lambda}''(t)$ with the local concavity of $p_{\lambda}(t)$ at point $t$, when the penalty function is nondifferentiable at $t$ (Lv \& Fan, 2009). The hard-thresholding penalty $p_{\text{H}, \lambda}(t)$ satisfies conditions of Proposition \ref{Prop1}, with $c_1 = 0$. This class of penalty functions also includes, for example, the $L_0$-penalty and the smooth integration of counting and absolute deviation penalty (Lv \& Fan, 2009), with suitably chosen $c_1 \in [0, 1)$ and tuning parameters. \subsection{Technical conditions} \label{Sec3.2} We consider a wide range of error distributions for the linear model (\ref{e001}). Throughout this paper, we make the following assumption on the distribution of model error $\varepsilon$: \begin{equation} \label{e024} {\rm pr}(\|n^{-1} X^T \varepsilon\|_\infty > \lambda_0/2) = O(p^{-c_0}), \end{equation} where $c_0$ is some arbitrarily large, positive constant depending only on $c$, the constant defining $\lambda_0$. This condition was imposed in \cite{FLv11}, who showed for independent $\varepsilon_1, \ldots, \varepsilon_n$ that Gaussian errors and bounded errors satisfy (\ref{e024}) without any extra assumption, and that light-tailed error distributions satisfy (\ref{e024}) with additional mild assumptions on the design matrix $X$. Without loss of generality, we assume that only the first $s$ components of $\beta_0$ are nonzero, where the true model size $s$ can diverge with the sample size $n$. Write the true regression coefficient vector as $\beta_0 = (\widetilde{\beta}_{0,1}^T, \widetilde{\beta}_{0,2}^T)^T$ with $\widetilde{\beta}_{0,1} = (\beta_{0, 1}, \ldots, \beta_{0, s})^T \in \mathbb{R}^{s}$ the subvector of all nonzero coefficients and $\widetilde{\beta}_{0,2} = 0$, and let $p_\lambda(\infty) = \lim_{t \rightarrow \infty} p_\lambda(t)$. We impose the following conditions on the design matrix and penalty function, respectively. 
\begin{condition} \label{cond1} For some positive constant $\kappa_0$, $\min_{\|\delta\|_2 = 1,\ \|\delta\|_0 < 2 s}n^{-1/2} \|X \delta\|_2 \geq \kappa_0$ and \begin{equation}\label{002} \kappa = \kappa(s, 7) = \min_{ \delta \neq0, \ \|\widetilde{\delta}_{2}\|_1 \leq 7 \|\widetilde{\delta}_{1}\|_1} \big\{n^{-1/2} \|X \delta\|_2/(\|\widetilde{\delta}_{1}\|_2 \vee \|\widetilde{\delta}'_2\|_2)\big\} > 0, \end{equation} where $\delta = (\widetilde{\delta}_1^T, \widetilde{\delta}_2^T)^T$ with $\widetilde{\delta}_1 \in \mathbb{R}^s$ and $\widetilde{\delta}'_2$ the subvector of $\widetilde{\delta}_2$ consisting of the components with the $s$ largest absolute values. \end{condition} \begin{condition} \label{cond2} The penalty $p_{\lambda}(t)$ satisfies the conditions of Proposition \ref{Prop1} with $p_\lambda'\{(1 - c_1) \lambda\} \leq \lambda_0/4$, and $\min_{j = 1, \ldots, s} |\beta_{0, j}| > \max\{(1 - c_1) \lambda, 2 \kappa_0^{-1} p_\lambda^{1/2}(\infty)\}$. \end{condition} The first part of Condition \ref{cond1} is a mild sparse eigenvalue condition, and the second part combines the restricted eigenvalue assumptions in \cite{BRT09}, which were introduced for studying the oracle inequalities for the lasso estimator and Dantzig selector \citep{CT07}. To see the intuition for (\ref{002}), recall that the ordinary least-squares estimation requires that the Gram matrix $X^T X$ be positive definite, that is, \begin{equation}\label{001} \min_{0\neq\delta \in \mathbb{R}^p}\big\{ n^{-1/2}\|X\delta\|_2/ \|\delta\|_2 \big\}>0. \end{equation} In the high-dimensional setting $p>n$, condition (\ref{001}) is always violated. Condition \ref{cond1} replaces the norm $\|\delta\|_2$ in the denominator of (\ref{001}) with the $L_2$-norm of only a subvector of $\delta$. Condition \ref{cond1} also has an additional bound involving $\|\widetilde{\delta}_2'\|_2$. This is needed only when dealing with the $L_q$-loss with $q \in (1, 2]$. For other losses, the bound can be relaxed to \[ \kappa = \kappa(s, 7) = \min_{ \delta \neq0, \ \|\widetilde{\delta}_{2}\|_1 \leq 7 \|\widetilde{\delta}_{1}\|_1} \left\{n^{-1/2} \|X \delta\|_2/\|\widetilde{\delta}_{1}\|_2 \right\} > 0. \] For simplicity, we use the same notation $\kappa$ in these bounds. In view of the basic constraint (\ref{e012}), the restricted eigenvalue assumptions in (\ref{002}) can be weakened to other conditions such as the compatibility factor or the cone invertibility factor \citep{ZZ12}. We adopt the assumptions in \cite{BRT09} to simplify our presentation. Condition \ref{cond2} ensures that the concave penalty $p_{\lambda}(t)$ satisfies the hard-thresholding property, requires that its tail should be relatively slowly growing, and puts a constraint on the minimum signal strength. \subsection{Asymptotic properties of global optimum} \label{Sec3.3} In this section, we study the sampling properties of the global minimizer $\widehat\bbeta$ of (\ref{e006}) with $p$ implicitly understood as $\max(n, p)$ in all bounds. To evaluate the variable selection performance, we consider the number of falsely discovered signs \[ \mbox{FS}(\widehat\bbeta) =|\{j = 1, \ldots, p: \mathrm{sgn}(\widehat\beta_j) \neq \mathrm{sgn}(\beta_{0,j})\}|, \] which is a stronger measure than the total number of false positives and false negatives. \begin{theorem} \label{Thm1} Assume that Conditions \ref{cond1}--\ref{cond2} and the deviation probability bound (\ref{e024}) hold, and that $p_\lambda(t)$ is continuously differentiable.
Then the global minimizer $\widehat\bbeta$ of (\ref{e006}) has the hard-thresholding property stated in Proposition \ref{Prop1}, and with probability $1 - O(p^{-c_0})$, satisfies simultaneously that \begin{align} \label{e034} n^{-1/2} \|X (\widehat\bbeta - \beta_0)\|_2 & = O(\kappa^{-1} \lambda_0 s^{1/2}),\\ \label{e035} \|\widehat\bbeta - \beta_0\|_q & = O(\kappa^{-2} \lambda_0 s^{1/q}), \quad q \in [1, 2], \\ \label{e033} \mathrm{FS}(\widehat\bbeta) & = O\{\kappa^{-4} (\lambda_0/\lambda)^2 s\}. \end{align} If in addition $\lambda \geq 56 (1- c_1)^{-1} \kappa^{-2} \lambda_0 s^{1/2}$, then with probability $1 - O(p^{-c_0})$, it also holds that $\mathrm{sgn}(\widehat\bbeta) = \mathrm{sgn}(\beta_0)$ and $\|\widehat\bbeta - \beta_0\|_\infty = O\{\lambda_0 \|(n^{-1} X_{1}^T X_{1})^{-1}\|_\infty\}$, where $X_{1}$ is the $n \times s$ submatrix of $X$ corresponding to $s$ nonzero $\beta_{0, j}$'s. \end{theorem} From Theorem \ref{Thm1}, we see that if $\lambda$ is chosen such that $\lambda_0/\lambda\rightarrow 0$, then the number of falsely discovered signs $\mathrm{FS}(\widehat\bbeta)$ is of order $o(s)$ and thus the false sign rate $\mathrm{FS}(\widehat\bbeta)/s$ is asymptotically vanishing. In contrast, \cite{BRT09} showed that under the restricted eigenvalue assumptions, the lasso estimator, with the $L_1$-component $\lambda_0 \|\beta\|_1$ alone, generally gives a sparse model with size of order $O(\phi_{\max} s)$, where $\phi_{\max}$ is the largest eigenvalue of the Gram matrix $n^{-1} X^T X$. This entails that the false sign rate for the lasso estimator can be of order $O(\phi_{\max})$, which does not vanish asymptotically. Similarly, \cite{ZZ12} proved that the number of false positives of the concave regularized estimator is generally of order $O(s)$, which means that the false sign rate can be asymptotically nonvanishing. The convergence rates in oracle inequalities (\ref{e034})--(\ref{e035}), involving both sample size $n$ and dimensionality $p$, are the same as those for the $L_1$-component alone in \cite{BRT09}, and are consistent with those for the concave component alone in \cite{ZZ12}. A distinctive feature is that our estimation and prediction bounds in (\ref{e034})--(\ref{e035}) depend only on the universal regularization parameter $\lambda_0 = c \{(\log p)/n\}^{1/2}$ for the $L_1$-component, and are independent of the regularization parameter $\lambda$ for the concave component. In contrast, the bounds in \cite{ZZ12} generally depend on $\lambda$ alone. The logarithmic factor $\log p$ reflects the general price one needs to pay to search for important variables in high dimensions. Moreover, when the signal strength is stronger and the regularization parameter $\lambda$ is chosen suitably, the concave component yields the stronger variable selection result of sign consistency, beyond the oracle inequalities available with the $L_1$-penalty alone. Thanks to the inclusion of the $L_1$-component, another nice feature is that our theory analyzes the sampling properties on the whole parameter space $\mathbb{R}^p$, the full space of all possible models, in contrast to the restriction to the union of lower-dimensional coordinate subspaces such as in \cite{FLv11}. The bound on the $L_\infty$-estimation loss in Theorem \ref{Thm1} involves $\|(n^{-1} X_{1}^T X_{1})^{-1}\|_\infty$, which is bounded from above by $s^{1/2} \|(n^{-1} X_{1}^T X_{1})^{-1}\|_2 \leq s^{1/2} \kappa_0^{-2}$. The former bound is in general tighter than the latter one.
To see this, let us consider the special case when all column vectors of the $n\times s$ subdesign matrix $X_1$ have equal pairwise correlation $\rho \in [0,1)$. Then the Gram matrix takes the form $n^{-1} X_{1}^T X_{1} =(1 - \rho) I_s + \rho 1_s 1_s^T$. By the Sherman--Morrison--Woodbury formula, we have $(n^{-1}X_{1}^T X_{1})^{-1}= (1-\rho)^{-1}I_s - \rho(1-\rho)^{-1}\{1+(s-1)\rho\}^{-1}1_s 1_s^T$, which gives \[ \|(n^{-1}X_{1}^T X_{1})^{-1}\|_\infty = (1-\rho)^{-1} [1+ \rho(s-2) \{1+(s-1)\rho\}^{-1}] \leq 2 (1-\rho)^{-1}. \] It is interesting to observe that the above matrix $\infty$-norm has a dimension-free upper bound. Thus in this case, the bound on $L_\infty$-estimation loss becomes $O[\{(\log p)/n\}^{1/2}]$. Due to the presence of the $L_1$-penalty in (\ref{e006}), the resulting global minimizer $\widehat\bbeta$ characterized in Theorem \ref{Thm1} may not have the oracle property in the context of \cite{FL01}. This issue can be resolved using the refitted least-squares estimator on the support $\mathrm{supp}(\widehat\bbeta)$. \begin{corollary} \label{Cor1} Assume that all conditions of Theorem \ref{Thm1} hold, and let $\widetilde\bbeta$ be the refitted least-squares estimator given by covariates in $\mathrm{supp}(\widehat\bbeta)$, with $\widehat\bbeta$ the estimator in Theorem \ref{Thm1}. Then with probability $1 - O(p^{-c_0})$, $\widetilde\bbeta$ equals the oracle estimator, and has the oracle property if the oracle estimator is asymptotically normal. \end{corollary} Corollary \ref{Cor1} follows immediately from the second part of Theorem \ref{Thm1}. Additional regularity conditions ensuring the asymptotic normality of the oracle estimator can be found in, for example, Theorem 4 in \cite{FLv11}. \begin{theorem} \label{Thm2} Assume that the conditions of Theorem \ref{Thm1} hold, with $\varepsilon_1, \ldots, \varepsilon_n$ independent and identically distributed as $\varepsilon_0$. Then the regularized estimator $\widehat\bbeta$ in Theorem \ref{Thm1} satisfies that for any $\tau >0$, \begin{align} \label{e052} E \{n^{-1} \|X (\widehat\bbeta - \beta_0)\|_2^2\} & = O(\kappa^{-2} \lambda_0^2 s + m_{2, \tau} + \gamma \lambda_0 p^{-c_0}), \\ \nonumber E (\|\widehat\bbeta - \beta_0\|_q^q) & = O[\kappa^{-2 q} \lambda_0^q s + (2 - q) \lambda_0^{-1} m_{2, \tau} + (q - 1) \lambda_0^{-2} m_{4, \tau} \\ \label{e054} & \quad + \{(2 - q) \gamma + (q - 1) \gamma^2\} p^{-c_0}], \quad q \in [1, 2], \\ \label{e055} E \{\emph{\mbox{FS}}(\widehat\bbeta)\} & = O\{\kappa^{-4} (\lambda_0/\lambda)^2 s + \lambda^{-2} m_{2, \tau} + (\gamma \lambda_0/\lambda^2 + s) p^{-c_0}\}, \end{align} where $m_{q, \tau} = E (|\varepsilon_0|^q 1_{\{|\varepsilon_0| > \tau\}})$ denotes the tail moment and $\gamma = \|\beta_0\|_1 + s \lambda_0^{-1} p_\lambda(\infty) + \tau^2 \lambda_0^{-1}$. If in addition $\lambda \geq 56 (1- c_1)^{-1} \kappa^{-2} \lambda_0 s^{1/2}$, then we also have $E \{\emph{\mbox{FS}}(\widehat\bbeta)\} = O\{\lambda^{-2} m_{2, \tau} + (\gamma \lambda_0/\lambda^2 + s) p^{-c_0}\}$ and $E (\|\widehat\bbeta - \beta_0\|_\infty) = O\{\lambda_0 \|(n^{-1} X_{1}^T X_{1})^{-1}\|_\infty + \lambda_0^{-1} m_{2, \tau} + \gamma p^{-c_0}\}$. \end{theorem} Observe that $\lambda_0$ enters all bounds for the oracle risk inequalities, whereas $\lambda$ enters only the risk bound for the variable selection loss. This again reflects the different roles played by the $L_1$-penalty and concave penalty in prediction and variable selection.
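The remainder terms in Theorem \ref{Thm2} are controlled by the tail moments $m_{q,\tau}$. As a quick numerical illustration for Gaussian errors (a sketch; the closed form below follows from integration by parts), $m_{2,\tau}$ is already negligible for moderate $\tau$:
\begin{verbatim}
import math

def m2_tau(tau, sigma=1.0):
    """Closed-form m_{2,tau} = E(eps^2 1{|eps| > tau}) for eps ~ N(0, sigma^2).

    Integration by parts gives, with u = tau/sigma,
    m_{2,tau} = sigma^2 * 2 * (u * phi(u) + 1 - Phi(u)).
    """
    u = tau / sigma
    phi = math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(u / math.sqrt(2)))
    return sigma ** 2 * 2 * (u * phi + 1 - Phi)

for tau in [1, 2, 4, 8]:
    print(tau, m2_tau(tau))
# m_{2,tau} decays like tau * exp(-tau^2 / (2 sigma^2)), so the remainder
# terms in Theorem 2 can be made negligible by taking tau large.
\end{verbatim}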
The estimation and prediction risk bounds in (\ref{e052})--(\ref{e054}) as well as the variable selection risk bound in (\ref{e055}) can have leading orders given in their first terms. To understand this, note that each of these first terms is independent of $\tau$ and $p^{-c_0}$, and the remainders in each upper bound can be sufficiently small, since $\tau$ and $c_0$ can be chosen arbitrarily large. In fact, for bounded error $\varepsilon_i$ with range $[-b, b]$, taking $\tau = b$ makes the tail moments $m_{q,\tau}$ vanish. For Gaussian error $\varepsilon_i \sim N(0, \sigma^2)$, by the Gaussian tail probability bound, we can show that $m_{q, \tau} = O[\tau^{q - 1} \exp\{-\tau^2/(2 \sigma^2)\}]$ for positive integer $q$. In general, the tail moments can have sufficiently small order by taking a sufficiently large $\tau$ diverging with $n$. All terms involving $p^{-c_0}$ can also be of sufficiently small order by taking a sufficiently large positive constant $c$ in $\lambda_0$; see (\ref{e024}). Our new oracle risk inequalities complement the common results on the oracle inequalities for losses. The inclusion of the $L_1$-component $\lambda_0 t$ stabilizes prediction and variable selection, and leads to oracle risk bounds. It is, however, unclear whether the concave method alone can enjoy similar risk bounds. \subsection{Asymptotic properties of computable solutions} \label{Sec3.4} In \S\ref{Sec3.3} we have shown that the global minimizer for combined $L_1$ and concave regularization can enjoy appealing asymptotic properties. Such a global minimizer, however, may not be guaranteed to be found by a computational algorithm due to the general nonconvexity of the objective function in (\ref{e006}). Thus a natural question is whether these nice properties are shared by solutions computable by practical algorithms; a computable solution is typically a local minimizer. Zhang \& Zhang (2012) showed that under regularity conditions, any two sparse local solutions can be close to each other. This result along with the sparsity of the global minimizer in Theorem \ref{Thm1} entails that any sparse computable solution, in the sense of being a local minimizer, can be close to the global minimizer, and thus can enjoy properties similar to the global minimizer. The following theorem establishes these results for sparse computable solutions. \begin{theorem} \label{Thm3} Let $\widehat\bbeta$ be a computable local minimizer of (\ref{e006}) that is a global minimizer in each coordinate produced by any algorithm satisfying $\|\widehat\bbeta\|_0 \leq c_2 s$ and $\|n^{-1} X^T (y - X \widehat\bbeta)\|_\infty = O(\lambda_0)$, $\lambda \geq c_3 \lambda_0$, and $\min_{\|\delta\|_2 = 1,\ \|\delta\|_0 \leq c_4 s}n^{-1/2} \|X \delta\|_2 \geq \kappa_0$ for some positive constants $c_2, c_3, \kappa_0$ and sufficiently large positive constant $c_4$. Then under the conditions of Theorem \ref{Thm1}, $\widehat\bbeta$ has the same asymptotic properties as for the global minimizer in Theorem \ref{Thm1}. \end{theorem} For practical implementation of the method in (\ref{e006}), we employ the path-following coordinate optimization algorithm (Fan \& Lv, 2011; Mazumder et al., 2011) and choose the initial estimate as the lasso estimator $\widehat\bbeta_{\text{lasso}}$ with the regularization parameter tuned to minimize the cross-validated prediction error. An analysis of the convergence properties of such an algorithm was presented by \cite{LL13}.
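For concreteness, the following is a minimal sketch of one such coordinate optimization scheme for (\ref{e006}) with the hard-thresholding penalty, using the closed-form univariate update from \S\ref{Sec3.1}. It is an illustration under the column scaling $\|x_j\|_2 = n^{1/2}$, not the authors' exact implementation, and in practice the initial value beta_init would be the cross-validated lasso estimator.
\begin{verbatim}
import numpy as np

def threshold(z, lam0, lam):
    """Univariate minimiser: sgn(z)(|z| - lam0) 1{|z| > lam + lam0}."""
    return np.sign(z) * (np.abs(z) - lam0) * (np.abs(z) > lam + lam0)

def coordinate_descent(X, y, lam0, lam, beta_init, n_sweeps=100, tol=1e-8):
    """Cyclic coordinate updates for the combined L1 + hard-thresholding
    objective, assuming each column of X has L2-norm n^{1/2}."""
    n, p = X.shape
    beta = beta_init.copy()
    r = y - X @ beta                       # current residual
    for _ in range(n_sweeps):
        beta_old = beta.copy()
        for j in range(p):
            z = beta[j] + X[:, j] @ r / n  # partial-residual correlation
            new = threshold(z, lam0, lam)
            if new != beta[j]:
                r += X[:, j] * (beta[j] - new)
                beta[j] = new
        if np.max(np.abs(beta - beta_old)) < tol:
            break
    return beta
\end{verbatim}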
The use of the lasso estimator as the initial value has also been exploited in, for example, Zhang \& Zhang (2012). With the coordinate optimization algorithm, one can obtain a path of sparse computable solutions that are global minimizers in each coordinate. Theorem \ref{Thm3} suggests that a sufficiently sparse computable solution with small correlation between the residual vector and all covariates can enjoy desirable properties. \section{A simulation study} \label{Sec4} We simulated 100 data sets from the linear regression model (\ref{e001}) with $\varepsilon \sim N(0, \sigma^2 I_n)$ and $\sigma = $ 0$\cdot$25. For each simulated data set, the rows of $X$ were sampled as independent and identically distributed copies from $N(0, \Sigma_0)$ with $\Sigma_0 = ($0$\cdot$5$^{|i-j|})$. We considered $(n, p) = (80, 1000)$ and $(160, 4000)$, and set $\beta$ as $\beta_0 = (1, -$0$\cdot$5$, $ 0$\cdot$7$, -$1$\cdot$2$, -$0$\cdot$9$, $ 0$\cdot$3$, $ 0$\cdot$55$, 0, \ldots, 0)^T$. For each data set, we employed the lasso, combined $L_1$ and the smoothly clipped absolute deviation \citep{FL01}, combined $L_1$ and hard-thresholding, and combined $L_1$ and the smooth integration of counting and absolute deviation penalties to produce a sparse estimate. The minimax concave penalty in \cite{Zhang10} performed very similarly to the smoothly clipped absolute deviation penalty, so we omit its results to save space. The tuning parameters were selected using BIC. \begin{table} \def~{\hphantom{0}} \tbl{Means and standard errors (in parentheses) of different performance measures}{% \begin{tabular}{lccccc} \\ & Lasso & $L_1$+SCAD & $L_1$+Hard & $L_1$+SICA & Oracle \\[5pt] $n = 80$ & & & & & \\ PE ($\times 10^{-2}$) & 45$\cdot$0 (1$\cdot$7) & 8$\cdot$1 (0$\cdot$2) & 7$\cdot$0 (0$\cdot$1) & 7$\cdot$1 (0$\cdot$1) & 6$\cdot$9 (0$\cdot$0) \\ $L_2$-loss ($\times 10^{-2}$) & 86$\cdot$9 (1$\cdot$9) & 16$\cdot$8 (1$\cdot$0) & 11$\cdot$3 (0$\cdot$4) & 11$\cdot$3 (0$\cdot$5) & 9$\cdot$7 (0$\cdot$3) \\ $L_1$-loss ($\times 10^{-1}$) & 27$\cdot$6 (0$\cdot$6) & 3$\cdot$6 (0$\cdot$2) & 2$\cdot$5 (0$\cdot$1) & 2$\cdot$5 (0$\cdot$1) & 2$\cdot$1 (0$\cdot$1) \\ $L_\infty$-loss ($\times 10^{-2}$) & 48$\cdot$2 (1$\cdot$2) & 12$\cdot$1 (0$\cdot$8) & 7$\cdot$5 (0$\cdot$3) & 7$\cdot$5 (0$\cdot$3) & 6$\cdot$6 (0$\cdot$2) \\ FP & 26$\cdot$1 (0$\cdot$5) & 0$\cdot$2 (0$\cdot$0) & 0 (0) & 0 (0) & 0 (0) \\ FN & 1$\cdot$0 (0$\cdot$1) & 0$\cdot$1 (0$\cdot$0) & 0$\cdot$0 (0$\cdot$0) & 0$\cdot$0 (0$\cdot$0) & 0 (0)\\[5pt] $n = 160$ & & & & & \\ PE ($\times 10^{-2}$) & 16$\cdot$9 (0$\cdot$5) & 6$\cdot$7 (0$\cdot$0) & 7$\cdot$0 (0$\cdot$1) & 7$\cdot$0 (0$\cdot$1) & 6$\cdot$6 (0$\cdot$0) \\ $L_2$-loss ($\times 10^{-2}$) & 45$\cdot$3 (1$\cdot$0) & 7$\cdot$7 (0$\cdot$3) & 9$\cdot$2 (0$\cdot$4) & 9$\cdot$2 (0$\cdot$4) & 6$\cdot$6 (0$\cdot$2) \\ $L_1$-loss ($\times 10^{-1}$) & 16$\cdot$2 (0$\cdot$3) & 1$\cdot$7 (0$\cdot$1) & 2$\cdot$1 (0$\cdot$1) & 2$\cdot$1 (0$\cdot$1) & 1$\cdot$4 (0$\cdot$0) \\ $L_\infty$-loss ($\times 10^{-2}$) & 24$\cdot$9 (0$\cdot$6) & 5$\cdot$3 (0$\cdot$2) & 6$\cdot$0 (0$\cdot$2) & 5$\cdot$9 (0$\cdot$2) & 4$\cdot$4 (0$\cdot$1) \\ FP & 52$\cdot$8 (1$\cdot$1) & 0$\cdot$1 (0$\cdot$0) & 0$\cdot$7 (0$\cdot$1) & 0$\cdot$7 (0$\cdot$1) & 0 (0) \\ FN & 0 (0) & 0 (0) & 0 (0) & 0 (0) & 0 (0) \end{tabular}} \label{tab1} \begin{tabnote} $L_1$+SCAD, combined $L_1$ and smoothly clipped absolute deviation; $L_1$+Hard, combined $L_1$ and hard-thresholding; $L_1$+SICA, combined $L_1$ and smooth integration of counting and absolute deviation; PE, 
prediction error; FP, number of false positives; FN, number of false negatives. \end{tabnote} \end{table} We considered six performance measures for the estimate $\widehat\bbeta$: the prediction error, $L_2$-loss, $L_1$-loss, $L_\infty$-loss, the number of false positives, and the number of false negatives. The prediction error is defined as $E (Y - x^T \widehat\bbeta)^2$, with $(x^T, Y)$ an independent observation, which was calculated based on an independent test sample of size 10,000. The $L_q$-loss for estimation is $\|\widehat\bbeta - \beta_0\|_q$. A false positive means a selected covariate outside the true sparse model $\mathrm{supp}(\beta_0)$, and a false negative means a missed covariate in $\mathrm{supp}(\beta_0)$. Table \ref{tab1} lists the results under different performance measures. The combined $L_1$ and smoothly clipped absolute deviation, combined $L_1$ and hard-thresholding, and combined $L_1$ and smooth integration of counting and absolute deviation all performed similarly to the oracle procedure, outperforming the lasso. When the sample size increases, the performance of all methods tends to improve. Although theoretically the oracle inequalities for the $L_1$-penalty and combined $L_1$ and concave penalty can have the same convergence rates, the constants in these oracle inequalities matter in finite samples. This explains the differences in prediction errors and other performance measures in Table \ref{tab1} for various methods. We also compared our method with the concave penalty alone. Simulation studies suggest that they have similar performance, except that our method is more stable. To illustrate this, we compared the smoothly clipped absolute deviation with combined $L_1$ and the smoothly clipped absolute deviation. Boxplots of different performance measures by the two methods showed that the latter reduces the outliers and variability, and thus stabilizes the estimate. This result reveals that the same advantage as advocated in \cite{LW07} remains true in high dimensions, with more general concave penalties. \section{Real data analysis} \label{Sec5} We applied our method to the lung cancer data originally studied in Gordon et al. (2002) and analyzed in Fan \& Fan (2008). The data consist of 181 tissue samples, with 31 from the malignant pleural mesothelioma of the lung, and 150 from the adenocarcinoma of the lung. Each tissue sample is described by 12533 genes. To better evaluate the suggested method, we randomly split the 181 samples into a training set and a test set such that the training set consists of 16 samples from the malignant pleural mesothelioma class and 75 samples from the adenocarcinoma class. Correspondingly, the test set has 15 samples from the malignant pleural mesothelioma class and 75 samples from the adenocarcinoma class. For each split, we employed the same methods as in \S\ref{Sec4} to fit the logistic regression model to the training data, and then calculated the classification error using the test data. The tuning parameters were selected using cross-validation. We repeated the random splitting 50 times, and the means and standard errors of classification errors were 2$\cdot$960 (0$\cdot$254) for the lasso, 3$\cdot$080 (0$\cdot$262) for combined $L_1$ and the smoothly clipped absolute deviation, 2$\cdot$960 (0$\cdot$246) for combined $L_1$ and hard-thresholding, and 2$\cdot$980 (0$\cdot$228) for combined $L_1$ and the smooth integration of counting and absolute deviation.
We also calculated the median number of variables chosen by each method: 19 for the lasso, 11 for combined $L_1$ and the smoothly clipped absolute deviation, 11 for combined $L_1$ and hard-thresholding, and 12 for combined $L_1$ and the smooth integration of counting and absolute deviation; the mean model sizes are almost the same as the medians. For each method, we computed the percentage of times each gene was selected, and list the most frequently chosen $m$ genes in the Supplementary Material, with $m$ equal to the median model size for the method. The sets of genes selected by the combined $L_1$ and concave penalties are subsets of those selected by the lasso. \section{Discussion} \label{Sec6} Our theoretical analysis shows that the regularized estimate, as the global optimum, given by combined $L_1$ and concave regularization enjoys the same asymptotic properties as the lasso estimator, but with improved sparsity and false sign rate, in the ultra-high dimensional linear regression model. These results may be extended to more general model settings and other convex penalties, such as the $L_2$-penalty. To quantify the stability of variable selection, one can use, for example, the bootstrap method \citep{Efron79} to estimate the selection probabilities, significance, and estimation uncertainty of selected variables by the regularization method in practice. \section*{Acknowledgement} The authors sincerely thank the editor, an associate editor, and two referees for comments that significantly improved the paper. This work was supported by the U.S. National Science Foundation and the University of Southern California. \section*{Supplementary material} \label{SM} Supplementary material available at {\it Biometrika}\ online includes the proofs of Proposition \ref{Prop1} and Theorem \ref{Thm3}, and further details for \S\ref{Sec5}.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The mechanism of electroweak symmetry breaking (EWSB) is one of the central unanswered questions of particle physics. While the Standard Model (SM) with one Higgs doublet provides a simple EWSB mechanism that is so far in agreement with experiment, LEP has already excluded a SM Higgs with mass below $114.4$~GeV~\cite{Barate:2003sz}. Also, the ATLAS and CMS experiments at the Large Hadron Collider (LHC) are quickly excluding the SM Higgs. At~$95\%$ confidence level, the ATLAS experiment currently rules out the mass ranges $110-117.5$, $118.5-122.5$, and $129-539$ GeV~\cite{ATLAS}, and CMS excludes $129-525$ GeV~\cite{CMS}. As the limits on the SM Higgs boson tighten, we are motivated to investigate other possible mechanisms for EWSB. Two such examples are supersymmetry and technicolor~\cite{Weinberg:1975gm}. One may also note that electroweak precision constraints prefer a SM Higgs with mass at or slightly below $100$~GeV~\cite{:2010zz}, beneath the LEP direct detection bounds. Hence, a particularly interesting class of models are those that contain a SM-like Higgs boson that escapes the strictest LEP bounds and has mass nearer that preferred by precision measurements. This can be accomplished in theories with a SM-like Higgs boson that preferentially undergoes a cascade decay into light SM particles. In particular, if the Higgs boson dominantly decays via a cascade $h\rightarrow 2\eta\rightarrow 4X$, where $\eta$ is a pseudoscalar and $X$ are light SM particles, the LEP bounds can be significantly relaxed~\cite{Dermisek:2005ar,Dobrescu:2000yn}. There has been much recent work on detecting a Higgs boson decaying to two pseudoscalars and the pseudoscalar subsequently decaying to $2\gamma$, $2\tau$, $2b$, $2\mu$ and $2g$~\cite{Chang:2006bw,Falkowski:2010hi,Chen:2010wk}. If the pseudoscalars preferentially decay to two $b$'s, the Higgs mass bounds from LEP are only mildly reduced to $110$~GeV~\cite{Schael:2006cr}. Hence, to evade the most stringent LEP bounds, many models of this type require the mass of the pseudoscalar to be less than the bottom quark threshold, $2m_b$, eliminating the $\eta\rightarrow b\bar{b}$ decay. However, one example, the so-called ``charming Higgs" model~\cite{Bellazzini:2009kw}, has matter representations that suppress the couplings of the pseudoscalar to down type quarks and charged leptons. Consequently, the pseudoscalar decay to charm quarks is dominant over the decay to bottom quarks or $\tau$ leptons, even for pseudoscalar masses above the bottom quark threshold. In such a scenario, the Higgs boson can still have a mass around $m_Z$ and avoid the most stringent LEP bounds. In this paper we study the Higgs production in association with a $W$ at the LHC with a subsequent Higgs decay $h\rightarrow2\eta\rightarrow4c$. Specifically, we consider the case where the Higgs boson mass, $m_h$, is below the LEP bounds and the $\eta$ mass, $m_\eta$, is above the bottom quark threshold, $m_\eta> 2m_b$. Hadronically decaying Higgs bosons are difficult to observe at the LHC due to the large QCD backgrounds. We therefore consider the scenario where $m_\eta\ll m_h$. In this case the $\eta$s become highly boosted and their decay products are collimated into single jets. Our signal jets then originate from color singlet particles decaying to two quarks while the QCD background jets originate from massless colored partons. Hence, the signal jets have a mass scale and substructure distinct from the background. 
Using the techniques developed in Ref.~\cite{Butterworth:2008iy}, we decompose the jets into subjets and perform a substructure analysis. The paper is structured as follows. In Section~\ref{Model.SEC} we outline the simplified model we utilize. We study the observability of the charming Higgs associated production with a vector boson via a complete signal and background analysis in Section~\ref{Numerical.SEC}. In Section~\ref{Subjet.SEC}, we demonstrate that a subjet analysis improves the observability of the charming Higgs at the LHC. Finally, in Section~\ref{Conc.SEC} we summarize our results and conclude. \section{Simplified Model} \label{Model.SEC} In the models of interest~\cite{Bellazzini:2009kw,Bellazzini:2009xt}, the Higgs boson arises as a pseudo-Goldstone boson (pGB) of an approximate global symmetry. The symmetry breaking pattern of these models guarantees an additional light SM singlet pseudoscalar, $\eta$. The pseudoscalar then has derivative couplings to the SM-like Higgs boson, $h$: \begin{eqnarray} \mathcal{L}\approx -h(\partial_\mu \eta)^2\frac{v_{EW}}{\sqrt{2} f^2}\left(1-\frac{v^2_{EW}}{f^2}\right)^{-1/2}, \end{eqnarray} where $v_{EW}=174$~GeV is the electroweak breaking scale, and $f$ is a global symmetry breaking scale. For $f$ values on the order of the electroweak breaking scale, $h$ will dominantly decay to two $\eta$'s. For example, if $f=350-400$~GeV and the Higgs boson mass, $m_h$, is around the $Z$-mass, the branching ratio of $h$ to $2\eta$ is $80-90\%$ and $h$ to $b\bar{b}$ is $10-20\%$, consistent with the LEP bounds~\cite{Bellazzini:2009kw,Bellazzini:2009xt}. The couplings of the pseudoscalar to a SM fermion are of the form \begin{eqnarray} iy_{f}\eta \bar{f}\gamma_5 f. \label{etaFF.eq} \end{eqnarray} Typically, $\eta$ couples most strongly to top and bottom quarks. Hence, for $m_\eta>2m_b$, the $\eta$ predominantly decays into $b\bar{b}$ and the LEP bound is only slightly alleviated to $m_h>110$~GeV~\cite{Schael:2006cr}. However, in the so-called ``charming Higgs" model~\cite{Bellazzini:2009xt}, the $\eta$ coupling to bottom quarks is suppressed by higher order operators. Also, $\eta$ couples to $\tau$s through the mixing of $\tau$ with a heavy partner, whereas the coupling to the charm quark does not suffer from these suppressions. The dominant decay mode is thus $\eta\rightarrow c\bar{c}$ for all values of $m_\eta$. For simplicity and clarity, throughout the rest of this paper we take the branching ratios of the Higgs boson BR$(h\rightarrow\eta\eta)=1$ and pseudoscalar BR$(\eta\rightarrow c\bar{c})=1$. For smaller branching ratios, the signal cross sections can be simply scaled. \section{Signal and Background} \label{Numerical.SEC} In the context of the simplified model in Sec.~\ref{Model.SEC}, we study the production of a Higgs boson in association with a $W$ boson, $pp\rightarrow Wh$, with the subsequent decay $h\rightarrow 2\eta\rightarrow 4c$. Additionally, we consider events where the vector boson decays to leptons, which allows our signal events to be more easily triggered and avoids pure QCD backgrounds. We allow the $W$ to decay to both electrons and muons. The associated production with a $Z$-boson is also possible; however, the branching ratio of $Z$ to leptons is smaller than that of the $W$, so we focus purely on $Wh$ production. Similar studies have been performed for $h\rightarrow 4g$ and $h\rightarrow 4b$~\cite{Falkowski:2010hi,Chen:2010wk}.
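To see why the $\eta$ decay products merge into single jets at the benchmark point studied below, a rough kinematic estimate suffices (a back-of-the-envelope sketch; the masses and jet radius anticipate the values adopted in the next section, and the $\Delta R \sim 2m/p_T$ approximation is a standard rule of thumb rather than a result from the text):
\begin{verbatim}
import math

# Rough collimation estimate for h -> 2 eta -> 4c at the benchmark
# point used below: m_h = 100 GeV, m_eta = 12 GeV (assumed here).
m_h, m_eta = 100.0, 12.0

# In the Higgs rest frame each eta carries E_eta = m_h / 2.
E_eta = m_h / 2
gamma = E_eta / m_eta                     # boost factor ~ 4.2
p_eta = math.sqrt(E_eta ** 2 - m_eta ** 2)  # ~ 48.5 GeV

# Typical opening angle of the c cbar pair: Delta R ~ 2 m_eta / p_T.
delta_R = 2 * m_eta / p_eta
print(gamma, p_eta, delta_R)              # Delta R ~ 0.5, comparable to R
\end{verbatim}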
All signal and background events are generated using MadGraph5~\cite{Alwall:2011uj} and then showered using Herwig~6.5.10~\cite{Corcella:2000bw}. The charming Higgs model is incorporated into MadGraph using FeynRules~v1.6~\cite{Christensen:2008py}. Jets are clustered using FastJet~v2.4.2~\cite{Cacciari:2006sm} and the $k_T$ algorithm~\cite{Catani:1993hr} with a radius $R=0.5$. For the purposes of this study, we consider a Higgs mass of $100$~GeV and an $\eta$ mass of $12$~GeV. With this parameter choice the $\eta$s will be highly boosted and their individual decay products will be indistinguishable. Hence, our signal consists of a lepton, two jets, and missing energy. After showering and clustering, we require at least two jets to pass the basic acceptance cuts \begin{eqnarray} p^j_T>30~{\rm GeV}~~~~~~|y^j|<2.5,\label{cuts1J.EQ} \end{eqnarray} where $p_T$ and $y$ are transverse momentum and rapidity, respectively. Additionally, to trigger on the signal we apply the lepton and missing transverse energy cuts \begin{eqnarray} &&p^\ell_T>30~{\rm GeV}~~~~~ |y^\ell|<2.5~~~~~~ E\!\!\!\!\slash_{T}>25~{\rm GeV}.\label{cuts1L.EQ} \end{eqnarray} The signal cross section after these cuts is shown in the second column of Table~\ref{xsect.TAB}. \begin{table}[tb] \caption{Cross section for signal ($Wh$) and background, signal to background ratio, and signal significance with consecutive cuts at the 14 TeV LHC. Negligible backgrounds are indicated by --.} \label{xsect.TAB} \begin{center} \begin{tabular}{|l|c|c|c|c||c|} \hline $\sigma$(fb) & Cuts Eq.~(\ref{cuts1J.EQ})+(\ref{cuts1L.EQ}) & +~Eq.~(\ref{cuts2.EQ}) & +~$n^{\rm pass}_j=2$ & +~Eq.~(\ref{cuts3.EQ}) & +subjet cuts\\ \hline \hline $Wh$ & 84 & 27 & 22 & 3.3 & 1.1 \\ \hline\hline $Wjj$ & $1.4\times 10^6$ & $5.3\times10^4$ & $4.4\times10^4$ & 450 & 1.2 \\ \hline $WW$ & $1.7\times 10^3$ & $41$ & $31$ & $0.91$ & $1.3\times 10^{-2}$ \\ \hline $WZ$ & $470$ & $50$ & $42$ & $13$ & $1.2\times10^{-2}$ \\ \hline $tq$ & $7.1\times 10^3$ & $170$ & $140$ & $0.28$ & -- \\ \hline $tW$ & $3.3\times 10^3$ & $150$ & $76$ & $3.5$ & $3.3\times10^{-2}$ \\ \hline $tbW$ & $2.3\times10^5$ & $7.5\times 10^3$ & $1.5\times10^3$ & 78 & -- \\ \hline\hline ~~~$t\bar{t}$ & $4.3\times 10^4$ & $1.4\times 10^3$ & $96$ & 26 & -- \\ \hline\hline $S/B$ & $5.1\times10^{-5}$ & $4.4\times10^{-4}$ & $4.9\times10^{-4}$ & $6.1\times10^{-3}$ & 0.85 \\ \hline $S/\sqrt{S+B}$ ($30$ fb$^{-1}$) & 0.36 & 0.60 & 0.57 & 0.78 & 3.8 \\ \hline $S/\sqrt{S+B}$ ($100$ fb$^{-1}$) & 0.66 & 1.1 & 1.0 & 1.4 & 7.0 \\ \hline \end{tabular} \end{center} \end{table} In Fig.~\ref{mass.FIG} we present the signal invariant mass distributions of (a) the dijet system consisting of the two hardest jets, $m_{jj}$, and (b) the individual jet masses, $m_j$, for the hardest (solid) and second hardest (dashed) jets. The cuts in Eqs.~(\ref{cuts1J.EQ}) and (\ref{cuts1L.EQ}) have been applied. As expected, the invariant mass of the dijets is peaked at the Higgs mass $m_h\,=\,100$~GeV. Also, since each jet originates from a massive particle, the individual jet masses peak at $m_j= m_\eta$. The differences in the hardest and second hardest jet mass distributions can be understood by noting that higher order QCD processes can generate a jet mass. For jets originating from massless partons, the average jet mass is approximately~\cite{arXiv:0712.2447} \begin{eqnarray} \langle m^2_j\rangle \approx C\frac{\alpha_s}{\pi}p^2_TR^2, \label{mjpt.EQ} \end{eqnarray} where $R$ is the jet radius and $C$ depends on the initial partons.
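A quick numerical reading of this formula (with illustrative values $C = 1$ and $\alpha_s = 0.118$, which are assumptions rather than fitted numbers) shows how strongly the average QCD jet mass tracks $p_T$:
\begin{verbatim}
import math

# Average QCD jet mass from <m_j^2> ~ C (alpha_s / pi) pT^2 R^2.
# C and alpha_s below are illustrative values, not fitted numbers.
alpha_s, R, C = 0.118, 0.5, 1.0

for pT in [30.0, 50.0, 100.0]:
    m_avg = math.sqrt(C * alpha_s / math.pi * pT ** 2 * R ** 2)
    print(pT, round(m_avg, 1))
# <m_j> grows linearly with pT (about 0.1 pT for these inputs), so
# low-pT QCD jets rarely reach a jet mass window of order m_eta.
\end{verbatim}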
Hence, jets with lower $p_T$ have on average lower masses than higher $p_T$ jets. This effect can be seen in Fig.~\ref{mass.FIG}(b), where the second hardest jet is more (less) likely than the hardest jet to have a jet mass below (above) $m_\eta\,=\,12$~GeV. Based on these observations we apply the invariant mass cuts \begin{eqnarray} 90~{\rm GeV}\,<\,m_{jj}\,<\,110~{\rm GeV}~~{\rm and}~~8~{\rm GeV}\,<\,m_{j}\,<\,16~{\rm GeV}. \label{cuts2.EQ} \end{eqnarray} The effect of the invariant mass cuts on the signal is shown in the third column of Table~\ref{xsect.TAB}. \begin{figure}[tb] \centering \subfigure[]{ \label{mjj.FIG} \includegraphics[width=0.31\textwidth,clip,angle=-90]{m2j.eps} } \subfigure[]{ \includegraphics[width=0.31\textwidth,clip,angle=-90]{mj.eps} \label{mj.FIG} } \caption{ Invariant mass distributions of the signal $Wh\rightarrow \ell\nu 2j$ at the 14 TeV LHC for (a) the two hardest jets and (b) the individual jet invariant mass for the hardest (solid) and second hardest (dashed) jets. Cuts in Eqs.~(3.1) and (3.2) have been applied.} \label{mass.FIG} \end{figure} The irreducible backgrounds are the QCD background $Wjj$ and the electroweak backgrounds $WW/WZ\rightarrow \ell\nu 2j$. Other contributing reducible backgrounds are $tq\rightarrow \ell\nu bj$, $tW\rightarrow \ell\nu b 2j$, $t\bar{t}\rightarrow \ell\nu b\bar{b} 2j$, and $tbW\rightarrow \ell\nu2b2j$. The effects of the cuts in Eqs.~(\ref{cuts1J.EQ}), (\ref{cuts1L.EQ}), and (\ref{cuts2.EQ}) on the background cross sections are shown in the second and third columns of Table~\ref{xsect.TAB}. Note that the $t\bar{t}$ processes are included in the $tbW$ background and are shown only for reference. As can be seen, the invariant mass cuts greatly reduce all backgrounds. To further reduce the irreducible backgrounds, we require the number of jets, $n^{\rm pass}_j$, passing the cuts in Eq.~(\ref{cuts1J.EQ}) to be exactly two. The effects of the $n^{\rm pass}_j=2$ cut on signal and background are shown in the fourth column of Table~\ref{xsect.TAB}. After the previous cuts, $Wjj$ is still the dominant background. Equation~(\ref{mjpt.EQ}) implies that the effect of the jet mass cut in Eq.~(\ref{cuts2.EQ}) is to cause the QCD jets to strongly peak at low $p_T$ with a short tail into the high $p_T$ region. In contrast, since the signal jets originate from massive particles, the jet mass cut is expected to have much less effect on the signal jet $p_T$ distributions. Hence, the signal jets are expected to have longer tails into the high $p_T$ region than the background jets. In Fig.~\ref{pT.FIG} we show the transverse momentum distribution of the (a) hardest and (b) second hardest jets that pass the cuts in Eqs.~(\ref{cuts1J.EQ}), (\ref{cuts1L.EQ}), and (\ref{cuts2.EQ}) for both our $Wh$ signal (solid) and $Wjj$ background (dashed). As expected, the background jets peak at lower $p_T$ and have shorter tails into the high $p_T$ region than the respective signal jets. Hence, we apply the further $p_T$ cuts on signal and background jets: \begin{eqnarray} p^j_T({\rm hard})\,>\,100~{\rm GeV}~~{\rm and}~~p^j_T({\rm second~hardest})\,>\,50~{\rm GeV}. \label{cuts3.EQ} \end{eqnarray} The effect of these cuts on signal and background is shown in the fifth column of Table~\ref{xsect.TAB}.
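The full selection up to this point can be summarised compactly. The sketch below applies the acceptance, trigger, mass-window, jet-multiplicity and $p_T$ cuts to a single event; the jet and lepton records are simplified stand-ins for the output of the actual reconstruction, and all helper names are ours.
\begin{verbatim}
import math

def four_momentum(pT, y, phi, m):
    """(E, px, py, pz) from transverse momentum, rapidity, azimuth, mass."""
    mT = math.sqrt(m * m + pT * pT)
    return (mT * math.cosh(y), pT * math.cos(phi),
            pT * math.sin(phi), mT * math.sinh(y))

def dijet_mass(j1, j2):
    E, px, py, pz = (a + b for a, b in
                     zip(four_momentum(**j1), four_momentum(**j2)))
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def select_event(jets, lep_pT, lep_y, met):
    """Apply the acceptance, trigger, mass-window, multiplicity, pT cuts."""
    passing = [j for j in jets if j["pT"] > 30.0 and abs(j["y"]) < 2.5]
    if len(passing) != 2:
        return False                      # n_j^pass = 2 requirement
    if not (lep_pT > 30.0 and abs(lep_y) < 2.5 and met > 25.0):
        return False                      # lepton and MET trigger cuts
    j1, j2 = sorted(passing, key=lambda j: j["pT"], reverse=True)
    if not 90.0 < dijet_mass(j1, j2) < 110.0:
        return False                      # m_jj window around m_h
    if not all(8.0 < j["m"] < 16.0 for j in (j1, j2)):
        return False                      # m_j window around m_eta
    return j1["pT"] > 100.0 and j2["pT"] > 50.0
\end{verbatim}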
\begin{figure}[tb] \centering \subfigure[]{ \label{ptWh.FIG} \includegraphics[width=0.45\textwidth,clip]{ptj_hard.eps} } \subfigure[]{ \includegraphics[width=0.45\textwidth,clip]{ptj_soft.eps} \label{ptWjj.FIG} } \caption{Transverse momentum distribution of the two jets passing the cuts in Eqs.~(3.1), (3.2), and (3.4) for (a) hardest and (b) second hardest jets at the 14 TeV LHC for both the $Wh$ signal (solid) and $Wjj$ background (dashed). } \label{pT.FIG} \end{figure} After all the above cuts, the signal is still difficult to observe over the background. With a luminosity of $30$ fb$^{-1}$ the significance at the $14$ TeV LHC is still only $0.78\sigma$, improving only slightly to $1.4\sigma$ at $100$ fb$^{-1}$. As mentioned previously, our signal jets originate from color singlet massive particles while the background jets originate from colored massless partons. We now exploit those differences and perform a subjet analysis to increase the signal significance. \subsection{Jet Substructure} \label{Subjet.SEC} For the subjet analysis we follow the standard procedure given in Ref.~\cite{Butterworth:2008iy}. First, for the two jets passing all previous cuts, the final step in the jet reconstruction is reversed to determine the two leading subjets: $j_1$ and $j_2$ with $m_{j_1}>m_{j_2}$. Since the parent jet originates from a massive particle and the subjets from massless partons, there is expected to be a significant mass drop between the subjets and the parent jet. Hence the subjet masses are required to meet the criterion $m_{j_1}<\mu\,m_j$, where $m_j$ is the parent jet's mass and $\mu$ is a free parameter. Also, for the decay $\eta\rightarrow c\bar{c}$ the splitting should not be too asymmetric: \begin{eqnarray} \frac{\min({p^{j_1}_{T}}^2,{p^{j_2}_{T}}^2)}{m^2_j}\Delta R^2_{j_1,j_2}>y_{cut}, \end{eqnarray} where $\Delta R^2_{j_1,j_2}=(\phi^{j_1}-\phi^{j_2})^2+(y^{j_1}-y^{j_2})^2$ measures the separation of the subjets in the $y-\phi$ plane and $\phi^{j_i}$ is the azimuthal angle of subjet $j_i$. If the subjets do not satisfy the mass drop and asymmetric splitting criteria, the parent jet is replaced by $j_1$ and the procedure is repeated. As noted in Ref.~\cite{Butterworth:2008iy}, for $\mu\gtrsim 1/\sqrt{3}$ the three-body decay $\eta\rightarrow c\bar{c}g$ will pass the mass drop criterion if the decay is in the Mercedes configuration in the $\eta$ rest frame. Following Refs.~\cite{Chen:2010wk,Butterworth:2008iy} we take $\mu=0.67$ and $y_{cut}=0.12$. \begin{figure}[tb] \centering \subfigure[]{ \label{sig_logkt.FIG} \includegraphics[width=0.45\textwidth,clip]{logkt_hard.eps} } \subfigure[]{ \includegraphics[width=0.45\textwidth,clip]{logkt_soft.eps} \label{wjj_logkt.FIG} } \caption{ $\log\sqrt{d}$ distributions at the 14 TeV LHC for (a) the hardest jet and (b) second hardest jet for both $Wh$ signal (solid) and $Wjj$ background (dashed). Cuts in Eqs.~(3.1), (3.2), and (3.4) have been applied and $d$ is measured in units of GeV$^2$.} \label{KT.FIG} \end{figure} Each jet in our signal is expected to result from the $\eta\rightarrow2c$ decay, where the two quarks will be collimated into a single jet. Hence the $k_T$ distance of the two subjets is $d=\min({p^{j_1}_T}^2,{p^{j_2}_T}^2)\Delta R^2_{j_1,j_2}/R^2\sim \mathcal{O}(m^2_\eta)$~\cite{Butterworth:2002tt}. Figure~\ref{KT.FIG} shows the $\log\sqrt{d}$ distributions for the (a) hardest and (b) second hardest jets with $d$ measured in GeV$^2$ and after the cuts in Eqs.~(\ref{cuts1J.EQ}),~(\ref{cuts1L.EQ}), and (\ref{cuts2.EQ}) are applied.
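In code form, the mass-drop and splitting criteria and the subjet $k_T$ distance read as follows (a schematic sketch; subjets are represented by simple records, and the unclustering step itself is assumed to be provided by the jet-finding library):
\begin{verbatim}
MU, Y_CUT, R = 0.67, 0.12, 0.5

def delta_R2(j1, j2):
    """Squared separation of two subjets in the y-phi plane."""
    return (j1["phi"] - j2["phi"]) ** 2 + (j1["y"] - j2["y"]) ** 2

def mass_drop_pass(parent_m, j1, j2):
    """Mass-drop and asymmetric-splitting criteria for one unclustering
    step; j1 is the heavier subjet (m_j1 > m_j2)."""
    mass_drop = j1["m"] < MU * parent_m
    asym = (min(j1["pT"], j2["pT"]) ** 2 / parent_m ** 2) \
        * delta_R2(j1, j2) > Y_CUT
    return mass_drop and asym

def kt_distance(j1, j2):
    """Subjet kT distance d = min(pT1^2, pT2^2) dR^2 / R^2 ~ O(m_eta^2)."""
    return min(j1["pT"], j2["pT"]) ** 2 * delta_R2(j1, j2) / R ** 2
\end{verbatim}
For a signal jet, $\log\sqrt{d}$ computed this way should fall near $\log m_\eta \approx 2.5$, which is the basis of the cut applied next.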
In Fig.~\ref{KT.FIG}, the $Wh$ signal distribution is shown with solid lines and the $Wjj$ background with dashed lines. Since we apply the jet mass cut in Eq.~(\ref{cuts2.EQ}), both the signal and background distributions are peaked near the same value of $\log\sqrt{d}$. However, since the signal jets have a natural mass scale of $m_\eta=12$~GeV, the signal distributions are peaked narrowly at $\log\sqrt{d}\sim\log 12\approx 2.5$. Also, since the $Wjj$ jets originate from massless partons, the background distribution has a significant tail in the low $\log\sqrt{d}$ region and is peaked at a slightly lower value than $2.5$. We therefore apply the subjet cut \begin{eqnarray} 2\,<\,\log\sqrt{d}\,<\,3. \label{cuts5.EQ} \end{eqnarray} \begin{figure}[tb] \centering \subfigure[]{ \label{sipol_hard.FIG} \includegraphics[width=0.45\textwidth,clip]{dipol_hard.eps} } \subfigure[]{ \includegraphics[width=0.45\textwidth,clip]{dipol_soft.eps} \label{dipol_soft.FIG} } \caption{ Dipolarity distributions of the signal $Wh\rightarrow \ell\nu 2j$ at the 14 TeV LHC for (a) the hardest jet and (b) second hardest jet for both $Wh$ signal (solid) and background (dashed). Cuts in Eqs.~(3.1), (3.2), and (3.4) have been applied.} \label{dipol.FIG} \end{figure} Since our signal jets originate from color singlet particles, the two subjets within each jet are expected to form a color dipole. Hence, for our signal most of the radiation is expected to lie between the two subjets, whereas the radiation in the background jets is expected to be more uniformly distributed since the jets originate from colored partons. The observable ``dipolarity"~\cite{Hook:2011dp} has been proposed to measure whether or not subjets form a color dipole: \begin{eqnarray} \mathcal{D}=\frac{1}{\Delta R^2_{j_1,j_2}}\sum_{i\in J}\frac{p_{T_i}}{p_{T_J}}R^2_i, \end{eqnarray} where the sum runs over all calorimeter cells within a jet, $p_{T_i}$ is the $p_T$ in a given calorimeter cell, and $p_{T_J}$ is the jet's total $p_T$. The distance between a given calorimeter cell and the line connecting the centers of the two subjets in the $y-\phi$ plane is given by $R_i$. Fig.~\ref{dipol.FIG} shows the dipolarity of the (a) hardest and (b) second hardest jets after the cuts in Eqs.~(\ref{cuts1J.EQ}), (\ref{cuts1L.EQ}), and (\ref{cuts2.EQ}). Again, the $Wh$ signal is shown with solid lines and the $Wjj$ background with dashed lines. For this distribution the dipolarity has been calculated at the hadronic level. As can be clearly seen, the distribution for the signal jets is peaked at low dipolarity and the $Wjj$ distributions are more uniformly spread. We therefore apply the dipolarity cut \begin{eqnarray} \mathcal{D}\,<\,0.1. \label{cuts6.EQ} \end{eqnarray} The effects of the cuts in Eqs.~(\ref{cuts5.EQ}) and (\ref{cuts6.EQ}) are shown in the last column of Table~\ref{xsect.TAB}. Including the subjet cuts improves the statistical significance of the signal to $3.8\sigma$ at $30$ fb$^{-1}$ and $7.0\sigma$ at 100 fb$^{-1}$ of integrated luminosity at the 14 TeV LHC. A statistical significance of $5\sigma$ can be obtained with approximately $50$ fb$^{-1}$ of data. \section{Conclusions} \label{Conc.SEC} The LHC is quickly excluding the remaining parameter ranges for the SM Higgs boson. Together with LEP direct detection limits, the only remaining available masses for a low mass SM Higgs are $117.5-118.5$~GeV and $122.5-129$~GeV~\cite{Barate:2003sz,ATLAS,CMS}. As these limits tighten, we are motivated to investigate other mechanisms of EWSB.
Particularly interesting models are those that escape the LEP bounds and can have a Higgs boson with a mass around $100$~GeV. This can be accomplished if the Higgs boson undergoes a cascade decay $h\rightarrow 2\eta\rightarrow 4X$, where $\eta$ is a pseudoscalar and $X$ are light SM particles~\cite{Dermisek:2005ar}. We have studied the observability of a Higgs boson in the ``charming Higgs" model~\cite{Bellazzini:2009kw}. In this model the Higgs decays via $h\rightarrow2\eta\rightarrow4c$ and can escape the LEP bounds. In particular, we considered the associated production of a Higgs boson with leptonically decaying $W$ for masses $m_h=100$ GeV and $m_\eta=12$ GeV. Since $m_h\gg m_\eta$, the decay products of $\eta\rightarrow c\bar{c}$ are highly collimated and form single jets. Our signal then consists of a charged lepton, missing energy, and two jets originating from the two $\eta$s from the Higgs decay. Such a signal is typically overwhelmed by the $Wjj$ QCD background. By applying cuts to the invariant mass and transverse momentum of the jets, as well as requiring that exactly two jets satisfy our basic acceptance cuts, we can partially lift this signal above the standard model background at the LHC, but still only reach $S/\sqrt{S+B}$ of $0.78\sigma$ at $30$ fb$^{-1}$ and $1.4\sigma$ at $100$ fb$^{-1}$ at the $14$ TeV LHC. Our analysis can be greatly improved by applying jet substructure techniques. We require a significant mass drop from jet to subjet and fairly symmetric splitting of momentum between the two subjets. Additionally, we place new cuts on the $k_T$ distance between our subjets (which should be peaked around $m_{\eta}^2$ for our signal) and the jet dipolarity~\cite{Hook:2011dp} (because our signal comes from color singlets whereas our background is dominated by colored particles). Including these subjet cuts allows us to achieve a statistical significance of almost $4\sigma$ at $30$ fb$^{-1}$ and $7\sigma$ at $100$ fb$^{-1}$ at the $14$~TeV LHC. A $5\sigma$ discovery can be achieved with $50$ fb$^{-1}$ of data. The procedures presented here are quite general and applicable to situations in which the Higgs boson undergoes the decay $h\rightarrow 2\eta\rightarrow 4j$ with $m_\eta\ll m_h$. Also, the branching ratio of $h\rightarrow 4c$ may differ from unity and the Higgs mass may be larger than $100$~GeV. Simply scaling the signal rate with branching ratio, we estimate that with $100$~fb$^{-1}$ of data at the $14$ TeV LHC, branching ratios of BR$(h\rightarrow 2\eta\rightarrow 4c)=0.35$ and $0.64$ are observable at the $3\sigma$ and $5\sigma$ levels, respectively. Also, as the Higgs mass increases the dijet invariant mass cuts will need to be tuned and the signal rate will decrease slightly due to the higher final state invariant mass. However, the favored signal dijet mass is further from the $Z$-pole and the dijet invariant mass cuts will be more efficient in reducing the $WW$ and $WZ$ backgrounds. With the tuned dijet invariant mass cuts, the QCD background is expected to decrease at least as quickly as the signal rate. Hence, a slightly higher Higgs mass is not expected to significantly alter our conclusions. \section{Acknowledgement} We would like to thank Prof. Tao Han for suggesting this project and for helpful discussions. This work was supported in part by the U.S.~DOE under Grant No.~DE-FG02-95ER40896. IL was also supported in part by the US~DOE Grant No.~DE-AC02-98CH10886.
\section{INTRODUCTION} The epoch of reionisation (EoR) is the focus of significant theoretical and observational research efforts due to its importance in the understanding of cosmic evolution. The time of its onset and duration are related to fundamental information about the first stars, galaxies, and accreting black holes, including mass, radiative output, and composition \citep{b6}. Theoretical models place the EoR between redshifts of $20\gtrsim z \gtrsim 6$, after which the intergalactic medium (IGM) is observed to be fully ionised \citep{b1, b3, b2}. The duration of the EoR is difficult to predict theoretically \citep{b18, b8a} and observational constraints are just beginning to emerge \citep{b4c, b52, b70, b65, b01, b15a}. At radio wavelengths, the 21~cm hyperfine transition of neutral hydrogen (rest frequency of 1420~MHz) provides a versatile signal for studying the epochs of Cosmic Dawn and Reionization by probing the temperature and ionisation state of neutral hydrogen gas in the IGM. The detectable brightness temperature due to redshifted 21~cm emission or absorption from the early IGM is given by \citep{b5, b3b, b6, b7, b8} \begin{equation} \label{eq:sky_temp} T_{\text{b}}(z) \approx 27x_{\text{HI}}\left(\frac{1+z}{10}\right)^\frac{1}{2}\left(1-\frac{T_{\text{CMB}}}{T_\text{S}}\right) \text{mK,} \end{equation} where $x_{\text{HI}}$ is the neutral fraction of the gas, $T_\text{S}$ is the `spin' temperature that describes the relative populations of the ground and the hyperfine excited states, and $T_\text{CMB}$ is the temperature of the cosmic microwave background (CMB) radiation, all of which depend upon $z$ implicitly. \begin{figure} \centering \includegraphics{figure_01} \caption{A sample of theoretical models for the 21~cm brightness temperature with various values for model parameters (courtesy \citealt{b8a}). The models predict the magnitude of the 21~cm signal to peak between 15 and 40~mK in the interval between redshifts 6 and 20.} \label{fig:Theory} \end{figure} The spin temperature amplitude is affected by UV radiation via the Wouthuysen--Field mechanism \citep{b9,b10}, collisions with IGM gas, and interactions with CMB photons. Thus $T_\text{b}$ can be either positive or negative depending upon the spin temperature relative to the CMB temperature. The shape of the $T_\text{b}$ vs redshift curve indicates the relative strength and timing of the early processes mentioned above, and theoretical studies have varied the model parameters to produce families of resultant curves (see e.g. \citealt{b6a, b6, b12, b18, b19, b8a}). There are two main approaches to studying the EoR at radio wavelengths. The first method attempts to detect the EoR statistically, primarily through the redshifted 21~cm fluctuation power spectrum, and eventually to image large structures directly.
Efforts such as the Low-Frequency Array for Radio Astronomy (LOFAR\footnotemark\footnotetext{www.lofar.org}, \citealt{b14, b81}), the Precision Array to Probe the Epoch of Re-ionization (PAPER\footnotemark\footnotetext{http://eor.berkeley.edu/}, \citealt{b01, b15a}), the Murchison Widefield Array (MWA\footnotemark\footnotetext{www.mwatelescope.org}, \citealt{b16}; \citealt{b40}), the Square Kilometre Array (SKA\footnotemark\footnotetext{www.skatelescope.org}, \citealt{b17}), and the Hydrogen Epoch of Reionization Array (HERA\footnotemark\footnotetext{http://reionization.org/}) are radio interferometers currently operating or in development that aim to recover the redshifted 21~cm power spectrum. The second method aims to detect the global 21~cm signal through full-sky observations using a single antenna. A global signal antenna will respond to a range of frequencies and $T_\text{b}(z)$ will correspond to the frequency range of redshifted 21~cm emissions. The predicted spectral signature is broadband between 50 and 200~MHz ($30 \gtrsim z \gtrsim 6$), with a peak absolute amplitude between 10 and 200~mK, depending upon the particular star formation model parameters chosen (see Fig. \ref{fig:Theory}, data from \citealt{b8a}). Galactic and extragalactic continuum foregrounds from synchrotron and free-free emission are approximately four orders of magnitude larger, with typical sky temperatures of 250~K at 150~MHz away from the Galactic Centre. The foregrounds generally exhibit smooth, power-law-like spectra that must be subtracted from observations to reveal the 21~cm signal. Several global 21~cm experiments are in progress using various radio receiver architectures and antenna design styles. The major efforts include the Experiment to Detect the Global EoR Signature (EDGES) \citep{b4c}, Dark Ages Radio Explorer (DARE) \citep{b23}, Broadband Instrument for Global Hydrogen Reionisation Signal (BIGHORNS) \citep{b46}, Shaped Antenna measurement of background Radio Spectrum (SARAS) \citep{b49, b49a}, Large-aperture Experiment to detect the Dark Age (LEDA) \citep{b21}, and Sonda Cosmol\'{o}gica de las Islas para la Detecci\'{o}n de Hidr\'{o}geno Neutro (SCI-HI) \citep{b62}. Several studies have addressed experimental calibration issues \citep{b4d, b49, b21} as well as the frequency dependence of the antenna beams \citep{b60,b47,b21}. In this study we focus on antenna beam effects in the detection of the global 21~cm signature in the range of 13.2 $> z >$ 6.4 (100 $< \nu <$ 190~MHz) across sky position in right ascension and declination, which we map to the local sidereal time (LST) and latitude of the experiment deployment site. The antenna is a critical element of the EDGES system and, since it is not embedded in an array, its beam cannot be readily calibrated as part of the observing program (see \citealt{b21}). The antenna must be designed carefully and modeled accurately {\it a priori} to compensate for its characteristics during observations. The primary design objective for the antenna beam is a directivity pattern that varies smoothly in frequency. Chromatic antenna beams are undesirable because they can couple the relatively large angular fluctuations in the Galactic foreground into spectral structures that may confuse global 21~cm signatures.
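To make this coupling mechanism explicit, consider a deliberately simple toy model (our own construction, not part of any experiment's pipeline): two sky regions, each with a perfect power-law spectrum, observed through a beam whose relative weighting of the two regions drifts by a few per cent across the band. All numbers below are invented for illustration. \begin{verbatim} import numpy as np  # Two sky regions, each a perfect power law (beta = 2.5), # with illustrative normalizations at 150 MHz. nu = np.linspace(100.0, 190.0, 91)        # MHz, 1 MHz channels T_hot, T_cold = 800.0, 180.0              # K at 150 MHz (invented) spec = (nu / 150.0) ** -2.5  # Chromatic beam: the beam fraction on the hot region drifts # linearly by a few per cent across the band (invented drift). w = 0.30 + 0.02 * (nu - 145.0) / 45.0  T_ant = (w * T_hot + (1.0 - w) * T_cold) * spec  # Fit a single power law in log-log space; the residual is the # spurious spectral structure introduced by the chromatic beam. coef = np.polyfit(np.log(nu), np.log(T_ant), 1) resid = T_ant - np.exp(np.polyval(coef, np.log(nu))) print(f"rms residual: {1e3 * resid.std():.1f} mK") \end{verbatim} Even though each sky component is a pure power law, the few-per-cent beam drift leaves residuals well above the expected 21~cm signal level; suppressing such structure is what drives the smooth-beam requirement.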
\cite{b47} have evaluated the chromatic effects of the ionosphere with the LOFAR low frequency antenna beam, and \cite{b21} investigated the chromatic effects of a detailed sky foreground model with analytical forms of the LEDA dipole beam. Here, we compare the chromatic effects of two dipole-based antennas and an idealized reference antenna, over deployment latitude and LST, in the context of the EDGES project. We isolate the effects of the antenna beams by ignoring ionospheric effects and by adopting a power-law model for the foreground emission. In Section~\ref{sec:methods} we describe the antennas and the method used to calculate their response. In Section~\ref{sec:results} we discuss the results of our simulations and conclude in Section~\ref{sec:conclusion} with a summary of key findings and a discussion of implications and potential future investigation paths. \section{METHODS} \label{sec:methods} We base the instrument model in our simulations on the EDGES project (\citealt{b4a, b4b, b4c, b4d}). It employs a broadband dipole-like antenna sensitive to wavelengths between 3 and 1.6~meters (100--190~MHz). The antenna is connected to a radio receiver that amplifies and conditions the signal before passing it to a digital spectrometer that samples the spectrum with 6~kHz resolution. The receiver utilizes laboratory calibration prior to deployment, augmented with a three-position hot/cold calibration switching scheme during operations \citep{b4d}, to achieve an accuracy of $\sim$0.01\% in measured antenna temperature as a function of wavelength. The impedance match of the antenna connection to the receiver is measured \textit{in situ} by periodically switching a Vector Network Analyzer (VNA) into the electrical path. Although the EDGES calibration scheme is sufficient to correct for undesirable electronic effects in the measured spectrum, it does not compensate for chromatic beam effects. \subsection{Antennas} \begin{figure} \centering \includegraphics{figure_02} \caption{The fourpoint antenna (left) and blade antenna (right) are shown in a top view with dimension-indicating arrows to denote the individual panel widths and lengths listed in Table \ref{tab:antennas}.} \label{fig:Fourpoint_and_Blade} \end{figure} \begin{figure*} \begin{minipage}[b]{0.99\linewidth} \centering \includegraphics{figure_03_left} \includegraphics{figure_03_right} \end{minipage} \caption{(left) Photograph of the fourpoint antenna as deployed by EDGES in 2015. The fourpoint design has a downward pointing rim (1.8~cm) on the perimeter of each panel and uses discrete capacitors between the panels at the outer edge as well as a tuning capacitor half-way up the tubes which are part of the Roberts balun. (right) Photograph of the blade antenna, which does not use inter-panel capacitors, a balun tuning capacitor, or a perimeter rim. (common) Both designs use fiberglass support tubes, four for the fourpoint and eight for the blade. Both antennas use a tuning capacitor on the top of the panels between the balun tubes to improve impedance matching. Surrounding the tubes at the base, a short rectangular enclosure shields against vertical currents in the tubes.} \label{fig:antenna_images} \end{figure*} In this study, we analyse three horizontal planar dipole-like antennas placed over a ground plane. Each antenna is tuned to respond in the EDGES band.
The three antenna types are: 1) the EDGES ``fourpoint'' antenna deployed to date, which is based on the fourpoint design of \citet{b24}; 2) a ``blade'' design that shows superior beam qualities in simulations and is being considered as a potential successor to the fourpoint design; and 3) an analytic \nicefrac[]{1}{2}-$\lambda$ wire dipole, which is included as an idealized reference. The fourpoint and blade antennas are shown in Figs.~\ref{fig:Fourpoint_and_Blade}--\ref{fig:antenna_images} and Table \ref{tab:antennas} summarizes the design and model parameters of the antennas. Numerical time-domain electromagnetic simulations were performed using CST (Computer Simulation Technology) Microwave Studio for the fourpoint and blade antennas. All of the antenna components were simulated except for fiberglass support legs and cable connectors. Metal structures were modeled with their actual dimensions and thicknesses, but ground planes were modeled as infinite metal sheets. We found that the choice of CST settings can affect the chromaticity of the modeled beams by introducing numerical artifacts from rounding and precision errors. In particular, we needed to carefully select transient power dissipation thresholds and mesh grid resolutions in order to minimize such artifacts while maintaining efficient processing times. To fine-tune our CST settings, we performed a convergence study in which we perturbed the physical antenna dimensions (by of order 1\%) in our CST models while adjusting simulator settings until the resulting outputs converged to small variations. We found this test for convergence of results, using nearly identical antenna models, to be a good probe of the level of numerical artifacts in the antenna simulations. Our final antenna simulations were performed using CST settings that led to no more than 0.02~dB variations in reflection coefficients between perturbed antenna models over the frequency range of interest and yielded no more than 1.5~mK variations in foreground-subtracted residuals using a 5-term polynomial fit after our full analysis for antennas modeled at $-26^{\circ}$ latitude with an LST of 4~h. The mesh cell counts were 13~million cells for the fourpoint antenna and 6~million cells for the blade antenna. Simulations required approximately 20~min. for the blade and 40~min. for the fourpoint antenna when using an NVIDIA M2090 GPU accelerator. Peak memory requirements were modest at less than 8~GB. We briefly describe each antenna below. \begin{table*} \caption{Antenna Features (refer to Figs. \ref{fig:Fourpoint_and_Blade}--\ref{fig:antenna_images})} \label{tab:antennas} \begin{tabular} {@{}lc@{\hspace{2.5em}}c@{\hspace{0.7em}}c@{\hspace{0.7em}}c@{\hspace{0.7em}}c@{}} \hline & \multicolumn{2}{c}{\underline{3 dB Beamwidth at 150 MHz}}&Height above& Panel Width& Panel Length\\ Antenna& $\phi$=0$^{\circ}$ &$\phi$=90$^{\circ}$ &ground plane& $\bot$ to excitation axis& $\parallel$ to excitation axis\\ & (Degrees)& (Degrees)&(cm)& (cm)& (cm)\\ \hline \nicefrac[]{1}{2}-$\lambda$ wire dipole & 70 & 111 & 42.5&N/A&84.9$^a$\\ Fourpoint & 66 & 105 & 51.8&53.0&69.7\text{ } \\ Blade & 72 & 110 & 52.0&62.6&48.8\text{ } \\ \hline $^a$dipole full length \end{tabular} \end{table*} {\bf Fourpoint antenna:} The fourpoint antenna uses four diamond-shaped panels arranged in a planar structure.
One pair of opposing panels is electrically active, while the other pair serves as a parasitic capacitance via a vertical rim along the panel's perimeter, to enhance both the beam's symmetry and the antenna's impedance match to the receiver (Fig. \ref{fig:antenna_images}). A Roberts transmission-line balun \citep{b30} is used to transition from the panels to the receiver. Discrete tuning capacitors located at roughly the middle of the Roberts balun and near the edges of the panels, along with a capacitive top plate above the central region of the antenna, improve the impedance match of the antenna to the receiver. EDGES has deployed this style of antenna at the Murchison Radio-astronomy Observatory (MRO) in Western Australia for several observing seasons. A version of this style without the Roberts balun provided the data used to set a lower limit on the duration of the reionisation epoch \citep{b4c}. {\bf Blade antenna:} The blade-shaped antenna is simpler than the fourpoint design, because it uses two flat rectangular panels (no rim on the perimeter), a top capacitor, the Roberts balun, and a small shield at the bottom (Fig. \ref{fig:antenna_images}). There are no inter-panel tuning capacitors and no balun tuning capacitor. The beam has less variation with frequency than the fourpoint design, as can be seen in Fig. \ref{fig:cross-sections}. The ground plane consists of a 5~m $\times$ 5~m solid aluminum plate underneath the antenna, with four wire mesh extensions (each 2~m $\times$ 5~m) which form a `plus-shaped' ground plane. This antenna was deployed in the field for the first time in July 2015. {\bf Ideal \nicefrac[]{1}{2}-$\lambda$ wire antenna:} \label{subsec:half_wavelength} The theoretical reference beam in our study is that of an infinitesimally thin \nicefrac[]{1}{2}-$\lambda$ horizontal wire dipole antenna placed a quarter wavelength, at a reference wavelength $\lambda_0$, above an infinite ground plane. It has a near-ideal beam shape, as will be explained in subsequent sections. The equation for the beam can be derived from the finite vertical dipole wire antenna \citep{b29} rotated on its side with a ground plane serving as the array factor and is given by: \begin{equation} \label{eq:DipoleLong_beam} B_{\nicefrac[]{1}{2}\text{-}\lambda} = \left[\frac{\cos(\frac{\pi L}{\lambda} \cos\theta') - \cos(\frac{\pi L}{\lambda })} {\sin\theta'}\right]^2 \sin^2\left(2\pi h\cos\theta\right), \end{equation} where $\theta'=\cos^{-1}(\sin\theta \sin\phi)$, $\theta$ and $\phi$ are the spherical angle coordinates with $\theta=0$ aligned to the principal axis orthogonal to the ground plane and $\phi=0$ aligned to the active excitation axis of the antenna, $L$ is the full length of the antenna, and $h$ is the height of the antenna above the ground plane in reference wavelengths. Figure \ref{fig:cross-sections} shows the beam pattern variation of each antenna model with frequency for $\phi$ = 0$^\circ$ and 90$^\circ$ and illustrates the frequency-dependent variations in these antennas over the wide range of operating frequencies as well as the angular variation with elevation. As is common to most dipole-based antennas, all three antennas have large primary beams (fields of view) that span up to $\sim$$110^{\circ}$ FWHM. The beam patterns of the antennas vary smoothly with respect to angle and with respect to frequency as viewed in Fig.
\ref{fig:cross-sections}, but for global 21~cm measurements, smaller features that vary by less than 1\%, which are not readily apparent in the figure, can cause undesirable chromatic effects. When the height of an antenna above the ground plane becomes larger than a quarter wavelength, structures in the beam near the zenith and sidelobes begin to form at that frequency. To avoid this unwanted structure, one would ideally place the antenna at a quarter wavelength above the ground plane for the highest frequency (shortest wavelength), ensuring that the height remains under a quarter wavelength for all frequencies. However, the height of the fourpoint antenna is chosen as a compromise to optimize the impedance match while keeping the beam shape smooth. For the fourpoint and blade antennas, we use a height above the ground plane that is equivalent to a quarter wavelength at 150~MHz. This produces a minor ($<$ 0.2~dB) local minimum at the zenith for frequencies above 150~MHz. The theoretical \nicefrac[]{1}{2}-$\lambda$ wire antenna was placed slightly lower above the ground plane, comparable to a quarter wavelength at 176.7~MHz, in order to create local minima of similar amplitude to those from the fourpoint antenna. \begin{figure} \centering \includegraphics{figure_04_top} \includegraphics{figure_04_middle} \includegraphics{figure_04_bottom} \caption{Cross-sections of simulated beam patterns for (top) the fourpoint antenna, (middle) the blade antenna, and (bottom) the analytic \nicefrac[]{1}{2}-$\lambda$ wire antenna at $\phi= 0^\circ$ (E-plane) and 90$^\circ$ (H-plane) for various frequencies. The active excitation axis of each antenna is defined as $\phi=0^\circ$. The horizon response ($\theta \approx 90^\circ$) is generally largest and most frequency-dependent for the fourpoint. Non-smooth beam changes are very small and not visible at these scales. Derivative plots are best used to detect these changes and are discussed in Section \ref{subsec:origin}.} \label{fig:cross-sections} \end{figure} \subsection{Sky Model} The sky foreground model in our simulations is based on the \citet{b31} map at 408~MHz, as shown in Fig. \ref{fig:Haslam_sky_plot}, with approximately 0.6$^\circ$ angular resolution. We model $T_\text{sky}$ as a simple power law by scaling the sky temperature as a function of frequency according to: \begin{equation} \label{eq:Tsky} T_{\text{sky}}(\nu,\boldsymbol{\zeta}) =T_\text{Haslam}(\boldsymbol{\zeta}) \left(\frac{\nu}{408\, \mathrm{ MHz}}\right)^{-\beta}, \end{equation} where $\nu$ is frequency, $\boldsymbol{\zeta}$ is the sky coordinate vector, and $\beta$ is the power-law spectral index, which we set to 2.5 \citep{b4a}. Our sky model does not contain the EoR signal, nor does it include ionospheric distortions (see \citealt{b47, b48, b58} for a discussion of ionospheric effects). Earlier studies have used 3-parameter sky models \citep{b45}, and more recently have considered complex sky models of up to 6$^\text{th}$ order (7 terms) \citep{b21}. We have not included such higher-order spectral structure in our sky model in order to maintain simplicity. Using a simple spectral model is sufficient here because we find (discussed in detail below) that the chromatic effects of our modeled antenna beams produce a larger magnitude of spectral structure at high orders and, hence, will require similarly high-order model fits.
We assume that any complicated spectral structure inherent to the sky will be removed along with the chromatic beam effects. \begin{figure} \centering \includegraphics{figure_05} \caption{Sky model at 150~MHz extrapolated from the 408~MHz sky map of \citealt{b31}.} \label{fig:Haslam_sky_plot} \end{figure} \subsection{Measurement Equation} Assuming an ideal receiver, we calculate the simulated antenna temperature using: \begin{equation} \label{eq:T_antenna} T_\text{ant}(\nu,\hat{\boldsymbol{n}}) = \int_{\Omega}T_\text{sky}(\nu,\boldsymbol{\zeta}) B(\nu,\boldsymbol{\zeta},\hat{\boldsymbol{n}}) \mathrm{d}\Omega , \end{equation} where $\hat{\boldsymbol{n}}(\alpha, \delta, \psi)$ is the antenna pointing vector and is a function of the right ascension and declination ($\alpha, \delta$) of the primary beam axis that is perpendicular to the antenna ground plane. The orientation of the antenna along its third degree of freedom is specified by the angle of the active antenna axis east of north and is labeled $\psi$. $B(\nu, \boldsymbol{\zeta},\hat{\boldsymbol{n}})$ is the chromatic beam for a given antenna, pointing, and orientation, and is normalized to a unit integral at each frequency. For an antenna on the surface of the Earth pointed toward zenith, the pointing declination of its primary axis corresponds to the latitude of the antenna's deployment site, and the pointing right ascension corresponds to the LST at the site. As the Earth rotates, the antenna pointing changes direction, altering the mapping of its beam pattern onto sky coordinates. \subsection{Figure of Merit} We define the figure of merit (FoM) for assessing the significance of chromatic beam effects as the RMS residual of a least-squares best-fit model: \begin{equation} \label{eq:FoM} \mathrm{FoM}(\hat{\boldsymbol{n}}) = \sqrt{\left < \left [T_\text{ant}(\nu,\hat{\boldsymbol{n}}) - T_\text{model}(\nu,\hat{\boldsymbol{n}}) \right]^2 \right >_{\nu}}. \end{equation} Chromatic antenna beams that couple little structure into the measured spectrum will produce small FoM values. A good antenna for the global redshifted 21~cm measurement would yield residuals well below the expected 21~cm signal strength of 10--40~mK in the frequency range of 100--190~MHz while minimizing the number of free parameters in the model. For our model equation, we use an N-term power-law polynomial given by: \begin{equation} \label{eq:Power_Law_2} T_\text{model} = \sum_{i=0}^{N-1} a_i\left(\frac{\nu}{\nu_0}\right)^{-2.5+i}. \end{equation} This polynomial form generally produces good fits at low orders in existing EDGES measurements. The antenna temperature produced by our sky model and an ideal beam that is invariant with frequency could be fit exactly with only one term: $T_\text{model} = a_{0} \nu^{-2.5}$, but as we will show, realistic antennas require that N$\approx$5. In order to simultaneously remove the foreground and estimate the global 21~cm signal, the number of terms in $T_\text{model}$ would have to be augmented with at least one more term to solve for the global 21~cm signature.
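To make these definitions concrete, the sketch below discretizes Eq.~(\ref{eq:T_antenna}) on a coarse grid and then evaluates Eqs.~(\ref{eq:FoM}) and (\ref{eq:Power_Law_2}). The sky values, beam, and grid sizes are placeholders of our own choosing, not the CST beams or the Haslam map used in the actual analysis. \begin{verbatim} import numpy as np  rng = np.random.default_rng(0) nu = np.linspace(100.0, 190.0, 91)            # MHz, 91 channels nu0 = 150.0  # Stand-in sky: power-law scaling of a random "408 MHz" map # on a coarse pixel grid (the real analysis uses the Haslam map). npix = 500 T408 = rng.uniform(20.0, 60.0, npix)          # K, invented values T_sky = T408[None, :] * (nu[:, None] / 408.0) ** -2.5  # Stand-in chromatic beam with weak frequency-dependent structure, # normalized to a unit sum (discrete unit integral) per frequency. B = rng.uniform(0.5, 1.5, npix)[None, :] \     * (1.0 + 0.01 * np.sin(nu[:, None] / 7.0) * rng.uniform(-1, 1, npix)) B /= B.sum(axis=1, keepdims=True)  # Discretized measurement equation: T_ant(nu) = sum over pixels. T_ant = (B * T_sky).sum(axis=1)  def fom(T_ant, N):     """RMS residual of the N-term power-law polynomial fit."""     X = np.stack([(nu / nu0) ** (-2.5 + i) for i in range(N)], axis=1)     a, *_ = np.linalg.lstsq(X, T_ant, rcond=None)     return np.sqrt(np.mean((T_ant - X @ a) ** 2))  for N in range(3, 8):     print(N, f"{1e3 * fom(T_ant, N):.2f} mK") \end{verbatim} As in the full analysis, the residuals shrink as N grows; with the real beams the question is how small N can be while still reaching the mK level.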
With a 21~cm model term added to the fitting equation, we define the signal-to-noise ratio (SNR) to be \begin{equation} \label{eq:SNR} \mathrm{SNR} = \frac{T_{peak}}{[(\sigma_{rms}^2+\sigma_T^2)(\mathbf{X}^t\mathbf{X})^{-1}_{21~cm}]^{1/2}}, \end{equation} where $T_{peak}$ = 20~mK is a nominal value of the global 21~cm peak signal strength, $\sigma_T$ is the typical thermal noise expected from averaging multiple observations, $\sigma_{rms}=\mathrm{FoM}$ is the RMS error of the residuals to the fit, and $\sigma_{rms}^2(\mathbf{X}^t\mathbf{X})^{-1}_{21~cm}$ is the 21~cm auto-covariance term in the covariance matrix. The design matrix $\mathbf{X}$ has N columns (equal to the number of terms in the fitting equation) and one row for every discrete frequency (91 in our case). We choose a Gaussian-shaped 21~cm fitting term inspired by the emission and absorption signatures in Fig. \ref{fig:Theory} in the form of \begin{equation} \label{eq:EOR} T_{21}=a_{N+1}e^{-\frac{(\nu-\nu_0)^2}{2\sigma^2}}, \end{equation} where $\nu_0$ = 150~MHz and $\sigma$ is related to the full width at half maximum (FWHM) by $\sigma = \text{FWHM}/(2\sqrt{2\ln 2})$. The noise estimate is based upon averaging a week of observation data using spectral channels of 1.0~MHz and assumes the noise is Gaussian and spectrally flat. Typical values are under 3~mK. Properly accounting for the effects of covariance between the chromatic foregrounds and the signal parameters requires one to marginalize over foreground parameters. We use the SNR as defined in Eq. \ref{eq:SNR} as an approximate method to account for degeneracies arising from covariance between foreground and signal parameters. Using a higher-order foreground model lowers the RMS error, but raises the covariance of the 21~cm signal estimate, which lowers the SNR value, indicating that there is also an upper limit to the number of terms one should use to remove the foreground; this will be explored in Section~\ref{sec:results}. Since $(\mathbf{X}^t\mathbf{X})^{-1}_{21~cm}$ depends only upon the design matrix $\mathbf{X}$, which is not a function of the antenna, the antenna location, or the antenna pointing, we will focus on our FoM in Section~\ref{sec:results} as a sufficient metric for analyzing differences between antennas and pointings. \section{RESULTS} \label{sec:results} In this section we present the FoM for each of the three antenna beam models. We investigate antenna pointings spanning the entire sky in right ascension and declination. In order to facilitate interpretation for ground-based drift-scan experiments, where the pointing parameters are typically dictated by site location and observing time, we label the plots in this section with latitude (declination) and LST (right ascension), although we note that the results are equally valid for proposed space-based experiments such as the Dark Ages Radio Explorer (DARE, \citealt{b23}) that would more naturally label the pointings with right ascension and declination. Figure~\ref{fig:pcolor_poly6_M} shows the FoM as a function of latitude and LST for all three antennas using N=6 polynomial terms. The resolution of the map is 1$^\circ$ in latitude and 4 minutes in LST, yielding a $(181\times360)$ data array. As evident in the figure, the fourpoint antenna yields the worst performance with typical FoM values ranging between 20 and 100~mK.
Nevertheless, there are regions of the sky where the FoM reaches 4~mK and observations with a fourpoint antenna can still enable the global redshifted 21~cm measurement. The blade antenna, on the other hand, yields FoM values less than 1~mK over most of the sky, easily facilitating measurement of the global 21~cm signal. The analytic \nicefrac[]{1}{2}-$\lambda$ wire antenna model produces the best results of all three antennas, with sub-mK residuals. In all three cases, the FoM tends to reach its largest (worst) values when the Galactic plane and/or Galactic Centre is at or above the horizon. \subsection{Antenna Orientation} For each antenna model and pointing we investigated two antenna orientations: north-south (NS) aligned and east-west (EW) aligned excitation axis. Orientation of the excitation axis is not a major factor if averaged over the entire latitude-LST range, but for a given latitude, one orientation may prove to have lower FoM values over a particular LST range. For the EDGES deployment, we find that a NS orientation is marginally superior at the deployment latitude of $-26^\circ$ (see Fig. \ref{fig:pcolor_poly6_M}). \subsection{Deployment Latitude} Several sites have been considered and used for global redshifted 21~cm experiments. Latitude $26^\circ$S is approximately the latitude of the two SKA sites (in South Africa and Australia). EDGES and BIGHORNS are both currently deployed at the MRO, which is the SKA site in Western Australia. DARE has tested prototype equipment at the MRO, as well as at the Green Bank Observatory at latitude $38^\circ$N. SCI-HI is deployed at Guadalupe Island off the western coast of Mexico at $29^{\circ}2'$N and is considering deployment at either Isla Socorro ($18^{\circ}48'$N) or Isla Clari\'{o}n ($18^{\circ}22'$N). Two other remote islands that are of interest to global 21~cm projects are Kiritimati (Christmas Island), located 2000~km due south of Hawaii in the middle of the Pacific Ocean near the equator at $1^{\circ}52'$N, and Tristan da Cunha in the southern Atlantic Ocean between the southern tip of Africa and South America at $37^{\circ}4'$S. The number of terms needed in the $T_\text{model}$ power-law polynomial fit to remove the chromatic beam and foreground to a certain RMS error level at a specific latitude and LST is dependent upon the type of antenna chosen. We have calculated the FoM for polynomials of length N=3 to 7 and have examined the results. The FoM values decrease as the number of terms in the polynomial increases, resulting in an increase in the number of points with sub-mK level FoM values. The histograms in Fig. \ref{fig:FoM_histogram} quantify this trend for N=5 and N=6 for the three antennas. The counts are the number of latitude-LST grid points of the NS orientation of Fig.~\ref{fig:pcolor_poly6_M} that fall into FoM bins. The histogram plots indicate that the foreground can be removed in more latitude-LST locations with a given polynomial length using the blade antenna than using the fourpoint antenna. For a given latitude of deployment, we take a cut through the plots of Fig. \ref{fig:pcolor_poly6_M} for the blade antenna at latitudes $26^\circ$S and $38^\circ$N with a NS aligned excitation axis and display the results in Fig.~\ref{fig:blade_cut}, which shows the FoM for the blade as a function of LST for polynomial lengths between N=3 and 7. As expected, the FoM improves significantly as the number of polynomial terms is increased.
At N=5, the FoM falls below the expected 21~cm signal strength across all LSTs. The fourpoint and analytic \nicefrac[]{1}{2}-$\lambda$ wire antennas yield similar progressions, but with different overall amplitudes (not shown). Table \ref{tab:FoM} lists the FoM values for polynomials of length 3 through 7 terms for the antennas discussed, at latitude $-26^\circ$, at a relatively low FoM region near 4~h and at a relatively high FoM region near 17~h. For this latitude, Table~\ref{tab:FoM} indicates that the blade and fourpoint antenna are comparable for low values of N near regions of low FoM, but the blade performs better for higher values of N and especially for regions of high FoM. All three antenna types listed can remove the sky foreground if six polynomial terms are used, and all but the fourpoint antenna are still acceptable with a 5 term polynomial. For 5-term fits and higher, the FoM degrades by approximately an order of magnitude at each step from the best-case \nicefrac[]{1}{2}-$\lambda$ dipole, to the blade, to the fourpoint design. \subsection{Global 21~cm Signal Detectability} While the above FoM analysis establishes the relative performance between the antennas and for different pointings, we turn now to the SNR as defined in Eq.~(\ref{eq:SNR}) to characterize the absolute performance of the antennas. Referring to Table \ref{tab:SNR}, a global 21~cm signal with a 20~MHz FWHM parameter is detectable with all three antennas using either 5 or 6 polynomial terms in the foreground fitting equation. However, a global 21~cm signal with a 40~MHz FWHM parameter is not detectable (SNR $<$ 2) for any of the antennas when a 6 term polynomial is used, given the assumed noise levels and the given latitude. The low FoM values of the blade antenna for a 5 term fit raise the SNR to 4.6 and 6.7, when the thermal noise is 3~mK and 2~mK respectively, and enable a global 21~cm signal detection even when it has a 40~MHz FWHM. \begin{table} \caption{FoM for polynomial lengths between 3 and 7 terms at latitude $-26^\circ$ and two LST values} \begin{tabular} {l c c c c c} \hline Antenna & \multicolumn{5}{c}{FoM (mK) (LST = 4 h, $-26^\circ$)} \\ \multicolumn{1}{r}{Terms} &3&4&5&6&7\\ \hline Fourpoint & 143 &19.5&8.20&3.93&3.86 \\ Blade & 162 & 22.2&0.67&0.53&0.07\\ \nicefrac[]{1}{2}-$\lambda$ wire dipole & 3.45 & 0.16&0.06&0.01&$<$0.01\\ \hline & \multicolumn{5}{c}{FoM (mK) (LST = 17 h, $-26^\circ$)}\\ \hline Fourpoint & 2200 &219 & 96.6 & 33.8 & 7.44 \\ Blade & 999 & 121 & 6.60 & 3.88 & 0.53\\ \nicefrac[]{1}{2}-$\lambda$ wire dipole & 118 & 7.60 & 0.24 & $<$0.01 & $<$0.01\\ \hline \end{tabular} \label{tab:FoM} \end{table} \begin{table} \caption{SNR for polynomial lengths between 3 and 7 terms at latitude $-26^\circ$ with the Galactic Centre below the horizon. Assumed spectral noise of 2--3~mK achieved by several nights of observational data averaging.
One additional Gaussian 21~cm term added to the polynomial terms of the fit equation with FWHM of 20~MHz and 40~MHz.} \centering \begin{tabular}{lc@{\hspace{2.7em}}c@{\hspace{2.7em}}c@{\hspace{2.5em}}c@{\hspace{0.5em}}c@{}} \hline Antenna & \multicolumn{5}{c}{SNR (FWHM = 20~MHz, Noise = 3~mK)} \\ \multicolumn{1}{r}{Poly Terms} &3&4&5&6&7\\ \hline Fourpoint & 0.5 & 2.4 & 7.8 & 5.3 & 5.4 \\ Blade & 0.5 & 1.8 & 12 &7.0&7.0\\ \nicefrac[]{1}{2}-$\lambda$ wire dipole & 14 & 13 & 12 &7.0 &7.0\\ \hline & \multicolumn{5}{c}{SNR (FWHM = 40~MHz, Noise = 3~mK)}\\ \hline Fourpoint & 0.6 & 1.1 & 3.4 & 0.9 & 0.9 \\ Blade & 0.6 & 0.8 & 4.6 & 1.1 & 1.1\\ \nicefrac[]{1}{2}-$\lambda$ wire dipole & 13 & 5.1 & 4.7 &1.1 & 1.1\\ \hline & \multicolumn{5}{c}{SNR (FWHM = 20~MHz, Noise = 2~mK)}\\ \hline Fourpoint & 0.5 &2.4&8.8&6.4&6.6 \\ Blade & 0.5 & 1.8&17&11&11\\ \nicefrac[]{1}{2}-$\lambda$ wire dipole & 17& 19&18&11&11\\ \hline & \multicolumn{5}{c}{SNR (FWHM = 40~MHz, Noise = 2~mK)}\\ \hline Fourpoint & 0.6 & 1.1 & 4.0 & 1.0 & 1.1 \\ Blade & 0.6 & 0.8 & 6.7 & 1.6 & 1.7\\ \nicefrac[]{1}{2}-$\lambda$ wire dipole & 18 & 7.7 & 7.0 &1.7 & 1.7\\ \hline \end{tabular} \label{tab:SNR} \end{table} \subsection{Spectral Derivative of Antenna Directivity} \label{subsec:origin} During antenna design and simulation, visually examining beam plots of directivity vs. zenith angle, $\theta$, or even 3D plots at various frequencies will not reveal chromatic issues with the beam, because the magnitude of the relevant beam features is on the order of 0.1--1.0\%. To examine the frequency structure at these small levels of change, we take the derivative of the beam directivity with respect to frequency. Figure \ref{fig:Derivatives} shows the beam derivative with respect to frequency vs. zenith angle for values of $\phi$ at $0^\circ$ and $90^\circ$ for the three antennas. Excessive variation (rapid variation or multiple inflection points) in the beam derivative plot will indicate that the antenna will not perform well for 21~cm observations (high FoM values). Referring to the plots in Fig. \ref{fig:Derivatives}, all antennas show a decrease near the zenith with increasing frequency, consistent with the beginning of structure due to the height above the ground plane becoming a larger fraction of a wavelength, as discussed in Section \ref{subsec:half_wavelength}. The fourpoint antenna shows greater magnitude changes and additional structure both at the zenith and in other locations compared to the other two antennas. The blade antenna is more similar in amplitude and features to the analytic reference antenna than the fourpoint antenna, and correspondingly, the FoM values of the blade antenna are superior to those of the fourpoint antenna, i.e., the amount of beam structure correlates with FoM values.
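A minimal sketch of this diagnostic, assuming the simulated directivity has been exported on a regular frequency-by-angle grid (the array names and the smooth stand-in pattern below are our own, not CST output conventions): \begin{verbatim} import numpy as np  nu = np.linspace(100.0, 190.0, 91)        # MHz theta = np.linspace(0.0, 90.0, 91)        # deg, zenith angle  # Stand-in directivity D(nu, theta): a smooth pattern whose width # drifts slowly with frequency (purely illustrative). width = 60.0 + 5.0 * (nu[:, None] - 145.0) / 45.0 D = np.cos(np.radians(theta[None, :])) ** 2 \     * np.exp(-(theta[None, :] / width) ** 2)  # Spectral derivative of the directivity at each angle (per MHz). dD_dnu = np.gradient(D, nu, axis=0)  # Rapid variation or multiple inflection points in dD_dnu versus # theta flag chromatic structure that low-order fits cannot remove. print(np.abs(dD_dnu).max()) \end{verbatim} Plotting dD_dnu against zenith angle for several frequencies reproduces the style of Fig.~\ref{fig:Derivatives} and costs essentially nothing compared with re-running the electromagnetic simulation.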
The more complex shape of the fourpoint antenna, as compared to the blade antenna, may lead to more significant changes in the current flow pattern with frequency and consequently a larger change in beam shape with frequency. \begin{figure*} \begin{minipage}[b]{0.99\linewidth} \centering \includegraphics{figure_06_top_left} \includegraphics{figure_06_top_right} \end{minipage} \begin{minipage}[b]{0.99\linewidth} \centering \includegraphics{figure_06_middle_left} \includegraphics{figure_06_middle_right} \end{minipage} \begin{minipage}[b]{0.99\linewidth} \centering \includegraphics{figure_06_bottom_left} \includegraphics{figure_06_bottom_right} \caption{FoM as a function of latitude (declination) and LST (right ascension) for the three antennas. From top to bottom, the panels show the fourpoint, blade, and analytic dipole models. The excitation axis orientation is NS for the left column and EW for the right column. A six term polynomial was used for the fit. The colour scale indicates the FoM magnitude and spans a different range for each antenna but is kept constant across the rows. The fourpoint performs the worst and yields acceptably low FoM values (below $\sim$10~mK) only in a few areas of the sky (dark-blue regions in the top panel). The blade antenna performs well with the FoM below 10~mK across the entire sky. The \nicefrac[]{1}{2}-$\lambda$ analytic wire dipole model performs the best with sub-mK residuals. The alignment does not greatly affect the distribution of FoM values, but for specific latitudes, one orientation may be better than another, suggesting that orientation choice must be evaluated for both the type of antenna used and the target deployment latitude. In all cases, the performance is best when the Galactic plane is below the horizon and generally worst when the Galactic plane and/or Centre (17~h 45~m LST, $-29^{\circ}$ dec) are visible near the horizon or at moderate zenith angles.} \label{fig:pcolor_poly6_M} \end{minipage} \end{figure*} \begin{figure} \centering \includegraphics{figure_07_top} \includegraphics{figure_07_bottom} \caption{Plots of the FoM distribution for the fourpoint, blade, and analytic \nicefrac[]{1}{2}-$\lambda$ wire dipole antennas with a NS orientation for fits to (top) a five term polynomial and (bottom) a six term polynomial. The counts are the number of latitude and LST grid points that fall into FoM bins. The trend towards lower FoM values with increasing polynomial terms is evident, as well as the relative performance of the three antennas. The fourpoint FoM is below 10~mK for 3\% of the data points using a 5 term fit and 12\% for a 6 term fit, while the blade is below 10~mK for 99.7\% of the data points. The blade FoM is an order of magnitude better, as the FoM is below 1~mK for 25\% of the data points using a 5 term fit and 73\% for a 6 term fit.} \label{fig:FoM_histogram} \end{figure} \begin{figure} \centering \includegraphics{figure_08_top} \includegraphics{figure_08_bottom} \caption{Blade FoM vs LST at latitude $-26^{\circ}$ (top) and latitude 38$^{\circ}$ (bottom), which correspond approximately to the latitude of the SKA sites in South Africa and Australia, as well as the EDGES fourpoint location, and to the latitude of the Green Bank Observatory, respectively. The antenna excitation axis is aligned NS. The curves illustrate the effects of varying the number of polynomial terms in the $T_\text{model}$ fit, from N=3 (top) to N=7 (bottom).
For this antenna, acceptable FoMs were achieved with as few as 5 polynomial terms. The fourpoint and analytic \nicefrac[]{1}{2}-$\lambda$ wire antennas show similar progressions, but with different relative amplitudes.} \label{fig:blade_cut} \end{figure} \section{CONCLUSION} \label{sec:conclusion} We evaluated two dipole-based antennas used by EDGES and one idealized reference antenna to assess the effects of frequency-dependent beam shapes on the ability to remove the foreground from global redshifted 21~cm measurements and detect the redshifted global 21~cm signal. Across the full latitude-LST space we found that the fourpoint antenna produced sub-10~mK FoM values in 3\% and 12\% of the locations for foreground fits using polynomials with 5 and 6 terms, respectively, while the FoM values of the blade antenna were below 10~mK in over 99\% of the locations for both fits. Furthermore, FoM values of foreground fitting for the blade antenna were below 1~mK in 25\% and 72\% of the locations for 5 and 6 term fits, respectively. We note that the optimum choice of EW or NS excitation axis orientation depends upon the specific deployment location, as one orientation was not always better than the other. The fourpoint antenna is only suitable at a few restricted locations on the sky using a 5 or 6 term fit, while the blade antenna provides adequate FoM performance across the entire sky when using a 5 or 6 term polynomial to remove the foreground. In our simulations, a narrow 21~cm signal corresponding to a rapid reionization over 20~MHz was detectable for all antennas assuming 3~mK of thermal noise. The SNR values indicate detection is favorable for either a 5 or 6 term foreground removal fit. For a 5 term fit, the SNR values of the blade and analytical dipole are nearly the same, ranging from 12 to 18, and the fourpoint values range between 7.8 and 8.8. For a 6 term fit, the SNR values of the blade and analytical dipole are the same, ranging from 7 to 11, and the fourpoint values range between 5.3 and 6.4, depending upon the thermal noise. A slower reionization over 40~MHz is not detectable with any of our antennas when the foreground is fitted with a 6 term polynomial, as the SNR is no greater than 1.7. When a five term polynomial is used, the SNR increases and the detection is again favorable. The SNR for the blade antenna is between 4.6 and 6.7, between 3.4 and 4.0 for the fourpoint, and between 4.7 and 7.0 for the analytical dipole, again depending upon thermal noise. Based upon this analysis we conclude that the blade antenna, using a five term polynomial with thermal noise averaged down to $<$ 3~mK, is capable of detecting or placing meaningful limits on the global 21~cm signal during reionization. During antenna design, the plot of the beam derivative with respect to frequency is a convenient tool to quickly assess the frequency structure in the beam and thus the ensuing effectiveness of foreground removal. This method can reveal problems quickly and requires little computing power. Although we studied the frequency range 100--190~MHz, the results we have reported can be applied to other frequency ranges since the properties of an antenna scale linearly in wavelength with the physical size of the antenna.
For example, global 21~cm experiments targeting the First Light signal between 50 and 100~MHz can also use these results by scaling the antenna design by a factor of two and halving the frequency. The FoM scales as $\nu^{-2.5}$, following the power-law frequency dependence of the foreground sky temperature. A variety of non-dipole antenna designs have been considered for global 21~cm experiments, such as log-dipole and horn antennas. Most of these antennas have considerably more structure than simple dipole antennas and can be expected to exhibit even larger chromatic effects. Detailed investigation of other antennas is left for future work. \section*{Acknowledgments} This work was supported by the NSF through research awards for the Experiment to Detect the Global EoR Signature (AST-0905990 and AST-1207761) and by NASA through Cooperative Agreements for the Lunar University Network for Astrophysics (NNA09DB30A) and the Nancy Grace Roman Technology Fellowship (NNX12AI17G). EDGES is located at the Murchison Radio-astronomy Observatory. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site.
\section{Introduction} Several years ago, I had the opportunity to give in several venues a keynote talk and to write an associated overview article on the general topic of ``Algorithmic and Statistical Perspectives on Large-Scale Data Analysis''~\cite{algstat10_CHAPTER}. By the \emph{algorithmic perspective}, I meant roughly the approach that someone trained in computer science might adopt,\footnote{From this perspective, primary concerns include database issues, algorithmic questions such as models of data access, and the worst-case running time of algorithms for a given objective function; but there can be a lack of appreciation, and thus associated cavalierness, when it comes to understanding how the data can be messy and noisy and poorly-structured in ways that adversely affect how confident one can be in the conclusions that one draws about the world as a result of the output of one's fast algorithms.} and by the \emph{statistical perspective}, I meant roughly the approach that someone trained in statistics, or in some area such as scientific computing where strong domain-specific assumptions about the data are routinely made, might adopt.\footnote{From this perspective, primary concerns include questions such as how well the objective functions being considered conform to the phenomenon under study, how best to model the noise properties in the data, and whether one can make reliable predictions about the world from the data at hand; but there tends to be very little interest in understanding either computation \emph{per se} or the downstream effects that constraints on computation can have on the reliability of statistical inference.} My main thesis was twofold. First, motivated by problems drawn from a wide range of application domains that share the common feature that they generate very large quantities of data, we are being forced to engineer a union between these two extremely different perspectives or worldviews on what the data are and what are interesting or fruitful ways to view the data. Second, rather than \emph{first} making statistical modeling decisions, independent of algorithmic considerations, and \emph{then} applying a computational procedure as a black box---which is quite typical in small-scale and medium-scale applications and which is more natural if one adopts one perspective or the other---in many large-scale applications it will be more fruitful to understand and exploit what may be termed the statistical properties \emph{implicit} in worst-case algorithms. I illustrated these claims with two examples from genetic and Internet applications; and I noted that this approach of more closely coupling the computational procedures used with a statistical understanding of the data seems particularly appropriate more generally for very large-scale data analysis problems. Here, I would like to revisit these questions, with an emphasis on describing in more detail particularly fruitful directions to consider in order to ``bridge the gap'' between the theory and practice of Modern Massive Data Set (MMDS) analysis. On the one hand, very large-scale data are typically stored in some sort of database, either a variant of a traditional relational database or a filesystem associated with a supercomputer or a distributed cluster of relatively-inexpensive commodity machines.
On the other hand, it is often noted that, in large part because they are typically generated in automated and thus relatively-unstructured ways, data are becoming increasingly ubiquitous and cheap; and also that the scarce resource complementary to large-scale data is the ability of the analyst to understand, analyze, and extract insight from those data. As anyone who has ``rolled up the sleeves'' and worked with real data can attest, real data are messy and noisy and poorly-structured in ways that can be hard to imagine before (and even sometimes after) one sees them. Indeed, there is often quite a bit of very practical ``heavy lifting,'' \emph{e.g.}, cleaning and preparing the data, to be done before starting to work on the ``real'' problem---to such an extent that many would say that big data or massive data applications are basically those for which the preliminary heavy lifting \emph{is} the main problem. This clearly places a premium on algorithmic methods that permit the analyst to ``play with'' the data and to work with the data interactively, as initial ideas are being tested and statistical hypotheses are being formed. Unfortunately, this is not the sort of thing that is easy to do with traditional databases. To address these issues, I will discuss a notion that lies at the heart of the disconnect between the algorithmic perspective and the statistical perspective on data and data analysis. This notion, often called \emph{regularization} or \emph{statistical regularization}, is a traditional and very intuitive idea. Described in more detail in Section~\ref{sxn:thoughts:regularization}, regularization basically has to do with how robust the output of an algorithm is to the noise properties of the input data. It is usually formulated as a tradeoff between ``solution quality'' (as measured, \emph{e.g.}, by the value of the objective function being optimized) and ``solution niceness'' (as measured, \emph{e.g.}, by a vector space norm constraint, a smoothness condition, or some other related measure of interest to a downstream analyst). For this reason, when applied to noisy data, regularized objectives and regularized algorithms can lead to output that is ``better'' for downstream applications, \emph{e.g.}, for clustering or classification or other things of interest to the domain scientist, than is the output of the corresponding unregularized algorithms. Thus, although it is nearly completely absent from computer science, which historically has taken the input data as given and modeled algorithms discretely, regularization in one form or another is central to nearly every application domain that applies algorithms to noisy data.\footnote{Clearly, there will be a problem if the output of a computer scientist's algorithm is manifestly meaningless in terms of the motivating application or if the statistician's objective function takes the age of the universe to optimize. The point is that, depending on one's perspective, data are treated as a black box with respect to the algorithm, or vice versa; and this leads one to formulate problems in very different ways.
From an algorithmic perspective, questions about the reliability and robustness of the output to noise in the input are very much secondary; and from a statistical perspective, the same is true regarding the details of the computation and the consequences of resource constraints on the computation.} I will also discuss how, by adopting a very non-traditional perspective on approximation algorithms (or, equivalently, a non-traditional perspective on statistical regularization), one can in many cases satisfy the bicriteria of having algorithms that are scalable to very large data sets and that also have good statistical or inferential or predictive properties. Basically, the non-traditional perspective is that approximate computation---either in the sense of approximation algorithms in theoretical computer science or in the sense of heuristic design decisions (such as binning, pruning, and early stopping) that practitioners must make in order to implement their algorithms in real systems---often \emph{implicitly} leads to some sort of regularization. That is, approximate computation, \emph{in and of itself}, can implicitly lead to statistical regularization. This is very different from the usual perspective in approximation algorithms, where one is interested in solving a given problem, but since the problem is intractable one ``settles for'' the output of an approximation algorithm. In particular, this means that, depending on the details of the situation, approximate computation can lead to algorithms that are both faster \emph{and} better than are algorithms that solve the same problem exactly. While particular examples of this phenomenon are well-known, typically heuristically and amongst practitioners, in my experience the general observation is quite surprising to both practitioners and theorists of both the algorithmic perspective and the statistical perspective on~data. Thus, I will use three ``case studies'' from recent MMDS analysis to illustrate this phenomenon of \emph{implicit regularization via approximate computation} in three somewhat different ways. The first involves computing an approximation to the leading nontrivial eigenvector of the Laplacian matrix of a graph; the second involves computing, with two very different approximation algorithms, an approximate solution to a popular version of the graph partitioning problem; and the third involves computing an approximation to a locally-biased version of this graph partitioning problem. In each case, we will see that approximation algorithms that are run in practice implicitly compute smoother or more regular answers than do algorithms that solve the same problems exactly. Characterizing and exploiting the implicit regularization properties underlying approximation algorithms for large-scale data analysis problems is not the sort of analysis that is currently performed if one adopts a purely algorithmic perspective or a purely statistical perspective on the data. It is, however, clearly of interest in many MMDS applications, where anything but scalable algorithms is out of the question, and where ignoring the noise properties of the data will likely lead to meaningless output. As such, it represents a challenging interdisciplinary research front, both for theoretical computer science---and for database theory in particular---as well as for theorists and practitioners of statistical data analysis more generally.
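As a small, self-contained illustration of the phenomenon (a standard example, not one of the three case studies above): running gradient descent on an unregularized least-squares objective and simply stopping early produces solutions that behave like explicitly ridge-regularized ones, with the iteration count playing the role of the regularization parameter. A minimal sketch, with invented data: \begin{verbatim} import numpy as np  rng = np.random.default_rng(1) n, p = 100, 50 A = rng.standard_normal((n, p)) b = A @ rng.standard_normal(p) + 0.5 * rng.standard_normal(n)  # Exact (unregularized) least-squares solution. x_exact = np.linalg.lstsq(A, b, rcond=None)[0]  # Approximate solution: gradient descent on ||Ax - b||^2, # stopped early; the approximation is the regularization. step = 1.0 / np.linalg.norm(A, 2) ** 2 x_early = np.zeros(p) for _ in range(25):     x_early -= step * (A.T @ (A @ x_early - b))  # Explicitly regularized (ridge) solution, for comparison. lam = 10.0 x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ b)  # Both the early-stopped and ridge solutions are shrunk relative # to the exact least-squares solution, which fits the noise. for name, x in [("exact", x_exact), ("early", x_early), ("ridge", x_ridge)]:     print(name, round(float(np.linalg.norm(x)), 2)) \end{verbatim} The same moral applies, roughly, to the spectral examples above: a few steps of a diffusion-based or power-iteration scheme compute a smoothed surrogate for the exact eigenvector.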
\section{Some general observations \ldots} \label{sxn:thoughts} Before proceeding further, I would like to present in this section some general thoughts. Most of these observations will be ``obvious'' to at least some readers, depending on their background or perspective, and most are an oversimplified version of a much richer story. Nevertheless, putting them together and looking at the ``forest'' instead of the ``trees'' should help to set the stage for the subsequent discussion. \subsection{\ldots\hspace{.5mm} on models of data} \label{sxn:thoughts:models} It helps to remember that data are whatever data are: records of banking and other financial transactions, hyperspectral medical and astronomical images, measurements of electromagnetic signals in remote sensing applications, DNA microarray and single-nucleotide polymorphism measurements, term-document data from the Web, query and click logs at a search engine, interaction properties of users in social and information networks, corpora of images, sounds, videos, etc. To do something useful with the data, one must first model them (either explicitly or implicitly\footnote{By implicitly, I mean that, while computations always return answers (yes, modulo issues associated with the Halting Problem, infinite loops, etc.), in many cases one can say that a given computation is the ``right'' thing to do for a certain class of data. For example, performing matrix-based computations with $\ell_2$-based objectives often has an interpretation in terms of underlying Gaussian processes. Thus, performing that computation in some sense implicitly amounts to assuming that that is what the data ``look like.''}) in some way. At root, a \emph{data model} is a mathematical structure such that---given hardware, communication, input-output, data-generation, sparsity, noise, etc. considerations---one can perform computations of interest to yield useful insight on the data and processes generating the data. As such, choosing an appropriate data model has algorithmic, statistical, and implementational aspects that are typically intertwined in complicated ways. Two criteria to keep in mind in choosing a data model are the following. \begin{itemize} \item First, on the \emph{data acquisition or data generation side}, one would like a structure that is ``close enough'' to the data, \emph{e.g.}, to the processes generating the data or to the noise properties of the data or to natural operations on the data or to the way the data are stored or accessed, that modeling the data with that structure does not do too much ``damage'' to the~data. \item Second, on the \emph{downstream or analysis side}, one would like a structure that is at a ``sweet spot'' between descriptive flexibility and algorithmic tractability. That is, it should be flexible enough that it can describe a range of types of data, but it should not be so flexible that it can do ``anything,'' in which case computations of interest will likely be intractable and inference will be problematic. \end{itemize} \noindent Depending on the data and applications to be considered, the data may be modeled in one or more of several ways. \begin{itemize} \item \emph{Flat tables and the relational model.} Particularly common in database theory and practice, this model views the data as one or more two-dimensional arrays of data elements.
All members of a given column are assumed to be similar values; all members of a given row are assumed to be related to one another; and different arrays can be related to one another in terms of predicate logic and set theory, which allows one to query the data, \emph{e.g.}, with SQL or a variant. \item \emph{Graphs, including special cases like trees and expanders.} This model is particularly common in computer science theory and practice; but it is also used in statistics and machine learning, as well as in scientific computation, where it is often viewed as a discretization of an underlying continuous problem. A graph $G=(V,E)$ consists of a set of vertices $V$, which can represent some sort of ``entities,'' and a set of edges $E$, which can be used to represent pairwise ``interactions'' between two entities. There is a natural geodesic distance between pairs of vertices, which permits the use of ideas from metric space theory to develop algorithms; and from this perspective natural operations include breadth-first search and depth-first search. Alternatively, in spectral graph theory, eigenvectors and eigenvalues of matrices associated with the graph are of interest; and from this perspective, one can consider resistance-based or diffusion-based notions of distance between pairs of vertices. \item \emph{Matrices, including special cases like symmetric positive semidefinite matrices.} An $m \times n$ real-valued matrix $A$ provides a natural structure for encoding information about $m$ objects, each of which is described by $n$ features; or, if $m=n$, information about the correlations between all $m$ objects. As such, this model is ubiquitous in areas of applied mathematics such as scientific computing, statistics, and machine learning, and it is of increasing interest in theoretical computer science. Rather than viewing a matrix simply as an $m \times n$ array of numbers, one should think of it as representing a linear transformation between two Euclidean spaces, $\mathbb{R}^{n}$ and $\mathbb{R}^{m}$; and thus vector space concepts like dot products, orthogonal matrices, eigenvectors, and eigenvalues are natural. In particular, matrices have very different semantics from tables in the relational model, and Euclidean spaces are much more structured objects than arbitrary metric spaces. \end{itemize} Of course, there are other ways to model data---\emph{e.g.}, DNA sequences are often fruitfully modeled by strings---but matrices and graphs are most relevant to our discussion~below. Database researchers are probably most familiar with the basic flat table and the relational model and its various extensions; and there are many well-known advantages to working with them. As a general rule, these models and their associated logical operations provide a powerful way to process the data at hand; but they are much less well-suited for understanding and dealing with imprecision and the noise properties in that data. (See~\cite{claremont08,madskills09} and references therein.) For example, historically, the focus in database theory and practice has been on business applications, \emph{e.g.}, automated banking, corporate record keeping, airline reservation systems, etc., where requirements such as performance, correctness, maintainability, and reliability (as opposed to prediction or inference) are crucial.
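
To make the contrast between these models concrete, the following minimal sketch (in Python with \texttt{numpy}; the five-node interaction data are invented purely for illustration) encodes the same pairwise interactions both as a graph with adjacency lists, where a natural operation is breadth-first search, and as an adjacency matrix, where the natural operations are linear-algebraic:
\begin{verbatim}
import numpy as np
from collections import deque

# Toy interaction data: five entities, edges are pairwise interactions.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
n = 5

# Graph view: adjacency lists; a natural operation is breadth-first search.
nbrs = {i: [] for i in range(n)}
for i, j in edges:
    nbrs[i].append(j)
    nbrs[j].append(i)

def bfs_dist(src):
    """Geodesic distance from src to every node, via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in nbrs[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Matrix view: adjacency matrix; natural operations are linear-algebraic.
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

degrees = A @ np.ones(n)   # one matrix-vector product
walks2 = A @ A             # (i, j) entry counts walks of length two

print(bfs_dist(0))         # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
print(degrees)             # [2. 2. 3. 2. 1.]
print(int(walks2[0, 1]))   # one length-2 walk from 0 to 1 (via node 2)
\end{verbatim}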
The reason for considering more sophisticated or richer data models is that much of the ever-increasing volume of data that is currently being generated is either relatively-unstructured or large and internally complex in its original form; and many of these noisy unstructured data are better-described by (typically sparse and poorly-structured) graphs or matrices than by dense flat tables. While this may be obvious to some, the graphs and matrices that arise in MMDS applications are very different from those arising in classical graph theory and traditional numerical linear algebra; and thus modeling large-scale\footnote{Clearly, large or big or massive means different things to different people in different applications. Perhaps the most intuitive description is that one can call the size of a data set: \emph{small} if one can look at the data, fairly obviously see a good solution to problems of interest, and find that solution fairly easily with almost any ``reasonable'' algorithmic tool; \emph{medium} if the data fit in the RAM on a reasonably-priced laptop or desktop machine and if one can run computations of interest on the data in a reasonable length of time; and \emph{large} if the data don't fit in RAM or if one can't relatively-easily run computations of interest in a reasonable length of time. The point is that, as one goes from medium-sized data to large-scale data sets, the main issue is that one doesn't have random access to the data, and so details of communication, memory access, etc., become paramount~concerns.} data by graphs and matrices poses very substantial challenges, given the way that databases (in computer science) have historically been constructed, the way that supercomputers (in scientific computing) have historically been designed, the tradeoffs that are typically made between faster CPU time and better IO and network communication, etc. \subsection{\ldots\hspace{.5mm} on the relationship between algorithms and data} \label{sxn:thoughts:relationship} Before the advent of the digital computer, the natural sciences (and to a lesser extent areas such as social and economic sciences) provided a rich source of problems; and statistical methods were developed in order to solve those problems. Although these statistical methods typically involved computing something, there was less interest in questions about the nature of computation \emph{per se}. That is, although computation was often crucial, it was in some sense secondary to the motivating downstream application. Indeed, an important notion was (and still is) that of a \emph{well-posed problem}---roughly, a problem is well-posed if: a solution exists; that solution is unique; and that solution depends continuously on the input data in some reasonable topology. Especially in numerical applications, such problems are sometimes called \emph{well-conditioned problems}.\footnote{In this case, the \emph{condition number} of a problem, which measures the worst-case amount that the solution to the problem changes when there is a small change in the input data, is small for well-conditioned problems.} From this perspective, it simply doesn't make much sense to consider algorithms for problems that are not well-posed---after all, any possible algorithm for such an ill-posed problem will return answers that are not meaningful in terms of the domain from which the input data are drawn. With the advent of the digital computer, there occurred a split in the yet-to-be-formed field of computer science.
The split was loosely based on the application domain (scientific computing and numerical computation versus business and consumer applications) and, relatedly, on the type of tools used (continuous mathematics like matrix analysis and probability versus discrete mathematics like combinatorics and logic); and it led to two very different perspectives (basically the statistical and algorithmic perspectives) on the relationship between algorithms and data. On the one hand, for many numerical problems that arose in applications of continuous mathematics, a two-step approach was used. It turned out that, even when working with a given well-conditioned problem,\footnote{Thus, the first step is to make sure the problem being considered is well-posed. Replacing an ill-posed problem with a related well-posed problem is common and is, as I will describe in Section~\ref{sxn:thoughts:regularization}, a form of regularization.} certain algorithms that solved that problem ``exactly'' in some idealized sense performed very poorly in the presence of ``noise'' introduced by the peculiarities of roundoff and truncation errors. Roundoff errors have to do with representing real numbers with only finitely-many bits; and truncation errors arise since only a finite number of iterations of an iterative algorithm can actually be performed. The latter are important even in ``exact arithmetic,'' since most problems of continuous mathematics cannot even in principle be solved by a finite sequence of elementary operations; and thus, from this perspective, fast algorithms are those that converge quickly to approximate answers that are accurate to, \emph{e.g.}, $2$ or $10$ or $100$ digits of~precision. This led to the notion of the \emph{numerical stability} of an algorithm. Let us view a numerical algorithm as a function $f$ attempting to map the input data $x$ to the ``true'' solution $y$; but due to roundoff and truncation errors, the output of the algorithm is actually some other $y^*$. In this case, the \emph{forward error} of the algorithm is $\Delta y = y^* - y$; and the \emph{backward error} of the algorithm is the smallest $\Delta x$ such that $f(x + \Delta x) = y^*$. Thus, the forward error tells us the difference between the exact or true answer and what was output by the algorithm; and the backward error tells us what input data the algorithm we ran actually solved exactly. Moreover, the forward error and backward error for an algorithm are related by the condition number of the problem---the magnitude of the forward error is bounded above by the condition number multiplied by the magnitude of the backward error.\footnote{My apologies to those readers who went into computer science, and into database theory in particular, to avoid these sorts of numerical issues, but these distinctions really do matter for what I will be describing below.} In general, a backward stable algorithm can be expected to provide an accurate solution to a well-conditioned problem; and much of the work in numerical analysis, continuous optimization, and scientific computing can be seen as an attempt to develop algorithms for well-posed problems that have better stability properties than the ``obvious'' unstable~algorithm.
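
To make these stability notions concrete, here is a minimal numerical sketch (in Python with \texttt{numpy}; the nearly-singular matrix is invented purely for illustration) that measures the forward error, a backward error, and the condition number for the problem of solving a linear system $Ax=b$:
\begin{verbatim}
import numpy as np

# An ill-conditioned problem instance: solve Ax = b.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-8]])
x_true = np.array([1.0, 1.0])
b = A @ x_true

# Run the algorithm (here, a standard LU-based solver) to get x_hat.
x_hat = np.linalg.solve(A, b)

# Forward error: how far the computed solution is from the true one.
forward_error = np.linalg.norm(x_hat - x_true)

# A backward error (perturbing b only): x_hat exactly solves
# A x = b + db, where db = A x_hat - b.
backward_error = np.linalg.norm(A @ x_hat - b)

# The condition number relates the two: the (relative) forward error
# is bounded by roughly the condition number times the backward error.
kappa = np.linalg.cond(A)

print(f"forward error    = {forward_error:.2e}")
print(f"backward error   = {backward_error:.2e}")
print(f"condition number = {kappa:.2e}")
# Typical output: a tiny backward error, but a forward error larger
# by (roughly) a factor of the condition number.
\end{verbatim}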
On the other hand, it turned out to be much easier to study computation \emph{per se} in discrete settings (see~\cite{Sma90,BCSS96} for a partial history), and in this case a simpler but coarser one-step approach prevailed. First, several seemingly-different approaches (recursion theory, the $\lambda$-calculus, and Turing machines) defined the same class of functions. This led to the belief that the concept of computability is formally captured in a qualitative and robust way by these three equivalent processes, independent of the input data; and this highlighted the central role of logic in this approach to the study of computation. Then, it turned out that the class of computable functions has a rich structure---while many problems are solvable by algorithms that run in low-degree polynomial time, some problems seemed not to be solvable by anything short of a trivial brute force algorithm. This led to the notion of the complexity classes P and NP, the concepts of NP-hardness and NP-completeness, etc., the success of which led to the belief that these classes formally capture in a qualitative and robust way the concept of computational tractability and intractability, independent of any posedness questions or any assumptions on the input data. Then, it turned out that many problems of practical interest are intractable---either in the sense of being NP-hard or NP-complete or, of more recent interest, in the sense of requiring $O(n^2)$ or $O(n^3)$ time when only $O(n)$ or $O(n \log n)$ time is available. In these cases, computing some sort of approximation is typically of interest. The modern theory of approximation algorithms, as formulated in theoretical computer science, provides forward error bounds for such problems for ``worst-case'' input. These bounds are worst-case in two senses: first, they hold uniformly for all possible input; and second, they are typically stated in terms of a relatively-simple complexity measure such as problem size, independent of any other structural parameter of the input data.\footnote{The reason for not parameterizing running time and approximation quality in terms of structural parameters is that one can encode all sorts of pathological things in combinatorial parameters, thereby obtaining trivial results.} While there are several ways to prove worst-case bounds for approximation algorithms, a common procedure is to take advantage of relaxations---\emph{e.g.}, solve a relaxed linear program, rather than an integer program formulation of the combinatorial problem~\cite{Vazirani01,HLW06_expanders}. This essentially involves ``filtering'' the input data through some other ``nicer,'' often convex, metric or geometric space. Embedding theorems and duality then bound how much the input data are distorted by this filtering and provide worst-case quality-of-approximation guarantees~\cite{Vazirani01,HLW06_expanders}. \subsection{\ldots\hspace{.5mm} on explicit and implicit regularization} \label{sxn:thoughts:regularization} The term \emph{regularization} refers to a general class of methods~\cite{Neu98,CH02,BL06} to ensure that the output of an algorithm is meaningful in some sense---\emph{e.g.}, to the domain scientist who is interested in using that output for some downstream application of interest in the domain from which the data are drawn; to someone who wants to avoid ``pathological'' solutions; or to a machine learner interested in prediction accuracy or some other form of inference. It typically manifests itself by requiring that the output of an algorithm is not overly sensitive to the noise properties of the input data; and, as a general rule, it provides a tradeoff between the quality and the niceness of the solution.
Regularization arose in integral equation theory, where there was interest in providing meaningful solutions to ill-posed problems~\cite{TikhonovArsenin77}. A common approach was to assume a smoothness condition on the solution or to require that the solution satisfy a vector space norm constraint. This approach is followed much more generally in modern statistical data analysis~\cite{hast-tibs-fried}, where the posedness question has to do with how meaningful it is to run a given algorithm, given the noise properties of the data, if the goal is to predict well on unseen data. One typically considers a loss function $f(x)$ that specifies an ``empirical penalty'' depending on both the data and a parameter vector $x$; and a regularization function $g(x)$ that provides ``geometric capacity control'' on the vector~$x$. Then, rather than minimizing $f(x)$ exactly, one exactly solves an optimization problem of the~form: \begin{equation} \label{eqn:reg-gen} \hat{x} = \mbox{argmin}_x f(x) + \lambda g(x) , \end{equation} where the parameter $\lambda$ intermediates between solution quality and solution niceness. Implementing regularization explicitly in this manner leads to a natural interpretation in terms of a trade-off between optimizing the objective and avoiding over-fitting the data; and it can often be given a Bayesian statistical interpretation.\footnote{Roughly, such an interpretation says that if the data are generated according to a particular noise model, then $g(\cdot)$ encodes ``prior assumptions'' about the input data, and regularizing with this $g(\cdot)$ is the ``right'' thing to do~\cite{hast-tibs-fried}.} By optimizing exactly a combination of two functions, though, regularizing in this way often leads to optimization problems that are harder (think of $\ell_1$-regularized $\ell_2$-regression) or at least no easier (think of $\ell_2$-regularized $\ell_2$-regression) than the original problem, a situation that is clearly unacceptable in many MMDS~applications. On the other hand, regularization is often observed as a side-effect or by-product of other design decisions.\footnote{See~\cite{Neu98,CH02,BL06,hast-tibs-fried,MO11-implementing} and references therein for more details on these examples.} For example, ``binning'' is often used to aggregate the data into bins, upon which computations are performed; ``pruning'' is often used to remove sections of a decision tree that provide little classification power; taking measures to improve numerical properties can also penalize large weights (in the solution vector) that exploit correlations beyond the level of precision in the data generation process; and ``adding noise'' to the input data before running a training algorithm can be equivalent to Tikhonov regularization. More generally, it is well-known amongst practitioners that certain heuristic approximations that are used to speed up computations can also have the empirical side-effect of performing smoothing or regularization. For example, working with a truncated singular value decomposition in latent factor models can lead to better precision and recall; ``truncating'' to zero small entries or ``shrinking'' all entries of a solution vector is common in iterative algorithms; and ``early stopping'' is often used when a learning model such as a neural network is trained by an iterative gradient descent algorithm.
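
As a minimal illustration of both the explicit and the implicit versions of this phenomenon, the following sketch (in Python with \texttt{numpy}; the synthetic data and parameter values are invented purely for illustration) solves Problem~(\ref{eqn:reg-gen}) with $f(x)=\|Ax-b\|_2^2$ and $g(x)=\|x\|_2^2$, \emph{i.e.}, ridge regression, in closed form, and also early-stops gradient descent on the \emph{unregularized} objective; the early-stopped iterate is shrunken in much the same way as the explicitly-regularized solution:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Noisy overdetermined least-squares data with correlated columns.
n, d = 100, 20
A = rng.normal(size=(n, d))
A[:, 1] = A[:, 0] + 0.01 * rng.normal(size=n)   # nearly collinear pair
b = A[:, 0] + rng.normal(scale=0.5, size=n)

# Explicit regularization: ridge, solved exactly in closed form,
#   x_ridge = argmin_x ||Ax - b||^2 + lam * ||x||^2.
lam = 1.0
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

# Implicit regularization: gradient descent on the UNregularized
# objective ||Ax - b||^2, stopped well before convergence.
x = np.zeros(d)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(25):                    # early stopping: few iterations
    x -= step * (A.T @ (A @ x - b))
x_early = x

# The exact unregularized solution, for contrast.
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

# Early stopping shrinks the solution much as the explicit penalty does.
for name, v in [("exact", x_exact), ("ridge", x_ridge), ("early", x_early)]:
    print(f"{name:6s} ||x|| = {np.linalg.norm(v):8.3f}")
\end{verbatim}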
Note that in addition to its use in making ill-posed problems well-posed---a distinction that is not of interest in the study of computation \emph{per se}, where a sharp dividing line is drawn between algorithms and input data, thereby effectively assuming away the posedness problem---the use of regularization blurs the rigid lines between algorithms and input data in other ways.\footnote{In my experience, researchers who adopt the algorithmic perspective are most comfortable when given a well-defined problem, in which case they develop algorithms for that problem and ask how those algorithms behave on the worst-case input they can imagine. Researchers who adopt the statistical perspective will note that formulating the problem is typically the hard part; and that, if a problem is meaningful and well-posed, then often several related formulations will behave similarly for downstream applications, in a manner quite unrelated to their worst-case~behavior.} For example, in addition to simply modifying the objective function to be optimized, regularization can involve adding to it various smoothness constraints---some of which involve modifying the objective and then calling a black box algorithm, but some of which are more simply enforced by modifying the steps of the original algorithm. Similarly, binning and pruning can be viewed as preprocessing the data, but they can also be implemented inside the algorithm; and adding noise to the input before running a training algorithm is clearly a form of preprocessing, but empirically similar regularization effects are observed when randomization is included inside the algorithm, \emph{e.g.}, as with randomized algorithms for matrix problems such as low-rank matrix approximation and least-squares approximation~\cite{Mah-mat-rev_BOOK}. Finally, truncating small entries of a solution vector to zero in an iterative algorithm and performing early stopping in an iterative algorithm are clearly heuristic approximations that lead an algorithm to compute some sort of approximation to the solution that would have been computed had the truncation and early stopping not been performed. \section{Three examples of implicit regularization} \label{sxn:comp-reg} In this section, I will discuss three case studies that illustrate the phenomenon of implicit regularization via approximate computation in three somewhat different ways. For each of these problems, there exists strong underlying theory; and there exists the practice, which typically involves approximating the exact solution in one way or another. Our goal will be to understand the differences between the theory and the practice in light of the discussion from Section~\ref{sxn:thoughts}. In particular, rather than being interested in the output of the approximation procedure insofar as it provides an approximation to the exact answer, we will be more interested in what the approximation algorithm actually computes, whether that approximation can be viewed as a smoother or more regular version of the exact answer, and how similar ideas can be applied much more generally in database theory and practice. \subsection{Computing the leading nontrivial eigenvector of a Laplacian matrix} \label{sxn:eigenvector} The problem of computing eigenvectors of the Laplacian matrix of a graph arises in many data analysis applications, including (literally) for Web-scale data matrices.
For example, the leading nontrivial eigenvector, \emph{i.e.}, the eigenvector, $v_2$, associated with the smallest non-zero eigenvalue, $\lambda_2$, is often of interest: it defines the slowest mixing direction for the natural random walk on the graph, and thus it can be used in applications such as viral marketing, rumor spreading, and graph partitioning; it can be used for classification and other common machine learning tasks; and variants of it provide ``importance,'' ``betweenness,'' and ``ranking'' measures for the nodes in a graph. Moreover, computing this eigenvector is a problem for which there exists a very clean theoretical characterization of how approximate computation can implicitly lead to statistical regularization. Let $A$ be the adjacency matrix of a connected, weighted, undirected graph $G=(V,E)$, and let $D$ be its diagonal degree matrix. That is, $A_{ij}$ is the weight of the edge between the $i^{th}$ node and the $j^{th}$ node, and $D_{ii}=\sum_{j:(ij)\in E} A_{ij}$. The \emph{combinatorial Laplacian} of $G$ is the matrix $L = D-A$. Although this matrix is defined for any graph, it has strong connections with the Laplace-Beltrami operator on Riemannian manifolds in Euclidean spaces. Indeed, if the graph is a discretization of the manifold, then the former approaches the latter, under appropriate sampling and regularity assumptions. In addition, the \emph{normalized Laplacian} of~$G$ is $\mathcal{L} = D^{-1/2}LD^{-1/2} = I - D^{-1/2}AD^{-1/2}$. This degree-weighted Laplacian is more appropriate for graphs with significant degree variability, in large part due to its connection with random walks and other diffusion-based processes. For an $n$ node graph, $\mathcal{L}$ is an $n \times n$ positive semidefinite matrix, \emph{i.e.}, all its eigenvalues $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$ are nonnegative, and for a connected graph, $\lambda_1 = 0$ and $\lambda_2 >0$. In this case, the degree-weighted all-ones vector, \emph{i.e.}, the vector whose $i^{th}$ element equals (up to a possible normalization) $D_{ii}^{1/2}$ and which is often denoted $v_1$, is an eigenvector of $\mathcal{L}$ with eigenvalue zero, \emph{i.e.}, $\mathcal{L} v_1 = 0 v_1$. For this reason, $v_1$ is often called the trivial eigenvector of $\mathcal{L}$, and it is the next eigenvector that is of interest. This leading nontrivial eigenvector, $v_2$, is the vector that optimizes the Rayleigh quotient, defined to be $x^T\mathcal{L}x$ for a unit-length vector $x$, over all vectors perpendicular to the trivial eigenvector.\footnote{Eigenvectors of $\mathcal{L}$ can be related to generalized eigenvectors of $L$: if $\mathcal{L}x = \lambda x$, then $Ly = \lambda D y$, where $y=D^{-1/2}x$.} In most applications where this leading nontrivial eigenvector is of interest, other vectors can also be used. For example, if $\lambda_2$ is a repeated eigenvalue, then $v_2$ is not uniquely-defined and thus the problem of computing it is not even well-posed; if $\lambda_3$ is very close to $\lambda_2$, then any vector in the subspace spanned by $v_2$ and $v_3$ is nearly as good (in the sense of forward error or objective function value) as $v_2$; and, more generally, \emph{any} vector can be used with a quality-of-approximation loss that depends on how far its Rayleigh quotient is from the Rayleigh quotient of $v_2$.
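
In this notation, the following minimal sketch (in Python with \texttt{numpy}; the six-node graph is invented purely for illustration) forms $\mathcal{L}$ and computes $v_1$ and $v_2$ ``exactly'' with a black-box dense eigensolver:
\begin{verbatim}
import numpy as np

# Adjacency matrix of a small connected graph: two triangles
# (nodes 0-2 and 3-5) joined by a single "bridge" edge (2,3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

d = A.sum(axis=1)                     # node degrees D_ii
Dmh = np.diag(1.0 / np.sqrt(d))       # D^{-1/2}
L_norm = np.eye(6) - Dmh @ A @ Dmh    # normalized Laplacian

# All eigenpairs, in ascending order of eigenvalue.
vals, vecs = np.linalg.eigh(L_norm)
v1, v2 = vecs[:, 0], vecs[:, 1]

print(np.round(vals, 4))  # lambda_1 is (numerically) zero
print(np.round(v1, 3))    # proportional to sqrt(d): the trivial eigenvector
print(np.round(v2, 3))    # v_2 separates the two triangles by sign
\end{verbatim}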
For most small-scale and medium-scale applications, this vector $v_2$ is computed ``exactly'' by calling a black-box solver.\footnote{To the extent, as described in Section~\ref{sxn:thoughts:relationship}, that any numerical computation can be performed ``exactly.''} It could, however, be approximated with an iterative method such as the Power Method\footnote{The Power Method takes as input any $n \times n$ symmetric matrix $A$ and returns as output a number $\lambda$ and a vector $v$ such that $Av = \lambda v$. It starts with an initial vector, $\nu_0$, and it iteratively computes $\nu_{t+1} = A \nu_t /||A\nu_t||_2$. Under weak assumptions, it converges to $v_{max}$, the dominant eigenvector of $A$. The reason is clear: if we expand $\nu_0 = \sum_{i=1}^{n} \gamma_i v_i$ in the basis provided by the eigenvectors $\{v_i\}_{i=1}^{n}$ of $A$, then $\nu_t \propto \sum_{i=1}^{n} \gamma_i \lambda_i^t v_i \rightarrow v_{max}$, up to normalization. Vanilla versions of the Power Method can easily be improved (at least when the entire matrix $A$ is available in RAM) to obtain better stability and convergence properties; but these more sophisticated eigenvalue algorithms can often be viewed as variations of it. For instance, Lanczos algorithms look at a subspace of vectors generated during the iteration.} or by running a random walk-based or diffusion-based procedure; and in many larger-scale applications this is~preferable. Perhaps the most well-known example of this is the computation of the so-called PageRank of the Web graph~\cite{PBMW99}. As an example of a spectral ranking method~\cite{Vig09_TR}, PageRank provides a ranking or measure of importance for a Web page; and the Power Method has been used extensively to perform very large-scale PageRank computations~\cite{berkhin05_pagerank}. Although it was initially surprising to many, the Power Method has several well-known advantages for such Web-scale computations: it can be implemented with simple matrix-vector multiplications, thus not damaging the sparsity of the (adjacency or Laplacian) matrix; those matrix-vector multiplications are easily parallelized, permitting one to take advantage of parallel and distributed environments (indeed, MapReduce was originally developed to perform related Web-scale computations~\cite{DG04}); and the algorithm is simple enough that it can be ``adjusted'' and ``tweaked'' as necessary, based on systems considerations and other design constraints. Much more generally, other spectral ranking procedures compute vectors that can be used instead of the second eigenvector $v_2$ to perform ranking, classification, clustering, etc.~\cite{Vig09_TR}. At root, these random walk or diffusion-based methods assign positive and/or negative ``charge'' (or relatedly probability mass) to the nodes, and then they let the distribution evolve according to dynamics derived from the graph structure. Three canonical evolution dynamics are the following. \begin{itemize} \item \textbf{Heat Kernel.} Here, the charge evolves according to the heat equation $\frac{\partial H_t}{\partial t} = - \mathcal{L} H_t$. That is, the vector of charges evolves as $ H_t = \exp ( -t\mathcal{L} ) = \sum_{k=0}^{\infty} \frac{(-t)^k}{k!}\mathcal{L}^k $, where $t \ge 0$ is a time parameter, times an input seed distribution vector. \item \textbf{PageRank.} Here, the charge evolves by either moving to a neighbor of the current node or teleporting to a random node.
That is, the vector of charges evolves as \begin{equation} \label{eqn:page-rank} R_{\gamma} = \gamma \left(I-\left(1-\gamma \right)M \right)^{-1} , \end{equation} where $M = AD^{-1}$ is the natural random walk transition matrix associated with the graph and where $\gamma \in (0,1)$ is the so-called teleportation parameter, times an input seed vector. \item \textbf{Lazy Random Walk.} Here, the charge either stays at the current node or moves to a neighbor. That is, if $M$ is the natural random walk transition matrix, then the vector of charges evolves as some power of $ W_{\alpha}= \alpha I + (1-\alpha)M $, where $\alpha \in (0,1)$ represents the ``holding probability,'' times an input seed vector. \end{itemize} \noindent In each of these cases, there is an input ``seed'' distribution vector, and there is a parameter (respectively, $t$, $\gamma$, and the number of steps of the Lazy Random Walk) that controls the ``aggressiveness'' of the dynamics and thus how quickly the diffusive process equilibrates. In many applications, one chooses the initial seed distribution carefully\footnote{In particular, if one is interested in global spectral graph partitioning, as in Section~\ref{sxn:partitioning}, then this seed vector could have random positive entries or could be a vector with entries drawn from $\{-1,+1\}$ uniformly at random; while if one is interested in local spectral graph partitioning~\cite{Spielman:2004,andersen06local,Chung07_heatkernelPNAS,MOV09_TRv3}, as in Section~\ref{sxn:local}, then this vector could be the indicator vector of a small ``seed set'' of nodes.} and/or prevents the diffusive process from equilibrating to the asymptotic state. (That is, if one runs any of these diffusive dynamics to a limiting value of the aggressiveness parameter, then under weak assumptions an exact answer is computed, independent of the initial seed vector; but if one truncates this process early, then some sort of approximation, which in general depends strongly on the initial seed set, is computed.) The justification for doing this is typically that it is too expensive or not possible to solve the problem exactly; that the resulting approximate answer has good forward error bounds on its Rayleigh quotient; and that, for many downstream applications, the resulting vector is even better (typically in some sense that is not precisely described) than the exact answer. To formalize this last idea in the context of classical regularization theory, let's ask what these approximation procedures actually compute. In particular, do these diffusion-based approximation methods exactly optimize a regularized objective of the form of Problem~(\ref{eqn:reg-gen}), where $g(\cdot)$ is nontrivial, \emph{e.g.}, some well-recognizable function or at least something that is ``little-o'' of the length of the source code, and where $f(\cdot)$ is the Rayleigh quotient? To answer this question, recall that $v_2$ exactly solves the following optimization problem. \begin{equation} \begin{aligned} & \underset{x}{\text{minimize}} & & x^T\mathcal{L}x \\ & \text{subject to} & & x^Tx = 1, \\ & & & x^T D^{1/2} 1 = 0 . \end{aligned} \label{eqn:mo-unreg-vp} \end{equation} The solution to Problem~(\ref{eqn:mo-unreg-vp}) can also be characterized as the solution to the following SDP (semidefinite program).
\begin{equation} \begin{aligned} & \underset{X}{\text{minimize}} & & \mathrm{Tr}(\mathcal{L} X) \\ & \text{subject to} & & X \succeq 0, \\ & & & \mathrm{Tr}(X) = 1, \\ & & & X D^{1/2} 1 = 0, \end{aligned} \label{eqn:mo-unreg-sdp} \end{equation} where $\mathrm{Tr}(\cdot)$ stands for the matrix Trace operation. Problem~(\ref{eqn:mo-unreg-sdp}) is a relaxation of Problem~(\ref{eqn:mo-unreg-vp}) from an optimization over unit vectors to an optimization over distributions over unit vectors, represented by the density matrix $X$. These two programs are equivalent, however, in that the solution to Problem~(\ref{eqn:mo-unreg-sdp}), call it $X^{*}$, is a rank-one matrix, where the vector into which that matrix decomposes, call it $x^{*}$, is the solution to Problem~(\ref{eqn:mo-unreg-vp}); and vice versa. Viewing $v_2$ as the solution to an SDP makes it easier to address the question of what is the objective that approximation algorithms for Problem~(\ref{eqn:mo-unreg-vp}) are solving exactly. In particular, it can be shown that these three diffusion-based dynamics arise as solutions to the following regularized~SDP. \begin{equation} \begin{aligned} & \underset{X}{\text{minimize}} & & \mathrm{Tr}(\mathcal{L} X) + \tfrac{1}{\eta} G(X) \\ & \text{subject to} & & X \succeq 0, \\ & & & \mathrm{Tr}(X) = 1, \\ & & & X D^{1/2} 1 = 0, \end{aligned} \label{eqn:mo-reg-sdp} \end{equation} where $G(\cdot)$ is a regularization function, which is the generalized entropy, the log-determinant, and a certain matrix-$p$-norm for the Heat Kernel, PageRank, and Lazy Random Walk dynamics, respectively~\cite{MO11-implementing}; and where $\eta$ is a parameter related to the aggressiveness of the diffusive process~\cite{MO11-implementing}. Conversely, solutions to the regularized SDP of Problem~(\ref{eqn:mo-reg-sdp}) for appropriate values of $\eta$ can be computed \emph{exactly} by running one of the above three diffusion-based approximation algorithms. Intuitively, $G(\cdot)$ is acting as a penalty function, in a manner analogous to the $\ell_2$ or $\ell_1$ penalty in Ridge regression or Lasso regression, respectively; and by running one of these three dynamics one is \emph{implicitly} making assumptions about the functional form of $G(\cdot)$.\footnote{For readers interested in statistical issues, I should note that one can give a statistical framework to provide a Bayesian interpretation that makes this intuition precise~\cite{PM11}. Readers not interested in statistical issues should at least know that these assumptions are implicitly being made when one runs such an approximation algorithm.} More formally, this result provides a very clean theoretical characterization of how each of these three approximation algorithms for computing an approximation to the leading nontrivial eigenvector of a graph Laplacian can be seen as exactly optimizing a regularized version of the same problem. \subsection{Graph partitioning} \label{sxn:partitioning} Graph partitioning refers to a family of objective functions and associated approximation algorithms that involve cutting or partitioning the nodes of a graph into two sets with the goal that the cut has good quality (\emph{i.e.}, not much edge weight crosses the cut) as well as good balance (\emph{i.e.}, each of the two sets has a lot of the node weight).\footnote{There are several standard formalizations of this bi-criterion, \emph{e.g.}, the graph bisection problem, the $\beta$-balanced cut problem, and quotient cut formulations.
In this article, I will be interested in conductance, which is a quotient cut formulation, but variants of most of what I say will hold for the other formulations.} As such, it has been studied from a wide range of perspectives and in a wide range of applications. For example, it has been studied for years in scientific computation (where one is interested in load balancing in parallel computing applications), machine learning and computer vision (where one is interested in segmenting images and clustering data), and theoretical computer science (where one is interested in it as a primitive in divide-and-conquer algorithms); and more recently it has been studied in the analysis of large social and information networks (where one is interested in finding ``communities'' that are meaningful in a domain-specific context or in certifying that no such communities exist). Given an undirected, possibly weighted, graph $G=(V,E)$, the \emph{conductance $\phi(S)$ of a set of nodes $S \subset V$} is: \begin{equation} \phi(S) = \frac{ |E(S, \overline{S})| }{ \min\{A(S),A(\overline{S})\} } , \label{eqn:conductance_set} \end{equation} where $E(S, \overline{S})$ denotes the set of edges having one end in $S$ and one end in the complement $\overline{S}$; where $|\cdot|$ denotes cardinality (or weight); where $A(S)=\sum_{i \in S} \sum_{j \in V} A_{ij}$; and where $A$ is the adjacency matrix of the graph.\footnote{For readers more familiar with the concept of expansion, where the \textit{expansion $\alpha(S)$ of a set of nodes $S \subseteq V$} is $\alpha(S) = |E(S, \overline{S})| / \min\{|S|,|\overline{S}|\}$, the conductance is simply a degree-weighted version of the expansion.} In this case, the \textit{conductance of the graph $G$} is: \begin{equation} \phi(G) = \min_{S \subseteq V} \phi(S) . \label{eqn:conductance_graph} \end{equation} \noindent Although exactly solving the combinatorial Problem~(\ref{eqn:conductance_graph}) is intractable, there are a wide range of heuristics and approximation algorithms, the respective strengths and weaknesses of which are well-understood in theory and/or practice, for approximately optimizing conductance. Of particular interest here are \emph{spectral methods} and \emph{flow-based methods}.\footnote{Other methods include local improvement methods, which can be used to clean up partitions found with other methods, and multi-resolution methods, which can view graphs at multiple size scales. Both of these are important in practice, as vanilla versions of spectral algorithms and flow-based algorithms can easily be improved with them.} Spectral algorithms compute an approximation to Problem~(\ref{eqn:conductance_graph}) by solving Problem~(\ref{eqn:mo-unreg-vp}), either exactly or approximately, and then performing a ``sweep cut'' over the resulting vector. Several things are worth noting. \begin{itemize} \item First, Problem~(\ref{eqn:mo-unreg-vp}) is a relaxation of Problem~(\ref{eqn:conductance_graph}), as can be seen by replacing the $x\in\{-1,1\}^{n}$ constraint in the corresponding integer program with the constraint $x\in\mathbb{R}^{n}$ subject to $x^Tx=1$, \emph{i.e.}, by satisfying the combinatorial constraint ``on average''. \item Second, this relaxation effectively embeds the data on the one-dimensional\footnote{One can also view this as ``embedding'' a scaled version of the complete graph into the input graph.
This follows from the SDP formulation of Problem~(\ref{eqn:mo-unreg-sdp}); and this is of interest since a complete graph is like a constant-degree expander---namely, a metric space that is ``most unlike'' low-dimensional Euclidean spaces such as one-dimensional lines---in terms of its cut structure~\cite{LLR95_JRNL,Leighton:1999}. This provides tighter duality results, and the reason for this connection is that the identity on the space perpendicular to the degree-weighted all-ones vector is the Laplacian matrix of a complete graph~\cite{MOV09_TRv3}.} span of $v_2$---although, since the distortion is minimized only on average, there may be some pairs of points that are distorted a lot. \item Third, one can prove that the resulting partition is ``quadratically good,'' in the sense that the cut returned by the algorithm has conductance value no bigger than $\phi$ if the graph actually contains a cut with conductance $O(\phi^2)$~\cite{Cheeger69_bound,Chung:1997}. This bound comes from a discrete version of Cheeger's inequality, which was originally proved in a continuous setting for compact Riemannian manifolds; and it is parameterized in terms of a structural parameter of the input, but it is independent of the number $n$ of nodes in the graph. \item Finally, note that the worst-case quadratic approximation factor is \emph{not} an artifact of the analysis---it is obtained for spectral methods on graphs with ``long stringy'' pieces~\cite{guatterymiller98}, basically since spectral methods confuse ``long paths'' with ``deep cuts''---and that it is a very ``local'' property, in that it is a consequence of the connections with diffusion and thus it is seen in locally-biased versions of the spectral method~\cite{Spielman:2004,andersen06local,Chung07_heatkernelPNAS,MOV09_TRv3}. \end{itemize} Flow-based algorithms compute an approximation to Problem~(\ref{eqn:conductance_graph}) by solving an all-pairs multicommodity flow problem. Several things are worth noting. \begin{itemize} \item First, this multicommodity flow problem is a relaxation of Problem~(\ref{eqn:conductance_graph}), as can be seen by replacing the $x\in\{-1,1\}^{n}$ constraint (which provides a particular semi-metric) in the corresponding integer program with a general semi-metric constraint. \item Second, this procedure effectively embeds the data into an $\ell_1$ metric space, \emph{i.e.}, a real vector space $\mathbb{R}^{n}$, where distances are measured with the $\ell_1$ norm. \item Third, one can prove that the resulting partition is within an $O(\log n)$ factor of optimal, in the sense that the cut returned by the algorithm has conductance no bigger than $O(\log n)$, where $n$ is the number of nodes in the graph, times the conductance value of the optimal conductance set in the graph~\cite{LLR95_JRNL,Leighton:1999,HLW06_expanders}. This bound comes from Bourgain's result which states that any $n$-point metric space can be embedded into Euclidean space with only logarithmic distortion, a result which clearly depends on the number $n$ of nodes in the graph but which is independent of any structural parameters of the~graph. 
\item Finally, note that the worst-case $O(\log n)$ approximation factor is \emph{not} an artifact of the analysis---it is obtained for flow-based methods on constant-degree expander graphs~\cite{LLR95_JRNL,Leighton:1999,HLW06_expanders}---and that it is a very ``global'' property, in that it is a consequence of the fact that for constant-degree expanders the average distance between all pairs of nodes is~$O(\log n)$. \end{itemize} Thus, spectral methods and flow-based methods are complementary in that they relax the combinatorial problem of optimizing conductance in very different ways;\footnote{For readers familiar with recent algorithms based on semidefinite programming~\cite{ARV_CACM08}, note that these methods may be viewed as combining spectral and flow in a particular way that, in addition to providing improved worst-case guarantees, also has strong connections with boosting~\cite{hast-tibs-fried}, a statistical method which in many cases is known to avoid over-fitting. The connections with what I am discussing in this article remain to be explored.} they succeed and fail for complementary input (\emph{e.g.}, flow-based methods do not confuse ``long paths'' with ``deep cuts,'' and spectral methods do not have problems with constant-degree expanders); and they come with quality-of-approximation guarantees that are structurally very different.\footnote{These differences highlight a rather egregious theory-practice disconnect (that parallels the algorithmic-statistical disconnect). In my experience, if you ask nearly anyone within theoretical computer science what is a good algorithm for partitioning a graph, they would say flow-based methods---after all flow-based methods run in low-degree polynomial time, they achieve $O(\log n)$ worst-case approximation guarantees, etc.---although they would note that spectral methods are better for expanders, basically since the quadratic of a constant is a constant. On the other hand, nearly everyone outside of computer science would say spectral methods do pretty well for the data in which they are interested, and they would wonder why anyone would be interested in partitioning a graph without any good~partitions.} For these and other reasons, spectral and flow-based approximation algorithms for the intractable graph partitioning problem provide a good ``hydrogen atom'' for understanding more generally the disconnect between the algorithmic and statistical perspectives on data. Providing a precise statement of how spectral and flow-based approximation algorithms implicitly compute regularized solutions to the intractable graph partitioning problem (in a manner, \emph{e.g.}, analogous to how truncated diffusion-based procedures for approximating the leading nontrivial eigenvector of a graph Laplacian exactly solve a regularized version of the problem) has \emph{not}, to my knowledge, been accomplished. Nevertheless, this theoretical evidence---\emph{i.e.}, that spectral and flow-based methods are effectively ``filtering'' the input data through very different metric and geometric places\footnote{That is, whereas traditional regularization takes place by solving a problem with an \emph{explicitly-imposed geometry}, where an explicit norm constraint is added to ensure that the resulting solution is ``small,'' one can view the steps of an approximation algorithm as providing an \emph{implicitly-imposed geometry}.
The details of how and where that implicitly-imposed geometry is ``nice'' will determine the running time and quality-of-approximation guarantees, as well as what input data are particularly challenging or well-suited for the approximation algorithm.}---suggests that this phenomenon exists. To observe this phenomenon empirically, one should work with a class of data that highlights the peculiar features of spectral and flow-based methods, \emph{e.g.}, that has properties similar to graphs that ``saturate'' their respective worst-case approximation guarantees. Empirical evidence~\cite{LLDM08_communities_CONF,LLM10_communities_CONF} clearly demonstrates that large social and information networks have these properties---they are strongly expander-like when viewed at large size scales; their sparsity and noise properties are such that they have structures analogous to stringy pieces that are cut off or regularized away by spectral methods; and they often have structural regions that at least locally are meaningfully low-dimensional. Thus, this class of data provides a good ``hydrogen atom'' for understanding more generally the regularization properties implicit in graph approximation algorithms. \begin{figure} \begin{center} \subfigure[Objective function value]{ \includegraphics[width=0.30\linewidth]{AuthToPap-dblp_c-h-2.eps} \label{compactness-vs-cuts-fig:obj} } \subfigure[One ``niceness'' measure]{ \includegraphics[width=0.30\linewidth]{AuthToPap-dblp_c-h-1.eps} \label{compactness-vs-cuts-fig:nice1} } \subfigure[Another ``niceness'' measure]{ \includegraphics[width=0.30\linewidth]{AuthToPap-dblp_c-h-4.eps} \label{compactness-vs-cuts-fig:nice2} } \end{center} \caption{ Scatter plot (on log-log scales) of size-resolved conductance (in Fig.~\ref{compactness-vs-cuts-fig:obj}) and two ``niceness'' measures (Fig.~\ref{compactness-vs-cuts-fig:nice1} shows average shortest path length and Fig.~\ref{compactness-vs-cuts-fig:nice2} shows the ratio of external conductance to the internal conductance) for clusters found in the \textsc{AtP-DBLP} (\textsc{AuthToPap-dblp}) network with a spectral algorithm (blue) and a flow-based algorithm (red). See~\cite{LLDM08_communities_CONF,LLM10_communities_CONF} for details. For all plots, lower values of the Y-axis are ``better.'' In this and other examples, the flow-based algorithm (red, Metis+MQI) generally yields clusters with better conductance scores, while the spectral algorithm (blue, LocalSpectral) generally yields clusters that are~nicer. } \label{compactness-vs-cuts-fig} \end{figure} In light of this, let's say that we are interested in finding reasonably good clusters of size $10^3$ or $10^4$ nodes in a large social or information network. (See~\cite{algstat10_CHAPTER} for why this might be interesting.) In that case, Figure~\ref{compactness-vs-cuts-fig} presents very typical results.
Figure~\ref{compactness-vs-cuts-fig:obj} presents a scatter plot of the size-resolved conductance of clusters found with a flow-based approximation algorithm (in red) and a spectral-based approximation algorithm (in blue).\footnote{Ignore the ``size-resolved'' aspect of these plots, since by assumption we are interested in clusters of roughly $10^3$ or $10^4$ nodes (but~\cite{LLDM08_communities_CONF,LLM10_communities_CONF} provides details on this); and don't worry about the details of the flow-based and spectral-based procedures, except to say that there is a nontrivial theory-practice gap (again,~\cite{LLDM08_communities_CONF,LLM10_communities_CONF} provides details).} In this plot, lower values on the Y-axis correspond to better values of the objective function; and thus the flow-based procedure is unambiguously better than the spectral procedure at finding good-conductance clusters. On the other hand, how useful these clusters are for downstream applications is also of interest. Since we are not explicitly performing any regularization, we do not have any explicit ``niceness'' function, but we can examine empirical niceness properties of the clusters found by the two approximation procedures. Figures~\ref{compactness-vs-cuts-fig:nice1} and~\ref{compactness-vs-cuts-fig:nice2} present these results for two different niceness measures. Here, lower values on the Y-axis correspond to ``nicer'' clusters, and again we are interested in clusters with lower Y-axis values. Thus, in many cases, the spectral procedure is clearly better than the flow-based procedure at finding nice clusters with reasonably good conductance values. Formalizations aside, this empirical tradeoff between solution quality and solution niceness is basically the defining feature of statistical regularization---except that we are observing it here as a function of two different approximation algorithms for the same intractable combinatorial objective function. That is, although we have not explicitly put any regularization term anywhere, the fact that these two different approximation algorithms essentially filter the data through different metric and geometric spaces leaves easily-observed empirical artifacts on the output of those approximation algorithms.\footnote{For other data---in particular, constant-degree expanders---the situation should be reversed. That is, theory clearly predicts that locally-biased flow-based algorithms~\cite{andersen08soda} will have better niceness properties than locally-biased spectral-based algorithms~\cite{andersen06local}. Observing this empirically on real data is difficult since data that are sufficiently unstructured to be expanders, in the sense of having no good partitions, tend to have very substantial degree heterogeneity.} One possible response to these empirical results is to say that conductance is not the ``right'' objective function and that we should come up with some other objective to formalize our intuition,\footnote{Conductance probably is the combinatorial quantity that most closely captures the intuitive bi-criterial notion of what it means for a set of nodes to be a good ``community,'' but it is still very far from perfect on many real data.} but of course that other objective function will likely be intractable, and thus we will have to approximate it with a different spectral-based or flow-based (or some other) procedure, in which case the same implicit regularization issues will arise~\cite{LLDM08_communities_CONF,LLM10_communities_CONF}.
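
To make the spectral side of this discussion concrete, the following minimal sketch (in Python with \texttt{numpy}; the toy graph is invented purely for illustration, and no attempt is made to be efficient) computes the conductance of Equation~(\ref{eqn:conductance_set}) and performs the sweep cut over $v_2$ described above:
\begin{verbatim}
import numpy as np

def conductance(A, S):
    """Conductance phi(S) = |E(S, S-bar)| / min(A(S), A(S-bar))."""
    S = np.asarray(sorted(S))
    Sbar = np.setdiff1d(np.arange(A.shape[0]), S)
    cut = A[np.ix_(S, Sbar)].sum()           # edge weight crossing the cut
    volS, volSbar = A[S].sum(), A[Sbar].sum()  # A(S) and A(S-bar)
    return cut / min(volS, volSbar)

def spectral_sweep(A):
    """Sweep cut over v_2 of the normalized Laplacian."""
    d = A.sum(axis=1)
    Dmh = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(d)) - Dmh @ A @ Dmh
    v2 = np.linalg.eigh(L)[1][:, 1]
    order = np.argsort(v2)                   # sort nodes by their v_2 entries
    best, best_phi = None, np.inf
    for k in range(1, len(d)):               # try every prefix as S
        S = order[:k]
        phi = conductance(A, S)
        if phi < best_phi:
            best, best_phi = S, phi
    return set(best.tolist()), best_phi

# Two triangles joined by one edge: the bridge is the best cut.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

S, phi = spectral_sweep(A)
print(S, phi)   # expect {0, 1, 2} (or its complement), phi = 1/7
\end{verbatim}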
\subsection{Computing locally-biased graph partitions} \label{sxn:local} In many applications, one would like to identify locally-biased graph partitions, \emph{i.e.}, clusters in a data graph that are ``near'' a prespecified set of nodes. For example, in nearly every reasonably large social or information network, there do not exist large good-conductance clusters, but there are often smaller clusters that are meaningful to the domain scientist~\cite{andersen06seed,LLDM08_communities_CONF,LLM10_communities_CONF}; in other cases, one might have domain knowledge about certain nodes, and one might want to use that to find locally-biased clusters in a semi-supervised manner~\cite{MOV09_TRv3}; while in other cases, one might want to perform algorithmic primitives such as solving linear equations in time that is nearly linear in the size of the graph~\cite{Spielman:2004,andersen06local,Chung07_heatkernelPNAS}. One general approach to problems of this sort is to modify the usual objective function and then show that the solution to the modified problem inherits some or all of the nice properties of the original objective. For example, a natural way to formalize the idea of a locally-biased version of the leading nontrivial eigenvector of $\mathcal{L}$ that can then be used in a locally-biased version of the graph partitioning problem is to modify Problem~(\ref{eqn:mo-unreg-vp}) with a locality constraint as follows. \begin{equation} \begin{aligned} & \underset{x}{\text{minimize}} & & x^T\mathcal{L}x \\ & \text{subject to} & & x^Tx = 1, \\ & & & x^T D^{1/2} 1 = 0, \\ & & & (x^T D^{1/2} s)^2 \geq \kappa , \end{aligned} \label{eqn:mov-vp} \end{equation} where $s$ is a vector representing the ``seed set,'' and where $\kappa$ is a locality parameter. This \emph{locally-biased} version of the usual spectral graph partitioning problem was introduced in~\cite{MOV09_TRv3}, where it was shown that the solution inherits many of the nice properties of the solution to the usual global spectral partitioning problem. In particular, the exact solution can be found relatively-quickly by running a so-called Personalized PageRank computation; if one performs a sweep cut on this solution vector in order to obtain a locally-biased partition, then one obtains Cheeger-like quality-of-approximation guarantees on the resulting cluster; and if the seed set consists of a single node, then this is a relaxation of the following \emph{locally-biased graph partitioning problem}: given as input a graph $G=(V,E)$, an input node $u$, and a positive integer $k$, find a set of nodes $S \subseteq V$ achieving \begin{equation} \phi(u,k,G) = \min_{S \subseteq V: u\in S, \mathrm{vol}(S) \le k} \phi(S) , \end{equation} \emph{i.e.}, find the best conductance set of nodes of volume no greater than $k$ that contains the input node~$u$~\cite{MOV09_TRv3}. This ``optimization-based approach'' has the advantage that it is explicitly solving a well-defined objective function, and as such it is useful in many small-scale to medium-scale applications~\cite{MOV09_TRv3}. But this approach has the disadvantage, at least for Web-scale graphs, that the computation of the locally-biased eigenvector ``touches'' all of the nodes in the graph---and this is very expensive, especially when one wants to find small clusters.
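
For concreteness, here is what that exact computation looks like in the simplest case (a minimal sketch in Python with \texttt{numpy}; the graph, the seed, and the teleportation parameter are invented purely for illustration): the Personalized PageRank vector is obtained by applying the resolvent of Equation~(\ref{eqn:page-rank}) to a seed indicator vector, and the linear solve visibly ``touches'' every node of the graph:
\begin{verbatim}
import numpy as np

# Same toy graph as before: two triangles joined by a bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

M = A / A.sum(axis=0)     # random walk transition matrix A D^{-1}
gamma = 0.15              # teleportation parameter
s = np.zeros(6)
s[0] = 1.0                # seed distribution: all mass on node 0

# R_gamma s = gamma (I - (1 - gamma) M)^{-1} s: a global linear solve.
ppr = gamma * np.linalg.solve(np.eye(6) - (1 - gamma) * M, s)

print(np.round(ppr, 3))   # mass concentrates on the seed's triangle
\end{verbatim}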
An alternative, more ``operational,'' approach is to do the following: run some sort of procedure, the steps of which are similar to the steps of an algorithm that would solve the problem exactly; and then either use the output of that procedure in a downstream application in a manner similar to how the exact answer would have been used, or prove a theorem about that output that is similar to what can be proved for the exact answer. As an example of this approach, \cite{Spielman:2004,andersen06local,Chung07_heatkernelPNAS} take as input some seed nodes and a locality parameter and then run a diffusion-based procedure to return as output a ``good'' cluster that is ``nearby'' the seed nodes. In each of these cases, the procedure is similar to the usual procedure,\footnote{Namely, the three diffusion-based procedures that were described in Section~\ref{sxn:eigenvector}: \cite{Spielman:2004} performs truncated random walks; \cite{andersen06local} approximates Personalized PageRank vectors; and~\cite{Chung07_heatkernelPNAS} runs a modified heat kernel procedure.} except that at each step of the algorithm various ``small'' quantities are truncated to zero (or simply maintained at zero), thereby minimizing the number of nodes that need to be touched at each step of the algorithm. For example,~\cite{Spielman:2004} sets to zero very small probabilities, and \cite{andersen06local} uses the so-called \emph{push algorithm}~\cite{JW03,Vig11_TR} to concentrate computational effort on that part of the vector where most of the nonnegligible changes will take~place. The outputs of these \emph{strongly local spectral methods} obtain Cheeger-like quality-of-approximation guarantees, and by design these procedures are extremely fast---the running time depends on the size of the output and is independent even of the number of nodes in the graph. Thus, an advantage of this approach is that it opens up the possibility of performing more sophisticated eigenvector-based analytics on Web-scale data matrices; and these methods have already proven crucial in characterizing the clustering and community structure of social and information networks with up to millions of nodes~\cite{andersen06seed,LLDM08_communities_CONF,LLM10_communities_CONF}. At present, though, this approach has the disadvantage that it is very difficult to use: the exact statement of the theoretical results is extremely complicated, thereby limiting its interpretability; it is extremely difficult to characterize and interpret for downstream applications what actually is being computed by these procedures, \emph{i.e.}, it is not clear what optimization problem these approximation algorithms are solving exactly; and counterintuitive things like a seed node not being part of ``its own cluster'' can easily happen. At root, the reason for these difficulties is that the truncation and zeroing-out steps implicitly regularize---but they are done based on computational considerations, and it is not known what the implicit statistical side-effects of these design decisions are. The precise relationship between these two approaches has not, to my knowledge, been characterized. Informally, though, the truncating-to-zero provides a ``bias'' that is analogous to the early-stopping of iterative methods, such as those described in Section~\ref{sxn:eigenvector}, and that has strong structural similarities with thresholding and truncation methods, as commonly used in $\ell_1$-regularization methods and optimization more generally~\cite{FHT00}.
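
To give a flavor of such a procedure, here is a minimal push-style sketch in the spirit of the push algorithm (in Python with \texttt{numpy}; the parameter values are invented, and the bookkeeping details differ from the published algorithms): mass moves from a residual vector to an approximation vector one node at a time, residuals that stay below a degree-scaled threshold are simply never processed, and so nodes far from the seed are never touched:
\begin{verbatim}
import numpy as np

def approx_ppr_push(A, seed, alpha=0.15, eps=1e-4):
    """Approximate Personalized PageRank via push operations.

    Keep an approximation p and a residual r.  Repeatedly "push" at
    any node whose residual is large relative to its degree; nodes
    whose residual stays below eps * degree are never processed.
    """
    n = A.shape[0]
    d = A.sum(axis=1)
    p, r = np.zeros(n), np.zeros(n)
    r[seed] = 1.0
    queue = [seed]
    while queue:
        u = queue.pop()
        if r[u] < eps * d[u]:
            continue                     # residual too small: skip
        p[u] += alpha * r[u]             # keep a fraction at u ...
        push_mass = (1.0 - alpha) * r[u] / 2.0
        r[u] = push_mass                 # ... half of the rest stays
        for v in np.nonzero(A[u])[0]:    # ... half spreads to neighbors
            if r[v] < eps * d[v]:
                queue.append(v)          # v may newly cross the threshold
            r[v] += push_mass * A[u, v] / d[u]
        if r[u] >= eps * d[u]:
            queue.append(u)
    return p                             # (near-)zero far from the seed

# Same toy graph as before; only nodes near the seed receive mass.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

print(np.round(approx_ppr_push(A, seed=0), 3))
\end{verbatim}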
For example, the update step of the push algorithm, as used in~\cite{andersen06local}, is a form of stochastic gradient descent~\cite{GM-unpub}, a method particularly well-suited for large-scale environments due to its connections with regularization and boosting~\cite{bottou-2010}; and the algorithm terminates after a small number of iterations when a truncated residual vector equals zero~\cite{GM-unpub}, in a manner similar to other truncated gradient methods~\cite{LLZ09}. Perhaps more immediately relevant to database theory and practice, as well as to implementing these ideas in large-scale statistical data analysis applications, is the observation that this operational and interactive approach to database algorithms is \emph{already} being adopted in practice. For example, in addition to empirical work that uses these methods to characterize the clustering and community structure of large networks~\cite{andersen06seed,LLDM08_communities_CONF,LLM10_communities_CONF}, the body of work that uses diffusion-based primitives in database environments includes an algorithm to estimate PageRank on graph streams~\cite{DGP08}, the approximation of PageRank on large-scale dynamically-evolving social networks~\cite{BCG10}, and a MapReduce algorithm for the approximation of Personalized PageRank vectors of all the nodes in a graph~\cite{BCX11}. \section{Discussion and conclusion} \label{sxn:conc} Before concluding, I would like to share a few more general thoughts on approximation algorithm theory, in light of the above discussion. As a precursor, I should point out the obvious fact that the modern theory of NP-completeness is an extremely useful theory. It is a theory, and so it is an imperfect guide to practice; but it is a useful theory in the sense that it provides a qualitative notion of fast computation, a robust guide as to when algorithms will or will not perform well, etc.\footnote{For readers familiar with Linear Programming and issues associated with the simplex algorithm versus the ellipsoid algorithm, it is probably worth viewing this example as the ``exception that proves the rule.''} The theory achieved this by considering computation \emph{per se}, as a one-step process that divorced the computation from the input and the output, except insofar as the computation depended on relatively-simple complexity measures like the size of the input. Thus, the success of the theory is due to the empirical facts that many natural problems of interest are solvable in low-degree polynomial time, that the tractability status of many of the ``hardest'' problems in NP is in some sense equivalent, and that neither of these facts depends on the input data or the posedness of the problem. I think it is also fair to say that, at least in a very wide range of MMDS applications, the modern theory of approximation algorithms is nowhere near as useful. The bounds the theory provides are often very weak; the theory often doesn't provide constants that are of interest in practice; the dependence of the bounds on various parameters is often not even qualitatively right; and in general it doesn't provide analogous qualitative insight as to when approximation algorithms will and will not be useful in practice for realistic noisy data.
One can speculate on the reasons---technically, the combinatorial gadgets used to establish approximability and nonapproximability results might not be sufficiently robust to the noise properties of the input data; many embedding methods, and thus their associated bounds, tend to emphasize the properties of ``far apart'' data points, while in most data applications ``nearby'' information is more reliable and more useful for downstream analysis; the geometry associated with matrices and spectral graph theory is much more structured than the geometry associated with general metric spaces; structural parameters like conductance and the isoperimetric constant are robust and meaningful, not brittle combinatorial constructions that encode pathologies; and ignoring posedness questions and viewing the analysis of approximate computation as a one-step process might simply be too coarse. The approach I have described involves going ``beyond worst-case analysis'' to address questions that lie at the heart of the disconnect between what I have called the algorithmic perspective and the statistical perspective on large-scale data analysis. At the heart of this disconnect is the concept of regularization, a notion that is almost entirely absent from computer science, but which is central to nearly every application domain that applies algorithms to noisy data. Both theoretical and empirical evidence demonstrates that approximate computation, in and of itself, can implicitly lead to statistical regularization: both approximation algorithms in theoretical computer science and the heuristic design decisions that practitioners must make in order to implement their algorithms in real systems often implicitly perform some sort of regularization. This suggests treating statistical modeling questions and computational considerations on a more equal footing, rather than viewing either one as very much secondary to the~other. The benefit of this perspective for database theory and the theory and practice of large-scale data analysis is that one can hope to achieve the bicriteria of algorithms that are scalable to very large-scale data sets and that also have well-understood inferential or predictive properties. Of course, this is not a panacea---some problems are simply hard; some data are simply too noisy; and running an approximation algorithm may implicitly be making assumptions that are manifestly violated by the data. All that being said, understanding and exploiting in a more principled manner the statistical properties that are implicit in scalable worst-case algorithms should be of interest in many very practical MMDS applications.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{section-intro} The Gel'fand inverse problem, formulated by I. Gel'fand \cite{G}, concerns finding the topology, differential structure and Riemannian metric of a compact manifold with boundary from the spectral data for the Neumann Laplacian on the boundary (or the boundary values of the Green's function). The uniqueness of the Gel'fand inverse problem was proved in \cite{BK}, see also \cite{AKKLT,Caday,KOP}, in the form of an inverse spectral problem: the geometry of a compact Riemannian manifold with boundary is uniquely determined by the boundary spectral data for the Neumann Laplacian, namely the Neumann eigenvalues and the boundary values of the corresponding eigenfunctions. On a given domain of the Euclidean space, the Gel'fand problem was reduced in \cite{NSU} to inverse coefficient problems for elliptic equations which were solved in \cite{Astala,Nachman1,Nachman2,SyUl}, see also \cite{DosSantos,Guillarmou,Isozaki,KenigSalo,KSU,U}, and the stability of the solutions of these problems has been studied in \cite{A,AlS,SyUl2}. The Gel'fand inverse problem is ill-posed in the sense of Hadamard, as one can make large changes to the geometry of the interior without affecting the boundary spectral data much. One approach to stabilizing the inverse problem is to study the conditional stability by assuming \emph{a priori} knowledge of the desired quantities, for instance higher regularity of coefficients \cite{A}, and higher regularity of Riemannian metrics if they are close to Euclidean \cite{SU}. For a general Riemannian manifold, it is natural to impose \emph{a priori} bounds on geometric parameters such as the diameter, injectivity radius and sectional curvature. An abstract continuity result for the problem was proved in \cite{AKKLT}, albeit with no stability estimates, and the related determination of the smooth structure was shown in \cite{FIKLN}. One could also consider the inverse interior problem, that is, an inverse problem on closed manifolds analogous to the Gel'fand problem. For the inverse interior problem where the eigenfunctions are measured in a ball, the unique solvability of the problem was proved in \cite{KrKL} and a quantitative stability estimate has recently been obtained in \cite{BKL3}. The main purpose of this paper is to obtain a quantitative stability estimate for the Gel'fand inverse problem on Riemannian manifolds with boundary. The key result used to establish the uniqueness of the Gel'fand inverse problem was the unique continuation \cite{T} for the wave operator from a subset of the boundary. Its stability is essential to the stability of the inverse problem. The stability of the unique continuation on Riemannian manifolds has been investigated independently in \cite{BKL2,BKL1} for closed manifolds, and in \cite{LL} without dependence on geometric parameters. In this paper, we prove an explicit stability estimate (Theorem \ref{main1} and Proposition \ref{wholedomain}) for the unique continuation from a subset of the boundary. In our estimates, the constants explicitly depend only on intrinsic geometric parameters. Equipped with these results, we are able to solve the Gel'fand inverse problem using an algorithm that behaves in a stable way when the spectral data are perturbed by small errors. We hope our results may have applications in medicine, especially to cancer treatment; more concretely, to the imaging necessary for radiation therapy (e.g. the navigation of cyber knives) and for ultrasound surgery, see e.g. \cite{W}.
In these treatments, many thin beams of X-rays or high amplitude ultrasound waves are concentrated in the cancerous tissue, and the planning of the treatment requires stable imaging methods. A significant potential application is focused ultrasound surgery \cite{Tempany}, where a cancerous tissue is destroyed by an excessive heat dose generated by focused ultrasound waves. The location where the ultrasound waves are focused is determined by the intrinsic Riemannian metric corresponding to the wave speed of acoustic waves, see \cite{Dahl,L}. An important observation is that the intensity of the impact decreases approximately quadratically as one moves away from the focusing spot, both for thin X-ray beams coming from different directions and for focused ultrasound waves. (Say, if the spot where the rays are focused is of size $1$ mm, then the intensity at $1$ cm is about $1/100$ of that in the spot.) This observation is of key importance for such treatments to work, yet it also explains the importance of extremely high precision in navigation. In particular, in an anisotropic medium where the inverse problem is not uniquely solvable in Euclidean coordinates, see \cite{Sylvester}, it is beneficial to do imaging in the same Riemannian structure that determines the wave propagation. The imaging of the Riemannian metric associated with the wave propagation is an inverse problem for the wave equation, which is equivalent, see \cite{KKLM}, to the Gel'fand inverse problem studied in this paper. Numerical methods to solve these problems have been studied in \cite{HoopOksanen1,HoopOksanen2}. \medskip Let $(M,g)$ be a compact, connected, orientable Riemannian manifold of dimension $n\geqslant 2$ with smooth boundary $\partial M$. We consider the manifold $M$ in the class $\mathcal{M}_n(D,K_1,K_2,i_0,r_0)$ of bounded geometry defined by the bounds on the diameter $\textrm{diam}(M)$, the injectivity radius $\textrm{inj}(M)$, the Riemannian curvature tensor $R_M$ of $M$, and the second fundamental form $S$ of the boundary $\partial M$ embedded in $M$: $$\textrm{diam}(M)\leqslant D, \quad \textrm{inj}(M)\geqslant i_0,$$ $$\|R_M\|_{C^0}\leqslant K_1^2,\quad \|S\|_{C^0}\leqslant K_1,$$ \begin{equation}\label{boundedgeometry} \sum_{i=1}^5 \|\nabla^i R_M\|_{C^0}\leqslant K_2,\quad \sum_{i=1}^4 \|\nabla^i S\|_{C^0}\leqslant K_2, \end{equation} where $\nabla^i$ denotes the $i$-th covariant derivative on $M$. The injectivity radius for a manifold with boundary is defined in Section \ref{subsection-bounded}. In addition, we impose a lower bound on the following quantity $r_{\textrm{CAT}}(M)$ (Definition \ref{Def-CATradius}): \begin{equation}\label{bound-CATradius} r_{\textrm{CAT}}(M)\geqslant r_0\, , \end{equation} where $r_{\textrm{CAT}}(M)$ is defined as the largest number $r$ such that any pair of points with distance less than $r$ is connected by a unique distance-minimizing geodesic (possibly touching the boundary) of $M$. This quantity is known to be positive for a compact Riemannian manifold with smooth boundary. For Riemannian manifolds without boundary, the condition (\ref{bound-CATradius}) is already incorporated in the lower bound for the injectivity radius. Denote by $\lambda_j$ ($j\geqslant 1$) the $j$-th eigenvalue of the (nonnegative) Laplace-Beltrami operator on $M$ with the Neumann boundary condition at $\partial M$, and by $\varphi_j$ a (smooth) eigenfunction corresponding to $\lambda_j$.
We know that $0=\lambda_1<\lambda_2\leqslant \cdots \leqslant \lambda_j\leqslant \lambda_{j+1}\leqslant \cdots$, and $\lambda_j\to +\infty$ as $j\to +\infty$. Assume the eigenfunctions are orthonormalized with respect to the $L^2$-norm of $M$. In particular, $\varphi_1=vol_n(M)^{-1/2}$. The \emph{Neumann boundary spectral data} of $M$ refers to the collection of data $\big(\partial M,g_{_{\partial M}}, \{\lambda_j,\varphi_j|_{\partial M}\}_{j=1}^{\infty}\big)$, which consists of the boundary $\partial M$ and its intrinsic metric $g_{_{\partial M}}$, the Neumann eigenvalues and the boundary values of a choice of orthonormalized Neumann eigenfunctions. \begin{deferror}\label{deferror} We say a collection of data $\big(\partial M,g_{_{\partial M}},\{\lambda_j^a,\varphi_j^a|_{\partial M}\}_{j=1}^{J}\big)$ is a $\delta$-approximation of the Neumann boundary spectral data of $(M,g)$ (in $C^2$) for some $\delta\geqslant J^{-1}$, if there exists a choice of Neumann boundary spectral data $\{\lambda_j,\varphi_j|_{\partial M}\}_{j=1}^{\infty}$ such that the following three conditions are satisfied for all $j\leqslant \delta^{-1}$: \begin{enumerate}[(1)] \item $\lambda_j^a\in [0,\infty)$, $\varphi_j^a |_{\partial M}\in C^2(\partial M)$; \item $\big|\sqrt{\lambda_j}-\sqrt{\lambda_j^a} \big|<\delta$; \item $\big\|\varphi_j - \varphi_j^a \big\|_{C^{0,1}(\partial M)}+ \big\|\nabla_{\partial M}^2 (\varphi_j- \varphi_j^a)|_{\partial M} \big\|_{C^0}< \delta$, where $\nabla_{\partial M}^2$ denotes the second covariant derivative with respect to the induced metric $g_{_{\partial M}}$ on $\partial M$. \end{enumerate} Let $M_1,M_2$ be two Riemannian manifolds with isometric boundaries, and let $\Phi:\partial M_1\to \partial M_2$ be the Riemannian isometry (diffeomorphism) between boundaries. We say the Neumann boundary spectral data of $M_1,M_2$ are $\delta$-close, if the pull-back via $\Phi$ of the Neumann boundary spectral data of $M_2$ (or $M_1$) is a $\delta$-approximation of the Neumann boundary spectral data of $M_1$ (or $M_2$). \end{deferror} Note that the definition above is coordinate-free. The second covariant derivative of a function is called the Hessian of the function, which is a symmetric (0,2)-tensor. In a local coordinate on $\partial M$, Definition \ref{deferror}(3) translates to $(\varphi_j- \varphi_j^a)|_{\partial M}$ having small $C^2$-norm. A similar definition in the $L^2$-norm appeared in \cite{BKL3}. If finite boundary spectral data $\{\lambda_j,\varphi_j|_{\partial M}\}_{j=1}^{J}$ are known without error, then this set of finite data is a $\delta$-approximation of the Neumann boundary spectral data with $\delta=J^{-1}$ by definition. If we are given a certain choice of Neumann boundary spectral data, then Definition \ref{deferror}(3) is equivalent to the existence of orthogonal matrices acting on eigenfunctions in eigenspaces, such that the condition is satisfied by the given spectral data after applying these matrices. \medskip The main purpose of this paper is to prove the following stability estimate for the reconstruction of a manifold from the Neumann boundary spectral data. \begin{stability}\label{stability} There exists $\delta_0=\delta_0(n,D,K_1,K_2,i_0,r_0) >0$ such that the following holds.
If we are given a $\delta$-approximation of the Neumann boundary spectral data of a Riemannian manifold (with boundary) $M\in \mathcal{M}_n(D,K_1,K_2,i_0,r_0)$ for $\delta<\delta_0$, then we can construct a finite metric space $X$ directly from the given boundary data such that $$d_{GH}(M,X)< C_1\Big( \log\big(|\log\delta|\big) \Big)^{-C_2},$$ where $d_{GH}$ denotes the Gromov-Hausdorff distance between metric spaces. The constant $C_1$ depends only on $n,D,K_1,K_2,i_0,r_0$, and the constant $C_2$ explicitly depends only on $n$. \end{stability} Theorem \ref{stability} implies the stability of the Gel'fand inverse problem. \begin{Cor1}\label{Cor1} There exists $\delta_0=\delta_0(n,D,K_1,K_2,i_0,r_0)>0$ such that the following holds. Suppose two Riemannian manifolds $M_1,M_2\in \mathcal{M}_n(D,K_1,K_2,i_0,r_0)$ have isometric boundaries and their Neumann boundary spectral data are $\delta$-close for $\delta<\delta_0$. Then $M_1$ is diffeomorphic to $M_2$, and $$d_{GH}(M_1,M_2)< C_1 \Big( \log\big(|\log\delta|\big) \Big)^{-C_2}.$$ \end{Cor1} \begin{remark0} The dependence of $C_1$ and $\delta_0$ on the geometric parameters is not explicit. An explicit estimate with dependence additionally on $vol_n(M),vol_{n-1}(\partial M)$ can be obtained, but this process results in a third logarithm. More details can be found in Appendix \ref{constants}. If explicitness of the constants is not of interest, the bounds (\ref{boundedgeometry}) we assumed on the Riemannian curvature tensor and the second fundamental form can be relaxed to bounds on the Ricci curvatures of $M,\partial M$ and the mean curvature of $\partial M$, due to Corollary 2 in \cite{KKL2}. \end{remark0} We do not know if the $\log$-$\log$ type of estimate above is optimal. For an analogous inverse interior problem on a closed manifold where spectral data are measured in a ball, a stability estimate of the $\log$-$\log$ type is known (Theorem 1 in \cite{BKL3}). On the other hand, the counterexample in \cite{M} shows that the stability estimate cannot be better than logarithmic. \smallskip The main obstacle to proving Theorem \ref{stability} is to obtain a uniform stability estimate for the unique continuation in a class of Riemannian manifolds with bounded geometry, and without loss of domain in the domain of dependence. Equipped with such an estimate (Theorem \ref{main1}), we can adopt the approach introduced in \cite{KKL0} (also used in \cite{BKL3}) to obtain a stability estimate for the Gel'fand inverse problem. Namely, we apply a quantitative version of the boundary control method to evaluate an approximate volume for the domain of dependence. The error of the approximate volume can be made arbitrarily small as long as sufficient boundary spectral data are known. Then we define approximations to the boundary distance functions through slicing procedures, from which the manifold can be reconstructed (\cite{KKL2}). The method we use to prove Theorem \ref{main1} may be of independent interest. Essentially, the theorem is proved by propagating local stability estimates to obtain a global estimate. However, the presence of manifold boundaries brings considerable difficulties in defining this process, especially when the path of propagation touches the boundary. One straightforward approach would be to avoid the boundary. Namely, one can approximate a geodesic touching the boundary with a curve in the interior of the manifold, and propagate local estimates through balls along this curve.
This approach works well if $T$ is larger than the diameter of the manifold, in which case the domain of dependence is the whole manifold and is hence smooth. However, difficulties arise for an arbitrary $T$, where the domain of dependence has corners. An estimate obtained with this approach may not be uniform in a class of manifolds. Our method directly defines a series of non-characteristic domains through which local estimates are propagated, using the intrinsic distance of the manifold and the distance to the boundary. This is made possible by directly handling geodesics near the boundary. These domains are globally defined in a coordinate-free way. The boundaries of these domains normally have the shape of a hyperboloid and warp quickly near the boundary (and the injectivity radius). In this way, the local estimates propagate (almost) along distance-minimizing geodesics, and naturally produce a uniform global estimate depending only on intrinsic geometric parameters. \smallskip This paper is organized as follows. We review relevant concepts and the unique continuation in Section \ref{pre}. Section \ref{section-uc} is devoted to obtaining an explicit stability estimate (Theorem \ref{main1} and Proposition \ref{wholedomain}) for the unique continuation from a subset of the boundary. Section \ref{section-uc} uses several independent lemmas, whose proofs can be found in Section \ref{auxiliary}. In Section \ref{section-projection}, we apply Theorem \ref{main1} to introduce the essential step of our reconstruction method, where we compute, in a stable way, how the Fourier coefficients of a function (with respect to the basis of eigenfunctions) change when the function is multiplied by an indicator function of a union of balls with center points on the boundary. The new feature of this method is that it is directly based on the unique continuation theorem. The main results Theorem \ref{stability} and Theorem \ref{Cor1} are proved in Section \ref{section-appro}, with the dependency of constants on geometric parameters derived in Appendix \ref{constants}. \subsection*{Acknowledgement} Yaroslav Kurylev worked with us until the middle of this project and quit when he could not work anymore, just a couple of months before he passed away. We cannot formally include him as a co-author, but we still consider him to be one of the authors of this paper. We would like to thank A. Petrunin for helpful discussions. D. Burago was partially supported by NSF grant DMS-1205597. S. Ivanov was partially supported by RFBR grant 20-01-00070. M. Lassas and J. Lu were partially supported by Academy of Finland, grants 273979, 284715, 312110, and Finnish Centre of Excellence in Inverse Modelling and Imaging. \section{Preliminaries}\label{pre} \subsection{Bounded geometry} \label{subsection-bounded} Let $(M,g)\in \mathcal{M}_n(D,K_1,K_2,i_0,r_0)$ be a compact, connected, orientable Riemannian manifold of dimension $n\geqslant 2$ with smooth boundary $\partial M$. The $C^0$-norm of the Riemannian curvature tensor $R_M$ appearing in (\ref{boundedgeometry}) is defined as $$\|R_M\|_{C^0}=\sup_{x\in M}\big|R_M|_x \big|\, ,$$ where $\big|R_M|_x\big|$ denotes the operator norm of $R_M$ at $x\in M$ as a multi-linear operator to $\mathbb{R}$. The $C^0$-norms of $S$ and the covariant derivatives are defined in the same way. In this paper, we usually omit the subscript $C^0$ for brevity.
Since the Riemannian curvature tensor is completely determined by the sectional curvatures, assuming a bound on the curvature tensor is equivalent to assuming a bound on sectional curvatures. By the Gauss equation, the bounds on the curvature tensor of $M$ and the second fundamental form of $\partial M$ yield a bound on the curvature tensor $R_{\partial M}$ of $\partial M$ (when $\partial M$ is at least two-dimensional), which we also denote by $K_1^2$. Without loss of generality, assume $K_1,K_2>0$. From now on, we denote $\|A \|=\|A\|_{C^0}$ for a tensor field $A$ on $M$. For convenience, we denote $$\|R_M\|_{C^k}=\|R_M\|+\sum_{i=1}^k \|\nabla^i R_M\|,\quad \|S\|_{C^k}=\|S\|+\sum_{i=1}^k \|\nabla^i S\|.$$ Then the curvature bound assumptions in (\ref{boundedgeometry}) are written as $$\|R_M\|\leqslant K_1^2,\quad \|S\|\leqslant K_1,\quad \|R_{\partial M}\|\leqslant K_1^2,$$ $$\|R_M\|_{C^5}\leqslant K_1^2+K_2,\quad \|S\|_{C^4}\leqslant K_1+K_2,\quad \|R_{\partial M}\|_{C^4}\leqslant C(K_1,K_2).$$ The boundary $\partial M$ is said to admit a boundary normal neighborhood of width $r$ if the exponential map $(z,s)\mapsto \exp_z(s\textbf{n}_z)$ defines a homeomorphism from $\partial M\times [0,r]$ to the $r$-neighborhood of $\partial M$, where $\textbf{n}_z$ denotes the inward-pointing unit normal vector at $z\in\partial M$ (see e.g. Section 2.1.16 in \cite{KKL}). The \emph{boundary injectivity radius} $i_b(M)$ of $M$ is defined as the largest number such that $\partial M$ admits a boundary normal neighborhood of width $r$ for any $r<i_b(M)$. The injectivity radius $\textrm{inj}(M)$ of $M$ is usually defined as the largest number $r\leqslant \min\{\textrm{inj}(\partial M),i_b(M)\}$ satisfying the following condition: the open ball $B_{r}(x)$ of radius $r$ is a domain of Riemannian normal coordinates on $M$ centered at any $x\in M$ with $d(x,\partial M)\geqslant r$. This definition of the injectivity radius for a manifold with boundary gives little information on the geometry near the boundary. We find it convenient to consider the following quantity. \begin{Def-CATradius}\label{Def-CATradius} For $x\in M$, $r_{\textrm{CAT}}(x)$ is defined to be the largest number $r$ such that the (distance-)minimizing geodesic of $M$ connecting $x$ and any $y\in B_r(x)$ is unique. Define $$r_{\textrm{CAT}}(M)=\inf_{x\in M} r_{\textrm{CAT}}(x).$$ We call this quantity the radius of radial uniqueness (or CAT radius). \end{Def-CATradius} The radius of radial uniqueness is positive for a compact Riemannian manifold with smooth boundary (Lemma \ref{CATradius}(1)). This definition is a natural extension of the injectivity radius for manifolds without boundary. More precisely, for a Riemannian manifold without boundary, $\min\{\pi/\sqrt{K},r_{\textrm{CAT}}\}$ gives a lower bound for the injectivity radius, where $K$ is the upper bound for the sectional curvatures. The radius of radial uniqueness has an immediate connection with metric spaces of curvature bounded above in the sense of Alexandrov. A metric space has curvature bounded above (globally) by $K>0$ if every minimizing geodesic triangle in the space has perimeter less than $2\pi/\sqrt{K}$, and has each of its angles at most equal to the corresponding angle in a comparison triangle with the same side-lengths in the surface of constant curvature $K$. This space is denoted by CAT$(K)$.
A CAT$(K)$ space has the property that any pair of points with distance less than $\pi/\sqrt{K}$ is connected by a unique (within the space) minimizing geodesic, and the geodesic continuously depends on its endpoints. It is well-known that a Riemannian manifold $M$ with smooth boundary is locally CAT$(K)$, where $K$ is the upper bound for the sectional curvatures of $M$ and the second fundamental form of $\partial M$ (the Characterization Theorem in \cite{ABB2}). In fact, more is known: the open ball around any point in $M$ of radius $\min\{\pi/(2\sqrt{K}),r_{\textrm{CAT}}(M)\}$ is CAT$(K)$ (Theorem 4.3 in \cite{AB}). This is where the notation $r_{\textrm{CAT}}$ comes from. CAT spaces provide useful non-differential tools for working with manifold boundaries, where the standard differential machinery is often problematic. \subsection{Wave operator and the unique continuation} The Laplace-Beltrami operator $\Delta_g$ with respect to the metric $g$ has the following form in local coordinates $(x^1,\cdots,x^n)$: \begin{equation}\label{Laplacian} \Delta_g=\frac{1}{\sqrt{\det(g_{ij})}} \sum_{i,j=1}^n \frac{\partial}{\partial x^i} \Big(\sqrt{\det(g_{ij})} g^{ij}\frac{\partial}{\partial x^j}\Big). \end{equation} Then the wave operator $P=\partial_t^2-\Delta_g$ has the following form in local coordinates: \begin{eqnarray}\label{Pdef} P&=& \frac{\partial^2}{\partial t^2}-\frac{1}{\sqrt{\det(g_{ij})}} \sum_{i,j=1}^n \frac{\partial}{\partial x^i} \Big(\sqrt{\det(g_{ij})} g^{ij}\frac{\partial}{\partial x^j}\Big) \\ &=&\frac{\partial^2}{\partial t^2}-\sum_{i,j=1}^n g^{ij}\frac{\partial^2}{\partial x^i \partial x^j} + \textrm{lower order terms}. \nonumber \end{eqnarray} The Riemannian metric $g$ approximates the standard Euclidean metric at small scales. In sufficiently small coordinate charts, the Laplace-Beltrami operator is a strongly elliptic operator given by the formula (\ref{Laplacian}). However, the wave operator of the form above is only locally defined on manifolds, in contrast with the wave operator on Euclidean space, whose coefficients are globally defined. In the boundary normal neighborhood of $\partial M$, it is convenient to use the boundary normal coordinates $(x^1,\cdots,x^{n-1},x^n)$, where $(x^1,\cdots,x^{n-1})$ is a choice of coordinates at the nearest point on $\partial M$ and $x^n=d(x,\partial M)$. In other words, the coordinate $(x^1,\cdots,x^{n-1},d(x,\partial M))$ is defined by pushing forward the local coordinate $(x^1,\cdots,x^{n-1})$ on $\partial M$ via the family of exponential maps $z\mapsto\exp_z(s\textbf{n}_z)$ from the boundary in the normal direction. Note that the choice of coordinates on $\partial M$ is fixed. Hence by the Gauss lemma, the metric $g$ has the form of a product metric in such coordinates: $$g=(d x^n)^2+\sum_{\alpha,\beta=1}^{n-1}g_{\alpha\beta}dx^{\alpha}dx^{\beta}.$$ On the boundary $\partial M$, two frequent choices of coordinates are the geodesic normal coordinates and the harmonic coordinates. In this paper, we use the geodesic normal coordinates of $\partial M$. Namely, at any point on $\partial M$, we have a geodesic normal coordinate $(x^{\alpha})_{\alpha=1}^{n-1}$ in the ball (of $\partial M$) of a sufficiently small radius, such that \begin{equation}\label{coorb} \frac{1}{2}|\xi|^2\leqslant \sum_{\alpha,\beta=1}^{n-1} g^{\alpha\beta}\xi_{\alpha}\xi_{\beta} \leqslant 2|\xi|^2\; (\xi\in\mathbb{R}^{n-1}),\quad \|g_{\alpha\beta}\|_{C^{1}}\leqslant 2,\quad \|g_{\alpha\beta}\|_{C^{4}}\leqslant C(n,K_1,K_2,i_0).
\end{equation} It is known that the radius of the ball in which the conditions above are satisfied is uniformly bounded below by a positive number explicitly depending on $n, \|R_{\partial M}\|_{C^1},i_0$ (Lemma 8 in \cite{HV} and Theorem A in \cite{E}). We denote this uniform radius by $r_g(\partial M)$. \smallskip We recall that for an open subset $\Gamma$ of the boundary and $T>0$, the \emph{domain of influence} of the subset $\Gamma$ at a time $t\in [0,T]$ is defined by \begin{equation}\label{def-Mt} M(\Gamma,t)=\big\{x\in M: d(x,\Gamma)<t \big\}, \end{equation} where $d$ denotes the intrinsic distance function of $M$. The \emph{double cone of influence} of $\Gamma\times [-T,T]$ is defined by \begin{equation}\label{def-Kcone} K(\Gamma,T)=\big\{(x,t)\in M\times [-T,T] : d(x,\Gamma)< T-|t| \big\}. \end{equation} The wave operator $P$ enjoys the unique continuation property from the boundary, namely if the Cauchy boundary data of a wave $u$ (a solution of the wave equation $Pu=0$) vanish on $\Gamma\times [-T,T]$, i.e. $$u|_{\Gamma\times [-T,T]}=0,\quad \frac{\partial u}{\partial \textbf{n}} \big|_{\Gamma \times [-T,T]}=0,$$ then the wave vanishes in the double cone of influence $K(\Gamma,T)$ (\cite{T} or e.g. Theorem 3.16 in \cite{KKL}). Here $\textbf{n}$ denotes the unit normal vector field on $\partial M$ pointing inwards. We are interested in its stability: when the Cauchy boundary data are small on $\Gamma\times [-T,T]$, we ask whether the wave is small in the double cone. The following global stability result on Tataru's unique continuation principle (\cite{T}) was proved in \cite{BKL2}, from which the stability of the unique continuation from a ball on a closed Riemannian manifold can be obtained (Theorem 3.3 in \cite{BKL2}). \begin{global1}\label{global}(Theorem 1.2 in \cite{BKL2}) Let $\Omega_{bd}$ be a bounded connected open subset of $\mathbb{R}^n\times \mathbb{R}$ and $P$ be the wave operator (\ref{Pdef}). Assume $u\in H^1(\Omega_{bd})$ and $Pu\in L^2(\Omega_{bd})$. In $\Omega_{bd}$, we assume the existence of a finite number of connected open subsets $\Omega_{j}^0$ and $\Omega_{j}$, $j=1,2,\dots,J$, a connected set $\Upsilon$, and functions $\psi_j$ satisfying the following assumptions. \begin{enumerate}[(1)] \item $\psi_j\in C^{2,1}(\Omega_{bd})$; $p(\cdot,\nabla \psi_{j})\neq 0$ and $\nabla \psi_j\neq 0$ in $\Omega_{j}^0$, where $p$ denotes the principal symbol of the wave operator $P$. \item ${\rm supp}(u)\cap \Upsilon=\emptyset$; there exists $\psi_{max,j}\in\mathbb{R}$ such that $\emptyset\neq\{y\in \Omega_{j}^0: \psi_j(y)> \psi_{max,j}\}\subset \overline{\Upsilon}_j$, where $\Upsilon_j=\Omega_{j}^0\cap(\cup_{l=1}^{j-1}\Omega_l\cup \Upsilon)$. \item $\Omega_j=\{y\in \Omega_{j}^0-\overline{\Upsilon}_j: \psi_j(y) > \psi_{min,j}\}$ for some $\psi_{min,j}\in\mathbb{R}$, and $dist(\partial\Omega_{j}^0,\Omega_j)>0$. \item $\overline{\Omega}$ is connected, where $\Omega=\cup_{j=1}^J \Omega_j$. \end{enumerate} Then the following estimate holds for $\Omega$ and $\Omega^0=\cup_{j=1}^J\Omega_{j}^0$: $$\|u\|_{L^2(\overline{\Omega})}\leqslant C \frac{\|u\|_{H^1(\Omega^0)}}{\Big(\log\big(1+\frac{\|u\|_{H^1(\Omega^0)}}{\|Pu\|_{L^2(\Omega^0)}}\big)\Big)^{\theta}}\, ,$$ where $\theta\in(0,1)$ is arbitrary, and the constant $C$ explicitly depends on $\theta$, $\psi_j$, $ dist(\partial\Omega_{j}^0,\Omega_j)$, $\|g^{ij}\|_{C^1}$, $vol_{n+1}(\Omega_{bd})$.
\end{global1} The intuition behind this result is propagating the unique continuation step by step to cover a large domain, as long as the error introduced in each step is small. The set $\Upsilon$ is the initial domain where the function $u$ vanishes, and $\Omega_j$ is the domain propagated by the unique continuation at the $j$-th step. The estimate is obtained by propagating local stability estimates, and the assumptions make sure that certain support conditions (Assumption A1 in \cite{BKL1}) required by the local stability estimates are satisfied at every step. For some simple cases, one choice of the domains and functions is enough, for example if the function $u$ initially vanishes over a ball in $\mathbb{R}^{n}$. However, these assumptions are rather restrictive for general cases, and multiple iterations of the domains and functions need to be carefully constructed to handle the difficulties brought by the geometry of the boundary and the injectivity radius. Note that the constant in the estimate depends on higher derivatives of $\psi_j$ in $\Omega_{j}^0$. It is crucial to construct the required domains where $\psi_j$ has uniformly bounded higher derivatives. Although Theorem \ref{global} is formulated in Euclidean spaces, it applies to manifolds since it is obtained by propagating local stability estimates, which can be done in local coordinate charts. \subsection{Notations}\label{subsection-notations} We introduce several notations that we frequently use in this paper. Denote by $vol_k$ the $k$-dimensional Hausdorff measure on $M$. When the Hausdorff dimension of a set in question is clear, we omit the subscript $k$. In particular, we denote by $vol(M)$ the Riemannian volume of $M$, and by $vol(\partial M)$ the Riemannian volume of $\partial M$ with respect to the induced metric on $\partial M$. Given an open subset $\Gamma\subset\partial M$, we define the following domain with a positive parameter $h<1$ by \begin{equation}\label{Omegaht} \Omega_{\Gamma,T}(h)=\big\{(x,t)\in M\times [-T,T]: T-|t|-d(x,\Gamma) >\sqrt{h},\; d(x,\partial M-\Gamma)>h \big\}, \end{equation} and we write $\Omega(h)$ for short. Note that $\Omega(h)$ is a subset of the double cone of influence $K(\Gamma,T)$, and $\Omega(h)$ approximates $K(\Gamma,T)$ as $h\to 0$. If $\Gamma=\partial M$, the set above is defined with the last condition dropped. In this paper, our consideration always includes the possibility that $\Gamma=\partial M$. For the sole purpose of incorporating this special case notation-wise in later proofs, we set any distance from the empty set to be infinity. Given a function $u:\partial M\times [-T,T]\to \mathbb{R}$ and an open subset $\Gamma\subset \partial M$, we define the following norm \begin{equation}\label{H21} \|u\|_{H^{2,2}(\Gamma \times [-T,T])}^2=\int_{-T}^T \big(\|u(\cdot,t)\|_{H^2(\Gamma)}^2 +\|\partial_t u(\cdot,t)\|_{L^2(\Gamma)}^2+\|\partial_t^2 u(\cdot,t)\|_{L^2(\Gamma)}^2\big)\, dt, \end{equation} if $u(\cdot,t)\in H^2(\Gamma)$ and $\partial_t u(\cdot,t),\,\partial_t^2 u(\cdot,t) \in L^2(\Gamma)$ for all $|t|\leqslant T$. We say $u\in H^{2,2}(\Gamma\times[-T,T])$ if the norm above is finite, and we call it the $H^{2,2}$-norm. \section{Stability of the unique continuation}\label{section-uc} In this section, we obtain an explicit estimate on the stability of the unique continuation for the wave operator, provided small Cauchy data on a connected open subset of the manifold boundary. First we state this result as follows. 
\begin{main1}\label{main1} Let $M\in \mathcal{M}_n(D,K_1,K_2,i_0,r_0)$ be a compact, orientable Riemannian manifold with smooth boundary $\partial M$, and let $\Gamma$ (possibly $\Gamma=\partial M$) be a connected open subset of $\partial M$ with smooth embedded boundary. Denote by $i_b(\overline{\Gamma})$ the boundary injectivity radius of $\overline{\Gamma}$. Then there exist a constant $C_3>0$ that explicitly depends on $n,T,D,K_1,\|\nabla R_M\|_{C^0},\|\nabla S\|_{C^0},i_0,r_0,vol_n(M),vol_{n-1}(\Gamma)$, an absolute constant $C_4>0$, and a sufficiently small constant $h_0>0$ that explicitly depends on $n,T,K_1,K_2,i_0,r_0,i_b(\overline{\Gamma}),vol_{n-1}(\partial M)$, such that the following holds. Suppose $u\in H^2(M\times[-T,T])$ is a solution of the non-homogeneous wave equation $Pu=f$ with $f\in L^2(M\times [-T,T])$. Assume the Cauchy data satisfy \begin{eqnarray}\label{smoothness of C-data} u|_{\partial M\times [-T,T]}\in H^{2,2}(\partial M \times [-T,T]),\quad \frac{\partial u}{\partial \mathbf{n}} \in H^{2,2}(\partial M \times [-T,T]). \end{eqnarray} If \begin{eqnarray}\label{quantitative smoothness} \|u\|_{H^1(M\times[-T,T])}\leqslant \Lambda_0,\quad \|u\|_{H^{2,2}(\Gamma\times [-T,T])}+\big\|\frac{\partial u}{\partial \mathbf{n}}\big\|_{H^{2,2}(\Gamma\times [-T,T])}\leqslant \varepsilon_0, \end{eqnarray} then for $0<h<h_0$, we have $$\|u\|_{L^2(\Omega(h))}\leqslant C_3 \exp(h^{-C_4 n})\frac{\Lambda_0+h^{-\frac{1}{2}}\varepsilon_0}{\bigg(\log \big(1+\frac{\Lambda_0+h^{-\frac{1}{2}}\varepsilon_0}{\|Pu\|_{L^2(M\times[-T,T])}+h^{-\frac{3}{2}}\varepsilon_0}\big)\bigg) ^{\frac{1}{2}}}\, .$$ The domain $\Omega(h)$ and the $H^{2,2}$-norm are defined in Section \ref{subsection-notations}. As a consequence, the following estimate holds for any $\theta\in (0,1)$ by interpolation: $$\|u\|_{H^{1-\theta}(\Omega(h))}\leqslant C_3^{\theta}\exp(h^{-C_4 n})\frac{\Lambda_0+h^{-\frac{1}{2}}\varepsilon_0}{\bigg(\log \big(1+\frac{\Lambda_0+h^{-\frac{1}{2}}\varepsilon_0}{\|Pu\|_{L^2(M\times[-T,T])}+h^{-\frac{3}{2}}\varepsilon_0}\big)\bigg) ^{\frac{\theta}{2}}}\, .$$ \end{main1} \begin{remark} In Theorem \ref{main1}, the different smoothness indices of the Sobolev spaces in the qualitative smoothness assumption $u\in H^2(M\times [-T,T])$ and in the quantitative bounds for the Sobolev norms \eqref{quantitative smoothness} are related to the smooth extension of the weak solution of the wave equation to a boundary layer. We note that the non-uniform smoothness assumptions are typical, and sometimes also optimal, for the weak solutions of the wave equation with the Neumann boundary condition, see \cite{LT}. We also note that in Theorem \ref{main1}, the assumption $u\in H^2(M\times [-T,T])$ can be relaxed to the assumption that $u$ is a weak solution of the wave equation $Pu=f$ with the Neumann boundary condition, where $f\in L^2(M \times [-T,T])$, and $u$ and its Neumann boundary value $\partial_{\bf n}u|_{\partial M \times [-T,T]}$ satisfy \begin{equation*} u \in C([-T,T];H^1(M))\cap C^1([-T,T];L^2(M)),\quad \partial_{\bf n}u|_{\partial M \times [-T,T]}\in L^2(\partial M \times [-T,T]). \end{equation*} Then, by \cite[Thm.\ A]{LT}, the Dirichlet boundary value is a well-defined function $u|_{\partial M \times [-T,T]}\in L^2(\partial M \times [-T,T])$. In this case, \eqref{smoothness of C-data} can be viewed as an additional smoothness requirement for the Dirichlet and the Neumann boundary values of $u$.
This relaxation of the smoothness assumptions only affects the last part of the proof of Lemma \ref{extension}, and this lemma can be proved via the weak version of Green's formula. \end{remark} Our method can also be used to derive a stability estimate for the unique continuation from any open domain in the interior of $M$, as long as the boundary of the domain is smoothly embedded in $M$. In this way, a stability estimate can be obtained on domains arbitrarily close to the double cone of influence from the interior domain in question, which provides a generalization of Theorem 3.3 in \cite{BKL2}. We remark that as the domain approaches the double cone of influence, the estimate above grows exponentially. This $\exp$-dependence and the $\log$-type of the estimate itself eventually lead to the two logarithms in Theorem \ref{stability}. We also mention Proposition \ref{area}, which may be of independent interest: it provides an explicit uniform bound for the Hausdorff measure of the boundary of the domain of influence. Most of this section is occupied by the proof of Theorem \ref{main1}. First we properly extend the manifold, the wave operator $P$ and the wave $u$, so that $Pu$ stays small on the manifold extension over $\Gamma$, given sufficiently small Cauchy data on $\Gamma$. The extension of $u$ is cut off near the boundary in the manifold extension, from which we start propagating the unique continuation. Then we carefully construct a series of domains satisfying the assumptions in Theorem \ref{global}, such that the union of these domains approximates the double cone of influence. Thus Theorem \ref{global} gives a stability estimate on domains arbitrarily close to the double cone of influence. The main difficulty lies in actually finding that series of domains satisfying the properties stated above, as the assumptions in Theorem \ref{global} (essentially assumptions for local estimates) are rather restrictive for a general manifold with boundary. This requires us to directly deal with the intrinsic distance and (distance-minimizing) geodesics of the manifold. In this section, we use several independent lemmas, whose proofs can be found in Section \ref{auxiliary}. \smallskip Theorem \ref{main1} yields the following stable continuation result on the whole domain of influence $M(\Gamma,T)$. \begin{wholedomain}\label{wholedomain} Let $M\in \mathcal{M}_n(D,K_1,K_2,i_0,r_0)$ be a compact Riemannian manifold with smooth boundary $\partial M$, and let $\Gamma$ (possibly $\Gamma=\partial M$) be a connected open subset of $\partial M$ with smooth embedded boundary. Suppose $u\in H^2(M\times[-T,T])$ is a solution of the wave equation $Pu(x,t)=0$ with the Neumann boundary condition $\partial_{\bf n}u|_{\partial M \times [-T,T]}=0$ and the initial condition $\partial_t u(\cdot,0)=0$.
Assume the Dirichlet boundary value of $u$ satisfies $$u|_{\partial M\times [-T,T]}\in H^{2,2}(\partial M \times [-T,T]).$$ If $$\|u(\cdot,0)\|_{H^1(M)}\leqslant \Lambda,\quad \|u\|_{H^{2,2}(\Gamma\times [-T,T])}\leqslant \varepsilon_0,$$ then for $0<h<h_0$, the following estimate holds: $$\|u(\,\cdotp,0)\|_{L^2(M(\Gamma,T))} \leqslant C_3^{\frac{1}{3}}h^{-\frac{2}{9}}\exp(h^{-C_4 n}) \frac{\Lambda+h^{-\frac{1}{2}}\varepsilon_0}{\big(\log (1+h+h^{\frac{3}{2}}\frac{\Lambda}{\varepsilon_0})\big) ^{\frac{1}{6}}}+C_5\Lambda h^{\frac{1}{3\max{\{n,3\}}}}.$$ Here $C_3$ explicitly depends on $n,T,D,\|R_M\|_{C^1},\|S\|_{C^1},i_0,r_0,vol(M),vol_{n-1}(\Gamma)$; $C_4$ is an absolute constant; $C_5$ explicitly depends on $n,\|R_M\|_{C^1},\|S\|_{C^1},i_0, vol(M),vol(\partial M)$; $h_0>0$ is a sufficiently small constant explicitly depending on $n,T,K_1,K_2,i_0,r_0,i_b(\overline{\Gamma}),vol(\partial M)$. \end{wholedomain} We postpone the proof of Proposition \ref{wholedomain} until after the proof of Theorem \ref{main1}. \smallskip \subsection{Extension of manifolds} \label{subsection-extension} \hfill \medskip Let $(M,g)\in \mathcal{M}_n(D,K_1,K_2,i_0,r_0)$ be a compact, orientable Riemannian manifold with bounded geometry defined in Section \ref{section-intro}. \begin{extensionmetric}\label{extensionmetric} For sufficiently small $\delta_{ex}$ explicitly depending on $n,K_1,K_2,i_0,vol(\partial M)$, we can extend $(M,g)$ to a Riemannian manifold $(\widetilde{M},\widetilde{g})$ with smooth boundary such that the following properties are satisfied. \begin{enumerate}[(1)] \item $\widetilde{M}-M$ lies in a normal neighborhood of $\partial M$ in $\widetilde{M}$, and $\widetilde{d}(x,\partial M)=\delta_{ex}$ for any $x\in \partial \widetilde{M}$, where $\widetilde{d}$ denotes the distance function of $\widetilde{M}$. \item $\widetilde{g}$ is of class $C^{3,1}$ in some atlas on $\widetilde{M}$, in which $$\|\widetilde{g}_{ij}|_{\widetilde{M}-M}\|_{C^1}\leqslant C(K_1),\quad \|\widetilde{g}_{ij}|_{\widetilde{M}-M}\|_{C^4}\leqslant C(n,K_1,K_2,i_0).$$ \item $\|R_{\widetilde{M}}\|\leqslant 2K_1^2$, $\|S_{\partial \widetilde{M}}\|\leqslant 2K_1$ and $\|\nabla R_{\widetilde{M}}\|\leqslant 2K_2$, where $S_{\partial \widetilde{M}}$ denotes the second fundamental form of $\partial \widetilde{M}$ in $\widetilde{M}$. \end{enumerate} As a consequence, we have\\ (4) \,$r_{\textrm{CAT}}(\widetilde{M})\geqslant \min \big\{C(K_1),i_0/4,r_0/2 \big\}$. \end{extensionmetric} \begin{proof} We glue a collar $\partial M \times [-\delta_{ex},0]$ for $0<\delta_{ex}<\min\{1,i_0/2\}$ onto $M$ by identifying $\partial M\times \{0\}$ of the collar with $\partial M$. Denote the topological space after the gluing procedure by $\widetilde{M}$. Any $(y,\rho)\in \partial M \times [-\delta_{ex},0]$ admits coordinate charts by extending boundary normal coordinate charts at $(y,-\rho)\in M$. The transition maps are clearly smooth and therefore $\widetilde{M}$ is a smooth manifold. Let $\{y_i\}$ be a maximal $r_g(\partial M)/2$-separated set (and hence an $r_g(\partial M)/2$-net) in $\partial M$. Let $U_i$ be the ball of radius $r_g(\partial M)$ in $\partial M$ around $y_i$, so that $\{U_i\}$ is an open cover of $\partial M$.
We take a partition of unity $\{\phi_i\}$ subordinate to $\{U_i\}$ satisfying $$\|\phi_i\|_{C^s}\leqslant C\,r_g(\partial M)^{-s},\textrm{ for }s\in [1,4].$$ Then $\{\widetilde{U}_i:=U_i\times [-\delta_{ex},0]\}$ is an open cover of the collar $\partial M \times [-\delta_{ex},0]$, and $\{\widetilde{\phi}_i\}$ is a partition of unity subordinate to this cover satisfying the same bound on the $C^s$-norm, where $\widetilde{\phi}_i$ is defined by $\widetilde{\phi}_i(y,\rho)=\phi_i(y)$ for $(y,\rho)\in \partial M \times [-\delta_{ex},0]$. We choose the geodesic normal coordinate $(y^{\alpha})_{\alpha=1}^{n-1}$ on each $U_i$ such that (\ref{coorb}) holds. Within each coordinate chart $\widetilde{U}_i$, we define the metric components at $(y,\rho)\in \widetilde{U}_i$ as follows: $\widetilde{g}^{(i)}_{\rho\rho}=1$, $\widetilde{g}^{(i)}_{\alpha \rho}=0$ for $\alpha=1,\cdots,n-1$, and, for $\alpha,\beta=1,\cdots,n-1$, \begin{equation*} \widetilde{g}^{(i)}_{\alpha\beta}(y,\rho)=g^{(i)}_{\alpha\beta}(y,0)+\rho \frac{\partial g^{(i)}_{\alpha\beta}}{\partial \rho} (y,0)+\frac{\rho^2}{2} \frac{\partial^2 g^{(i)}_{\alpha\beta}}{\partial \rho^2} (y,0)+ \frac{\rho^3}{6} \frac{\partial^3 g^{(i)}_{\alpha\beta}}{\partial \rho^3} (y,0), \;\textrm{ for }\rho\leqslant 0. \end{equation*} Then one can define a Riemannian metric $\widetilde{g}$ on $\partial M\times [-\delta_{ex},0]$ through the partition of unity: \begin{equation}\label{metricpartition} \widetilde{g}|_{(y,\rho)}=\sum_i \widetilde{\phi}_i(y,\rho) g^{(i)}|_{(y,\rho)}=\sum_i \phi_i(y) g^{(i)}|_{(y,\rho)},\;\textrm{ for }\rho\leqslant 0. \end{equation} At points $(y,\rho)\in M$ with $\rho\geqslant 0$, written in the boundary normal coordinates of $\partial M$ in $M$, define $\widetilde{g}=g$. Due to the Riccati equation (e.g. Theorem 2 in \cite{PP}, p44), the derivatives of $g^{(i)}_{\alpha\beta}$ with respect to $\rho$ at $\rho=0$ up to the third order can be expressed in terms of the components of $S$, $R_M$ and $\nabla R_{M}$. Then the curvature bound assumptions (\ref{boundedgeometry}) imply that $\widetilde{g}_{\alpha\beta}^{(i)}$ is of class $C^4$ within each coordinate chart $\widetilde{U}_i$. Now let us consider the coordinate charts $U_i\times [-\delta_{ex},i_0)$. In these coordinates, the components $\widetilde{g}_{\alpha\beta}^{(i)}$ are $C^{3,1}$ in the normal direction, and $C^4$ in the other directions. Therefore $\widetilde{g}$ is of class $C^{3,1}$ in the local coordinate charts $\{U_i\times [-\delta_{ex},i_0)\}$. Furthermore, it follows from a straightforward calculation that for $\rho\leqslant 0$, $$\big|\widetilde{g}^{(i)}_{\alpha\beta}(y,\rho)-g^{(i)}_{\alpha\beta}(y,0)\big|\leqslant C(\|R_M\|_{C^1},\|S\|) |\rho|,$$ $$\Big|\frac{\partial \widetilde{g}^{(i)}_{\alpha\beta}}{\partial \rho}(y,\rho)-\frac{\partial g^{(i)}_{\alpha\beta}}{\partial \rho}(y,0)\Big|\leqslant C(\|R_M\|_{C^1},\|S\|) |\rho|,$$ $$\Big|\frac{\partial \widetilde{g}^{(i)}_{\alpha\beta}}{\partial x_T}(y,\rho)-\frac{\partial g^{(i)}_{\alpha\beta}}{\partial x_T}(y,0)\Big|\leqslant C(\|R_M\|_{C^2},\|S\|_{C^1}) |\rho|.$$ For higher order derivatives, we have \begin{equation*} \bigg|\frac{\partial^{k+l} \widetilde{g}^{(i)}_{\alpha\beta}}{\partial x_T^k \partial \rho^l}(y,\rho)-\frac{\partial^{k+l} g^{(i)}_{\alpha\beta}}{\partial x_T^k \partial \rho^l}(y,0)\bigg|\leqslant C(\|R_M\|_{C^5},\|S\|_{C^4}) |\rho|, \;\textrm{ for }k+l\leqslant 4,\, l\leqslant 3. \end{equation*} Note that $\partial^4 \widetilde{g}^{(i)}_{\alpha\beta}/\partial \rho^4=0$ by definition.
Recall that the $C^4$-norm of $\phi_i$ is uniformly bounded by $C\,r_g(\partial M)^{-4}$, and $r_g(\partial M)$ explicitly depends on $n,\|R_{\partial M}\|_{C^1},i_0$. Furthermore, the total number of coordinate charts $U_i$ is bounded by $C(n,K_1)vol(\partial M)r_g(\partial M)^{-n+1}$. Hence by (\ref{metricpartition}), the estimates above hold for $\widetilde{g}_{\alpha\beta}$ and $g_{\alpha\beta}$ with another constant $C(n,\|R_M\|_{C^5},\|S\|_{C^4},$ $i_0,vol(\partial M))$. Therefore we can restrict the extension width $\delta_{ex}$ to be sufficiently small explicitly depending only on $n,K_1,K_2,i_0,vol(\partial M)$, such that the matrix $(\widetilde{g}_{\alpha\beta})$ is nondegenerate and hence a metric, and \begin{equation}\label{curvatureextended} \|\widetilde{g}_{\alpha\beta}|_{\widetilde{M}-M}\|_{C^1}\leqslant 4 K_1+4,\quad \|\widetilde{g}_{\alpha\beta}|_{\widetilde{M}-M}\|_{C^4}\leqslant C(n,K_1,K_2,i_0), \end{equation} $$\|R_{\widetilde{M}}\|\leqslant 2K_1^2, \quad \|S_{\partial \widetilde{M}}\|\leqslant 2K_1, \quad \|\nabla R_{\widetilde{M}}\|\leqslant 2K_2.$$ Here the first inequality is due to (\ref{coorb}) and the definition that $\partial_{\rho} g_{\alpha\beta} |_{\partial M}=2S_{\alpha\beta}$, where $S_{\alpha\beta}$ denotes the components of the second fundamental form $S$ of $\partial M$. The bound on $S_{\partial \widetilde{M}}$ follows from the bound on $\partial_{\rho} \widetilde{g}_{\alpha\beta}|_{\widetilde{M}-M}$. With this type of extension, $\widetilde{g}$ is also a product metric in the collar, which implies that the integral curve of $\partial/\partial \rho$ minimizes length and is hence a minimizing geodesic. This shows that for any $x=(y,\rho)\in \partial M \times[-\delta_{ex},0]$, we have $\widetilde{d}(x,\partial M)=-\rho$, which yields property (1). The property (4) is due to properties (1-3) and Lemma \ref{CATradius}(2). \end{proof} \smallskip \noindent \textbf{Coordinate system.} From now on, we extend the manifold $(M,g)$ to $(\widetilde{M},\widetilde{g})$ such that Lemma \ref{extensionmetric} holds. We say $(\widetilde{M},\widetilde{g})$ is an extension of $(M,g)$ with the extension width $\delta_{ex}$. We choose a coordinate system on $\widetilde{M}$ as follows. In the boundary normal (tubular) neighborhood of $\partial M$, we choose the boundary normal coordinate of $\partial M$. Let $\{y_i\}$ be a maximal $r_g(\partial M)/2$-separated set in $\partial M$, and $U_i$ be the ball of radius $r_g(\partial M)$ in $\partial M$ around $y_i$. The proof of Lemma \ref{extensionmetric} shows that $\widetilde{g}$ is of $C^{3,1}$ in the coordinate charts $U_i\times [-\delta_{ex},i_0)$. In each coordinate chart, we choose the boundary normal coordinate $(x^1,\cdots,x^{n-1},\rho(x))$ of $\partial M$, where $(x^1,\cdots,x^{n-1})$ is the geodesic normal coordinate of $\partial M$ such that (\ref{coorb}) holds. The coordinate function $\rho(x)$ in the normal direction is defined as \begin{equation}\label{def-rhox} \rho(x) = \left\{ \begin{array}{ll} d(x,\partial M), & \mbox{if $x\in M$}; \\ -\widetilde{d}(x,\partial M), & \mbox{if $x\in \widetilde{M}-M$}.\end{array} \right. \end{equation} Note that $\widetilde{d}(x,\partial M)=d(x,\partial M)$ for $x\in M$. Lemma \ref{extensionmetric}(2) shows that the metric components on $\widetilde{M}-M$ have uniformly bounded $C^4$-norm. 
On the other hand, due to Lemma \ref{riccati}, we can find a uniform width $r_b=r_b(K_1,i_0)$, such that the $C^4$-norm of the metric components is uniformly bounded by $C(n,K_1,K_2,i_0)$ in the boundary normal coordinates of width $r_b$ in $M$. Consequently, we have a uniform bound for the $C^{3,1}$-norm of the metric components in the coordinate charts $U_i\times [-\delta_{ex},i_0)$. For any point $x\in M$ with $d(x,\partial M)> r_b/2$, we choose the geodesic normal coordinate of $M$ around $x$ of the radius $ \min\{r_b/2,r_g(x)\}$, such that the $C^4$-norm of the metric components is uniformly bounded. By Lemma 8 in \cite{HV} and Theorem A in \cite{E}, this radius is uniformly bounded below by a positive number explicitly depending on $n,\|R_M\|_{C^1},i_0,r_b$. Denote by $r_g$ the minimum of this radius and $r_g(\partial M)$, so that $r_g$ explicitly depends only on $n,\|R_M\|_{C^1},\|S\|_{C^1},i_0$. Combining these two types of coordinates, we have a coordinate system on $\widetilde{M}$ in which the metric components satisfy the following properties: $$\frac{1}{4}|\xi|^2\leqslant \sum_{i,j=1}^{n} \widetilde{g}^{ij}\xi_{i}\xi_{j} \leqslant 4|\xi|^2\; (\xi\in\mathbb{R}^n),$$ \begin{equation}\label{metricboundex} \|\widetilde{g}_{ij}\|_{C^{1}}\leqslant C(n,\|R_{M}\|_{C^1},\|S\|_{C^1}),\quad \|\widetilde{g}_{ij}\|_{C^{3,1}}\leqslant C(n,K_1,K_2,i_0). \end{equation} Observe that for any $x\in \widetilde{M}$, the ball $\widetilde{B}_{r_g/2}(x)$ of $\widetilde{M}$ or the cylinder $B_{\partial M}(y,r_g/2)\times (\rho-r_g/2,\rho+r_g/2)$ is contained in at least one of the coordinate charts defined above, where $x=(y,\rho)$ if $x$ is in the boundary normal coordinates of $\partial M$. To see this, it suffices to show that for any $y\in \partial M$, the ball $B_{\partial M}(y,r_g/2)$ of $\partial M$ is contained in at least one $U_i$. The latter statement is a direct consequence of the fact that $\{y_i\}$ is an $r_g(\partial M)/2$-net in $\partial M$. \smallskip \subsection{Extension of functions} \hfill \medskip Let $(\widetilde{M},\widetilde{g})$ be an extension of $(M,g)$ satisfying Lemma \ref{extensionmetric} with the extension width $\delta_{ex}$. Points in the boundary normal neighborhood of $\partial M$ have coordinates $(x^1,\cdots,x^{n-1},\rho(x))$, where $\rho(x)$ is defined in (\ref{def-rhox}). We write the coordinate as $(x_T,\rho(x))$ for short, where $x_T=(x^1,\cdots,x^{n-1})$ denotes the tangential coordinate. We define an extension of functions on $M$ to $\widetilde{M}$ as follows. Given a function $u$ on $M$ and its Cauchy data $u,\, \frac{\partial u}{\partial \mathbf{n}}$ on $\partial M$, we extend $u$ to a function $\widetilde{u}_{ex}$ on $\widetilde{M}$ by \begin{equation*} \widetilde{u}_{ex}(x_T,\rho,t) = \left\{ \begin{array}{ll} u(x_T,\rho,t), & \mbox{if $\rho \geqslant 0$};\\ u(x_T,0,t)+\rho \frac{\partial u}{\partial \mathbf{n}}(x_T,0,t), & \mbox{if $\rho<0$}.\end{array} \right. \end{equation*} For $0<h<\delta_{ex}$, we define another function $\widetilde{u}:\widetilde{M}\times [-T,T]\to \mathbb{R}$ by $\widetilde{u}=u$ on $M\times [-T,T]$, and \begin{equation}\label{extensionu} \widetilde{u}(x_T,\rho,t)=\phi(\frac{\rho}{h}) \widetilde{u}_{ex} (x_T,\rho,t), \; \textrm{ for }\rho< 0, \end{equation} where $\phi$ is a monotone increasing smooth function vanishing on $(-\infty,-1]$ and equal to $1$ on $[0,\infty)$ with $\|\phi\|_{C^2}\leqslant 8$. Then $\widetilde{u}=0$ when $\rho\leqslant -h$.
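As a quick sanity check on this construction, the following is a minimal symbolic sketch, assuming a one-dimensional model of the extension with the tangential variables and time frozen, i.e. $\widetilde{u}(\rho)=\phi(\rho/h)(u+\rho u_n)$ with constants $u,u_n$ denoting the boundary values of the function and its normal derivative; it reproduces the normal-derivative formula (\ref{un}) used in the proof of Lemma \ref{extension} below.
\begin{verbatim}
import sympy as sp

rho, h = sp.symbols('rho h')
u0, un = sp.symbols('u0 u_n')   # boundary values of u and du/dn at rho = 0
phi = sp.Function('phi')        # smooth cutoff: 0 on (-inf,-1], 1 on [0,inf)

# cutoff linear extension below the boundary (rho < 0)
u_tilde = phi(rho / h) * (u0 + rho * un)

# normal derivative; the printed expression is
#   h^{-1} (u0 + rho u_n) phi'(rho/h) + u_n phi(rho/h),
# matching formula (\ref{un}) in the proof below
print(sp.expand(sp.diff(u_tilde, rho)))
\end{verbatim}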
\begin{extension}\label{extension} Let $(\widetilde{M},\widetilde{g})$ be an extension of $(M,g)$ satisfying Lemma \ref{extensionmetric} with the extension width $\delta_{ex}$. Let $\Gamma$ be a connected open subset of $\partial M$. Assume $$u|_{\partial M\times [-T,T]}\in H^{2,2}(\partial M \times [-T,T]),\quad \frac{\partial u}{\partial \mathbf{n}} \in H^{2,2}(\partial M \times [-T,T]).$$ Then we have $$\|\widetilde{u}\|^2_{H^1(\Omega_{\Gamma}\times[-T,T])}\leqslant Ch^{-1}\|u\|^2_{H^1(\Gamma\times [-T,T])}+Ch \big\|\frac{\partial u}{\partial \mathbf{n}} \big\|^2_{H^1(\Gamma\times [-T,T])},$$ and \begin{eqnarray*} \|(\partial_{t}^2 -\Delta_{\widetilde{g}})\widetilde{u}\|^2_{L^2(\Omega_{\Gamma}\times [-T,T])} \leqslant Ch^{-3}\|u\|^2_{H^{2,2}(\Gamma\times [-T,T])} + Ch^{-1}\big\|\frac{\partial u}{\partial \mathbf{n}} \big\|^2_{H^{2,2}(\Gamma\times [-T,T])}, \end{eqnarray*} where $\Omega_{\Gamma}=\Gamma\times[-\delta_{ex},0]$ denotes the part of the manifold extension over $\Gamma$, and the constants explicitly depend on $n,K_1$. Furthermore, suppose $u\in H^2(M\times[-T,T])$ is a solution of the non-homogeneous wave equation $Pu=f$ with $f\in L^2(M\times [-T,T])$. Then $\widetilde{u}\in H^1(\widetilde{M}\times [-T,T])$ and $(\partial_{t}^2 -\Delta_{\widetilde{g}})\widetilde{u} \in L^2(\widetilde{M}\times [-T,T])$. \end{extension} \begin{proof} First we estimate the $H^1$-norm of $\widetilde{u}$ over $\Omega_{\Gamma}$. Here we only estimate the dominating term in $h$; the other terms can be done in the same way. Denote by $\partial_{\alpha},\partial_{n},\partial_t$ the derivatives with respect to $x^{\alpha},x^n$ coordinates and time $t$, respectively. We denote $\partial_{\alpha} u, \partial_{n} u,\partial_t u$ evaluated at $(x_T,0,t)$ by $u_{\alpha},u_{n},u_t$ and $ \phi^{\prime}(s)=\frac{d}{ds}\phi(s)$, evaluated at $s=\rho/h$. In addition, whenever we write the function $u$ without specifying where it is evaluated, the evaluation is also done at $(x_T,0,t)$. By the definition of $\widetilde{u}$, \begin{equation}\label{un} (\partial_n \widetilde{u})(x_T,\rho,t)=h^{-1} (u+\rho u_n) \phi^{\prime} + u_n\phi. \end{equation} Since $\widetilde{u}$ vanishes unless $\rho\in [-h,0]$, we have \begin{eqnarray*} \|\partial_n \widetilde{u}\|^2_{L^2(\Omega_{\Gamma}\times [-T,T])} &=& \int_{-T}^{T} \int_{\Gamma}\int_{-\delta_{ex}}^0 \big| h^{-1} (u+\rho u_n) \phi^{\prime} + u_n\phi \big|^2 dx_T d\rho dt\\ &\leqslant& C\int_{-T}^{T} \int_{\Gamma}\int_{-h}^0 (h^{-2}u^2+h^{-2}\rho^2 u_n^2+u_n^2) dx_Td\rho dt \\ &\leqslant& C\int_{-T}^{T} \int_{\Gamma} (h^{-1}u^2+hu_n^2) dx_Tdt \\ &=& Ch^{-1}\|u\|_{L^2(\Gamma\times [-T,T])}^2+Ch\big\|\frac{\partial u}{\partial \mathbf{n}} \big\|_{L^2(\Gamma\times [-T,T])}^2. \end{eqnarray*} Next we estimate the Laplacian of $\widetilde{u}$ over $\Omega_{\Gamma}$ for $\rho\in[-h,0]$. 
In the boundary normal coordinates of our choice, by definition (\ref{Laplacian}) we have \begin{eqnarray*} \Delta_{\widetilde{g}}\widetilde{u}&=&\sum_{i,j=1}^n \frac{1}{\sqrt{|\widetilde{g}|}} \partial_i \big(\sqrt{|\widetilde{g}|} \widetilde{g}^{ij}\partial_j \widetilde{u} \big) \\ &=& \frac{1}{\sqrt{|\widetilde{g}|}} \partial_{n} \big(\sqrt{|\widetilde{g}|} \widetilde{g}^{nn}\partial_{n} \widetilde{u}\big)+ \sum_{\alpha,\beta=1}^{n-1} \frac{1}{\sqrt{|\widetilde{g}|}} \partial_{\alpha} \big(\sqrt{|\widetilde{g}|} \widetilde{g}^{\alpha\beta}\partial_{\beta} \widetilde{u} \big) \\ &=& A_1+A_2, \end{eqnarray*} where $|\widetilde{g}|$ denotes the determinant of the matrix $(\widetilde{g}_{ij})$. We estimate $A_2$ as follows. \begin{eqnarray*} A_2(x_T,\rho,t) &=& \sum_{\alpha,\beta=1}^{n-1} \frac{1}{\sqrt{|\widetilde{g}|}} \partial_{\alpha} \big(\sqrt{|\widetilde{g}|} \widetilde{g}^{\alpha\beta}\partial_{\beta} \widetilde{u} \big) \\ &=& \sum_{\alpha,\beta} \frac{\partial_{\alpha} |\widetilde{g}|}{2|\widetilde{g}|} \widetilde{g}^{\alpha \beta} \partial_{\beta}\widetilde{u}+ (\partial_{\alpha} \widetilde{g}^{\alpha \beta})( \partial_{\beta}\widetilde{u})+ \widetilde{g}^{\alpha \beta} \partial_{\alpha}\partial_{\beta}\widetilde{u}. \end{eqnarray*} Hence we have \begin{eqnarray*} |A_2(x_T,\rho,t)| &\leqslant& C \sum_{\alpha,\beta} (|u_{\beta}|+h |u_{n\beta}|)+C \sum_{\alpha,\beta}|\partial_{\alpha}\partial_{\beta}(u+\rho u_n)|(x_T,0,t)\\ &\leqslant& C\sum_{\alpha,\beta} (|u_{\alpha\beta}|+h |u_{n\alpha\beta}|)+C\sum_{\beta}(|u_{\beta}|+h|u_{n\beta}|), \end{eqnarray*} where the constants explicitly depend on $n,K_1$ due to the $C^1$ metric bound (\ref{curvatureextended}). Finally we estimate $A_1$ and the time derivatives. Since $\widetilde{g}^{nn}=1$, we know that $$A_1(x_T,\rho,t)=\frac{\partial_n |\widetilde{g}|}{2|\widetilde{g}|} \partial_{n}\widetilde{u} + \partial_{n}^2\widetilde{u}.$$ We differentiate (\ref{un}) again: \begin{eqnarray*} (\partial_{n}^2\widetilde{u})(x_T,\rho,t) = h^{-2}(u+\rho u_n)\phi^{\prime\prime}+ 2h^{-1}u_n\phi^{\prime}. \end{eqnarray*} Hence we have \begin{eqnarray*} \big| \big((\partial_{t}^2 -\partial_{n}^2)\widetilde{u} \big)(x_T,\rho,t)\big|&=&\big| (u_{tt}+\rho u_{ntt})\phi-(\partial_{n}^2\widetilde{u})(x_T,\rho,t) \big|\\ &\leqslant& Ch^{-2}|u|+Ch^{-1}|u_n| +C|u_{tt}|+Ch|u_{ntt}|, \end{eqnarray*} which leads to a similar estimate for $(\partial_{t}^2 \widetilde{u}-A_1)(x_T,\rho,t)$ by (\ref{un}). Thus, \begin{eqnarray*}\label{waveextension} \big| \big((\partial_{t}^2 -\Delta_{\widetilde{g}})\widetilde{u} \big)(x_T,\rho,t)\big|&\leqslant& Ch^{-2}|u|+Ch^{-1}|u_n| +C(|u_{tt}|+h|u_{ntt}|) \nonumber \\ &+&C\sum_{\alpha,\beta} \big(|u_{\alpha}|+|u_{\alpha\beta}|+h|u_{n\alpha}|+h|u_{n\alpha\beta}| \big), \end{eqnarray*} where all terms on the right-hand side are boundary data evaluated at $(x_T,0,t)$. Then the second estimate of the lemma immediately follows from integrating the last inequality. \smallskip Now we additionally assume that $u\in H^2(M\times[-T,T])$ is a (strong) solution of the non-homogeneous wave equation $Pu=f$ with $f\in L^2(M\times [-T,T])$. By the regularity result for the wave equation (e.g. Theorem 2.30 in \cite{KKL}), the solution $u$ is in the energy class $$u \in C([-T,T];H^1(M))\cap C^1([-T,T];L^2(M)).$$ From the definition (\ref{extensionu}), the weak derivatives of $\widetilde{u}(\cdot,t)$ exist on $\widetilde{M}$ for any fixed $t\in [-T,T]$. 
Since the Cauchy data are in $H^{2,2}$, we have $\widetilde{u}(\cdot,t)\in H^1(\widetilde{M})$ for all $t$ directly by definition (\ref{extensionu}), and therefore $\widetilde{u}\in H^1(\widetilde{M}\times [-T,T])$. The same regularity of the Cauchy data, combined with the definition (\ref{extensionu}), also indicates that $\widetilde{u}\in H^{2,2}\big((\widetilde{M}-M)\times [-T,T]\big)$. Hence over $\widetilde{M}-M$, $$\widetilde{f}_{ex}:=(\partial_{t}^2 -\Delta_{\widetilde{g}})\widetilde{u} \in L^2\big((\widetilde{M}-M) \times [-T,T]\big).$$ Define a function $\widetilde{f}:\widetilde{M}\times [-T,T]\to \mathbb{R}$ by $\widetilde{f}=f$ over $M$ and $\widetilde{f}=\widetilde{f}_{ex}$ over $\widetilde{M}-M$. Clearly $\widetilde{f}\in L^2(\widetilde{M}\times [-T,T])$. Thus the only part left is to show that $(\partial_{t}^2 -\Delta_{\widetilde{g}})\widetilde{u}=\widetilde{f}$ on $\widetilde{M}\times [-T,T]$ in the weak form. Observe that the wave equation on either $M$ or $\widetilde{M}-M$ is well-defined pointwise. Then for any test function $\varphi\in H_0^{1}(\widetilde{M}\times [-T,T])$, by applying the wave equation separately on $M,\,\widetilde{M}-M$ and Green's formula, we have \begin{eqnarray*} &&\int_{-T}^T \int_{\widetilde{M}} \Big(-\partial_{t} \widetilde{u}\, \partial_t \varphi + \langle \nabla \widetilde{u}, \nabla \varphi \rangle_{\widetilde{g}} \Big) = \int_{-T}^T \int_{M\cup(\widetilde{M}-M)} \Big(-\partial_{t} \widetilde{u}\, \partial_t \varphi + \langle \nabla \widetilde{u}, \nabla \varphi \rangle_{\widetilde{g}} \Big) \\ &&= \int_{-T}^T \int_{M} f\varphi - \int_{-T}^T \int_{\partial M} \frac{\partial u}{\partial \mathbf{n}} \varphi+ \int_{-T}^T \int_{\widetilde{M}-M}\widetilde{f}_{ex} \varphi+\int_{-T}^T \int_{\partial M} \frac{\partial \widetilde{u}}{\partial \mathbf{n}} \varphi\, . \end{eqnarray*} Due to the definition (\ref{extensionu}), the normal derivatives of $\widetilde{u}$ from the two sides of $\partial M$ coincide and hence the boundary terms cancel out. This shows that the wave equation is satisfied on $\widetilde{M}\times[-T,T]$ in the weak form, with the source term in $L^2(\widetilde{M}\times[-T,T])$. \end{proof} \smallskip \subsection{Distance functions} \hfill \medskip Later in the proof of Theorem \ref{main1}, we will need to switch back and forth between different distance functions. The following lemma shows the relations between them. \begin{distances}\label{distances} Let $(\widetilde{M},\widetilde{g})$ be an extension of $(M,g)$ satisfying Lemma \ref{extensionmetric} with the extension width $\delta_{ex}$. Denote the distance functions of $M$ and $\widetilde{M}$ by $d$ and $\widetilde{d}$, respectively. Then there exists a uniform constant $r_b$ explicitly depending only on $K_1,i_0$, such that the following inequality holds for any $x,y\in M$ as long as $\delta_{ex}\leqslant r_b$: $$\widetilde{d}(x,y)\leqslant d(x,y)\leqslant (1+3K_1\delta_{ex})\widetilde{d}(x,y)\, .$$ If $x,y\in \widetilde{M}-M$, then the second inequality holds after replacing $d(x,y)$ with $d(x^{\perp},y^{\perp})$, where $x^{\perp}$ denotes the normal projection of $x$ onto $\partial M$. If $x\in \widetilde{M}-M,\,y\in M$, then the second inequality holds for $d(x^{\perp},y)$.
Furthermore, if a minimizing geodesic of $\widetilde{M}$ between $x,y \in \widetilde{M}$ lies in the boundary normal (tubular) neighborhood of $\partial M$ of width $\delta_{ex}$, then we have $$d_{\partial M}(x^{\perp},y^{\perp})\leqslant (1+3K_1\delta_{ex})\widetilde{d}(x,y)\, ,$$ where $d_{\partial M}$ denotes the intrinsic distance function of $\partial M$. \end{distances} \begin{proof} The first inequality is trivial and we prove the second inequality. Consider any (distance) minimizing geodesic $\widetilde{\gamma}$ of $\widetilde{M}$ from $x$ to $y$, and its length $L(\widetilde{\gamma})$ satisfies $L(\widetilde{\gamma})=\widetilde{d}(x,y)$ by definition. It is known that $\widetilde{\gamma}$ is a $C^1$ curve with arclength parametrization (e.g. Section 2 in \cite{ABB}). Observe that the second inequality follows trivially if $\widetilde{\gamma}$ lies entirely in $M$. Since the statement of the lemma is independent of the choice of coordinates, we work in the boundary normal coordinate $(x^1,\cdots,x^{n-1},\rho(x))$ of $\partial M$. Suppose $\widetilde{\gamma}$ lies entirely in $\widetilde{M}-\textrm{int}(M)$ with both endpoints $x,y$ on $\partial M$. Consider the normal projection, denoted by $\gamma$, of $\widetilde{\gamma}$ onto the boundary $\partial M$ with respect to the boundary normal coordinate. More precisely, if $\widetilde{\gamma}(s)=(x_1(s),\cdots,x_{n-1}(s),x_n(s))$ in a boundary normal coordinate near a point on $\widetilde{\gamma}$, then its normal projection has the form $\gamma(s)=(x_1(s),\cdots,x_{n-1}(s),0)$. The fact that $\widetilde{\gamma}$ is of $C^1$ implies that $x_i(s)$ is a $C^1$ function for any $i$. Hence $\gamma$ is a $C^1$ (possibly not regular or simple) curve in $\partial M$ from $x$ to $y$ with the induced parametrization from $\widetilde{\gamma}$. Note that $\gamma$ may not be differentiable with respect to its own arclength parameter. As a consequence, the length $L(\gamma)$ of $\gamma$ can be written as: $$L(\gamma)=\int_0^{L(\widetilde{\gamma})} \sqrt{g(\gamma^{\prime}(s),\gamma^{\prime}(s))} \,ds = \int_0^{L(\widetilde{\gamma})} \sqrt{g \big(\widetilde{\gamma}^{\prime}_T(s)|_{\gamma(s)},\widetilde{\gamma}^{\prime}_T(s)|_{\gamma(s)} \big)} \,ds,$$ where $\widetilde{\gamma}^{\prime}_T(s)$ denotes the vector field with constant coefficients in the frame $(\frac{\partial}{\partial x^1},\cdots,\frac{\partial}{\partial x^{n-1}})$, with the coefficients being the tangential components of the tangent vector $\widetilde{\gamma}^{\prime}(s)$ of $\widetilde{\gamma}$. Note that $\widetilde{\gamma}^{\prime}_T(s)$ is a Jacobi field for the normal coordinate function $\rho(x)$. For every fixed $s$, by the definition of the second fundamental form (more precisely the shape operator), $$\frac{\partial}{\partial \rho}\widetilde{g}_{\rho}(\widetilde{\gamma}^{\prime}_T,\widetilde{\gamma}^{\prime}_T) = 2\widetilde{g}_{\rho}(S_{\rho}(\widetilde{\gamma}^{\prime}_T),\widetilde{\gamma}^{\prime}_T),$$ where $\widetilde{g}_{\rho}$ and $S_{\rho}$ denote the metric and the shape operator of the equidistant hypersurface from $\partial M$ (in $\widetilde{M}-M$) with distance $|\rho|$ (i.e. the level set $\widetilde{d}(\cdot,\partial M)=|\rho|$). Observe that Lemma \ref{riccati} holds in the boundary normal neighborhood of $\partial M$ regardless of which side the neighborhood extends to, thanks to Lemma \ref{extensionmetric}(3). 
Then the first part of Lemma \ref{riccati} indicates that for sufficiently small $|\rho|$ depending only on $K_1,i_0$, $$\big|\frac{\partial}{\partial \rho}\widetilde{g}_{\rho}(\widetilde{\gamma}^{\prime}_T,\widetilde{\gamma}^{\prime}_T) \big| \leqslant 4K_1\widetilde{g}_{\rho}(\widetilde{\gamma}^{\prime}_T,\widetilde{\gamma}^{\prime}_T).$$ Thus by Gronwall's inequality, we have $$g(\widetilde{\gamma}^{\prime}_T|_{\gamma},\widetilde{\gamma}^{\prime}_T|_{\gamma}) \leqslant \widetilde{g}_{\rho}(\widetilde{\gamma}^{\prime}_T,\widetilde{\gamma}^{\prime}_T) e^{4K_1|\rho|}.$$ Since the extended metric $\widetilde{g}$ is a product metric in the boundary normal coordinate, we have $\widetilde{g}(\widetilde{\gamma}^{\prime}_T|_{\widetilde{\gamma}},\widetilde{\gamma}^{\prime}_T|_{\widetilde{\gamma}})\leqslant \widetilde{g}(\widetilde{\gamma}^{\prime},\widetilde{\gamma}^{\prime})$. Hence, using $|\rho(s)|\leqslant \delta_{ex}$ along the curve, for sufficiently small $\delta_{ex}$ depending only on $K_1$ we obtain \begin{eqnarray*} L(\gamma) &\leqslant& \int_0^{L(\widetilde{\gamma})} e^{2K_1|\rho(s)|} \sqrt{\widetilde{g}_{\rho}(\widetilde{\gamma}^{\prime}_T(s),\widetilde{\gamma}^{\prime}_T(s))}\, ds\\ &\leqslant& e^{2K_1 \delta_{ex}} \int_0^{L(\widetilde{\gamma})} \sqrt{\widetilde{g}(\widetilde{\gamma}^{\prime}(s),\widetilde{\gamma}^{\prime}(s))} \, ds \leqslant (1+3K_1\delta_{ex})\widetilde{d}(x,y), \end{eqnarray*} which yields the second inequality by definition. In general, if $\widetilde{\gamma}$ crosses $\partial M$ with both endpoints in $M$, we can divide $\widetilde{\gamma}$ into segments in $M$ and segments in $\widetilde{M}-M$. The lemma is trivially satisfied for the endpoints of any segment in $M$. Any (continuous) segment in $\widetilde{M}-M$ has endpoints on $\partial M$ and lies entirely in $\widetilde{M}-\textrm{int}(M)$. Thus we apply the argument above for every segment in $\widetilde{M}-M$ and the estimate follows. Finally, if the endpoints of $\widetilde{\gamma}$ are not both in $M$, then its projection $\gamma$ is a curve between the projections of the endpoints of $\widetilde{\gamma}$ onto $M$. This concludes the proof for the first part of the lemma. Now we prove the second part of the lemma. Let $\widetilde{\gamma}$ be the minimizing geodesic of $\widetilde{M}$ from $x$ to $y$ lying in the boundary normal tubular neighborhood of $\partial M$. If $\widetilde{\gamma}$ lies entirely in $M$ or $\widetilde{M}-\textrm{int}(M)$, one can use the previous argument to project $\widetilde{\gamma}$ to a curve on $\partial M$ and show the same estimate as the first part. The only difference is that when $x,y$ are not in $\partial M$, the projection $\gamma$ is a curve on $\partial M$ from $x^{\perp}$ to $y^{\perp}$. In general, the estimate follows from dividing $\widetilde{\gamma}$ into segments in $M$ and in $\widetilde{M}-M$, and projecting both types of segments onto $\partial M$.
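For the reader's convenience, we make explicit the two elementary steps used in the first part of the proof. Writing $f(\rho)=\widetilde{g}_{\rho}(\widetilde{\gamma}^{\prime}_T,\widetilde{\gamma}^{\prime}_T)$, the differential inequality $|f^{\prime}(\rho)|\leqslant 4K_1 f(\rho)$ integrates to $$f(0)\leqslant f(\rho)\, e^{4K_1|\rho|},$$ which is the Gronwall step, and the last inequality in the display above follows from the elementary bound $e^{x}\leqslant 1+\frac{3}{2}x$ for $0\leqslant x\leqslant \frac{1}{2}$, applied with $x=2K_1\delta_{ex}$; this is precisely where the smallness of $\delta_{ex}$ in terms of $K_1$ enters.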
\end{proof} \begin{definition}\label{Mhdh} For $h<i_0/2$, we consider the submanifold $$M_{h}=\big\{x\in M: d(x, \partial M)\geqslant h \big\}.$$ Denote by $d_h:M_h\times M_h \to \mathbb{R}$ the intrinsic distance function of the submanifold $M_h$, and we extend it to any point $x\in \widetilde{M}-M_h$ by \begin{equation}\label{dh} d_h (x,z)=d_h (x^{\perp_h},z)+h^{-1}\widetilde{d}(x,x^{\perp_h}), \textrm{ for }z\in M_h,\, x\in \widetilde{M}-M_h, \end{equation} where $x^{\perp_h}\in \partial M_h$ is the unique normal projection of $x\in \widetilde{M}-M_h$ onto $\partial M_h$ within the boundary normal neighborhood of $\partial M$ such that $\widetilde{d}(x,x^{\perp_h})=\widetilde{d}(x,\partial M_h)$. In this definition we require that at least one of the two points belongs to $M_h$. Note that a similar notation $x^{\perp}$ denotes the normal projection of $x$ onto $\partial M$. \end{definition} Thus the path between $z\in M_h$ and a point $x\in \widetilde{M}-M_h$ realizing $d_h(x,z)$ is a broken curve consisting of a geodesic of $M_h$ and a vertical line of the boundary neighborhood (see Figure \ref{figure0}). In general, the intrinsic distance function of a manifold with boundary is at most of $C^{1,1}$: the function $d_h(\cdot,z)$ is at most of $C^{1,1}$ even on $M_h-\{z\}$. We need to smoothen it in order to match the $C^{2,1}$ regularity required by Theorem \ref{global}. \begin{definition}\label{definition-dhs} For a fixed $z\in M_h$ and any $x\in M$, we denote by $d_h^s (x,z)$ the smoothening of $d_h(x,z)$ via convolution in a ball of radius $r<\delta_{ex}/2$ around the center $x$ with respect to the distance $\widetilde{d}$ of $\widetilde{M}$. More precisely, \begin{equation}\label{dhsdef} d_h^s(x,z)=c_{n}r^{-n}\int_{\widetilde{M}}k_1\big(\frac{\widetilde{d}(y,x)}{r}\big)d_h(y,z)dy, \end{equation} where $k_1:\mathbb{R}\to \mathbb{R}$ is a nonnegative smooth mollifier supported on $[1/2,1]$, and $dy$ denotes the Riemannian volume form on $\widetilde{M}$. The constant $c_n$ is the normalization constant such that \begin{equation}\label{normalization} c_{n}r^{-n}\int_{\mathbb{R}^n} k_1\big(\frac{|v|}{r}\big)dv=1, \end{equation} where $dv$ denotes the Euclidean volume form on $\mathbb{R}^n$. \end{definition} \begin{dhsC21}\label{dhsC21} Let $\delta_{ex}$ be sufficiently small, as determined in Lemma \ref{extensionmetric}. For sufficiently small $r$ depending on $n,K_1,K_2,i_0,r_0,r_g$, the function $d_h^s(\cdot,z)$ is of $C^{2,1}$ on $M$ for any fixed $z\in M_h$. Furthermore, in the coordinates of our choice, the $C^{2,1}$-norm of $d_h^s(\cdot,z)$ is uniformly bounded, with a bound depending explicitly on $r,n,\|R_M\|_{C^1}$. \end{dhsC21} \begin{proof} By Lemma \ref{extensionmetric}(4), for sufficiently small $\delta_{ex}$, we know $r_{\textrm{CAT}}(\widetilde{M})$ is bounded below by $C(K_1,i_0,r_0)$. We restrict the smoothening radius to be less than this lower bound: $r<C(K_1,i_0,r_0)$. Then for any $y\in \widetilde{B}_r(x)$, there is a unique minimizing geodesic between $x$ and $y$. Furthermore, no conjugate points occur along geodesics of length less than $\pi/2K_1$ (Corollary 3 in \cite{ABB2}). Since $\widetilde{B}_r(x)\cap \partial \widetilde{M}=\emptyset$ for any $x\in M$ as $r<\delta_{ex}/2$, the function $\widetilde{d}(\cdot,x)$ is simply a geodesic distance function in the ball of the smoothening radius around any $x\in M$. As a consequence, $\widetilde{d}(\cdot,x)$ is differentiable on $\widetilde{B}_r(x)$ and $|\nabla \widetilde{d}(\cdot,x)|=1$.
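In particular, one may differentiate (\ref{dhsdef}) under the integral sign. For instance (a sketch of the first derivative; the differentiation is legitimate because $k_1$ is supported on $[1/2,1]$, so only points $y$ with $\widetilde{d}(y,x)\in[r/2,r]$ contribute, away from the singularity of the distance function), $$\nabla_x\, d_h^s(x,z)=c_{n}r^{-n-1}\int_{\widetilde{M}}k_1^{\prime}\big(\frac{\widetilde{d}(y,x)}{r}\big)\,\nabla_x \widetilde{d}(y,x)\, d_h(y,z)\,dy,$$ and the higher derivatives of $d_h^s(\cdot,z)$ are controlled by those of $\widetilde{d}(\cdot,y)$ on this annulus, which we now estimate.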
By our choice of coordinate charts in Section \ref{subsection-extension}, for any $x^{\prime}\in \widetilde{M}$, the ball $\widetilde{B}_{r_g/2}(x^{\prime})$ or the cylinder $B_{\partial M}(y,r_g/2)\times (\rho-r_g/2,\rho+r_g/2)$ is contained in at least one of the coordinate charts defined in Lemma \ref{extensionmetric}, where $x^{\prime}=(y,\rho)$ if $x^{\prime}$ is in the boundary normal coordinate of $\partial M$. Then by Lemma \ref{distances}, the ball $\widetilde{B}_{r_g/4}(x^{\prime})$ of $\widetilde{M}$ is contained in one of the coordinate charts if we choose a smaller $r_b$ depending on $K_1$. Hence for $r<r_g/4$, $\widetilde{B}_r(x)$ is contained in one of these coordinate charts for any $x\in M$, and therefore $\widetilde{d}(\cdot,x)$ is of $C^{2,1}$ on $\widetilde{B}_r(x)-\{x\}$ by Lemma \ref{extensionmetric}(2) and Theorem 2.1 in \cite{DK}. Observe that $\widetilde{d}(\cdot,x)$ is bounded below by $r/2$ in the support of $k_1$, which yields a bound on higher derivatives of $\widetilde{d}(\cdot,x)$. This shows that the function $d_h^s(\cdot,z)$ is of $C^{2,1}$. To estimate the $C^{2,1}$-norm of $d_h^s(\cdot,z)$, it suffices to estimate the $C^{2,1}$-norm of $\widetilde{d}(\cdot,y)$ on the annulus $\widetilde{B}_{r}(y)-\widetilde{B}_{r/2}(y)$. Due to the Hessian comparison theorem (e.g. Theorem 27 in \cite{PP}, p175), for sufficiently small $r$ depending on $K_1$, we have $\|\widetilde{\nabla}^2 \widetilde{d}(\cdot,y) \|\leqslant 4r^{-1}$ on the annulus, where $\widetilde{\nabla}^2$ denotes the second covariant derivative on $\widetilde{M}$. In a local coordinate $(x^1,\cdots,x^n)$ on $\widetilde{M}$, the covariant derivative has the form (e.g. Chapter 2 in \cite{PP}, p32) \begin{equation}\label{Hessian-local} \big(\widetilde{\nabla}^2 \widetilde{d}(\cdot,y)\big)(\frac{\partial}{\partial x^k},\frac{\partial}{\partial x^l})=\frac{\partial^2}{\partial x^k \partial x^l} \widetilde{d}(\cdot,y)-\sum_{i=1}^n \widetilde{\Gamma}_{kl}^i \frac{\partial}{\partial x^i} \widetilde{d}(\cdot,y), \quad k,l=1,\cdots,n. \end{equation} Hence in the coordinate charts of our choice, for sufficiently small $r$, (\ref{metricboundex}) yields \begin{equation}\label{dC2} \|\widetilde{d}(\cdot,y)\|_{C^2} \leqslant C r^{-1},\textrm{ on } \widetilde{B}_{r}(y)-\widetilde{B}_{r/2}(y). \end{equation} An estimate on the $C^{2,1}$-norm can be obtained by differentiating the radial Riccati equation (e.g. Proposition 7 in \cite{PP}, p.\ 47) satisfied by the Hessian $\widetilde{\nabla}^2\widetilde{d}(\cdot,y)$ in a $C^1$ geodesic normal coordinate around $y$. This is possible because $\widetilde{g}$ is at least of $C^{1,1}$ in any geodesic normal coordinate of $\widetilde{M}$ by Theorem 2.1 in \cite{DK}. Alternatively, one can differentiate the equation $\partial_r \widetilde{g}_r=2\widetilde{\nabla}^2\widetilde{d}(\cdot,y)$, where $\partial_r$ denotes the radial direction in a geodesic normal coordinate around $y$, and $\widetilde{g}_r$ is the family of metrics on the unit sphere $S^{n-1}$ such that $\widetilde{g}=dr^2+\widetilde{g}_r$. Then on the annulus, the proof of Lemma 8 in \cite{HV} gives a bound $$\|\widetilde{\nabla}^3\widetilde{d}(\cdot,y)\|\leqslant C(n, \|R_{\widetilde{M}}\|_{C^1})r^{-2}.$$ Hence by differentiating the formula (\ref{Hessian-local}), for sufficiently small $r$ depending on $n,K_1,K_2,i_0$, we obtain \begin{equation}\label{dC21} \|\widetilde{d}(\cdot,y)\|_{C^{2,1}}\leqslant C(n, \|R_{\widetilde{M}}\|_{C^1})r^{-2},\textrm{ on } \widetilde{B}_{r}(y)-\widetilde{B}_{r/2}(y). 
\end{equation} Then a straightforward differentiation yields an estimate on the $C^{2,1}$-norm of $d_h^s(\cdot,z)$. \end{proof} \smallskip \subsection{Proof of Theorem \ref{main1}} \label{subsection3.4} \hfill \medskip Now we prove the main technical result Theorem \ref{main1}, by constructing the functions and domains assumed in Theorem \ref{global}. The proof consists of several parts. To begin with, let $h$ be a positive number satisfying $h<\min\{1/5,i_0/10,r_b/10\}$, where $r_b=r_b(K_1,i_0)$ is the width of the boundary normal neighborhood determined in Lemma \ref{riccati}. For sufficiently small $h$ only depending on $n,K_1,K_2,i_0,vol(\partial M)$, we extend $(M,g)$ to $(\widetilde{M},\widetilde{g})$ with the extension width $\delta_{ex}=5h$ such that Lemma \ref{extensionmetric} holds. Then we extend $u$ to $\widetilde{u}$ by (\ref{extensionu}) with the cut-off width $h$. Let $r_g$ be the uniform radius of $C^1$ geodesic normal coordinates of $M$ and $\partial M$ such that the metric bounds (\ref{metricboundex}) hold. We have shown that $r_g$ explicitly depends on $n,\|R_M\|_{C^1},\|S\|_{C^1},i_0$. Now we collect all these relevant parameters and impose the following requirements on the choice of $h$ for technical reasons: \begin{equation} 0<h<\min\big\{\frac{1}{10}, \frac{T}{8},\frac{i_0}{10},\frac{r_0}{10},\frac{r_g}{10},\frac{r_b}{10},\frac{i_b(\overline{\Gamma})}{10},\frac{\pi}{12K_1}\big\}. \end{equation} The part of the manifold extension over $\Gamma$ is denoted by $\Omega_{\Gamma}=\Gamma\times [-5h,0]$. The number $\min\{1,T^{-1}\}$ will be frequently used in this proof and we denote it by \begin{equation}\label{aT} a_T=\min\{1,T^{-1}\}. \end{equation} We restrict the choice of $h$ once again, such that for sufficiently small $h$, \begin{equation}\label{CATchoice} r_{\textrm{CAT}}(M_h)\geqslant \min\big\{\frac{2}{3}r_0,\frac{\pi}{2K_1}\big\},\quad r_{\textrm{CAT}}(\widetilde{M})\geqslant \min\big\{\frac{2}{3}r_0,\frac{\pi}{2K_1}\big\}. \end{equation} This is possible due to Lemma \ref{CATradius}. We remark that the dependency of $h$ is not explicit in Lemma \ref{CATradius}(3), and one can instead use the explicit lower bound in Lemma \ref{CATradius}(2). With the choice of $\delta_{ex}=5h$ and $h$ as above, the function $d_h(\cdot,z)$ defined in (\ref{dh}) is Lipschitz with Lipschitz constant $2h^{-1}$ (Lemma \ref{dhs}(3)). In Definition \ref{definition-dhs}, we set the smoothening radius to be $r=a_T h^3$. Then it follows that $|d_h^s(x,z)-d_h(x,z)| < 2a_T h^2$ for any $x\in M$ (Lemma \ref{dhs}(4)). Assume $h$ is sufficiently small so that Lemma \ref{dhsC21} holds. For any $z\in M_h$ and $x\in M$ satisfying $h/4\leqslant d_h (x,z)\leqslant \min\{i_0/2,r_0/2,\pi/6K_1\}$, we have $|\nabla_x d_h^s(x,z)|> 1-2h$ (Lemma \ref{dd}). Outside the injectivity radius this gradient can be 0 if cut points are involved. This lower bound being close to 1 is crucial for our method to ensure no loss of domain, and we define $d_h$ (\ref{dh}) with the $h^{-1}$ scaling in the boundary neighborhood specifically to guarantee it. While this lower bound is almost trivial when $z$ is far from $\partial M_h$, careful treatment is required when the manifold boundary is involved. \smallskip For $|b|\leqslant 5h$, we define the following set: \begin{equation}\label{Gammabh} \Gamma_{b}(h)=\big\{x\in \widetilde{M}: \rho(x)=b, \, x^{\perp}\in \Gamma, \, d_{\partial M}(x^{\perp},\partial \Gamma)\geqslant h \big\}, \end{equation} where $\partial \Gamma$ denotes the boundary of $\Gamma$ in $\partial M$.
The function $\rho(x)$ is the coordinate function in the normal direction defined in (\ref{def-rhox}). Note that if $\Gamma=\partial M$, the last two conditions above are automatically satisfied, and the set above is simply a level set of the normal coordinate function. Recall that $\widetilde{u}$, the extension of $u$ to $\widetilde{M}$ defined by (\ref{extensionu}), vanishes on $\Gamma_b(0)$ for all $b\leqslant -h$. The set $\Gamma_{-2h}(0)$ is the set from which we intend to propagate the unique continuation. More precisely, we start the propagation from an $h$-net in $\Gamma_{-2h}(8h)$. The reason for this specific choice is the following. \begin{sublemmainitial}\label{sublemmainitial} For sufficiently small $h$ only depending on $K_1$, we have $$\widetilde{d}\big(z,\partial (M\cup \Omega_{\Gamma})-\partial\widetilde{M} \big)\geqslant 7h, \textrm{ for any }z\in \Gamma_{-2h}(8h),$$ where $\Omega_{\Gamma}=\Gamma\times [-5h,0]$ is the part of the manifold extension over $\Gamma$. \end{sublemmainitial} \begin{proof} Let $y$ be a point in $\partial (M\cup \Omega_{\Gamma})-\partial\widetilde{M}$ realizing the distance to $z$. Suppose $\widetilde{d}(z,y)<7h$. Then the minimizing geodesic of $\widetilde{M}$ from $z$ to $y$ lies in the boundary normal (tubular) neighborhood of $\partial M$ of width $5h$. Hence Lemma \ref{distances} implies that $$d_{\partial M}(z^{\perp},y^{\perp})\leqslant (1+15K_1 h)\widetilde{d}(z,y) < 7h(1+15K_1h)\, .$$ However, we know $d_{\partial M}(z^{\perp},y^{\perp})\geqslant 8h$ by the definition (\ref{Gammabh}). Hence we get a contradiction for sufficiently small $h$ only depending on $K_1$. \end{proof} \begin{figure}[h] \includegraphics[scale=0.5]{Figure0} \caption{Domains for the initial step. Enclosed by the red solid line is the domain we work in, and it is close to $\Gamma$.} \label{figure0} \end{figure} \medskip \textbf{Initial Step.} As the initial step, we propagate the unique continuation from outside the manifold $M$ to a region close to $\Gamma$ in $M$. \smallskip Consider the function $\xi:[0,+\infty)\to\mathbb{R}$ defined by \begin{equation}\label{xidef} \xi(x)=\frac{(h-x)^3}{h^3}, \textrm{ for } x\in [0,h], \end{equation} and $\xi(x)=0$ for $x>h$. This function is of $C^{2,1}$ on $[0,+\infty)$ and monotone decreasing. Let $\{z_{0,j}\}_{j=1}^{J(0)}$ be an $h$-net in $\Gamma_{-2h}(8h)$: that is, for any $z\in \Gamma_{-2h}(8h)$, there exists some $z_{0,j}$ such that $\widetilde{d}(z,z_{0,j})<h$. We define \begin{equation}\label{psi0} \psi_{0,j}(x,t)=\bigg(\Big(1-\xi \big(6h-\widetilde{d}(x,z_{0,j})\big)\Big)T-\widetilde{d}(x,z_{0,j})\bigg)^2-t^2, \end{equation} and consider the following domains (see Figure \ref{figure0}): \begin{equation}\label{Omega00} \Omega^0_{0,j} =\big\{(x,t)\in \widetilde{M}\times[-T,T]: \psi_{0,j}(x,t) > h^2,\; \rho(x)>-\frac{3}{2}h \big\}. \end{equation} Note that in general, the domain characterized by $\psi_{0,j}(x,t) > h^2$ has two connected components. Here we define $\Omega^0_{0,j}$ to be the connected component characterized by $\big(1-\xi(6h-\widetilde{d}(x,z_{0,j}))\big)T-\widetilde{d}(x,z_{0,j})>0$.\footnote{\,Throughout the proof, whenever we define a domain using level sets of a similar function, we exactly mean this one type of connected component.} Then we define \begin{equation}\label{Upsilon} \Upsilon=\{x\in \Omega_{\Gamma}: -2h\leqslant \rho(x)\leqslant -h\}\times [-T,T], \end{equation} and \begin{equation}\label{Omega0j} \Omega_{0,j} =\big\{(x,t)\in \Omega^0_{0,j}-\Upsilon: \psi_{0,j}(x,t) > 4h^2 \big\}.
\end{equation} Now we prove that the conditions assumed in Theorem \ref{global} are satisfied for $\psi_{0,j}$, $\Omega_{0,j}^0$, $\Omega_{0,j}$, $\Upsilon$, $\psi_{max,0}=(T-h)^2$, and therefore Theorem \ref{global} applies. A stability estimate will be derived at the end of the proof. \medskip \noindent (1) We show that $\psi_{0,j}$ is of $C^{2,1}$ and non-characteristic in $\Omega^0_{0,j}$. Indeed, for any $(x,t)\in \Omega^0_{0,j}$, we have $\widetilde{d}(x,z_{0,j})< 6h$ by the definition of $\psi_{0,j}$. Hence any minimizing geodesic of $\widetilde{M}$ from $z_{0,j}$ to $x$ must not intersect $\partial \widetilde{M}$; otherwise the length of such a geodesic would exceed $6h$ due to the condition that $\rho(x)>-3h/2$. Furthermore, by our choice $h<\min\{r_0/10,\pi/12K_1\}$ and (\ref{CATchoice}), the minimizing geodesic from $z_{0,j}$ to any $x\in \widetilde{B}_{6h}(z_{0,j})$ is unique and no conjugate points can occur. Therefore $\widetilde{d}(\cdot,z_{0,j})$ is a $C^{2,1}$ geodesic distance function in $\Omega^0_{0,j}$, which shows that $\psi_{0,j}$ is of $C^{2,1}$ in $\Omega^0_{0,j}$. Moreover, since $\widetilde{d}(x,z_{0,j})>h/2$ for any $(x,t)\in \Omega^0_{0,j}$ by definition, the $C^{2,1}$-norms of $\widetilde{d}(\cdot,z_{0,j})$ and $\psi_{0,j}$ are uniformly bounded in $\Omega^0_{0,j}$ due to (\ref{dC21}). Next we prove that $\psi_{0,j}$ is non-characteristic in $\Omega^0_{0,j}$. For any $(x,t)\in \Omega^0_{0,j}$, $$\nabla_x \psi_{0,j}=2\Big(\big(1-\xi(6h-\widetilde{d}(x,z_{0,j}))\big)T-\widetilde{d}(x,z_{0,j})\Big) \big(\xi^{\prime}T\nabla_x\widetilde{d}(x,z_{0,j})-\nabla_x\widetilde{d}(x,z_{0,j}) \big).$$ Note that $\xi^{\prime}$ is evaluated at $6h-\widetilde{d}(x,z_{0,j})$ in the formula above. Since $\xi^{\prime}\leqslant 0$, we have $$\big|\xi^{\prime}T\nabla_x\widetilde{d}(x,z_{0,j})-\nabla_x\widetilde{d}(x,z_{0,j}) \big|\geqslant |\nabla_x\widetilde{d}(x,z_{0,j})|=1.$$ Hence, \begin{eqnarray*} p \big((x,t),\nabla \psi_{0,j} \big) &=& \sum_{k,l=1}^n \widetilde{g}^{kl} (\partial_{x_k}\psi_{0,j})( \partial_{x_l}\psi_{0,j})-|\partial_t\psi_{0,j}|^2 =|\nabla_x \psi_{0,j}|^2-|\partial_t \psi_{0,j}|^2 \nonumber\\ &\geqslant& 4\Big(\big(1-\xi(6h-\widetilde{d}(x,z_{0,j}))\big)T-\widetilde{d}(x,z_{0,j})\Big)^2-4t^2 \nonumber \\ &=& 4\psi_{0,j}(x,t) > 4h^2. \end{eqnarray*} \noindent (2) The extended function $\widetilde{u}$ defined by (\ref{extensionu}) vanishes on $\Upsilon$. We claim that $\emptyset\neq\{(x,t)\in \Omega^0_{0,j}: \psi_{0,j}(x,t) > (T-h)^2\}\subset \Upsilon$. Indeed, any $(x,t)$ in the set satisfies $\widetilde{d}(x,z_{0,j})<h$, which indicates $\rho(x)<-h$. On the other hand, Sublemma \ref{sublemmainitial} implies that $x\in \Omega_{\Gamma}$, and therefore $(x,t)\in \Upsilon$. For the non-emptiness, consider the point $x_j\in \Gamma_{-5h/4}(0)$ such that $\widetilde{d}(x_j,z_{0,j})=3h/4$ (i.e. $x_j$ is the projection of $z_{0,j}$ onto $\Gamma_{-5h/4}(0)$). By definition, we have $\psi_{0,j}(x_j,0)=(T-3h/4)^2>(T-h)^2$. This also shows that $(x_j,0)\in \Omega_{0,j}^0$ by definition when $T>2h$, which yields the non-emptiness. \noindent (3) We show that $dist_{\widetilde{M}\times \mathbb{R}}(\partial \Omega^0_{0,j},\Omega_{0,j})>0$. It suffices to prove $\overline{\Omega}_{0,j}\subset \Omega^0_{0,j}$. For any $(x,t)\in \Omega^0_{0,j}$, we have $\widetilde{d}(x,z_{0,j})< 6h$ by the definition of $\psi_{0,j}$, which implies that $\Omega^0_{0,j}\subset M\cup \Omega_{\Gamma}$ due to Sublemma \ref{sublemmainitial}.
This indicates that the boundaries of $\Omega_{0,j}^0,\Omega_{0,j}$ are determined only by $\psi_{0,j}$ and $\rho(x)$. Since $\rho(x)>-h$ for any $(x,t)\in \Omega_{0,j}$ by definition, clearly $\overline{\Omega}_{0,j}\subset \Omega^0_{0,j}$. \noindent (4) We claim that $\cup_{j=1}^{J(0)} \Omega_{0,j}$ is connected and therefore its closure is connected. Take two reference points $z_{0,j_1},z_{0,j_2}$ satisfying $\widetilde{d}(z_{0,j_1},z_{0,j_2})<3h$. Consider $(z_{0,j_1}^{\perp},0)\in \partial M\times [-T,T]$. As one checks directly from the definition of $\Omega_{0,j}$, this point $(z_{0,j_1}^{\perp},0)$ is in both $\Omega_{0,j_1}$ and $\Omega_{0,j_2}$. In particular, this shows $\Omega_{0,j_1}\cap\Omega_{0,j_2}\neq\emptyset$ if $\widetilde{d}(z_{0,j_1},z_{0,j_2})<3h$. Since each $\Omega_{0,j}$ is path connected, so is $\Omega_{0,j_1}\cup\Omega_{0,j_2}$. The claim follows from the fact that for any two points in the $h$-net $\{z_{0,j}\}$, we can find a chain of points in $\{z_{0,j}\}$ such that every pair of adjacent points in this chain has distance less than $3h$. \medskip In order to propagate further in subsequent steps, we need to estimate how much $\cup_{j} \Omega_{0,j}$ covers in the original manifold $M$. \begin{sublemmainitial2}\label{sublemmainitial2} $\big(\bigcup_{b\in [0,2h]}\Gamma_b(8h)\big)\times [-T+6h,T-6h] \subset \bigcup_{j=1}^{J(0)} \Omega_{0,j}.$ \end{sublemmainitial2} \begin{proof} For any $(x,t)$ in the left-hand set, there exists $j_0$ such that $\widetilde{d}(x,z_{0,j_0})< 5h$ due to the definition of an $h$-net, which indicates that the $\xi$ term in $\psi_{0,j_0}$ (\ref{psi0}) vanishes. Thus $$\psi_{0,j_0}(x,t)=(T-\widetilde{d}(x,z_{0,j_0}))^2-t^2 > (T-5h)^2-(T-6h)^2>5h^2,$$ where we used $T>8h$. This shows that $(x,t)$ is in both $\Omega^0_{0,j_0}$ and $\Omega_{0,j_0}$. \end{proof} \bigskip \textbf{Subsequent Steps.} After the initial step, the reference set is moved to $\Gamma_{h}(8h)$ and unique continuation is propagated up to $\Gamma_{2h}(8h)$. Let $\{z_{1,j}\}$ be an $h$-net in $\Gamma_h(10h)\subset M_h$ with respect to $d_h$. Note that here the range of the $j$ index is different from that of the $j$ index in the initial step, and a precise notation would be $\{z_{1,j}\}_{j=1}^{J(1)}$. We omit this dependence on the step number to keep the notations short. Set $T_1=T-6h$ and $\rho_0=\min\{i_0/2,r_0/2,r_g/4,\pi/6K_1\}$. We divide into Case 1 and Case 2 depending on whether $T$ is larger than $\rho_0$. \begin{figure}[h] \includegraphics[scale=0.5]{Figure1} \caption{Domains for Case 1 or the first step in Case 2. Enclosed by the red solid lines is the domain we work in, and its boundary consists of two disjoint parts. This domain never reaches outside distance $\rho_0$, which is marked by the upper red dotted line. The blue dashed line $\Gamma_2$ is the reference set for the second step in Case 2.} \label{figure1} \end{figure} \medskip \textbf{Case 1:} $T\leqslant \rho_0=\min\big\{i_0/2,r_0/2,r_g/4,\pi/6K_1\big\}$. \smallskip For any $(x,t)\in M\times [-T_1,T_1]$, we define the following $C^{2,1}$ functions \begin{equation}\label{psi} \psi_j(x,t)=\bigg(\Big(1-\xi\big(d(x,\partial M)\big)\Big)T_1-d_h^s(x,z_{1,j})\bigg)^2-t^2, \end{equation} and consider the domains\footnote{\,the connected component characterized by $\big(1-\xi(d(x,\partial M))\big)T_1-d_h^s(x,z_{1,j})>0$.} \begin{equation}\label{Omega0} \Omega^0_{j} =\big\{(x,t)\in M\times [-T_1,T_1]: \psi_j(x,t) > 8T^2 h \big\}-\{x: d_h^s(x,z_{1,j})\leqslant \frac{h}{2}\}\times [-T_1,T_1].
\end{equation} Observe that $\xi(d(x,\partial M))<1$ in $\Omega_j^0$ and hence $\Omega_j^0$ never intersects $\partial M$ at any time. For any $(x,t)\in \Omega_{j}^0$, we have $h/2<d_h^s(x,z_{1,j})<T_1\leqslant \rho_0-6h$ by definition. Then Lemma \ref{dhs}(4) indicates that $h/4<d_h(x,z_{1,j})<\min\{i_0/2,r_0/2,\pi/6K_1\}$, and hence Lemma \ref{dd} applies. Then we define \begin{equation}\label{Omegajcase1} \Omega_j=\big\{(x,t)\in \Omega^0_j-\cup_j \overline{\Omega}_{0,j}: \psi_j(x,t) > 9T^2 h \big\}. \end{equation} Now we prove that the conditions assumed in Theorem \ref{global} are satisfied for $\psi_{j}$, $\Omega_{j}^0$, $\Omega_{j}$, $\psi_{max}=(T_1-3h/4)^2$, together with relevant functions and domains in the initial step. The relevant domains are illustrated in Figure \ref{figure1}. \medskip First we show that $\psi_j$ is non-characteristic at any $(x,t)\in \Omega^0_j$. For $x\in M-M_h$, $$\nabla_x\psi_j=2\Big( \big(1-\xi(d(x,\partial M))\big)T_1-d_h^s(x,z_{1,j})\Big)\big(-\xi^{\prime}T_1\nabla_x d(x,\partial M) - \nabla_x d_h^s(x,z_{1,j})\big).$$ Note that $\xi^{\prime}$ is evaluated at $d(x,\partial M)$ in the formula above. For $x\in M-M_h$ with $d(x,\partial M_h)\geqslant a_T h^3$, the vectors $\nabla_x d_h(x,z_{1,j})$ and $\nabla_x d_h^s(x,z_{1,j})$ differ only by a term of size $C(n,K_1,K_2) h^2$ due to (\ref{bdhcloseness}). In particular, $\langle \nabla_x d_h(x,z_{1,j}),\nabla_x d_h^s(x,z_{1,j})\rangle >0$ for sufficiently small $h$ depending on $n,K_1,K_2$. Hence by the definition of $d_h$ (\ref{dh}), \begin{equation*}\label{opposite} \langle \nabla_x d(x,\partial M),\nabla_x d_h^s(x,z_{1,j})\rangle=-h\langle \nabla_x d_h(x,z_{1,j}),\nabla_x d_h^s(x,z_{1,j})\rangle<0. \end{equation*} Then by Lemma \ref{dd} and $\xi^{\prime}\leqslant 0$, we have $$\big|-\xi^{\prime}T_1\nabla_x d(x,\partial M) - \nabla_x d_h^s(x,z_{1,j})\big|\geqslant |\nabla_x d_h^s(x,z_{1,j})|>1-2h.$$ For $x\in M-M_h$ with $d(x,\partial M_h)< a_T h^3$, we have $|\xi^{\prime}(d(x,\partial M))|< 3a_T^2 h^{3}\leqslant 3T^{-1} h^3$ at such points by definitions (\ref{xidef}) and (\ref{aT}). Therefore for any $x\in M-M_h$ and sufficiently small $h$, we have \begin{eqnarray}\label{dpsi} |\nabla_x\psi_j|&>& 2|(1-\xi)T_1-d_h^s|(1-2h-3h^3) \nonumber \\ &>& 2|(1-\xi)T_1-d_h^s|(1-3h). \end{eqnarray} On the other hand, if $x\in M_h$, then the $\xi$ term vanishes and the estimate above holds. Hence for any $(x,t)\in\Omega^0_j$, \begin{eqnarray}\label{ppsi} p \big((x,t),\nabla \psi_j \big) &=&|\nabla_x \psi_j|^2-|\partial_t \psi_j|^2 \nonumber\\ &>& 4\big((1-\xi)T_1-d_h^s \big)^2(1-3h)^2-4t^2 \nonumber \\ &>& 4\psi_j(x,t) - 24T^2 h > 8T^2 h. \end{eqnarray} This shows that $\psi_j$ is non-characteristic at any $(x,t)\in\Omega^0_j$. It is straightforward to show the connectedness of $(\cup_j \overline{\Omega}_j) \cup (\cup_j\overline{\Omega}_{0,j})$ in the same way as we did for $\cup_j\Omega_{0,j}$ in the initial step. The other conditions assumed in Theorem \ref{global} follow from Sublemma \ref{sublemma0} below and Sublemma \ref{sublemmainitial2}. \begin{sublemma0}\label{sublemma0} For sufficiently small $h<1/8$ depending on $K_1$, we have $$\emptyset\neq \big\{(x,t)\in \Omega_j^0: \psi_j(x,t)>(T_1-\frac{3}{4}h)^2 \big\}\subset \big(\bigcup_{b\in [0,2h]}\Gamma_b(8h)\big)\times [-T_1,T_1],$$ and $dist_{\widetilde{M}\times \mathbb{R}}(\partial \Omega^0_j,\Omega_j)>0$. \end{sublemma0} \begin{proof} The non-emptiness follows from the definition.
For any $(x,t)$ in the left-hand set, we know $d_h^s(x,z_{1,j})<3h/4$ by definition. Hence it suffices to show that \begin{equation}\label{sublemma0d} \big\{x: d_h^s(x,z_{1,j})\leqslant \frac{3}{4}h \big\}\subset \bigcup_{b\in [0,2h]}\Gamma_b(8h). \end{equation} For any $x$ in the left-hand set in (\ref{sublemma0d}), Lemma \ref{dhs}(4) indicates that $d_h(x,z_{1,j})<h$ and hence $\rho(x)<2h$. This checks the condition on $\rho(x)$ in (\ref{Gammabh}). We proceed to check the rest of the conditions in (\ref{Gammabh}). If $x\in M_h$, then by Lemma \ref{distances}, \begin{eqnarray*} d_{\partial M}(x^{\perp},z_{1,j}^{\perp})&\leqslant& (1+15K_1 h)\widetilde{d}(x,z_{1,j}) \\ &\leqslant& (1+15K_1 h)d_h(x,z_{1,j})<h(1+15K_1 h)\, . \end{eqnarray*} If $x\in M-M_h$, then $d_h(x^{\perp_h},z_{1,j})<d_h(x,z_{1,j})<h$ by definition (\ref{dh}). Hence, \begin{eqnarray*} d_{\partial M}(x^{\perp},z_{1,j}^{\perp}) &=&d_{\partial M}((x^{\perp_h})^{\perp},z_{1,j}^{\perp}) \\ &\leqslant& (1+15K_1 h)d_h(x^{\perp_h},z_{1,j})<h(1+15K_1 h)\, , \end{eqnarray*} where we used the fact that $(x^{\perp_h})^{\perp}=x^{\perp}$. Therefore in either case, for sufficiently small $h$ depending only on $K_1$, we have $d_{\partial M}(x^{\perp},z_{1,j}^{\perp})<2h$. Then the fact that $d_{\partial M}(z_{1,j}^{\perp},\partial \Gamma)\geqslant 10h$ yields $x^{\perp}\in \Gamma$ and $d_{\partial M}(x^{\perp},\partial \Gamma)> 8h$. This completes the proof of (\ref{sublemma0d}) and consequently the first statement of the sublemma. For the second statement, it suffices to prove $\overline{\Omega}_j \subset \Omega_j^0$. For any $(x,t)\in \overline{\Omega}_j$, clearly we have $\psi_j(x,t)\geqslant 9T^2 h>8T^2h$ and $(x,t)\notin \cup_{j}\Omega_{0,j}$ by definition (\ref{Omegajcase1}). To show $(x,t)\in \Omega_j^0$, we only need to show $(x,t)\notin \{x:d_h^s(x,z_{1,j})\leqslant h/2\}\times [-T_1,T_1]$. This is a direct consequence of the fact that a larger cylinder $\{x:d_h^s(x,z_{1,j})\leqslant 3h/4\}\times [-T_1,T_1]$ is strictly contained in the open set $\cup_{j}\Omega_{0,j}$, due to (\ref{sublemma0d}) and Sublemma \ref{sublemmainitial2}. An explicit lower bound for the distance between their boundaries is estimated in Lemma \ref{mindistance}. \end{proof} \smallskip \noindent\textbf{Error estimate for Case 1.} We prove that $\overline{\Omega}=(\cup_j \overline{\Omega}_j) \cup (\cup_j\overline{\Omega}_{0,j})$ almost covers the domain of influence in the original manifold $M$. More precisely, we prove that there exists $C^{\prime}=C^{\prime}(T,K_1)$ such that $\Omega(C^{\prime}h)\subset \overline{\Omega}$. Since $\Omega(C^{\prime}h)\subset M\times [-T,T]$, it suffices to show that $M\times [-T,T]-\overline{\Omega}\subset M\times [-T,T]-\Omega(C^{\prime}h)$. For any $(x,t)\in M\times [-T,T]-\overline{\Omega}$, by the definitions (\ref{psi}), (\ref{Omega0}), (\ref{Omegajcase1}), we know that one of the following two situations must happen:\\ (1) $d(x, \partial M)< h$; \\ (2) $x\in M_h$ and $d_h^s(x,z_{1,j})> T_1-\sqrt{t^2+9T^2 h}$ for any $z_{1,j}$. \smallskip We analyze these two situations separately as follows. \noindent \textbf{(1)} By virtue of Sublemma \ref{sublemmainitial2} and the definition (\ref{Gammabh}), the situation (1) implies that $x^{\perp}\notin \Gamma$, or $x^{\perp}\in \Gamma$ and $d_{\partial M}(x^{\perp},\partial \Gamma)<8h$, or $|t|> T-6h$. The condition $x^{\perp}\notin \Gamma$ indicates that $d(x,\partial M-\Gamma)<h$.
If $x^{\perp}\in \Gamma$ and $d_{\partial M}(x^{\perp},\partial \Gamma)<8h$, then by the triangle inequality, $$d(x,\partial\Gamma)\leqslant d(x,x^{\perp})+d(x^{\perp},\partial \Gamma)\leqslant h+d_{\partial M}(x^{\perp},\partial \Gamma)<9h,$$ which yields $d(x,\partial M-\Gamma)<9h$ due to $\partial \Gamma\subset \partial M-\Gamma$. If $|t|>T-6h$, then the following inequality is trivially satisfied: $$T-|t|-\sqrt{6h}<6h-\sqrt{6h}<0\leqslant d(x,\Gamma).$$ Note that if $\Gamma=\partial M$, the first two possibilities cannot occur and hence only the last inequality above is relevant in the first situation. \noindent \textbf{(2)} By Lemma \ref{dhs}(4), the situation (2) implies that $d_h(x,z_{1,j})> T_1-|t|-3T\sqrt{h}-2h^2$ for $x\in M_h$ and any $z_{1,j}$. Since $\{z_{1,j}\}$ is an $h$-net in $\Gamma_{h}(10h)$ with respect to $d_h$, we have $$d_h(x,\Gamma_{h}(10h))> T_1-|t|-3T\sqrt{h}-h-2h^2.$$ Then we apply Lemma \ref{distances} after replacing $M,\widetilde{M}$ with $M_h,M$: $$d(x,\Gamma_{h}(10h))(1+6K_1 h)\geqslant d_h(x,\Gamma_{h}(10h))>T_1-|t|-3T\sqrt{h}-h-2h^2,$$ where we used the fact that the second fundamental form of $\partial M_h$ is bounded by $2K_1$ due to Lemma \ref{riccati}. Hence by the triangle inequality, $$d(x,\Gamma_0(10h))>(T_1-|t|-3T\sqrt{h}-h-2h^2)(1+6K_1 h)^{-1}-h.$$ For any $y\in \Gamma-\Gamma_0(10h)$, $y$ lies in the boundary normal neighborhood of $\partial \Gamma$ in $\Gamma$ due to $10h<i_b(\overline{\Gamma})$. Hence $d(y,\Gamma_0(10h))\leqslant d_{\partial M}(y,\Gamma_0(10h))\leqslant 10h$. Then, \begin{eqnarray*} d(x,y)&\geqslant& d(x,\Gamma_0(10h))-d(y,\Gamma_0(10h)) \\ &>& (T-|t|-3T\sqrt{h}-7h-2h^2)(1+6K_1 h)^{-1}-11h, \end{eqnarray*} where we used $T_1=T-6h$. Hence we arrive at \begin{equation*}\label{type2} d(x,\Gamma) > T-|t|-C(T,K_1)\sqrt{h}. \end{equation*} Finally, combining these two situations, we have proved that $(x,t)\in M\times [-T,T]-\Omega(Ch)$ for $C=\max\{C(T,K_1)^2,9\}$ by definition (\ref{Omegaht}). Therefore, there exists $C^{\prime}=C^{\prime}(T,K_1)$ such that $\Omega(C^{\prime}h)\subset \overline{\Omega}$, and a stability estimate can be obtained on $\Omega(C^{\prime}h)$ from Theorem \ref{global}. The stability estimate will be derived at the end of the proof. \bigskip \textbf{Case 2:} $T>\rho_0=\min\big\{i_0/2,r_0/2,r_g/4,\pi/6K_1 \big\}$. \begin{figure}[h] \includegraphics[scale=0.5]{Figure2} \caption{Domains for the second step in Case 2. Enclosed by the red solid lines is the domain we work in. The blue dashed line $\Gamma_3$ is the reference set for the third step. From here, the procedure is entirely done in $M$.} \label{figure2} \end{figure} \smallskip As Lemma \ref{dd} is only valid within the injectivity radius, we define the procedure step by step, so that each step is carried out within the injectivity radius. Recall that $\{z_{1,j}\}$ is an $h$-net in $\Gamma_h(10h)\subset M_h$ with respect to $d_h$, and $T_1=T-6h$. For the first step, we define functions $\psi_{1,j}$ by adding to (\ref{psi}) another term associated with $T_1$: \begin{equation}\label{psi1} \psi_{1,j}(x,t)=\bigg(\Big(1-\xi \big(d(x,\partial M)\big)-\xi \big(\rho_0-d_h^s(x,z_{1,j})\big)\Big)T_1-d_h^s(x,z_{1,j})\bigg)^2-t^2, \end{equation} and consider the domains \begin{equation}\label{Omega01} \Omega^0_{1,j} =\big\{(x,t)\in M\times [-T_1,T_1]: \psi_{1,j}(x,t) > 8T^2 h \big\}-\{x: d_h^s(x,z_{1,j})\leqslant \frac{h}{2}\}\times [-T_1,T_1]. \end{equation} One can compare these definitions with those in Case 1.
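The role of the additional $\xi$ term can be seen as follows (a small verification): on the connected component characterized by $$\Big(1-\xi \big(d(x,\partial M)\big)-\xi \big(\rho_0-d_h^s(x,z_{1,j})\big)\Big)T_1-d_h^s(x,z_{1,j})>0,$$ we necessarily have $\xi \big(\rho_0-d_h^s(x,z_{1,j})\big)<1$ and $d_h^s(x,z_{1,j})<T_1$; as $\xi=1$ at $0$ and $\xi$ is decreasing, this forces $\rho_0-d_h^s(x,z_{1,j})>0$, that is, $d_h^s(x,z_{1,j})<\rho_0\leqslant i_0/2$.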
In particular, the regions $\Omega_{1,j}^0$ stay within half the injectivity radius. The gradient of $\psi_{1,j}$ has the following form: \begin{eqnarray*} \nabla_x\psi_{1,j}&=&2\Big(\big(1-\xi(d(x,\partial M))-\xi(\rho_0-d_h^s(x,z_{1,j}))\big)T_1-d_h^s(x,z_{1,j})\Big) \\ &&\big(-\xi^{\prime}T_1\nabla_x d(x,\partial M) +\xi^{\prime}T_1\nabla_x d_h^s(x,z_{1,j})-\nabla_x d_h^s(x,z_{1,j})\big). \end{eqnarray*} The vector part of $\nabla_x \psi_{1,j}$ consists of $\nabla_x d(x,\partial M)$ and $\nabla_x d_h^s(x,z_{1,j})$, the same as in Case 1. Furthermore, the form for the vector part is the same as that in Case 1 up to multiplication by a positive function, since $\xi^{\prime}\leqslant 0$. Hence one obtains the same lower bounds for the length of the gradient and the principal symbol as (\ref{dpsi}) and (\ref{ppsi}). It follows that $\psi_{1,j}$ is non-characteristic in $\Omega^{0}_{1,j}$. We define $\psi_{max,1}$ and $\Omega_{1,j}$ in the same way as in Case 1 (see Figure \ref{figure1}). More precisely, define $\psi_{max,1}=(T_1-3h/4)^2$ and \begin{eqnarray}\label{1other} \Omega_{1,j}=\big\{(x,t)\in \Omega^0_{1,j}-\cup_j \overline{\Omega}_{0,j}: \psi_{1,j}(x,t) > 9T^2 h \big\}. \end{eqnarray} Since (\ref{sublemma0d}) is still valid, Sublemma \ref{sublemma0} holds for $\psi_{1,j},\Omega_{1,j}^0,\Omega_{1,j}$. Hence Theorem \ref{global} applies to the first step. We stop the procedure right after the first step if $T_1-\rho_0-3T\sqrt{h}\leqslant 2h$. For the second step, we need to choose a new set of reference points. Observe that the first step propagates past the level set $\Gamma_2:=\{x\in M_h: d_h(x,\Gamma_{h}(10h))=\rho_0-4h\}$ due to Lemma \ref{dhs}(4) and the procedure stopping criterion $T_1-\rho_0-3T\sqrt{h}>2h$. We choose the new reference points $\{z_{2,j}\}$ as an $h$-net in $\Gamma_2$ with respect to $d_h$. At $\Gamma_2$, the square of the maximal time allowed is $(T_1-\rho_0+4h)^2-9T^2h$, and we set the time range $T_2$ for the second step as $T_2=T_1-\rho_0-3T\sqrt{h}$. The procedure stopping criterion indicates that $T_2>2h$. Then we define the functions $$\psi_{2,j}(x,t)=\bigg(\Big(1-\xi \big(d(x,\partial M)\big)-\xi \big(\rho_0-d_h^s(x,z_{2,j})\big)\Big)T_2-d_h^s(x,z_{2,j})\bigg)^2-t^2.$$ To apply Theorem \ref{global}, we need to ensure that small neighborhoods around the new reference points are contained in the regions already propagated by the unique continuation in the first step. To that end, we define $\psi_{max,2}=(T_2-a_T h)^2$, where $a_T=\min\{1,T^{-1}\}$, and $$\Omega^{0}_{2,j}=\big\{(x,t)\in M\times [-T_2,T_2]: \psi_{2,j}(x,t) > 8T^2h \big\}-\{x: d_h^s(x,z_{2,j})\leqslant \frac{1}{2}a_T h \}\times [-T_2,T_2],$$ $$\Omega_{2,j}=\big\{(x,t)\in\Omega^{0}_{2,j}-\big((\cup_j \overline{\Omega}_{1,j})\cup (\cup_j \overline{\Omega}_{0,j})\big): \psi_{2,j}(x,t) > 9T^2 h \big\}.$$ These domains are illustrated in Figure \ref{figure2}. The specific choice of $\psi_{max,2}$ is justified in Sublemma \ref{sublemma2} below, to ensure that $\emptyset\neq\{(x,t)\in \Omega^{0}_{2,j}: \psi_{2,j}(x,t)>\psi_{max,2}\}\subset (\cup_j \overline{\Omega}_{1,j})\cup (\cup_j \overline{\Omega}_{0,j})$. \smallskip Now we define the remaining steps iteratively. We define the reference sets as $$\Gamma_{i}=\big\{x\in M_h: d_h(x, \Gamma_1)=(i-1)(\rho_0-4h) \big\}, \quad i\geqslant 2,$$ where $\Gamma_1=\Gamma_{h}(10h)\subset M_h$. The reference points $\{z_{i,j}\}$ are defined as an $h$-net in $\Gamma_i$ with respect to $d_h$.
Note that the range of the $j$ index is different for each step $i$, and the notation $\{z_{i,j}\}$ here is short for $\{z_{i,j}\}_{j=1}^{J(i)}$. We define the $C^{2,1}$ functions $\psi_{i,j}$ as follows. $$\psi_{i,j}(x,t)=\bigg(\Big(1-\xi \big(d(x,\partial M)\big)-\xi \big(\rho_0-d_h^s(x,z_{i,j})\big)\Big)T_{i}-d_h^s(x,z_{i,j})\bigg)^2-t^2,$$ where $T_{i}=T_{i-1}-\rho_0-3T\sqrt{h}$ with $T_1=T-6h$. We stop the procedure at the $i$-th step if $T_{i+1}\leqslant 2h$ or $\Gamma_{i+1}=\emptyset$. The regions $\Omega^{0}_{i,j}$ and $\Omega_{i,j}$ for $i\geqslant 2$ are defined as\footnote{\,the connected component characterized by $\big(1-\xi(d(x,\partial M))-\xi(\rho_0-d_h^s(x,z_{i,j}))\big)T_{i}-d_h^s(x,z_{i,j})>0$.} $$\Omega^{0}_{i,j}=\big\{(x,t)\in M\times [-T_i,T_i]:\psi_{i,j}(x,t) >8T^2h \big\}-\{x: d_h^s(x,z_{i,j})\leqslant \frac{1}{2}a_T h\}\times [-T_i,T_i].$$ $$\Omega_{i,j}=\big\{(x,t)\in \Omega^{0}_{i,j}-\cup_{l=0}^{i-1}\cup_{j} \overline{\Omega}_{l,j}: \psi_{i,j}(x,t) > 9T^2 h \big\},$$ where $a_T=\min\{1,T^{-1}\}$ in (\ref{aT}). It follows that $\psi_{i,j}$ is non-characteristic in $\Omega^{0}_{i,j}$ in the same way as for $\psi_{1,j}$. Due to Sublemma \ref{sublemma2} below, Theorem \ref{global} applies with $\psi_{max,i}=(T_i-a_T h)^2$. \begin{figure}[h] \includegraphics[width=8cm, height=10cm]{Figure3} \caption{The procedure of a three-step propagation besides the initial step. The red solid lines enclose the whole region $\Omega=\cup_{i,j}\Omega_{i,j}$ propagated by the unique continuation. The black dotted line represents the optimal region, while the blue dotted line represents the actual region we can estimate.} \label{figure3} \end{figure} \begin{sublemma1}\label{sublemma1} For $i\geqslant 2$ and any $z\in \Gamma_i$, we have $d_h(z,\Gamma_{i-1})=\rho_0-4h$. \end{sublemma1} \begin{proof} Let $z_1\in \Gamma_1$ be a point in $\Gamma_1$ such that $d_h(z,z_1)=d_h(z,\Gamma_1)$. Take a minimizing geodesic of $M_h$ from $z$ to $z_1$; this geodesic intersects $\Gamma_{i-1}$ at some $z_{i-1}\in\Gamma_{i-1}$. This geodesic has length $(i-1)(\rho_0-4h)$, and its segment from $z_{i-1}$ to $z_1$ has length at least $(i-2)(\rho_0-4h)$ by definition. Hence $d_h(z,\Gamma_{i-1})\leqslant d_h(z,z_{i-1})\leqslant \rho_0-4h$. On the other hand, for any $z^{\prime}\in\Gamma_{i-1}$, we have $d_h(z,z^{\prime})\geqslant d_h(z,\Gamma_1)-d_h(z^{\prime},\Gamma_1)=\rho_0-4h$, which shows $d_h(z,\Gamma_{i-1})\geqslant \rho_0-4h$. \end{proof} \begin{sublemma2}\label{sublemma2} For $i\geqslant 2$ and sufficiently small $h<\min\{1/2,T/4\}$ depending on $n,K_1,i_0$, we have $dist_{\widetilde{M}\times \mathbb{R}} (\partial \Omega^{0}_{i,j},\Omega_{i,j})>0$, and $$\emptyset\neq \big\{(x,t)\in \Omega^{0}_{i,j}: \psi_{i,j}(x,t)>(T_i-a_T h)^2 \big\}\subset \cup_{l=0}^{i-1}\cup_j\overline{\Omega}_{l,j}.$$ \end{sublemma2} \begin{proof} We prove the following stronger statement: \begin{equation}\label{dinclusion} \{x: d_h^s(x,z_{i,j}) \leqslant a_T h\}\times [-T_i-h,T_i+h] \subset \cup_{l=0}^{i-1}\cup_j\overline{\Omega}_{l,j}. \end{equation} More precisely, for any $(x,t)$ in the left-hand set, we prove that if $(x,t)\notin \cup_{l=0}^{i-2}\cup_j\overline{\Omega}_{l,j}$, then $(x,t)\in \cup_j \Omega_{i-1,j}$. By Sublemma \ref{sublemma1} and the fact that $\{z_{i-1,j}\}$ is an $h$-net in $\Gamma_{i-1}$, we can find some $z_{i-1,j_0}$ such that $d_h(z_{i,j},z_{i-1,j_0})<\rho_0-3h$.
Then for any $(x,t)$ in the left-hand set in (\ref{dinclusion}), Lemma \ref{dhs}(2) implies that for sufficiently small $h$ depending on $n,K_1,i_0$, \begin{equation}\label{sl} d_h^s(x,z_{i-1,j_0}) < d_h^s(x,z_{i,j})+(\rho_0-3h)(1+CnK_1^2 h^6) < \rho_0-\frac{3}{2}h, \end{equation} which indicates that $\xi(\rho_0-d_h^s(x,z_{i-1,j_0}))$ vanishes. We claim that $(x,t)\in \Omega_{i-1,j_0}$. To prove this, by the definition of $\psi_{i-1,j},\Omega_{i-1,j}$ and the condition that $(x,t)\notin \cup_{l=0}^{i-2}\cup_j\overline{\Omega}_{l,j}$, we only need to show that $$\psi_{i-1,j_0}(x,t)=\Big( \big(1-\xi(d(x,\partial M))\big)T_{i-1}-d_h^s(x,z_{i-1,j_0})\Big)^2-t^2 > 9T^2 h.$$ Since $|t|\leqslant T_i+h$, it is enough to show $$\big(1-\xi(d(x,\partial M))\big)T_{i-1}-d_h^s(x,z_{i-1,j_0}) > T_{i}+h+3T\sqrt{h}.$$ Now since $d_h^s(x,z_{i,j})\leqslant h/T$, by the definition of $d_h$ and Lemma \ref{dhs}(4) we have $$d(x,\partial M)\geqslant h-d_h(x,z_{i,j})h> h-(\frac{h}{T}+\frac{2h^2}{T})h>h-\frac{2h^2}{T},$$ which implies by the definition of $\xi$ (\ref{xidef}), $$\xi(d(x,\partial M))< \xi(h-\frac{2h^2}{T}) = \frac{8 h^3}{T^3}.$$ Since $T_i=T_{i-1}-\rho_0-3T\sqrt{h}$ by definition, we have by (\ref{sl}), \begin{eqnarray*} \big(1-\xi(d(x,\partial M))\big)T_{i-1}-d_h^s(x,z_{i-1,j_0}) &>& T_{i-1}-\xi(d(x,\partial M)) T_{i-1}-\rho_0+\frac{3}{2}h \\ &>& T_i+3T\sqrt{h}+\frac{3}{2}h-\frac{8 h^3}{T^3}T \\ &>& T_i+3T\sqrt{h}+h. \end{eqnarray*} This proves $(x,t)\in \Omega_{i-1,j_0}$ and hence (\ref{dinclusion}). The inclusion (\ref{dinclusion}) shows that $\{x: d_h^s(x,z_{i,j}) \leqslant a_T h/2\}\times [-T_i,T_i]$ is strictly contained in $\cup_{l=0}^{i-1}\cup_j\overline{\Omega}_{l,j}$, which implies that $\overline{\Omega}_{i,j}\subset \Omega_{i,j}^0$. An explicit lower bound for the distance between their boundaries is estimated in Lemma \ref{mindistance}. For the second statement of the sublemma, by (\ref{dinclusion}), \begin{eqnarray*} \{\psi_{i,j}(x,t)>(T_i-a_T h)^2\}\subset \{d_h^s(x,z_{i,j}) < a_T h\}\times (-T_i,T_i) \subset \cup_{l=0}^{i-1}\cup_j\overline{\Omega}_{l,j}. \end{eqnarray*} The non-emptiness follows directly from the definition of $\Omega^{0}_{i,j}$. \end{proof} \smallskip \noindent\textbf{Error estimate for Case 2.} Finally we show that $\overline{\Omega}=\cup_{i\geqslant 0}\cup_{j} \overline{\Omega}_{i,j}$ almost covers the domain of influence in the original manifold $M$ (see Figure \ref{figure3}). More precisely, we prove that there exists $C^{\prime}=C^{\prime}(T,D,K_1,i_0,r_0,r_g)$ such that $\Omega(C^{\prime}h)\subset \overline{\Omega}$. The idea of the proof is similar to that for Case 1, and we omit the parts of the proof identical to Case 1. For any $(x,t)\in M\times [-T,T]-\overline{\Omega}$, one of the following two situations must happen:\\ (1) $d(x, \partial M)< h$; \\ (2) $x\in M_h$ and $d_h^s(x,z_{i,j})> \big(1-\xi(\rho_0-d_h^s(x,z_{i,j}))\big)T_i-\sqrt{t^2+9T^2 h}$ for any $z_{i,j}\,(i\geqslant 1)$. \\ The situation (1) implies that $d(x,\partial M-\Gamma)<9h$ or $d(x,\Gamma)>T-|t|-\sqrt{6h}$ by the same argument as for Case 1. \smallskip Now we focus on the situation (2) when $x\in M_h$. Lemma \ref{dhs}(4) yields that for any $z_{i,j}\,(i\geqslant 1)$, \begin{equation}\label{errorcase2} d_h(x,z_{i,j})> \big(1-\xi(\rho_0-d_h^s(x,z_{i,j}))\big)T_i-|t|-3T\sqrt{h}-2h^2. \end{equation} Let $z_1\in \Gamma_1$ be a point in $\Gamma_1$ such that $d_h(x,z_1)=d_h(x,\Gamma_1)$, and take a minimizing geodesic of $M_h$ from $x$ to $z_1$.
Observe that this minimizing geodesic intersects with each $\Gamma_i$ at most once; otherwise it would fail to minimize the distance $d_h(x,\Gamma_1)$. Furthermore, due to the continuity of the distance function $d_h(\cdot,\Gamma_1)$, if the minimizing geodesic intersects with $\Gamma_i$, then it intersects with $\Gamma_l$ for all $1\leqslant l<i$. Suppose the minimizing geodesic intersects with $\Gamma_i$ at $z_i\in\Gamma_i$ for $1\leqslant i\leqslant m$, and the intersection does not occur at any nonempty $\Gamma_i$ for $i>m$. Then by Sublemma \ref{sublemma1}, we have \begin{eqnarray}\label{dhGamma1} d_h(x,\Gamma_1)=d_h(x,z_1)&=&d_h(x,z_m)+\sum_{i=1}^{m-1}d_h(z_i,z_{i+1}) \nonumber \\ &\geqslant& d_h(x,z_{m})+(m-1)(\rho_0-4h). \end{eqnarray} We claim that $d_h(x,z_m)\leqslant \rho_0-3h$. Suppose not; then by the inequality above, we have $d_h(x,\Gamma_1)>m(\rho_0-4h)$. This implies that $\Gamma_{m+1}\neq\emptyset$ and any minimizing geodesic from $x$ to $\Gamma_1$ must intersect with $\Gamma_{m+1}$, which is a contradiction. Since $\Gamma_m\neq\emptyset$ by assumption, the step $m$ of our procedure takes place as long as $T_m>2h$ by our stopping criterion. However, if $T_m\leqslant 2h$, the procedure stops at some previous step. \smallskip \noindent \textbf{(i)} $T_m>2h$. On $\Gamma_m$, we can find some $z_{m,j}$ such that $d_h(z_m,z_{m,j})<h$ since $\{z_{m,j}\}$ is an $h$-net. Then it follows that $d_h(x,z_{m,j})<\rho_0-2h$. Lemma \ref{dhs}(4) indicates that $d_h^s(x,z_{m,j})<\rho_0-h$. Hence $\xi(\rho_0-d_h^s(x,z_{m,j}))$ in (\ref{errorcase2}) vanishes. Then by (\ref{dhGamma1}), \begin{eqnarray*} d_h(x,\Gamma_1)&>& d_h(x,z_{m,j})-h+(m-1)(\rho_0-4h) \\ &>& T_m-|t|-3T\sqrt{h}-h-2h^2+(m-1)(\rho_0-4h) \\ &=& T_1-|t|-3mT\sqrt{h}-h-4(m-1)h-2h^2, \end{eqnarray*} where we used $T_m=T_1-(m-1)(\rho_0+3T\sqrt{h})$ by the definition of $T_i$. \smallskip \noindent \textbf{(ii)} $T_m\leqslant 2h$. From $T_m=T_1-(m-1)(\rho_0+3T\sqrt{h})$, we have $$T_1\leqslant (m-1)(\rho_0+3T\sqrt{h})+2h.$$ Hence by (\ref{dhGamma1}), we still get an estimate similar to the previous situation: \begin{eqnarray*} d_h(x,\Gamma_1)\geqslant (m-1)(\rho_0-4h)&\geqslant& T_1-3(m-1)T\sqrt{h}-2h-4(m-1)h \\ &\geqslant& T_1-|t|-3(m-1)T\sqrt{h}-2h-4(m-1)h. \end{eqnarray*} \smallskip From here, one can follow the rest of the estimates for Case 1 and obtain $$d(x,\Gamma)>T-|t|-C(m,T,K_1)\sqrt{h}.$$ Combining these situations, we have proved that $(x,t)\in M\times [-T,T]-\Omega(Ch)$ for $C=\max\{C(m,T,K_1)^2,9\}$ by definition (\ref{Omegaht}). Therefore, there exists $C^{\prime}=C^{\prime}(m,T,K_1)$ such that $\Omega(C^{\prime}h)\subset \overline{\Omega}$. The only part left is to estimate the upper bound for $m$. By assumption, $\Gamma_m\neq\emptyset$, and hence $\Gamma_m$ must occur before $d_h(\cdot,\Gamma_1)$ exceeds the diameter of $M_h$. Due to Lemma \ref{distances} for $M_h,M$, the diameter of $M_h$ is bounded by $6 D/5$ for sufficiently small $h$ depending only on $K_1$. Thus by the definition of $\Gamma_i$, we have $$m\leqslant\big[\frac{6D}{\rho_0}\big]+1,$$ where $\rho_0=\min\big\{i_0/2,r_0/2,r_g/4,\pi/6K_1 \big\}$ depends only on $n,\|R_M\|_{C^1},\|S\|_{C^1},i_0,r_0$. \medskip \noindent\textbf{Stability estimate.} With all the functions and domains we have constructed, the only part left is to apply Theorem \ref{global}.
From the error estimate above, we have proved that there exists $C^{\prime}=C^{\prime}(T,D,K_1,i_0,r_0,r_g)$ such that $\Omega(C^{\prime}h)\subset \overline{\Omega}=\cup_{i\geqslant 0}\cup_{j} \overline{\Omega}_{i,j}$, where $r_g$ is a constant depending only on $n,\|R_M\|_{C^1},\|S\|_{C^1},i_0$. Recall that $\widetilde{u}$ is an extension of $u$ to $\widetilde{M}$ defined by (\ref{extensionu}). Theorem \ref{global} yields the following stability estimate on $\overline{\Omega}$ and hence on $\Omega(C^{\prime}h)$. $$\|u\|_{L^2(\Omega(C^{\prime}h))}\leqslant \|\widetilde{u}\|_{L^2(\overline{\Omega})}\leqslant C \frac{\|\widetilde{u}\|_{H^1(\Omega^0)}}{\Big(\log \big(1+\frac{\|\widetilde{u}\|_{H^1(\Omega^0)}}{\|P\widetilde{u}\|_{L^2(\Omega^0)}}\big)\Big)^{\frac{1}{2}}}, $$ where $\Omega^0=\cup_{i\geqslant 0}\cup_{j} \Omega_{i,j}^0$. During the initial step, we have shown $\Omega^0_{0,j}\subset M\cup \Omega_{\Gamma}$, and $\Omega^0_{i,j}$ is defined in $M\times [-T,T]$ for all $i\geqslant 1$. Hence $\Omega^0\subset (M\cup \Omega_{\Gamma})\times [-T,T]$. Since the function $x\mapsto x(\log (1+x))^{-1/2}$ is non-decreasing on $[0,+\infty)$, we have $$\|u\|_{L^2(\Omega(C^{\prime}h))}\leqslant C \frac{\|\widetilde{u}\|_{H^1((M\cup\Omega_{\Gamma})\times[-T,T])}}{\Big(\log \big(1+\frac{\|\widetilde{u}\|_{H^1((M\cup\Omega_{\Gamma})\times [-T,T])}}{\|P\widetilde{u}\|_{L^2((M\cup\Omega_{\Gamma})\times[-T,T])}}\big)\Big)^{\frac{1}{2}}}.$$ Therefore, the desired stability estimate follows from Lemma \ref{extension} after replacing $h$ by $h/C^{\prime}$. The number of domains in each step does not affect the estimate, as long as the relevant quantities of $\psi_{i,j}$ are uniformly bounded. The dependency of the constant is calculated in Appendix \ref{constants}. The second statement of the theorem is due to the following interpolation formula for bounded domains with locally Lipschitz boundary: $$\|u\|_{H^{1-\theta}}\leqslant \|u\|_{L^2}^{\theta}\|u\|_{H^1}^{1-\theta},\quad \theta\in (0,1).$$ This concludes the proof of Theorem \ref{main1}. \begin{remark} If we define $d_h$ (\ref{dh}) with $h^{-2}$ scaling in the boundary neighborhood and require $h<T^{-1}$, then the level sets of $\psi_j$ (\ref{psi}) automatically do not intersect with $\partial M$ even without the $\xi\big(d(x,\partial M)\big)$ term. However, the extra condition $h<T^{-1}$ is not ideal, and we want to choose the parameter $h$ as large as possible for a large $T$, considering that the stability estimate blows up exponentially as $h$ decreases. In addition, we frequently used the number $a_T=\min\{1,T^{-1}\}$ for exactly the same purpose. \end{remark} \begin{remark} In the definition of $\Omega_{i,j}^0$ for Case 2, we removed the region where points are $a_T h/2$-close to the reference points, and this region is contained in the set propagated by the unique continuation from previous steps by Sublemma \ref{sublemma2}. The $h^{-1}$ scaling in the definition of $d_h$ (\ref{dh}) directly affects the order of this number $a_T h/2$. Without the scaling, this number would be of order $h^2$. \end{remark} \smallskip \subsection{Applications of the quantitative unique continuation}\hfill \medskip Due to the trace theorem, Theorem \ref{main1} yields the following estimate on the initial value. \begin{initial}\label{initial} Let $M\in \mathcal{M}_n(D,K_1,K_2,i_0,r_0)$ be a compact Riemannian manifold with smooth boundary $\partial M$, and let $\Gamma$ (possibly $\Gamma=\partial M$) be a connected open subset of $\partial M$ with smoothly embedded boundary.
Suppose $u\in H^2(M\times[-T,T])$ is a solution of the wave equation $Pu=0$. Assume the Cauchy data satisfy $$u|_{\partial M\times [-T,T]}\in H^{2,2}(\partial M \times [-T,T]),\quad \frac{\partial u}{\partial \mathbf{n}} \in H^{2,2}(\partial M \times [-T,T]).$$ If $$\|u\|_{H^1(M\times[-T,T])}\leqslant \Lambda_0,\quad \|u\|_{H^{2,2}(\Gamma\times [-T,T])}+\big\|\frac{\partial u}{\partial \mathbf{n}}\big\|_{H^{2,2}(\Gamma\times [-T,T])}\leqslant \varepsilon_0,$$ then for sufficiently small $h$, we have $$\|u(x,0)\|_{L^2(\Omega(2h,0,3))} \leqslant C_3^{\frac{1}{3}}h^{-\frac{2}{9}}\exp(h^{-C_4 n}) \frac{\Lambda_0+h^{-\frac{1}{2}}\varepsilon_0}{\big(\log (1+h+h^{\frac{3}{2}}\frac{\Lambda_0}{\varepsilon_0})\big) ^{\frac{1}{6}}}\, ,$$ where $C_3,C_4$ are constants independent of $h$, and their dependency on geometric parameters is stated in Theorem \ref{main1}. For a fixed $t\in [-T,T]$, the domain $\Omega(h,t,m)$ is defined as follows: \begin{equation}\label{Omegahtm} \Omega(h,t,m)=\big\{x\in M: T-|t|-d(x,\Gamma) >h^{\frac{1}{m}},\; d(x,\partial M-\Gamma)>h^{\frac{1}{m}}\big\}. \end{equation} \end{initial} \begin{proof} Observe that $\Omega(2h,0,3)\times (-t_0,t_0)\subset \Omega(h)$ with $t_0=(\sqrt[3]{2}-1)\sqrt[3]{h}$ by definition. Then we take $\theta=1/3$ in Theorem \ref{main1} and apply the trace theorem (Theorem 6.6.1 in \cite{BL}): there exists a constant $C$ such that \begin{eqnarray*} \|u(x,0)\|_{L^2(\Omega(2h,0,3))} &\leqslant& C t_0^{-\frac{2}{3}}\|u(x,t)\|_{H^{\frac{2}{3}}(\Omega(2h,0,3)\times(-t_0,t_0))} \\ &\leqslant& 4C h^{-\frac{2}{9}}\|u(x,t)\|_{H^{\frac{2}{3}}(\Omega(h))}. \end{eqnarray*} \end{proof} \begin{remark} Note that the constant $C_3$ in Corollary \ref{initial} is not exactly the same as the constant $C_3$ in Theorem \ref{main1}. However, they depend on the same set of geometric parameters. In this paper, we keep the same notation for constants if operations do not introduce any new parameter. \end{remark} The following independent result gives an explicit estimate on the Hausdorff measure of the boundary of the domain of influence, which shows that the region not covered by Corollary \ref{initial} has a uniformly controlled small volume. \begin{area}\label{area} Let $M$ be a compact Riemannian manifold with smooth boundary. For any measurable subset $\Gamma\subset \partial M$ and any $t\geqslant 0$, the following explicit estimate holds: $$vol_{n-1} \big(\partial M(\Gamma,t)\big)<C_5 \big(n,\|R_M\|_{C^1},\|S\|_{C^1},i_0,vol(M),vol(\partial M) \big),$$ where $M(\Gamma,t)$ is defined in (\ref{def-Mt}). As a consequence, the estimate above implies the following volume estimate due to the Co-Area formula. Namely, for any $t,\gamma\geqslant 0$, we have $$vol_n \big(M(\Gamma,t+\gamma)-M(\Gamma,t)\big)< C_5 \big(n,\|R_M\|_{C^1},\|S\|_{C^1},i_0,vol(M),vol(\partial M) \big)\gamma.$$ \end{area} \begin{proof} Denote the level set of the distance function by $\Sigma_t=\{x \in \textrm{int}(M): d(x,\Gamma)=t\}$. For any point in $\Sigma_t$, there exists a minimizing geodesic from the point to the subset $\Gamma$. These minimizing geodesics do not intersect with $\Sigma_t$ except at the initial points by definition. Moreover, they do not intersect each other in the interior of $M$, as geodesics would fail to minimize distance past a common interior point.
Define $l(x)$ to be the infimum of the distances between a point $x\in \Sigma_t$ and the first intersection points with the boundary along all minimizing geodesics from $x$ to $\Gamma$, and to be infinity if no minimizing geodesic from $x$ to $\Gamma$ intersects $\partial M-\overline{\Gamma}$. For sufficiently small $\epsilon>0$ chosen later, denote $$\Sigma_t(\epsilon)=\big\{x\in \Sigma_t: \frac{\epsilon}{2}<l(x)\leqslant \epsilon \big\}.$$ Denote by $U(\Sigma_t(\epsilon))$ the set of all points on all minimizing geodesics from $\Sigma_t(\epsilon)$ to $\Gamma$ and consider the set $U(\Sigma_t(\epsilon))\cap \Sigma_{t^{\prime}}$ for $t^{\prime}\in [t-\epsilon/4,t)$. Clearly the set $U(\Sigma_t(\epsilon))\cap \Sigma_{t^{\prime}}$ does not intersect with $\partial M$ by definition. Furthermore, it is contained in the $C(n,\|R_M\|_{C^1},\|S\|_{C^1})\epsilon^2$-neighborhood of the boundary $\partial M$ if $\epsilon$ is not greater than $\epsilon_0(n,\|R_M\|_{C^1},\|S\|_{C^1},i_0)$, due to Lemma \ref{geodiff}. Since the distance function $d(\cdot,\Gamma)$ is Lipschitz with the Lipschitz constant 1, it is differentiable almost everywhere by Rademacher's theorem and its gradient has length at most $1$. The existence of minimizing geodesics from $\Gamma$ yields that the gradient of $d(\cdot,\Gamma)$ has length at least 1 wherever it exists. Hence the gradient of $d(\cdot,\Gamma)$ has unit length almost everywhere. We apply the Co-Area formula (e.g. Theorem 3.1 in \cite{F}) to the sets $U(\Sigma_t(\epsilon))\cap \Sigma_{t^{\prime}}$ with the distance function $d(\cdot,\Gamma)$. Then by Lemma \ref{areaLipschitz} and Lemma \ref{geodiff}, we have \begin{eqnarray*} \frac{\epsilon}{4} vol_{n-1}(\Sigma_t(\epsilon)) &<& 5^{n-1} \int_{t-\epsilon/4}^t vol_{n-1}\big(U(\Sigma_t(\epsilon))\cap \Sigma_{t^{\prime}}\big) dt^{\prime} \\ &=& 5^{n-1} vol_n \Big( \bigcup_{t^{\prime}\in [t-\epsilon/4,t)} \big(U(\Sigma_t(\epsilon))\cap \Sigma_{t^{\prime}}\big) \Big) \\ &<& 5^{n-1} C(n,\|R_M\|_{C^1},\|S\|_{C^1})\epsilon^2 vol(\partial M). \end{eqnarray*} Then for $\epsilon\leqslant \epsilon_0$ we get $$vol_{n-1}(\Sigma_t(\epsilon))<C(n,\|R_M\|_{C^1},\|S\|_{C^1},vol(\partial M))\epsilon.$$ Hence we have an estimate on the measure of $U_t(\epsilon_0):=\{x\in \Sigma_t: l(x)\leqslant \epsilon_0\}$: \begin{eqnarray*} vol_{n-1}(U_t(\epsilon_0))&=&vol_{n-1} \big(\bigcup_{k=0}^{\infty}\Sigma_t(\epsilon_0 2^{-k}) \big)=\sum_{k=0}^{\infty} vol_{n-1} \big(\Sigma_t(\epsilon_0 2^{-k}) \big) \\ &<& C(n,\|R_M\|_{C^1},\|S\|_{C^1},vol(\partial M))\epsilon_0\sum_{k=0}^{\infty}2^{-k} \\ &<& C(n,\|R_M\|_{C^1},\|S\|_{C^1},i_0,vol(\partial M)). \end{eqnarray*} \indent As for the other part $\Sigma_t-U_t(\epsilon_0)$, if $t>\epsilon_0$, the minimizing geodesics from the points of $\Sigma_t-U_t(\epsilon_0)$ to $\Gamma$ do not intersect the boundary within distance $\epsilon_0$. By the same argument as above, we can control the measure in question in terms of the volume of the manifold: $$ \frac{\epsilon_0}{2} vol_{n-1}(\Sigma_t-U_t(\epsilon_0)) < 5^{n-1}vol(M),$$ which implies that $$vol_{n-1}(\Sigma_t-U_t(\epsilon_0)) < C(n,\|R_M\|_{C^1},\|S\|_{C^1},i_0,vol(M)).$$ Since the part of $\partial M(\Gamma,t)$ on the boundary is bounded by $vol(\partial M)$, the measure estimate for $\partial M(\Gamma,t)$ follows. If $t\leqslant \epsilon_0$, the domain of influence is contained in the boundary normal neighborhood of width $t$.
The minimizing geodesics from points of $\Sigma_t-U_t(\epsilon_0)$ to $\Gamma$ do not intersect the boundary within distance $t/2$. Then by the same argument as before, we have $$\frac{t}{2} vol_{n-1}(\Sigma_t-U_t(\epsilon_0))< 5^{n-1}vol(\partial M)t,$$ which completes the measure estimate for $\partial M(\Gamma,t)$. The $n$-dimensional volume estimate directly follows from the measure estimate for $\partial M(\Gamma,t)$ and the Co-Area formula. \end{proof} Due to the Sobolev embedding theorem and Corollary \ref{initial}, we next prove Proposition \ref{wholedomain}. \begin{proof}[Proof of Proposition \ref{wholedomain}] Due to Corollary \ref{initial}, we only need an estimate in $M(\Gamma,T)-\Omega(2h,0,3)$. By the definition (\ref{Omegahtm}) and Proposition \ref{area}, we have $$vol\big(M(\Gamma,T)-\Omega(2h,0,3)\big)<vol\big(M(\Gamma,T)-M(\Gamma,T-(2h)^{\frac{1}{3}})\big)+vol(\partial M)(2h)^{\frac{1}{3}}<C h^{\frac{1}{3}}.$$ Since $u(x,0)\in H^1(M)$, by the Sobolev embedding theorem we have for $n\geqslant 3$, $$\|u(x,0)\|_{L^{\frac{2n}{n-2}}(M)}\leqslant C\|u(x,0)\|_{H^1(M)}\leqslant C\Lambda,$$ which implies that $$\|u(x,0)\|_{L^2(M(\Gamma,T)-\Omega(2h,0,3))}\leqslant \|u(x,0)\|_{L^{\frac{2n}{n-2}}(M)} \Big(vol\big(M(\Gamma,T)-\Omega(2h,0,3)\big)\Big)^{\frac{1}{n}} \leqslant C\Lambda h^{\frac{1}{3n}}. $$ For $n=2$, we have $$\|u(x,0)\|_{L^6(M)}\leqslant C\|u(x,0)\|_{W^{1,\frac{3}{2}}(M)}\leqslant C\Lambda,$$ which implies that $$\|u(x,0)\|_{L^2(M(\Gamma,T)-\Omega(2h,0,3))}\leqslant \|u(x,0)\|_{L^6 (M)} \Big(vol \big(M(\Gamma,T)-\Omega(2h,0,3) \big)\Big)^{\frac{1}{3}} \leqslant C\Lambda h^{\frac{1}{9}}. $$ Then the proposition follows from Corollary \ref{initial}, and the regularity result for the wave equation (e.g. Theorem 2.30 in \cite{KKL}): namely, $$\max_{t\in [-T,T]}\|u(x,t)\|_{H^1(M)}\leqslant C(T) \|u(x,0)\|_{H^1(M)}.$$ This proves Proposition \ref{wholedomain}. \end{proof} \section{Fourier coefficients and the multiplication by an indicator function}\label{section-projection} In this section, we present the essential step of our reconstruction method, where we compute how the Fourier coefficients of a function (with respect to the basis of eigenfunctions) change when the function is multiplied by an indicator function of a union of balls with center points on the boundary. This step is based on the stability estimate for the unique continuation we have obtained in Section \ref{section-uc}. The results in this section will be applied to study the stability of the manifold reconstruction from boundary spectral data in the next section. \smallskip Let $M$ be a compact Riemannian manifold with smooth boundary $\partial M$. Given a small number $\eta>0$, we choose subsets of $\partial M$ in the following way. Suppose $\{\Gamma_i\}_{i=1}^N$ are disjoint open connected subsets of $\partial M$ satisfying $$\partial M =\bigcup_{i=1}^N \overline{\Gamma}_i, \quad \textrm{diam}(\Gamma_i)\leqslant \eta,$$ where the diameter is measured with respect to the distance of $M$. Assume that every $\Gamma_i$ contains a ball (of $\partial M$) of radius $\eta/6$. Without loss of generality, we assume every $\partial\Gamma_i$ is smoothly embedded and admits a boundary normal neighborhood of width $\eta/10$. This is because one always has the choice to propagate the unique continuation from the smaller ball of radius $\eta/6$. An error of order $\eta$ does not affect our final result.
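Although not needed for the proofs, such a partition of the boundary is easy to produce numerically. The following minimal sketch (in Python, with all names hypothetical and not part of our procedure) builds a partition of a discretized boundary by farthest-point sampling; this realizes the Voronoi regions of a maximal $\eta/2$-separated set that we use in Section \ref{section-appro}, assuming the boundary is given as a point cloud together with its matrix of pairwise intrinsic distances.
\begin{verbatim}
# Illustrative sketch only; not part of the reconstruction procedure.
import numpy as np

def eta_partition(dist, eta):
    """Greedy maximal eta/2-separated set plus Voronoi labels.
    dist: (m, m) pairwise intrinsic distances between boundary samples.
    Returns (centers, labels), labels[p] = region index of sample p."""
    centers = [0]                      # start from an arbitrary sample
    d_to_set = dist[0].copy()          # distance to the current center set
    while d_to_set.max() >= eta / 2:   # maximality of the separated set
        nxt = int(d_to_set.argmax())   # farthest-point sampling
        centers.append(nxt)
        d_to_set = np.minimum(d_to_set, dist[nxt])
    labels = dist[centers].argmin(axis=0)  # nearest-center (Voronoi) region
    return np.array(centers), labels

# Toy boundary: a circle with arc-length distance.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
gap = np.abs(theta[:, None] - theta[None, :])
dist = np.minimum(gap, 2 * np.pi - gap)
centers, labels = eta_partition(dist, eta=0.5)
# Every sample lies within eta/2 of its center, so each region
# has diameter at most eta by the triangle inequality.
\end{verbatim}
Since by maximality every sample lies within $\eta/2$ of some center, each region indeed has diameter at most $\eta$, matching the assumption $\textrm{diam}(\Gamma_i)\leqslant \eta$ above.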
Let $\alpha=(\alpha_0,\alpha_1,\cdots,\alpha_N)$ with $\alpha_k\in [\eta, D]\cup\{0\}\; (k=0,\cdots,N)$ be a multi-index, where $D$ is the upper bound for the diameter of $M$. Set $\Gamma_0=\partial M$. We define the domain of influence associated with $\alpha$ by \begin{equation}\label{Malpha} M_{\alpha}:=\bigcup_{k=0}^N M(\Gamma_k, \alpha_k)=\bigcup_{k=0}^N \big\{ x \in M: d(x,\Gamma_k) < \alpha_k \big\}. \end{equation} We will only be concerned with (nonempty) domains of influence with the initial time range $\alpha_k\geqslant \eta$. Hence for sufficiently small $\eta$ explicitly depending on geometric parameters, Proposition \ref{wholedomain} applies with $h<\eta/100$, since $i_b(\overline{\Gamma}_k)\geqslant \eta/10$ for all $k\geqslant 1$ by assumption. \smallskip We are given a function $u\in H^3(M)$ with \begin{equation*}\label{aprior} \|u\|_{L^2(M)}=1,\;\;\|u\|_{H^3(M)}\leqslant \Lambda. \end{equation*} \begin{u0}\label{def-u0} For a small parameter $\gamma\in (0,N^{-2})$, we can construct a function $u_0\in H^3(M)$ such that $$u_0 |_{M_{\alpha}}=0, \;\;\; u_0 |_{M^c_{\alpha+\gamma}}=u, \;\;\; \|u_0\|_{L^2(M)}\leqslant 1,$$ \begin{equation}\label{Hnorm} \|u_0\|_{H^s(M)}\leqslant C_0 \Lambda \gamma^{-s}, \textrm{ for }s\in [1,3], \end{equation} where $\alpha+\gamma=(\alpha_0+\gamma,\alpha_1+\gamma,\cdots,\alpha_N+\gamma)$, and $C_0$ is a constant explicitly depending on geometric parameters. \end{u0} \begin{proof} Let $\{x_l\}$ be a maximal $\gamma/2$-separated set in $M$, and $\{\phi_l\}$ be a partition of unity subordinate to the open cover $\{B_{\gamma/2}(x_l)\}$ of $M$ such that $\|\phi_l\|_{C^s}\leqslant C \gamma^{-s}$. Then the desired function $u_0$ can be defined as \begin{equation}\label{def-u0-partition} u_0(x)=\sum_{\textrm{supp}(\phi_l)\cap M_{\alpha}=\emptyset} \phi_l(x) u(x)\, ,\quad x\in M. \end{equation} The first three conditions are clearly satisfied. To prove the $H^s$-norm condition, we only need to show that the number of nonzero terms in the sum (\ref{def-u0-partition}) is uniformly bounded. Given an arbitrary point $x\in M$, any $B_{\gamma/2}(x_l)$ with $\phi_l(x)\neq 0$ is contained in $B_{\gamma}(x)$. By the definition of a $\gamma/2$-separated set, $\{B_{\gamma/4}(x_l)\}$ do not intersect with each other. Hence it suffices to estimate the number of disjoint balls of radius $\gamma/4$ in a ball of radius $\gamma$. For sufficiently small $\gamma$, the volume of a ball of radius $\gamma$ is bounded from both sides by $C\gamma^n$, which yields that the maximal number of balls is bounded by a constant independent of $\gamma$. To obtain an explicit estimate, it is convenient to work in a Riemannian extension of $M$, for instance in $\widetilde{M}$ defined in Lemma \ref{extensionmetric}. Then an explicit estimate for the maximal number follows from Lemma \ref{distances} and (\ref{Jacobian}). \end{proof} Note that due to Proposition \ref{area}, we have \begin{equation}\label{layervolume} vol(M_{\alpha+\gamma}-M_{\alpha})< (N+1)C_5\gamma < 2C_5\gamma^{\frac{1}{2}}. \end{equation} \smallskip \subsection{Approximation results with spectral data without error}\hfill \medskip Suppose the first $J$ Neumann boundary spectral data $\{\lambda_j,\varphi_j|_{\partial M}\}_{j=1}^{J}$ are known without error. Let $u\in H^3(M)$ be a given function with $\|u\|_{L^2(M)}=1$ and $\|u\|_{H^3(M)}\leqslant \Lambda$. Let $u_0$ be defined in Lemma \ref{def-u0}. 
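For intuition, the cutoff of Lemma \ref{def-u0} can be mimicked in a simple one-dimensional discretization. The sketch below (again with hypothetical names) replaces the smooth partition of unity by a linear ramp of width $\gamma$; it reproduces the support properties $u_0|_{M_{\alpha}}=0$ and $u_0|_{M^c_{\alpha+\gamma}}=u$ together with the $\gamma^{-1}$ Lipschitz scale, but not the precise $H^s$ bounds (\ref{Hnorm}).
\begin{verbatim}
# Illustrative sketch only, not the construction used in the proof.
import numpy as np

def cutoff_u0(u, d_to_Malpha, gamma):
    """u and d_to_Malpha: sampled values of u and of dist(., M_alpha).
    Returns u_0 with u_0 = 0 on M_alpha, u_0 = u outside M_{alpha+gamma},
    and a linear ramp (Lipschitz constant ~ 1/gamma) in between."""
    ramp = np.clip(d_to_Malpha / gamma, 0.0, 1.0)
    return ramp * u

# Toy example: M = [0, 2], M_alpha = [0, 1], gamma = 0.1.
x = np.linspace(0.0, 2.0, 201)
u = np.sin(np.pi * x)
u0 = cutoff_u0(u, np.maximum(x - 1.0, 0.0), gamma=0.1)
assert np.all(u0[x <= 1.0] == 0.0)          # vanishes on M_alpha
assert np.all(u0[x >= 1.1] == u[x >= 1.1])  # agrees with u past the layer
\end{verbatim}
In the proofs, the ramp must of course be replaced by the smooth partition of unity (\ref{def-u0-partition}), so that the higher Sobolev norms in (\ref{Hnorm}) are controlled.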
We define $u_{J}$ to be the projection of $u_0$ onto the first $J$ eigenspaces $\mathcal{V}_{J}= \textrm{span}\{\varphi_1,\cdots,\varphi_{J}\}\subset C^{\infty}(M)$ with respect to the $L^2(M)$-norm: \begin{equation}\label{u2} u_{J}=\sum_{j=1}^{J}\langle u_0,\varphi_j\rangle \varphi_j \in \mathcal{V}_{J}. \end{equation} We consider the following initial value problem for the wave equation with the Neumann boundary condition: \begin{eqnarray*} \partial_t^2 W -\Delta_g W &=& 0,\quad \textrm{ on } \textrm{int}(M)\times \mathbb{R}, \\ \frac{\partial W}{\partial\mathbf{n}}\big|_{\partial M\times \mathbb{R}} &=& 0, \quad \partial_t W|_{t=0}=0,\\ W|_{t=0}&=& v. \end{eqnarray*} Denote by $W(v)$ the solution of the wave equation above with the initial value $v$. Then we define $\mathcal{U}$ to be the set of initial values $v\in\mathcal{V}_{J}$ for which the corresponding waves $W(v)$ are small on all $\Gamma_{k}\times[-\alpha_k,\alpha_k]$: namely, \begin{equation}\label{Udef} \mathcal{U}(J,\Lambda,\gamma,\varepsilon_1)=\bigcap_{k=0}^ N \big\{v\in \mathcal{V}_{J}: \|v\|_{H^1(M)}\leqslant 3C_0 \Lambda\gamma^{-3},\; \|W(v)\|_{H^{2,2}(\Gamma_{k}\times [-\alpha_k,\alpha_k])}\leqslant \varepsilon_1 \big\}. \end{equation} When the parameters $J,\Lambda,\gamma,\varepsilon_1$ are clearly specified in a certain context, we denote this set simply by $\mathcal{U}$. Note that since functions in $\mathcal{V}_J$ are smooth on $M$, the wave $W(v)$ for $v\in \mathcal{V}_J$ is also smooth and hence its $H^{2,2}$-norm is well-defined. Given the Fourier coefficients of $v\in \mathcal{V}_{J}$, the conditions of $\mathcal{U}$ can be checked using only the boundary spectral data. In fact, if a function $v$ has the form $v=\sum_{j=1}^{J}v_j \varphi_j$, then $\|v\|^2_{H^1(M)}=\sum_{j=1}^{J}(1+\lambda_j)v_j^2$, and the wave $W(v)$ over $\partial M$ is given by \begin{equation}\label{waveboundary} W(v)(x,t) \big|_{\partial M\times \mathbb{R}}= \sum_{j=1}^{J} v_j \cos(\sqrt{\lambda_j} t) \varphi_j(x)|_{\partial M}. \end{equation} For convenience, we use the following equivalent Sobolev norm (e.g. Theorem 2.22 in \cite{KKL}) for a function $v\in H^s(M)$ with the Fourier expansion $v=\sum_{j=1}^{\infty}v_j\varphi_j$: \begin{equation}\label{Hkdef} \|v\|_{H^{s}(M)}^2=\sum_{j=1}^{\infty} (1+\lambda_j^s) v_j^2,\textrm{ for }s\in [1,3]. \end{equation} \begin{smallinitial}\label{smallinitial} Let $u\in H^3(M)$ be a given function with $\|u\|_{H^3(M)}\leqslant \Lambda$, and $u_0,u_{J}$ be defined in Lemma \ref{def-u0} and (\ref{u2}). Then for any $\varepsilon_1>0$, there exists $J_0=J_0(D,\Lambda,\gamma,\varepsilon_1)$ such that $u_{J}\in \mathcal{U}(J,\Lambda,\gamma,\varepsilon_1)$ for any $J\geqslant J_0$. \end{smallinitial} \begin{proof} Assume $J$ is sufficiently large such that $\lambda_J>1$. Suppose $u_0,u_J$ have expansions: $$u_0=\sum_{j=1}^{\infty} d_j \varphi_j, \quad u_{J}=\sum_{j=1}^{J} d_j \varphi_j \in \mathcal{V}_{J}. $$ By (\ref{Hkdef}) we know $$\|u_0\|_{H^1(M)}^2 \geqslant \sum_{j=J+1}^{\infty} d_j^2 \lambda_j \geqslant \lambda_{J} \sum_{j=J+1}^{\infty} d_j^2, $$ and hence by (\ref{Hnorm}) we have \begin{equation}\label{L2error} \|u_0-u_{J}\|_{L^2(M)}^2 = \sum_{j=J+1}^{\infty} d_j^2 \leqslant C_0^2 \Lambda^2 \lambda_{J}^{-1}\gamma^{-2}.
\end{equation} Similarly for higher norms, \begin{eqnarray}\label{H3} \|u_0\|_{H^3(M)}^2 \geqslant \sum_{j=J+1}^{\infty} d_j^2 \lambda_j^3 \geqslant \lambda_{J} \sum_{j=J+1}^{\infty} d_j^2 \lambda_j^2, \end{eqnarray} and hence by (\ref{Hnorm}), \begin{eqnarray*} \|u_0-u_{J}\|_{H^2(M)}^2 = \sum_{j=J+1}^{\infty} (1+\lambda_j^2) d_j^2 \leqslant 2\sum_{j=J+1}^{\infty} \lambda_j^2 d_j^2 \leqslant 2C_0^2 \Lambda^2 \lambda_{J}^{-1}\gamma^{-6}. \end{eqnarray*} As a consequence, $u_{J}$ satisfies the $H^1$-norm condition of $\mathcal{U}$ (\ref{Udef}): \begin{eqnarray*} \|u_{J}\|_{H^1(M)} &\leqslant& \|u_0\|_{H^1(M)}+\|u_0-u_{J}\|_{H^1(M)} \\ &\leqslant& C_0\Lambda \gamma^{-1}+\sqrt{2}C_0 \Lambda \lambda_{J}^{-1/2} \gamma^{-3} < 3C_0\Lambda \gamma^{-3}. \end{eqnarray*} Next we show that $u_{J}$ also satisfies the $H^{2,2}$-norm condition of $\mathcal{U}$ (\ref{Udef}) for sufficiently large $J$. This condition is trivially satisfied when $\alpha_k=0$. Due to the finite speed propagation of waves, the condition $u_0 |_{M_{\alpha}}=0$ implies that $W(u_0)|_{\Gamma_k\times (-\alpha_k,\alpha_k)}=0$ for all $k$ with $\alpha_k\neq 0$. Thus it suffices to show that $W(u_0)-W(u_{J})$ has small $H^{2,2}$-norm on $\partial M\times [-D,D]$. Since $u_0\in H^3(M)$, the regularity result for the wave equation (e.g. Theorem 2.45 in \cite{KKL}) shows that $$W(u_0)\big|_{M\times [-D,D]} \in C([-D,D];H^3(M))\cap C^3([-D,D];L^2(M)).$$ Hence from (\ref{waveboundary}), we have $$\big(W(u_0)-W(u_{J})\big)(x,t) \big|_{\partial M\times [-D,D]}=\sum_{j=J+1}^{\infty} d_j\cos(\sqrt{\lambda_j}t)\varphi_j(x)|_{\partial M}.$$ Then the trace theorem and (\ref{Hkdef}) imply that \begin{eqnarray*} \|W(u_0)-W(u_{J})\|^2_{H^{2}(\partial M)} &\leqslant& C\|W(u_0)-W(u_{J})\|^2_{H^{\frac{11}{4}}(M)} \\ &=& C\sum_{j=J+1}^{\infty}(1+\lambda_j^{\frac{11}{4}})d_j^2\cos^2(\sqrt{\lambda_j}t) \\ &\leqslant& 2C\sum_{j=J+1}^{\infty} d_j^2 \lambda_j^{\frac{11}{4}}\leqslant C(\Lambda)\lambda_{J}^{-\frac{1}{4}}\gamma^{-6}, \end{eqnarray*} where the last inequality is due to the following estimate similar to (\ref{H3}): \begin{eqnarray*} \|u_0\|_{H^3(M)}^2 \geqslant \sum_{j=J+1}^{\infty} d_j^2 \lambda_j^3 \geqslant \lambda_{J}^{\frac{1}{4}} \sum_{j=J+1}^{\infty} d_j^2 \lambda_j^{\frac{11}{4}}. \end{eqnarray*} For the time derivatives, the trace theorem and (\ref{Hkdef}) imply \begin{eqnarray*} \|\partial_t^2 W(u_0)-\partial_t^2 W(u_{J})\|^2_{L^2(\partial M)} &\leqslant& C \|\partial_t^2 W(u_0)-\partial_t^2 W(u_{J})\|^2_{H^{\frac{3}{4}}(M)} \\ &=& C\sum_{j=J+1}^{\infty}(1+\lambda_j^{\frac{3}{4}})d_j^2\lambda_j^2 \cos^2(\sqrt{\lambda_j}t) \\ &\leqslant& 2C\sum_{j=J+1}^{\infty}d_j^2\lambda_j^{\frac{11}{4}} \leqslant C(\Lambda)\lambda_{J}^{-\frac{1}{4}}\gamma^{-6}. \end{eqnarray*} Similarly by using (\ref{H3}), \begin{eqnarray*} \|\partial_t W(u_0)-\partial_t W(u_{J})\|^2_{L^2(\partial M)} \leqslant C(\Lambda)\lambda_{J}^{-1}\gamma^{-6}. \end{eqnarray*} Hence by the definition of $H^{2,2}$-norm (\ref{H21}), \begin{eqnarray*} \|W(u_0)-W(u_{J})\|^2_{H^{2,2}(\partial M\times [-D,D])} &\leqslant& 2D\,C(\Lambda)(2\lambda_{J}^{-\frac{1}{4}}\gamma^{-6}+\lambda_{J}^{-1}\gamma^{-6}) \\ &\leqslant& C(D,\Lambda)\lambda_{J}^{-\frac{1}{4}}\gamma^{-6}. \end{eqnarray*} Therefore for all $k=0,\cdots,N$ with $\alpha_k\neq 0$, we have \begin{eqnarray*} \|W(u_J)\|^2_{H^{2,2}(\Gamma_k\times [-\alpha_k,\alpha_k])} = \|W(u_0)-W(u_J)\|^2_{H^{2,2}(\Gamma_k\times [-\alpha_k,\alpha_k])} \leqslant C(D,\Lambda)\lambda_{J}^{-\frac{1}{4}}\gamma^{-6}. 
\end{eqnarray*} For any $\varepsilon_1>0$, choose sufficiently large $J$ such that $\lambda_{J}\geqslant C(D,\Lambda)\gamma^{-24}\varepsilon_1^{-8}$ and the lemma follows. \end{proof} \begin{remark} The choice of $J_0$ in Lemma \ref{smallinitial} also depends on geometric parameters, which are brought in when applying the trace theorem. Those relevant parameters are part of the parameters we considered in Section \ref{section-uc}, so we omit them in this section for brevity. The same goes for the next two propositions, where the dependency on geometric parameters is brought in when applying Proposition \ref{wholedomain}. \end{remark} We prove the following approximation result for finite spectral data. \begin{projection}\label{projection} Let $u\in H^3(M)$ be a given function with $\|u\|_{L^2(M)}=1$ and $\|u\|_{H^3(M)}\leqslant \Lambda$. Let $\alpha=(\alpha_0,\cdots,\alpha_N)$, $\alpha_k\in [\eta, D]\cup\{0\}$ be given, and $M_{\alpha}$ be defined in (\ref{Malpha}). Then for any $\varepsilon>0$, there exists sufficiently large $J=J(D,N,\Lambda,\eta,\varepsilon)$, such that by only knowing the first $J$ Neumann boundary spectral data $\{\lambda_j,\varphi_j|_{\partial M}\}_{j=1}^J$ and the first $J$ Fourier coefficients $\{a_j\}_{j=1}^J$ of $u$, we can find $\{b_j\}_{j=1}^{J}$ and $u^a=\sum_{j=1}^{J}b_j \varphi_j$, such that $$\|u^a-\chi_{M_{\alpha}}u\|_{L^2(M)} <\varepsilon,$$ where $\chi$ denotes the characteristic function. \end{projection} \begin{proof} We consider the following minimization problem in $\mathcal{U}(J,\Lambda,\gamma,\varepsilon_1)$ (denoted by $\mathcal{U}$ from now on) defined in (\ref{Udef}), where the parameters $J,\gamma,\varepsilon_1$ will be determined later. Let $u_{min}\in\mathcal{U}$ be the solution of the minimization problem \begin{equation}\label{minimization} \|u_{min}-u\|_{L^2(M)}=\min_{w\in \mathcal{U}} \|w-u\|_{L^2(M)}. \end{equation} Observe that given the first $J$ Fourier coefficients of $u$, finding the minimum of the norm $\|w-u\|_{L^2(M)}$ is equivalent to finding the minimum of a polynomial in the $J$ Fourier coefficients of $w$. Since the conditions of $\mathcal{U}$ (\ref{Udef}) can be checked with finite boundary spectral data by (\ref{waveboundary}) and (\ref{Hkdef}), the minimization problem transforms into a polynomial minimization problem in a bounded domain in $\mathbb{R}^J$ (the space of Fourier coefficients). Hence the Fourier coefficients of the minimizer $u_{min}$ can be computed by only using the finite spectral data. \smallskip Next, we investigate what properties this minimizer $u_{min}$ satisfies. By Proposition \ref{wholedomain} and the fact that the Neumann boundary condition is imposed, $w\in\mathcal{U}$ implies that $\|w\|_{L^2(M(\Gamma_k,\alpha_k))}< \varepsilon_2(h,\Lambda,\eta,\gamma,\varepsilon_1)$ for all $k=0,1,\cdots,N$ with $\alpha_k\neq 0$, where \begin{equation}\label{epsilon20} \varepsilon_2=C_3^{\frac{1}{3}}h^{-\frac{2}{9}}\exp(h^{-C_4 n}) \frac{\Lambda\gamma^{-3}+h^{-\frac{1}{2}}\varepsilon_1}{\big(\log (1+h^{\frac{3}{2}}\gamma^{-3}\frac{\Lambda}{\varepsilon_1})\big) ^{\frac{1}{6}}}+C_5\Lambda\gamma^{-3} h^{\frac{1}{3n+3}}. \end{equation} Hence, $$\|w\|_{L^2(M_{\alpha})} < (N+1)\varepsilon_2.$$ Then for any $w\in \mathcal{U}$ and in particular for $w=u_{min}$, \begin{eqnarray} \label{lower} \|w-u\|_{L^2(M)}^2 &=& \|w-u\|_{L^2(M_{\alpha})}^2+ \|w-u\|_{L^2(M_{\alpha}^c)}^2 \nonumber \\ &>& \|u\|^2_{L^2(M_{\alpha})}-4N\varepsilon_2 + \|w-u\|^2_{L^2(M_{\alpha}^c)}.
\end{eqnarray} On the other hand, the following estimate holds for $u_{J}$: \begin{eqnarray*} \|u_{J}-u\|_{L^2(M)}^2 &\leqslant& (\|u_{J}-u_0\|_{L^2(M)} + \|u_0-u\|_{L^2(M)})^2 \\ &\leqslant& \|u_{J}-u_0\|^2_{L^2(M)}+4\|u_{J}-u_0\|_{L^2(M)} + \|u_0-u\|^2_{L^2(M)} \\ &\leqslant & C(\Lambda) \lambda_{J}^{-\frac{1}{2}}\gamma^{-2}+ \|u\|^2_{L^2(M_{\alpha})}+ \|u_0-u\|^2_{L^2(M_{\alpha+\gamma}-M_{\alpha})} , \end{eqnarray*} where the last inequality is due to (\ref{L2error}) and the definition of $u_0$. The definition of partition of unity in (\ref{def-u0-partition}), the Sobolev embedding theorem (see the proof of Proposition \ref{wholedomain}) and (\ref{layervolume}) yield that $$\|u_0-u\|_{L^2(M_{\alpha+\gamma}-M_{\alpha})}\leqslant \|u\|_{L^2(M_{\alpha+\gamma}-M_{\alpha})} < 2C_5\Lambda \gamma^{\frac{1}{2\max\{n,3\}}}.$$ Hence, $$\|u_{J}-u\|_{L^2(M)}^2 < C(\Lambda) \lambda_{J}^{-\frac{1}{2}}\gamma^{-2}+ \|u\|^2_{L^2(M_{\alpha})}+ 4C_5^2\Lambda^2 \gamma^{\frac{1}{n+1}}.$$ For sufficiently large $J=J(D,\Lambda,\gamma,\varepsilon_1)$, we have $u_{J}\in \mathcal{U}$ by Lemma \ref{smallinitial}. This indicates that the minimizer $u_{min}$ also satisfies \begin{equation} \label{upper} \|u_{min}-u\|_{L^2(M)}^2 < C(\Lambda) \lambda_{J}^{-\frac{1}{2}}\gamma^{-2}+ \|u\|^2_{L^2(M_{\alpha})}+ 4C_5^2\Lambda^2 \gamma^{\frac{1}{n+1}}. \end{equation} Combining the two inequalities (\ref{lower}) and (\ref{upper}), we have $$\|u_{min}-u\|^2_{L^2(M_{\alpha}^c)} < 4N\varepsilon_2 +C(\Lambda) \lambda_{J}^{-\frac{1}{2}}\gamma^{-2}+ 4C_5^2\Lambda^2\gamma^{\frac{1}{n+1}}.$$ The fact that $\|u_{min}\|_{L^2(M_{\alpha})}< (N+1)\varepsilon_2\leqslant 2N\varepsilon_2$ implies that \begin{eqnarray*} \|\chi_{M_{\alpha}}u-(u-u_{min})\|^2_{L^2(M)}&=&\|u_{min}-\chi_{M_{\alpha}^c}u\|^2_{L^2(M)} \\ &=& \|u_{min}-\chi_{M_{\alpha}^c}u\|^2_{L^2(M_{\alpha}^c)}+\|u_{min}\|^2_{L^2(M_{\alpha})} \\ &<& 4N\varepsilon_2 +C(\Lambda) \lambda_{J}^{-\frac{1}{2}}\gamma^{-2}+4C_5^2\Lambda^2\gamma^{\frac{1}{n+1}} +4N^2\varepsilon_2^2 . \end{eqnarray*} From our discussion at the beginning of this proof, we know that the Fourier coefficients of $u_{min}$ can be computed. Suppose we have found a minimizer $u_{min}=\sum_{j=1}^{J} c_j \varphi_j$. Since the first $J$ Fourier coefficients of $u$ are given as $a_j$, we can replace the function $u-u_{min}$ in the last inequality by $\sum_{j=1}^J a_j \varphi_j-u_{min}$ and the error in $L^2$-norm is controlled by $\Lambda\lambda_{J}^{-1/2}$. Hence by the Cauchy-Schwarz inequality, we obtain \begin{equation}\label{ualast} \big\|\chi_{M_{\alpha}}u-\sum_{j=1}^{J}(a_j-c_j)\varphi_j \big\|^2_{L^2(M)} < 8N\varepsilon_2+8N^2\varepsilon_2^2 +C(\Lambda) \lambda_{J}^{-\frac{1}{2}}\gamma^{-2}+ 8C_5^2\Lambda^2\gamma^{\frac{1}{n+1}}, \end{equation} which makes $u^a :=\sum_{j=1}^{J} b_j \varphi_j$ with $b_j=a_j-c_j$ our desired function. Finally, we determine the relevant parameters. For any $\varepsilon>0$, we first choose and fix $\gamma$ such that the last term in (\ref{ualast}) satisfies $8C_5^2\Lambda^2\gamma^{\frac{1}{n+1}}= \varepsilon^2/4$, and choose sufficiently large $J$ such that the third term is smaller than $\varepsilon^2/4$. Then we choose $\varepsilon_2$ so that the first two terms satisfy $8N\varepsilon_2+8N^2\varepsilon_2^2=\varepsilon^2/4$. Next we determine $\varepsilon_1$. We choose and fix $h<\eta/100$ such that the second term in (\ref{epsilon20}) is equal to $\varepsilon_2/2$, and choose $\varepsilon_1$ such that the first term in (\ref{epsilon20}) is equal to $\varepsilon_2/2$.
By Lemma \ref{smallinitial}, there exists sufficiently large $J$ such that $u_{J}\in \mathcal{U}$, which validates all the estimates. The proposition is proved. \end{proof} \smallskip \subsection{Approximation results with spectral data with error} \hfill \medskip Now suppose that not only do we not know all the spectral data, but we also know them only up to an error. More precisely, suppose we are given a set of data $\{\lambda^a_j,\varphi^a_j|_{\partial M}\}$ which is a $\delta$-approximation of the Neumann boundary spectral data, where $\lambda_j^a\in \mathbb{R}_{\geqslant 0}$ and $\varphi_j^a|_{\partial M}\in C^2(\partial M)$. By Definition \ref{deferror}, there exists a choice of Neumann boundary spectral data $\{\lambda_j,\varphi_j|_{\partial M}\}_{j=1}^{\infty}$, such that for all $j\leqslant \delta^{-1}$, \begin{equation}\label{error-condition} \big|\sqrt{\lambda_j}-\sqrt{\lambda_j^a}\big|<\delta,\quad \|\varphi_j - \varphi_j^a \|_{C^{0,1}(\partial M)}+ \big\|\nabla_{\partial M}^2 (\varphi_j- \varphi_j^a)|_{\partial M} \big\|< \delta. \end{equation} Since $\varphi_j^{a}\in C^2(\partial M)$ by assumption, the bound on the $C^{0,1}$-norm above yields \begin{equation}\label{close-C1} \|\varphi_j - \varphi_j^a \|_{C^{0}(\partial M)}+ \big|\nabla(\varphi_j - \varphi_j^a)|_{\partial M} \big|<\delta, \; \textrm{ for }j\leqslant \delta^{-1}. \end{equation} In local coordinates $(x^1,\cdots,x^{n-1})$ on $\partial M$, for any $f\in C^2(\partial M)$, we have the formula $$\big(\nabla_{\partial M}^2 f\big)(\frac{\partial}{\partial x^{k}},\frac{\partial}{\partial x^{l}})=\frac{\partial^2 f}{\partial x^{k} \partial x^l} -\sum_{i=1}^{n-1} \Gamma_{kl}^i \frac{\partial f}{\partial x^i}, \quad k,l=1,\cdots,n-1.$$ Furthermore, we can choose to work in geodesic normal coordinates. Then the norm of the second covariant derivative (the Hessian), the formula above and (\ref{coorb}) yield a bound $C\delta$ on the second derivative of $(\varphi_j- \varphi_j^a)|_{\partial M}$: \begin{equation}\label{close-C2} \big|\frac{\partial^2}{\partial x^k \partial x^l} (\varphi_j- \varphi_j^a)|_{\partial M}\big| < C\delta, \; \textrm{ for }j\leqslant \delta^{-1}, \;\, k,l=1,\cdots,n-1. \end{equation} We prove the following approximation result analogous to Proposition \ref{projection}. \begin{measureerror}\label{measureerror} Let $u\in H^3(M)$ be a given function with $\|u\|_{L^2(M)}=1$ and $\|u\|_{H^3(M)}\leqslant \Lambda$. Let $\alpha=(\alpha_0,\cdots,\alpha_N)$, $\alpha_k\in [\eta, D]\cup\{0\}$ be given, and $M_{\alpha}$ be defined in (\ref{Malpha}). Then for any $\varepsilon>0$, there exists sufficiently large $J=J(D,N,\Lambda,\eta,\varepsilon)$ such that the following holds. \\ There exists $\delta=\delta(D, vol(\partial M),N,\Lambda,J,\eta,\varepsilon)\leqslant J^{-1}$ such that by knowing a $\delta$-approximation $\{\lambda^a_j,\varphi^a_j|_{\partial M}\}$ of the Neumann boundary spectral data, and knowing the first $J$ Fourier coefficients $\{a_j\}_{j=1}^J$ of $u$, we can find $\{b_j\}_{j=1}^{J}$ and $u^a=\sum_{j=1}^{J}b_j \varphi_j$, such that $$\|u^a-\chi_{M_{\alpha}}u\|_{L^2(M)} <\varepsilon.$$ Here the known Fourier coefficients of $u$ are with respect to $\{\varphi_j\}$, a choice of orthonormalized eigenfunctions satisfying (\ref{error-condition}) for $\{\lambda^a_j,\varphi^a_j|_{\partial M}\}$.
\end{measureerror} \begin{proof} Since we only know an approximation of the boundary spectral data, an error appears when we determine if a function belongs to the space $\mathcal{U}$ (\ref{Udef}) in the minimization problem (\ref{minimization}). The norms appearing in the conditions of $\mathcal{U}$ can be written in terms of the Fourier coefficients and boundary spectral data. However in this case, the actual spectral data are unknown and we can only check these norm conditions with a given approximation of the spectral data. First we need to estimate how these conditions change when the spectral data are perturbed. For a function $v(x)=\sum_{j=1}^{J}v_j\varphi_j(x)$ with $\sum_{j=1}^{J}v_j^2\leqslant 1$, the error for the $H^1$-norm condition of $\mathcal{U}$ is \begin{equation}\label{H1error} \Big|\|v\|^2_{H^1(M)}- \sum_{j=1}^{J} (1+\lambda_j^a)v_j^2\Big| \leqslant\sum_{j=1}^{J}|\lambda_j-\lambda_j^a| v_j^2 < (2\sqrt{\lambda_J}+\delta)\delta. \end{equation} For the $H^{2,2}$-norm condition of $\mathcal{U}$, from (\ref{waveboundary}) we know $$ W(v)(x,t)|_{\partial M\times \mathbb{R}}= \sum_{j=1}^{J} v_j \cos(\sqrt{\lambda_j} t) \varphi_j(x)|_{\partial M}. $$ To check if this condition is satisfied, we can only use the approximate spectral data: $$W^a (v)(x,t)|_{\partial M\times \mathbb{R}}= \sum_{j=1}^{J} v_j \cos(\sqrt{\lambda^a_j} t) \varphi_j^a(x)|_{\partial M}. $$ In fact, we are only concerned with a finite time range $t\in [-D,D]$. Since $$\big|\cos(\sqrt{\lambda_j} t)-\cos(\sqrt{\lambda_j^a} t)\big|\leqslant |\sqrt{\lambda_j} t-\sqrt{\lambda_j^a} t| <D \delta,$$ we have the following estimate on the error: \begin{eqnarray*} \|W(v)-W^a(v)\|_{H^2(\partial M)} &\leqslant& \|\sum_{j=1}^{J} v_j \cos(\sqrt{\lambda_j} t) \varphi_j-\sum_{j=1}^{J} v_j \cos(\sqrt{\lambda_j^a} t) \varphi_j\|_{H^2(\partial M)} \\ &+& \|\sum_{j=1}^{J} v_j \cos(\sqrt{\lambda_j^a} t) \varphi_j-\sum_{j=1}^{J} v_j \cos(\sqrt{\lambda_j^a} t) \varphi_j^a\|_{H^2(\partial M)} \\ &\leqslant& D\delta \sum_{j=1}^{J}|v_j|\|\varphi_j\|_{H^2(\partial M)}+ \sum_{j=1}^{J} |v_j|\|\varphi_j-\varphi_j^a\|_{H^2(\partial M)} \\ &<& D\delta \sum_{j=1}^{J}\|\varphi_j\|_{H^2(\partial M)} +CJ\delta \sqrt{vol(\partial M)}\; , \end{eqnarray*} where the last inequality is due to (\ref{close-C1}) and (\ref{close-C2}). By the trace theorem and (\ref{Hkdef}), we know $$\|\varphi_j\|_{H^2(\partial M)}^2\leqslant C\|\varphi_j\|_{H^3(M)}^2=C(1+\lambda_j^3),$$ and hence we obtain \begin{equation*} \|W(v)-W^a(v)\|_{H^2(\partial M)} < C(D,vol(\partial M))J\lambda_{J}^{\frac{3}{2}}\delta. \end{equation*} Similarly for the time derivatives, we have \begin{equation*} \|\partial_t W(v)-\partial_t W^a(v)\|_{L^2(\partial M)} < C(D,vol(\partial M))J\lambda_{J}\delta, \end{equation*} and $$\|\partial_t^2 W(v)-\partial_t^2 W^a(v)\|_{L^2(\partial M)} < C(D,vol(\partial M))J\lambda_{J}^{\frac{3}{2}}\delta.$$ Therefore by definition (\ref{H21}), for some $C_0^{\prime}=C_0^{\prime}(D,vol(\partial M))$, we have \begin{equation}\label{H21error} \|W(v)-W^a(v)\|_{H^{2,2}(\partial M\times [-D,D])}< C_0^{\prime}J\lambda_{J}^{\frac{3}{2}}\delta. \end{equation} \smallskip Now following the proof of Proposition \ref{projection}, we still consider the minimization problem (\ref{minimization}), however over a perturbed version of the space $\mathcal{U}$.
We define an approximate space $\mathcal{U}^a$ of $\mathcal{U}$ as follows: \begin{eqnarray*} \mathcal{U}^{a}=\bigcap_{k=0}^ N \Big\{v=\sum_{j=1}^{J}v_j\varphi_j :& \sum_{j=1}^{J}v_j^2\leqslant 1,\; \sum_{j=1}^{J} (1+\lambda_j^a)v_j^2\leqslant 9C_0^2 \Lambda^2\gamma^{-6}+3\lambda_J^{\frac{1}{2}}\delta,\\ &\|W^a (v)\|_{H^{2,2}(\Gamma_{k}\times [-\alpha_k,\alpha_k])}\leqslant \varepsilon_1+C_0^{\prime}J\lambda_{J}^{\frac{3}{2}}\delta \, \Big\}. \end{eqnarray*} Clearly this space $\mathcal{U}^a$ can be determined using only the Fourier coefficients and the given approximation $\{\lambda^a_j,\varphi^a_j|_{\partial M}\}$ of the boundary spectral data. Then we consider the minimization problem (\ref{minimization}) with the space $\mathcal{U}$ replaced by $\mathcal{U}^a$. Hence this perturbed minimization problem is solvable using only the given approximation of the spectral data. By Lemma \ref{smallinitial}, there exists sufficiently large $J$ such that $u_J\in\mathcal{U}$, and it follows from (\ref{H1error}) and (\ref{H21error}) that $u_{J}\in \mathcal{U}^a$. Then one can follow the rest of the proof of Proposition \ref{projection}. The only part changed is $\varepsilon_2$, since the actual $H^1$ and $H^{2,2}$ norms of $v\in\mathcal{U}^a$ differ from the original conditions of $\mathcal{U}$. More precisely, for any $v\in\mathcal{U}^a$, again by (\ref{H1error}) and (\ref{H21error}), we have $$\|v\|_{H^1(M)}< \sqrt{9C_0^2 \Lambda^2\gamma^{-6}+6\lambda_J^{\frac{1}{2}}\delta}<3C_0 \Lambda\gamma^{-3}+3\lambda_J^{\frac{1}{4}}\sqrt{\delta},$$ $$\|W (v)\|_{H^{2,2}(\Gamma_{k}\times [-\alpha_k,\alpha_k])} < \varepsilon_1+2C_0^{\prime}J\lambda_{J}^{\frac{3}{2}}\delta.$$ Therefore following the proof of Proposition \ref{projection}, for $\delta<\lambda_J^{-1}$, one obtains an estimate almost the same as (\ref{ualast}) with $\varepsilon_2(\delta)$: \begin{equation}\label{ualasterror} \big\|\chi_{M_{\alpha}}u-\sum_{j=1}^{J}(a_j-c_j)\varphi_j \big\|^2_{L^2(M)} < 8N\varepsilon_2(\delta)+8N^2\varepsilon_2^2(\delta) +C(\Lambda) \lambda_{J}^{-\frac{1}{2}}\gamma^{-2}+ 8C_5^2\Lambda^2\gamma^{\frac{1}{n+1}}, \end{equation} where $c_j$ is the $j$-th Fourier coefficient of a minimizer, and \begin{equation*} \varepsilon_2(\delta)=C_3^{\frac{1}{3}}h^{-\frac{2}{9}}\exp(h^{-C_4 n}) \frac{\Lambda\gamma^{-3}+h^{-\frac{1}{2}}(\varepsilon_1+2C_0^{\prime}J\lambda_{J}^{\frac{3}{2}}\delta)}{\bigg(\log \big(1+h^{\frac{3}{2}}\gamma^{-3}\frac{\Lambda}{\varepsilon_1+2C_0^{\prime}J\lambda_{J}^{\frac{3}{2}}\delta}\big)\bigg) ^{\frac{1}{6}}}+C_5\Lambda\gamma^{-3} h^{\frac{1}{3n+3}}. \end{equation*} Finally we determine the relevant parameters. For any $\varepsilon>0$, we first choose and fix $\gamma,\varepsilon_2(0),\varepsilon_1$ such that the right hand side of $(\ref{ualasterror})$ with $\delta=0$ is equal to $3\varepsilon^2/4$ in the same way as in Proposition \ref{projection}. By Lemma \ref{smallinitial} we choose and fix sufficiently large $J$ such that $u_{J}\in \mathcal{U}$, which validates all the estimates if we restrict $\delta\leqslant J^{-1}$. Lastly, we choose sufficiently small $\delta<\lambda_J^{-1}$ such that $$N\varepsilon_2(\delta)+N^2\varepsilon_2^2(\delta)-N\varepsilon_2(0)-N^2\varepsilon_2^2(0)<\frac{\varepsilon^2}{32},$$ and then the proposition follows. \end{proof} \begin{remark}\label{projection-partial} We point out that in Propositions \ref{projection} and \ref{measureerror}, it suffices to know the boundary data on $\cup_{\alpha_i>0} \Gamma_i$ to obtain the estimate for $M_{\alpha}$ with $\alpha_0=0$.
This may be useful when only partial boundary spectral data (measured only on a part of the boundary) are known. \end{remark} \section{Approximations to boundary distance functions} \label{section-appro} Let $M$ be a compact Riemannian manifold with smooth boundary $\partial M$. For $x\in M$, the \emph{boundary distance function} $r_x:\partial M\to \mathbb{R}$ is defined by $$r_x(z)=d(x,z),\quad z\in\partial M.$$ Then the boundary distance functions define a map $\mathcal{R}: M\to L^{\infty}(\partial M)$ by $\mathcal{R}(x)=r_x$. It is known that the map $\mathcal{R}$ is a homeomorphism and the metric of the manifold can be reconstructed from its image $\mathcal{R}(M)$ (e.g. Section 3.8 in \cite{KKL}). Furthermore, the reconstruction is stable (Theorem \ref{2007}). Therefore, to construct a stable approximation of the manifold from boundary spectral data, we only need to construct a stable approximation to the boundary distance functions $\mathcal{R}(M)$. In this section, we construct an approximation to the boundary distance functions through slicing procedures. \medskip Given $\eta>0$, let $\{\Gamma_i\}_{i=1}^N$ be a partition of the boundary $\partial M$ into disjoint open connected subsets satisfying the assumptions at the beginning of Section \ref{section-projection}: $\textrm{diam}(\Gamma_i)\leqslant \eta$ and every $\Gamma_i$ contains a ball (of $\partial M$) of radius $\eta/6$, where the diameter is measured with respect to the distance of $M$. We can also choose the $\Gamma_i$ to be the closures of these open sets. For example, one can choose $\Gamma_i$ to be the Voronoi regions corresponding to a maximal $\eta/2$-separated set on $\partial M$ with respect to the intrinsic distance $d_{\partial M}$ of $\partial M$. It is straightforward to check that these Voronoi regions satisfy our assumptions with \begin{equation}\label{boundN} N\leqslant C(n,vol(\partial M))\eta^{-n+1}. \end{equation} The approximation results in Section \ref{section-projection} enable us to approximate the volume on $M$ by only knowing an approximation of the Neumann boundary spectral data. \begin{volume}\label{volume} Let $\alpha=(\alpha_0,\cdots,\alpha_N)$, $\alpha_k\in [\eta, D]\cup\{0\}$ be given, and $M_{\alpha}$ be defined in (\ref{Malpha}). Then for any $\varepsilon>0$, there exists sufficiently small $\delta=\delta(\eta,\varepsilon)$, such that by only knowing a $\delta$-approximation $\{\lambda^a_j,\varphi^a_j|_{\partial M}\}$ of the Neumann boundary spectral data, we can compute a number $vol^{a}(M_{\alpha})$ satisfying $$\big|vol^{a}(M_{\alpha})-vol(M_{\alpha}) \big|<\varepsilon.$$ \end{volume} \begin{proof} Recall that $\varphi_1=vol(M)^{-1/2}$ on $M$ and it follows that $$\|\chi_{M_{\alpha}}\varphi_1\|^2_{L^2(M)}=\frac{vol(M_\alpha)}{vol(M)}.$$ Since the eigenspace with respect to $\lambda_1=0$ is 1-dimensional, the Fourier coefficients of $\varphi_1$ with respect to any choice of orthonormalized Neumann eigenfunctions are $(1,0,\cdots,0,\cdots)$. Applying Proposition \ref{measureerror} to $u=\varphi_1$, for sufficiently large $J$ we obtain the Fourier coefficients of $u^{a}=\sum_{j=1}^{J}b_j\varphi_j$, whose $L^2$-norm approximates $\|\chi_{M_{\alpha}}\varphi_1\|_{L^2(M)}$. Therefore $\sum_{j=1}^{J}b_j^2$ approximates $vol(M_\alpha)/vol(M)$, and equivalently $vol(M)\sum_{j=1}^{J}b_j^2$ approximates $vol(M_\alpha)$. If $vol(M)$ is known, then $vol(M)\sum_{j=1}^{J}b_j^2$ is the number we are looking for.
However, we do not exactly know $vol(M)$ since we do not exactly know the first eigenfunction; we only know an approximation of $vol(M)$ in terms of the first approximate eigenfunction $\varphi_1^a$. More precisely, $$\delta>\|\varphi_1-\varphi_1^a\|_{C^0(\partial M)} \geqslant \big| vol(M)^{-\frac{1}{2}}- \|\varphi_1^a\|_{C^0(\partial M)}\big|. $$ Hence an approximate volume can be defined in the following way: $$vol^{a}(M_{\alpha})=\|\varphi^a_1\|^{-2}_{C^0(\partial M)}\sum_{j=1}^{J}b_j^2\, ,$$ and then it satisfies the statement of the lemma. \end{proof} Besides the conditions we discussed earlier for the partition $\{\Gamma_i\}$, we need to further restrict the choice of the partition. We start with the following independent lemma regarding the \emph{boundary distance coordinates}. One may refer to Section 2.1.21 in \cite{KKL} for a brief introduction to this subject. This type of coordinates will be used to reconstruct the inner part (bounded away from the boundary) of the manifold. \begin{coordinate}\label{coordinate} Let $M\in \mathcal{M}_n(D,K_1,K_2,i_0)$. Then there exist a constant $L$ and boundary points $\{z_i\}_{i=1}^L$, $z_i\in \partial M$ such that the following two properties hold. \noindent (1) \,For any $x\in M$ with $d(x,\partial M)\geqslant i_0/2$, there exist $n$ boundary points $\{z_{i_1(x)},\cdots,z_{i_n(x)}\}\subset \{z_i\}_{i=1}^L$, such that the distance functions $\big(d(\cdot,z_{i_1(x)}),\cdots,d(\cdot,z_{i_n(x)})\big)$ define a bi-Lipschitz local coordinate system in a neighborhood of $x$. \noindent (2) \,The map $\Phi_L:M\to \mathbb{R}^{L}$ defined by $$\Phi_L(x)=\big(d(x,z_1),\cdots,d(x,z_L)\big)$$ is bi-Lipschitz on $\{x\in M: d(x,\partial M)\geqslant i_0/2\}$, where the Lipschitz constant and $L$ depend only on $n,D,K_1,K_2,i_0,vol(\partial M)$. \smallskip Furthermore, the boundary points $\{z_i\}_{i=1}^L$ can be chosen as any maximal $r_L$-separated set on $\partial M$, where $r_L<i_0/8$ is a constant depending only on $n,D,K_1,K_2,i_0$. \end{coordinate} \begin{proof} Given $x\in M$ with $d(x,\partial M)\geqslant i_0/2$, let $z\in \partial M$ be a nearest boundary point: i.e. $d(x,z)=d(x,\partial M)$. Then it follows that $z$ is not conjugate to $x$ along the minimizing geodesic from $x$ to $z$. That is to say, the differential $d\exp_x |_v$ is non-degenerate, where $\exp_x$ denotes the exponential map of $M$ and $v=\exp_x^{-1}(z)$. Hence by the Inverse Function Theorem, there exists a neighborhood of $(x,v)\in TM$ (with respect to the Sasaki metric on the tangent bundle), such that the exponential map is a diffeomorphism onto a neighborhood of $z$. Furthermore, one can find a uniform radius $r_1$ depending on $n,D,K_1,K_2,i_0$ for the size of these neighborhoods (Lemma 4 in \cite{KKL2}). We take $\{z_i\}$ to be an $r_2$-net on $\partial M$ (with respect to the intrinsic distance $d_{\partial M}$ of $\partial M$), where the parameter $r_2<r_1/8$ is determined later. By definition, there exists $z_1\in \{z_i\}$ such that $d_{\partial M}(z,z_1)<r_2$. Then we search for $n-1$ points $z_2,\cdots,z_n$ such that ${}_{\partial M}{\exp}_{z_1}^{-1}(z_j)$ (for $j=2,\cdots,n$) form a basis in $T_{z_1}(\partial M)$, where ${}_{\partial M}{\exp}$ denotes the exponential map of $\partial M$. We claim that this is possible for sufficiently small $r_2$ explicitly depending on $r_1,n,K_1$. This claim can be proved as follows.
Take $v_2,\cdots,v_n$ to be an orthonormal basis of $T_{z_1}(\partial M)$, and consider the points $z_{j}^{\prime}={}_{\partial M}{\exp}_{z_1} (s v_j)\in \partial M$ for a fixed $s\in (r_1/4,r_1/2)$. By the definition of an $r_2$-net, there exist points $z_2,\cdots,z_n\in \{z_i\}$ such that $d_{\partial M}(z_j^{\prime},z_j)<r_2$ (for $j=2,\cdots,n$). We consider the triangle with the vertices $z_1,z_j^{\prime},z_j$. Since the lengths of the sides $z_1 z_j^{\prime}$ and $z_1 z_j$ are at least $r_1/8$, for sufficiently small $r_2$ explicitly depending on $K_1$, the angle of the triangle at $z_1$ is small (Toponogov's Theorem) and therefore ${}_{\partial M}{\exp}_{z_1}^{-1}(z_j)$ (for $j=2,\cdots,n$) also form a basis. Then by the same argument as Lemma 2.14 in \cite{KKL}, one can show that $z_1,z_2,\cdots,z_n$ are the desired boundary points, which admit a boundary distance coordinate system in a neighborhood of $x$. From now on, we choose $\{z_i\}_{i=1}^L$ to be a maximal $r_2$-separated set on $\partial M$, which is indeed an $r_2$-net by maximality. The cardinality $L$ of this net is bounded by $C(n,vol(\partial M))r_2^{-n+1}$. The bi-Lipschitzness of the boundary distance coordinates follows from the fact that the differential of the exponential map is uniformly bounded in the relevant domain by a constant depending on $n,D,K_1,K_2,i_0$ (Lemma 3 and Proposition 1 in \cite{KKL2}). This concludes the proof for the first part of the lemma. \smallskip Next we prove the second part of the lemma. We claim that there exists $r_3>0$, such that $\Phi_L$ with respect to any maximal $r_3$-separated set on $\partial M$ is bi-Lipschitz on $\{x\in M: d(x,\partial M)\geqslant i_0/2\}$. Note that $\Phi_L$ is automatically Lipschitz with the Lipschitz constant $\sqrt{L}$ by the triangle inequality. Suppose there exist a sequence of manifolds $M_k\in \mathcal{M}_n(D,K_1,i_0)$ and points $x_k,y_k\in \{x\in M_k: d(x,\partial M_k)\geqslant i_0/2\}$, such that $$\frac{|\Phi_{L,k}(x_k)-\Phi_{L,k}(y_k)|}{d_{M_k}(x_k,y_k)}\to 0,\textrm{ as }k\to \infty,$$ where $\Phi_{L,k}$ is defined with respect to some maximal $1/k$-separated set on $\partial M_k$. The pre-compactness of $\mathcal{M}_n(D,K_1,i_0)$ (Theorem 3.1 in \cite{AKKLT}) yields a subsequence of $M_k$ converging to a limit $M$ in the $C^1$-topology. We choose subsequences of $x_k,y_k$ converging to limit points $x,y\in M$. The assumption implies that $\Phi_{L}(x)=\Phi_{L}(y)$ with respect to a dense subset of $\partial M$. Due to the fact that the boundary distance map $\mathcal{R}$ is a homeomorphism (Lemma 3.30 in \cite{KKL}), it follows that $x=y$. Moreover, we have $d(x,\partial M)\geqslant i_0/2$. However, for sufficiently large $k$ such that $x_k,y_k\in B_{r_1}(x)$, the points $x_k,y_k$ lie in the same boundary distance coordinate neighborhood by the first part of the lemma, on which $\Phi_{L,k}$ is locally bi-Lipschitz with a uniformly bounded Lipschitz constant. This contradicts the assumption. Therefore there exists some $r_3>0$ depending on $n,D,K_1,i_0$, such that $\Phi_L$ with respect to any maximal $r_3$-separated set on $\partial M$ is bi-Lipschitz. Finally, we further restrict $\{z_i\}_{i=1}^L$ to be a maximal $\min\{r_1,r_2,r_3\}$-separated set on $\partial M$. Hence the cardinality $L$ satisfies $$L\leqslant C(n,vol(\partial M))\min\{r_1,r_2,r_3\}^{-n+1},$$ which depends only on $n,D,K_1,K_2,i_0,vol(\partial M)$. We denote $r_L=\min\{r_1,r_2,r_3\}$ which depends on $n,D,K_1,K_2,i_0$.
\end{proof} \smallskip \noindent \textbf{Choice of partition.} Let $\eta>0$ be given. We choose boundary points $\{z_i\}_{i=1}^N$ and a partition $\{\Gamma_i\}_{i=1}^N$ of $\partial M$ as follows. Let $\{z_1,\cdots,z_L\}$ be the boundary points determined in Lemma \ref{coordinate}, and then we add $N-L$ boundary points such that $\{z_{1},\cdots,z_{N}\}$ is a maximal $\eta/2$-separated set on $\partial M$. This is possible because $\{z_1,\cdots,z_L\}$ can be chosen as any maximal $r_L$-separated set on $\partial M$, with $r_L$ being a uniform constant independent of $\eta$. We take $\{\Gamma_i\}_{i=1}^N$ to be a partition of $\partial M$ (e.g. Voronoi regions corresponding to $\{z_i\}_{i=1}^N$) satisfying the assumptions at the beginning of this section: $\textrm{diam}(\Gamma_i)\leqslant \eta$, $z_i\in \Gamma_i$, and every $\Gamma_i$ contains a ball (of $\partial M$) of radius $\eta/6$. The cardinality $N$ of the partition is bounded above by (\ref{boundN}). \begin{def-section5}\label{Def-Mbeta} Let $\eta>0$ be given. For multi-indices $\beta$ of the form $\beta=(\beta_0,\beta_1,\cdots,\beta_N)$ with $\beta_0\in\{0,1\},\,\beta_1,\cdots,\beta_N \in\mathbb{N}$, we consider the following two types of sub-domains (see Figure \ref{slicing}). \smallskip (1) \,Given a multi-index $\beta=(0,\beta_1,\cdots,\beta_N)$, we define a slicing of the manifold by \begin{equation}\label{Mbeta} M_{\beta}^{\ast}=\bigcap_{i:\,\beta_i>0} \big\{x\in M: \, d(x,\Gamma_i)\in [\beta_i\eta-2\eta,\beta_i\eta) \,\big\}. \end{equation} We also consider the following modified multi-index obtained by setting specific components to zero: $$\beta\langle l\rangle:=(0,\beta_1,\cdots,\beta_L,0,\cdots,0,\beta_l,0,\cdots,0),\quad l\in \{L+1,\cdots,N\}.$$ (2) \,Given a multi-index $\beta=(1,\beta_1,\cdots,\beta_N)$, we define a modified multi-index by $$\beta[k,i]:=(1,0,\cdots,0, \beta_k,0,\cdots,0,\beta_i,0,\cdots,0),\quad k\neq i.$$ In other words, $\beta[k,i]$ can only have nonzero $k$-th and $i$-th components besides the $0$-th component. Then we define the following sub-domain: \begin{equation}\label{Mki} M_{\beta[k,i]}^{\ast}=\big\{x\in M: \, d(x,\partial M)\geqslant \beta_k\eta-2\eta,\, d(x,\Gamma_k)<\beta_k\eta,\, d(x,\Gamma_i)\in [\beta_i\eta-2\eta,\beta_i\eta)\, \big\}. \end{equation} \end{def-section5} \begin{figure}[h] \includegraphics[scale=0.5]{Figure4} \caption{Sub-domains from two subsets of the boundary. The former type is used to reconstruct the inner part of the manifold, while the latter type is used to reconstruct the boundary normal neighborhood.} \label{slicing} \end{figure} By definition (\ref{Mbeta}), we only slice the manifold from $\Gamma_i$ if $\beta_i > 0$. Hence $M_{\beta}^{\ast}\subset M_{\beta\langle l\rangle}^{\ast}$ for any $l\in \{L+1,\cdots,N\}$. Since the diameter of the manifold is bounded above by $D$, it suffices to consider a finite number of choices $\beta_i\leqslant 2+D/\eta$ for each $\beta_i$. Notice that we always use a fixed number (independent of $\eta$) of $\Gamma_i$ to slice the manifold. This keeps the total number of slicings from growing too large as $\eta$ gets small. Similar to Lemma \ref{volume}, we can also evaluate approximate volumes for $vol(M_{\beta\langle l\rangle}^{\ast}),\,vol(M_{\beta[k,i]}^{\ast})$, and the error can be made as small as needed given sufficient boundary spectral data. \begin{volumebeta}\label{volumebeta} Let $\eta>0$ be given, and $M_{\beta\langle l\rangle}^{\ast},M_{\beta[k,i]}^{\ast}$ be defined in Definition \ref{Def-Mbeta}.
Then for any $\varepsilon>0$, there exists sufficiently small $\delta=\delta(\eta,\varepsilon)$, such that by only knowing a $\delta$-approximation $\{\lambda^a_j,\varphi^a_j|_{\partial M}\}$ of the Neumann boundary spectral data, we can compute numbers $vol^{a}(M_{\beta\langle l \rangle}^{\ast})$, $vol^a(M_{\beta[k,i]}^{\ast})$ satisfying $$\big|vol^{a}(M_{\beta\langle l \rangle}^{\ast})-vol(M_{\beta\langle l \rangle}^{\ast}) \big|<2^{L+1}\varepsilon,\; \textrm{ for any }l\in \{L+1,\cdots, N\},$$ and $$\big|vol^{a}(M_{\beta[k,i]}^{\ast})-vol(M_{\beta[k,i]}^{\ast}) \big|<4\varepsilon, \;\textrm{ for any }i\neq k,$$ where $L$ is a uniform constant independent of $\eta$ determined in Lemma \ref{coordinate}. \end{volumebeta} \begin{proof} Observe that for any $\beta=(0,\beta_1,\cdots,\beta_N)$ with $\beta_1,\cdots,\beta_N>0$, the sub-domain $M_{\beta}^{\ast}$ can be obtained by a finite number of unions, intersections and complements of the sub-domains $M_{\alpha}$ of the form (\ref{Malpha}) with $\alpha_0=0$. More precisely, \begin{eqnarray*} M_{\beta}^{\ast} &=& \bigcap_{i=1}^N \big( M(\Gamma_i,\beta_i\eta)- M(\Gamma_i,\beta_i\eta-2\eta) \big) \\ &=& \bigcap_{i=1}^N M(\Gamma_i,\beta_i\eta) -\bigcup_{i=1}^N M(\Gamma_i,\beta_i\eta-2\eta). \end{eqnarray*} Then the volume of $M_{\beta}^{\ast}$ can be written in terms of the volumes of $M_{\alpha}$ with $\alpha_0=0$ through the following operations. For any $n$-dimensional Hausdorff measurable subsets $\Omega_1,\Omega_2\subset M$, $$vol(\Omega_1-\Omega_2)=vol(\Omega_1\cup \Omega_2)-vol(\Omega_2);$$ $$vol(\Omega_1\cap\Omega_2) = vol(\Omega_1)+vol(\Omega_2)-vol(\Omega_1\cup\Omega_2).$$ Moreover, for any multi-indices $\alpha, \alpha^{\prime}$, $$vol(M_{\alpha}\cup M_{\alpha^{\prime}})=vol(M_{\alpha_{max}}), \;\textrm{ where } (\alpha_{max})_i=\max\{\alpha_i,\alpha_i^{\prime}\}.$$ Therefore the approximate volume $vol^a(M_{\beta}^{\ast})$ for $M_{\beta}^{\ast}$ can be defined by replacing the volumes of $M_{\alpha}$ in the expansion with the approximate volumes $vol^a(M_{\alpha})$. On the other hand, for a multi-index of the form $\beta[k,i]$, we have \begin{eqnarray*} M_{\beta[k,i]}^{\ast} = M(\Gamma_k,\beta_k\eta)\cap M(\Gamma_i,\beta_i\eta)-\big(M(\partial M,\beta_k\eta-2\eta)\cup M(\Gamma_i,\beta_i\eta-2\eta)\big). \end{eqnarray*} Recall that the volume information from the whole boundary $\partial M$ is incorporated in the $\alpha_0$ component of the multi-index $\alpha$. Thus the volume of $M_{\beta[k,i]}^{\ast}$ can be written in terms of the volumes of $M_{\alpha}$ with $\alpha_0\geqslant 0$. For a multi-index of the form $\beta\langle l\rangle$, the total number of volume terms of $M_{\alpha}$ in $vol(M_{\beta\langle l \rangle}^{\ast})$ is at most $2^{L+1}$. For a multi-index of the form $\beta[k,i]$, the total number of volume terms of $M_{\alpha}$ in $vol(M_{\beta[k,i]}^{\ast})$ is at most 4. Then the error estimates directly follow from Lemma \ref{volume}. \end{proof} \smallskip Now we are in a position to define an approximation to the boundary distance functions $\mathcal{R}(M)$. We consider the following candidate. \begin{def-section5}\label{Rast} Let $\eta,\,\varepsilon>0$ be given.
For a multi-index $\beta=(\beta_0,\beta_1,\cdots,\beta_N)$ with $\beta_0\in\{0,1\},\,\beta_1,\cdots,\beta_N \in\mathbb{N}_+$, if either of the following two situations happens, we associate with this $\beta$ a piecewise constant function $r_{\beta}\in L^{\infty}(\partial M)$ defined by $$r_{\beta}(z)=\beta_i \eta, \;\textrm{ if }z\in \Gamma_i.$$ \begin{enumerate}[(1)] \item $\beta_0=0$; $\beta_i\eta>i_0/2$ for all $i=1,\cdots,N$, and $vol^a(M_{\beta\langle l\rangle}^{\ast})\geqslant \varepsilon$ for all $l=L+1,\cdots,N$. \item $\beta_0=1$; there exists $k\in \{1,\cdots,N\}$, such that $\beta_k\eta\leqslant i_0/2$ and $vol^a(M_{\beta[k,i]}^{\ast})\geqslant \varepsilon$ for all $i=1,\cdots,N$ with $i\neq k$. \end{enumerate} We test all multi-indices $\beta$ up to $\beta_i\leqslant 2+D/\eta$ for each $\beta_i$, and denote the set of all functions $r_{\beta}$ chosen this way by $\mathcal{R}^{\ast}_{\varepsilon}$. \end{def-section5} Intuitively, the first situation in Definition \ref{Rast} describes a small neighborhood in the interior of the manifold away from the boundary. The second situation describes a small neighborhood near the boundary with the help of the boundary normal neighborhood. We prove that $\mathcal{R}^{\ast}_{\varepsilon}$ is an approximation to the boundary distance functions $\mathcal{R}(M)$ for sufficiently small $\varepsilon$. \begin{approximation}\label{approximation} Let $M\in \mathcal{M}_n(D,K_1,K_2,i_0,r_0)$. For any $\eta>0$, there exists $\varepsilon=\varepsilon(\eta)$ and sufficiently small $\delta=\delta(\eta)$, such that by only knowing a $\delta$-approximation $\{\lambda^a_j,\varphi^a_j|_{\partial M}\}$ of the Neumann boundary spectral data, we can construct a set $\mathcal{R}^{\ast}_{\varepsilon}\subset L^{\infty}(\partial M)$ such that $$d_H (\mathcal{R}^{\ast}_{\varepsilon},\mathcal{R}(M)) \leqslant C_6\sqrt{\eta},$$ where $d_H$ denotes the Hausdorff distance between subsets of the metric space $L^{\infty}(\partial M)$, and the constant $C_6$ depends only on $n,D,K_1,K_2,i_0,vol(\partial M)$. \end{approximation} \begin{proof} Let $\eta<\min\{1,i_0/8\}$. Given any $x\in M$, take a point $x^{\prime}\in M$ such that $d(x,x^{\prime})\leqslant\eta$ and $d(x^{\prime},\partial M)\geqslant \eta$. Clearly there exist positive integers $\beta_i>0$ such that $d(x^{\prime},\Gamma_i)\in [\beta_i\eta-2\eta,\beta_i\eta)$ for all $i=1,\cdots,N$. In fact, there are two choices for each $\beta_i$, and we choose the one satisfying $d(x^{\prime},\Gamma_i)\in [\beta_i\eta-3\eta/2,\beta_i\eta-\eta/2)$ for all $i$. In particular, we see that each $\beta_i$ satisfies $\beta_i\eta-2\eta\leqslant D$. If $\beta_i\eta>i_0/2$ for all $i=1,\cdots,N$, then we consider the multi-index $\beta=(0,\beta_1,\cdots,\beta_N)$. It follows from the triangle inequality that $B_{\eta/2}(x^{\prime})\subset M_{\beta}^{\ast}$. Since $B_{\eta/2}(x^{\prime})$ does not intersect $\partial M$, we have $vol(M_{\beta}^{\ast})> vol(B_{\eta/2}(x^{\prime}))\geqslant c_n\eta^n$ for sufficiently small $\eta$, which implies that $vol(M_{\beta\langle l\rangle}^{\ast})>c_n \eta^n$ for all $l=L+1,\cdots,N$. We denote \begin{equation}\label{epsilon-star} \varepsilon_{\ast}=c_n\eta^n/2, \end{equation} and set $\varepsilon=2^{-L-1}\varepsilon_{\ast}$ in Lemma \ref{volumebeta}. Then we consider the set of functions $\mathcal{R}^{\ast}_{\varepsilon_{\ast}}$. 
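For completeness, we record the triangle-inequality computation behind the inclusion $B_{\eta/2}(x^{\prime})\subset M_{\beta}^{\ast}$ used above; this only uses the choice $d(x^{\prime},\Gamma_i)\in [\beta_i\eta-3\eta/2,\beta_i\eta-\eta/2)$. For any $y\in B_{\eta/2}(x^{\prime})$ and any $i$, $$d(y,\Gamma_i)\leqslant d(x^{\prime},\Gamma_i)+\frac{\eta}{2}<\beta_i\eta, \qquad d(y,\Gamma_i)\geqslant d(x^{\prime},\Gamma_i)-\frac{\eta}{2}\geqslant \beta_i\eta-2\eta,$$ so indeed $d(y,\Gamma_i)\in [\beta_i\eta-2\eta,\beta_i\eta)$ for every $i$ with $\beta_i>0$.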
Since $vol^a(M_{\beta\langle l\rangle}^{\ast})>c_n\eta^n-\varepsilon_{\ast}=\varepsilon_{\ast}$ by Lemma \ref{volumebeta}, we have $r_{\beta}\in \mathcal{R}^{\ast}_{\varepsilon_{\ast}}$ by the first situation in Definition \ref{Rast}. Then by the condition $\textrm{diam}(\Gamma_i)\leqslant \eta$ and the triangle inequality, we have \begin{equation}\label{xtobeta} \|r_x-r_{\beta}\|_{L^{\infty}(\partial M)}\leqslant \|r_x-r_{x^{\prime}}\|_{L^{\infty}(\partial M)}+\|r_{x^{\prime}}-r_{\beta}\|_{L^{\infty}(\partial M)}\leqslant\eta+2\eta=3\eta. \end{equation} If there exists $k\in \{1,\cdots,N\}$ such that $\beta_k\eta\leqslant i_0/2$, then we consider the multi-index $\beta=(1,\beta_1,\cdots,\beta_N)$. Without loss of generality, assume $k$ is the index such that $\beta_k=\min_{i> 0} \beta_i$. Hence $$d(x^{\prime},\partial M)= \min \big\{d(x',\Gamma_1),\cdots,d(x',\Gamma_N) \big\}\geqslant \beta_k\eta-3\eta/2,$$ which shows $x^{\prime}\in M_{\beta[k,i]}^{\ast}$ for all $i=1,\cdots,N$ with $i\neq k$ by definition (\ref{Mki}). Moreover, we have $B_{\eta/2}(x^{\prime})\subset M_{\beta[k,i]}^{\ast}$ for all $i$. Thus by choosing the same $\varepsilon_{\ast}$ and $\varepsilon$ as in the previous case, we have $r_{\beta}\in \mathcal{R}^{\ast}_{\varepsilon_{\ast}}$ by the second situation in Definition \ref{Rast}, and (\ref{xtobeta}) still holds. This concludes the proof for one direction. \smallskip On the other hand, given any $r_{\beta}\in \mathcal{R}^{\ast}_{\varepsilon_{\ast}}$, Definition \ref{Rast} and Lemma \ref{volumebeta} indicate that either $vol(M_{\beta\langle l\rangle}^{\ast})>0$ for all $l=L+1,\cdots,N$, or there exists $k$ such that $vol(M_{\beta[k,i]}^{\ast})>0$ for all $i$. Recall that $\beta_1,\cdots,\beta_N>0$ by definition. \noindent \textbf{(i)} The first situation allows us to pick an arbitrary point $x_l$ in every $M_{\beta\langle l\rangle}^{\ast}$. Then by $\textrm{diam}(\Gamma_i)\leqslant\eta$ and the triangle inequality, we have \begin{equation}\label{tril} \|r_{\beta}-r_{x_l}\|_{L^{\infty}(\Gamma_1\cup\cdots\cup \Gamma_L\cup\Gamma_l)}\leqslant 3\eta, \;\textrm{ for any }l\in \{L+1,\cdots,N\}. \end{equation} Notice that all $x_l$ are in fact bounded away from the boundary. More precisely, for any $x_l$, we know from Definition \ref{Rast} that $$d(x_l,\Gamma_i)\geqslant \beta_i\eta-2\eta > i_0/2-2\eta>i_0/4, \;\textrm{ for all }i=1,\cdots, L.$$ Since the boundary points $\{z_i\}_{i=1}^L$ can be chosen as a maximal $r_L$-separated set on $\partial M$, where $r_L<i_0/8$ is a uniform constant independent of $\eta$ (Lemma \ref{coordinate}), we have for any $x_l$, $$d(x_l,\partial M)>i_0/8.$$ Hence for any other $j\in \{L+1,\cdots,N\}$ with $j\neq l$, Lemma \ref{coordinate} yields that $$d(x_l,x_j)\leqslant C(n,D,K_1,i_0)|\Phi_L(x_l)-\Phi_L(x_j)| \leqslant C\sqrt{L}\,\eta,$$ where $\Phi_L(\cdot)=\big(d(\cdot,z_1),\cdots,d(\cdot,z_L)\big)$. Then it follows from the triangle inequality and (\ref{tril}) that $$\|r_{\beta}-r_{x_l}\|_{L^{\infty}(\Gamma_j)}\leqslant \|r_{\beta}-r_{x_j}\|_{L^{\infty}(\Gamma_j)}+\|r_{x_j}-r_{x_l}\|_{L^{\infty}(\Gamma_j)} \leqslant (C\sqrt{L}+3)\eta.$$ Thus by letting $j\neq l$ range over $\{L+1,\cdots,N\}$, we obtain \begin{equation*} \|r_{\beta}-r_{x_l}\|_{L^{\infty}(\partial M)}\leqslant (C\sqrt{L}+3)\eta. \end{equation*} \noindent \textbf{(ii)} The second situation allows us to pick an arbitrary point $x_i$ in every $M_{\beta[k,i]}^{\ast}$.
Observe from Definition \ref{Rast} that for any $x_i$, we have $$d(x_i,\partial M)\leqslant d(x_i,\Gamma_k) < \beta_k\eta\leqslant i_0/2.$$ The fact that $d(x,\Gamma_k)\geqslant d(x,\partial M)$ implies that $$\|r_{\beta}-r_{x_i}\|_{L^{\infty}(\Gamma_k\cup\Gamma_i)}\leqslant 2\eta.$$ For any other $j\in\{1,\cdots,N\}$ with $j\neq k,i$, we have $$d(x_i,x_j)\leqslant C\sqrt{\eta}.$$ This is due to the fact that the diameter of the sub-domain $\{x\in M: \, d(x,\partial M)\geqslant \beta_k\eta-2\eta,\, d(x,\Gamma_k)<\beta_k\eta \}$ for $\beta_k\eta\leqslant i_0/2$ is bounded above by $C\sqrt{\eta}$. Hence by letting $j\neq k,i$ range over $\{1,\cdots,N\}$, we obtain \begin{equation*} \|r_{\beta}-r_{x_i}\|_{L^{\infty}(\partial M)}\leqslant C\sqrt{\eta}+2\eta. \end{equation*} Combining both directions, and noting that $3\eta$, $(C\sqrt{L}+3)\eta$ and $C\sqrt{\eta}+2\eta$ are all bounded by $C_6\sqrt{\eta}$ for $\eta<1$, we obtain the claimed bound on the Hausdorff distance. \end{proof} \begin{remark}\label{remark-thirdlog} We only used a fixed number (independent of $\eta$) of subsets of the boundary to slice the manifold, so that the total number of slicings does not grow too large as $\eta$ gets small. To reconstruct the inner part of the manifold, we used $L+1$ subsets with $L$ being a uniform constant (though not explicit). Near the boundary, we took advantage of the boundary normal neighborhood and essentially only used two subsets. If we instead used all $N$ subsets to slice the manifold, it would result in a third logarithm in Theorem \ref{stability}. \end{remark} \begin{remark}\label{approximation-partial} By virtue of Remark \ref{projection-partial}, the approximate volume for $M_{\alpha}$ with $\alpha_0=0$ in Lemma \ref{volume} can be found by only knowing the boundary data on $\cup_{\alpha_i>0} \Gamma_i$. This implies that the approximate volume for $M_{\beta}^{\ast}$ (with $\beta_0=0$) in Lemma \ref{volumebeta} can be found by only knowing the boundary data on $\cup_{\beta_i>0} \Gamma_i$. Thus, in a way similar to (but simpler than) Definition \ref{Rast} and Proposition \ref{approximation}, one can define an approximation to $\mathcal{R}(M)$ restricted on a part of the boundary using partial boundary spectral data. Furthermore, in the case of partial data, a calculation similar to that in Appendix \ref{constants} yields a $\log$-$\log$-$\log$ estimate on the stability of the reconstruction of $\mathcal{R}(M)$. \end{remark} The following result shows that the reconstruction of a manifold from $\mathcal{R}(M)$ is stable. \begin{2007}(Theorem 1 in \cite{KKL2})\label{2007} Let $M$ be a compact Riemannian manifold with smooth boundary. Suppose $\mathcal{R}^{\ast}$ is an $\eta$-approximation to the boundary distance functions $\mathcal{R}(M)$ for sufficiently small $\eta$. Then one can construct a finite metric space $X$ directly from $\mathcal{R}^{\ast}$ such that $$d_{GH}(M,X)<C_7(n,D,K_1,K_2,i_0)\, \eta^{\frac{1}{36}},$$ where $d_{GH}$ denotes the Gromov-Hausdorff distance between metric spaces. \end{2007} \medskip Finally, we prove the main results, Theorems \ref{stability} and \ref{Cor1}. \begin{proof}[Proof of Theorem \ref{stability}] The estimate directly follows from Proposition \ref{approximation} and Theorem \ref{2007}. The dependency of constants is derived in Appendix \ref{constants}. The only remaining part is to find an upper bound for $vol(\partial M)$ and $vol(M)$ in terms of the other geometric parameters. Due to Corollary 2(b) in \cite{KKL2}, the (intrinsic) diameter of $\partial M$ is uniformly bounded by a constant depending on $n,D,\|R_M\|_{C^1},\|S\|_{C^2},i_0$, though not explicitly.
Then by the volume comparison theorem for $\partial M$, $vol(\partial M)$ is uniformly bounded in terms of the same parameters. As for $vol(M)$, the manifold $M$ is covered by harmonic coordinate charts with the total number of charts bounded (not explicitly) by a constant depending on $n,D,\|R_M\|_{C^1},\|S\|_{C^2},i_0$ (Theorem 3 in \cite{KKL2}). Away from the boundary, the volumes of balls of a small radius are uniformly bounded. Near the boundary, we can use the boundary normal neighborhood of $\partial M$ since $vol(\partial M)$ is already shown to be bounded. Hence $vol(M)$ is uniformly bounded in terms of the same parameters. \end{proof} \begin{proof}[Proof of Theorem \ref{Cor1}] We take the first $\delta^{-1}$ Neumann boundary spectral data of $M_2$, and by Definition \ref{deferror}, this finite set of data (without error) is a $\delta$-approximation of the Neumann boundary spectral data of $M_2$. By Proposition \ref{approximation}, we can construct an approximation to $\mathcal{R}(M_2)$. On the other hand, the finite spectral data of $M_2$ is $\delta$-close to the Neumann boundary spectral data of $M_1$ by Definition \ref{deferror}, since the Neumann boundary spectral data of $M_1$ and $M_2$ are $\delta$-close by assumption. Then from the pull-back of the finite spectral data of $M_2$ via the boundary isometry, we can construct an approximation to $\mathcal{R}(M_1)$. Since the boundary isometry (diffeomorphism) preserves Riemannian metrics on the boundaries, the pull-back of the finite spectral data via the boundary isometry produces an isometric approximation to the boundary distance functions. Hence Theorem \ref{Cor1} follows from Corollary 1 in \cite{KKL2}. \end{proof} \section{Auxiliary lemmas}\label{auxiliary} This section contains the proofs of several lemmas used in Section \ref{section-uc}. Some of the lemmas in this section, especially Lemma \ref{dd}, are important technical results, and we prove them here so as not to interrupt the structure of the main proof. Some other lemmas are known facts for which we did not find precise references, so we present short proofs here. \begin{riccati}\label{riccati} Let $(M,g)\in \mathcal{M}_n(D,K_1,K_2,i_0)$. Denote by $S_{\rho}$ the second fundamental form of the equidistant hypersurface in $M$ defined by the level set $d(\cdot,\partial M)=\rho$ for $\rho<i_0$. Then there exists a uniform constant $r_b$ explicitly depending only on $K_1,i_0$, such that for any $\rho\leqslant r_b$, we have $\|S_{\rho}\|\leqslant 2K_1$. Moreover, if the metric components satisfy (\ref{coorb}) with respect to a coordinate chart in a ball $U$ of $\partial M$, then the metric components with respect to the boundary normal coordinate in $U\times [0,r_b]$ satisfy $$\|g_{ij}\|_{C^1}\leqslant C(n,\|R_M\|_{C^1},\|S\|_{C^1}),\quad \|g_{ij}\|_{C^4}\leqslant C(n,K_1,K_2,i_0),\; \textrm{ for all } 1\leqslant i,j\leqslant n.$$ \end{riccati} \begin{proof} At an arbitrary point $z\in \partial M$, take an arbitrary unit vector $V$ in $T_z (\partial M)$ and extend it to $V(\rho)\in T_{\gamma_{z,\textbf{n}}(\rho)} M$ ($\rho<i_0$) via parallel translation along $\gamma_{z,\textbf{n}}$, where $\gamma_{z,\textbf{n}}$ denotes the geodesic of $M$ from $z$ with the initial normal vector $\textbf{n}$ at $z$. We still use the notation $S_{\rho}$ to denote the shape operator of the equidistant hypersurface with distance $\rho$ from $\partial M$.
Consider the following function $$\kappa_V (\rho)=\langle S_{\rho} (V(\rho)),V(\rho) \rangle_{g}.$$ The bound on the second fundamental form of $\partial M$ gives $|\kappa_V(0)|\leqslant K_1$. For convenience, we omit the evaluation at $\rho$ and use $V$ to denote the vector field $V(\rho)$. Since $V$ is a parallel vector field with respect to the normal vector field $\frac{\partial}{\partial \rho}$ (or simply $\partial_{\rho}$), we have \begin{equation*} \frac{d }{d\rho} \kappa_V = \langle \nabla_{\partial_{\rho}} (S_{\rho} V) ,V \rangle+ \langle S_{\rho} V, \nabla_{\partial_{\rho}} V\rangle = \langle (\nabla_{\partial_{\rho}} S_{\rho}) V,V \rangle. \end{equation*} Then the Riccati equation (e.g. Theorem 2 in \cite{PP}, p44) leads to the following formula: \begin{equation}\label{kappa-riccati} \frac{d }{d\rho} \kappa_V = -\langle S_{\rho}^2 V,V \rangle +R_M(V,\partial_{\rho},V,\partial_{\rho}) . \end{equation} Due to the fact that $S_{\rho}$ is symmetric and $|V|=1$, we have $$\langle S_{\rho}^2 V,V \rangle = |S_{\rho} V|^2 \geqslant |\langle S_{\rho} V,V \rangle|^2 .$$ Hence, \begin{equation}\label{riccati-scalar-upper} \frac{d }{d\rho} \kappa_V (\rho) \leqslant -\kappa_V^2 (\rho) + K_1^2\, . \end{equation} On the other hand, we need a lower bound for $d\kappa_V/d\rho$. This is possible because we \emph{a priori} know the solution of the Riccati equation exists up to $i_0$, and the equidistant hypersurfaces vary smoothly in a neighborhood of $\partial M$. This implies that there exists a positive number $\rho_{max}\leqslant i_0/2$ satisfying $$\rho_{max}=\sup \big\{\rho\in [0,\frac{i_0}{2}]: \|S_{\tau}\|\leqslant 2K_1 \textrm{ for all }\tau\in [0,\rho] \big\}.$$ Hence for any $\rho\in [0,\rho_{max}]$, we have $|S_{\rho} V|\leqslant 2K_1$, since the condition above is a closed condition. Then from (\ref{kappa-riccati}), \begin{equation}\label{riccati-scalar-lower} \frac{d }{d\rho} \kappa_V (\rho) \geqslant -4K_1^2-K_1^2=-5K_1^2\, . \end{equation} Combining (\ref{riccati-scalar-upper}) and (\ref{riccati-scalar-lower}), we have $$\big| \frac{d }{d\rho} \kappa_V (\rho) \big| \leqslant 5K_1^2\, , \quad \rho\in [0,\rho_{max}].$$ Thus for any $\rho\leqslant \min\{\rho_{max},(10K_1)^{-1}\}$, we have $|\kappa_V(\rho)|\leqslant 3K_1/2$. Since $z$ and $V$ are arbitrary, this shows $\|S_{\rho}\|\leqslant 3K_1/2$. We claim that the uniform constant $r_b$ can be chosen as $r_b=\min\{i_0/2,(10K_1)^{-1}\}$. This choice is obviously justified if $\rho_{max}=i_0/2$. Now if $\rho_{max}<i_0/2$, we prove that $\rho_{max}> (10K_1)^{-1}$. Suppose not; then the estimate above shows that $\|S_{\rho}\|\leqslant 3K_1/2$ holds for all $\rho\leqslant \rho_{max}$. We know the solution of the Riccati equation exists in a neighborhood of $\rho_{max}$; since $\rho_{max}< i_0/2$ by assumption, there therefore exists $\rho>\rho_{max}$ for which the condition defining $\rho_{max}$ still holds. This contradicts the maximality of $\rho_{max}$. As a consequence, our estimate holds up to $\rho\leqslant (10K_1)^{-1}$ in this case. On the other hand, the fact that $(10K_1)^{-1}<\rho_{max}<i_0/2$ justifies our choice of $r_b$ in this case. This completes the proof of the first part of the lemma. \smallskip For the second part, we consider the matrix Riccati equation in the boundary normal coordinate. This time we use the Lie derivative version of the Riccati equation (e.g. Proposition 7(3) in \cite{PP}, p47).
The components of the shape operator are denoted by $S_{\alpha}^{l}=\sum_{\beta=1}^{n-1} g^{\beta l}S_{\alpha \beta}$, where $S_{\alpha \beta}$ denotes the components of the second fundamental form of the equidistant hypersurfaces. Here the evaluation at $\rho$ is omitted. Then the Riccati equation has the following form: $$\frac{d}{d \rho} S_{\alpha\beta}=\sum_{\gamma,l=1}^{n-1} g_{\gamma l}S_{\alpha}^{\gamma}S_{\beta}^{l}+R_M \big(\frac{\partial}{\partial x^{\alpha}}, \frac{\partial}{\partial \rho}, \frac{\partial}{\partial x^{\beta}}, \frac{\partial}{\partial \rho} \big).$$ By definition, we have the equation for the distortion of the metric: $$\frac{d}{d \rho} g_{\alpha\beta}=2S_{\alpha\beta}.$$ Due to the first part of the lemma, $d g_{\alpha\beta}/d\rho$ is uniformly bounded. As a consequence, $g_{\alpha\beta}$ is uniformly bounded since it is bounded in the coordinate chart on $\partial M$. The tangential derivatives of $g_{\alpha\beta}$ are estimated as follows. The Riccati equation can be written in terms of $(S_{\alpha\beta})$ and $(g_{\alpha\beta})$ using the formula for the matrix inverse. We differentiate these two equations with respect to all tangential directions $x^1,\cdots,x^{n-1}$, and obtain a system of first-order ODEs in the variable $\textbf{v}$: $$\textbf{v}(\rho)=\big(\cdots,\frac{\partial g_{\alpha\beta}}{\partial x_T}(\rho),\cdots,\frac{\partial S_{\gamma l}}{\partial x_T}(\rho),\cdots \big), \quad \alpha,\beta,\gamma,l=1,\cdots,n-1,$$ where $x_T$ ranges over all tangential directions $x^1,\cdots,x^{n-1}$. This system of equations can be written in the following form: $$\frac{d}{d \rho} \textbf{v}=B_1\textbf{v}+B_2\textbf{v}+\nabla R_M^{\ast}.$$ The matrix $B_1$ is obtained by differentiating the $S^2$-type term in the Riccati equation, and only consists of components of the second fundamental form $(S_{\alpha\beta})$ and the metric $(g_{\alpha\beta})$. The matrix $B_2$ is obtained by differentiating the curvature term, and only consists of components of the curvature tensor and $(g_{\alpha\beta})$. The vector $\nabla R_M^{\ast}$ absorbs all the remaining terms and is treated as a constant vector. More precisely, the vector $\nabla R_M^{\ast}$ is made up of components of the covariant derivative $\nabla R_M$, and components of $R_M$, $(S_{\alpha\beta})$, $(g_{\alpha\beta})$. Due to the first part of the lemma, the components $(S_{\alpha\beta})$ and $(g_{\alpha\beta})$ are uniformly bounded in the boundary normal neighborhood of width $r_b$. Then it follows that the components $(g^{\alpha\beta})$ are also uniformly bounded. This implies that the matrices $B_1,B_2$ have norms bounded above by $C(n,K_1)$, and the vector $\nabla R_M^{\ast}$ has length bounded above by $C(n,K_1,\|\nabla R_M\|)$. The initial value $|\textbf{v}(0)|$ is bounded above in terms of $n$ and $\|\nabla S\|$. Then standard ODE theory yields a bound for $|\textbf{v}|$ and hence for all components of $\textbf{v}$. In particular, $\partial g_{\alpha\beta}/\partial x_T$ are uniformly bounded, which implies that $\|g_{ij}\|_{C^1}\leqslant C(n,\|R_M\|_{C^1},\|S\|_{C^1})$ for all $1\leqslant i,j\leqslant n$. We keep differentiating the matrix Riccati equation with respect to $x_T$ and $\rho$ up to the fourth order. By the same argument, all relevant coefficients of the resulting system of ODEs are uniformly bounded in terms of $\|R_M\|_{C^4}$, $(S_{\alpha\beta})$, $(g_{\alpha\beta})$ and the previous lower-order estimates.
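Here and above, the ODE bound we invoke is the standard Gr\"{o}nwall estimate; we record one convenient form for the reader's convenience (a routine statement, stated under the boundedness assumptions just derived, with $\Lambda,\mu$ denoting generic constants): if $\frac{d}{d\rho}\textbf{v}=B(\rho)\textbf{v}+\textbf{c}(\rho)$ on $[0,r_b]$ with $\|B(\rho)\|\leqslant \Lambda$ and $|\textbf{c}(\rho)|\leqslant \mu$, then $$|\textbf{v}(\rho)|\leqslant \big(|\textbf{v}(0)|+\mu\rho\big)\,e^{\Lambda \rho},\qquad \rho\in[0,r_b].$$ Applied with $B=B_1+B_2$ and $\textbf{c}=\nabla R_M^{\ast}$, this yields the bound for $|\textbf{v}|$ quoted above.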
Since the initial values at $\rho=0$ are bounded in terms of $n$, $\|g_{\alpha\beta}(0)\|_{C^4}$, $\|S\|_{C^4}$ and $\|R_M\|_{C^3}$, the $C^4$ estimate for the metric components directly follows from (\ref{coorb}). \end{proof} \begin{CATradius}\label{CATradius} (1) \,For any $M\in \mathcal{M}_n(K_1)$, we have $r_{\textrm{CAT}}(M)>0$. Assume further $M\in \mathcal{M}_n(D,K_1,K_2,i_0)$. The submanifold $M_h$ is defined in Definition \ref{Mhdh}. Suppose $\widetilde{M}$ is an extension of $M$ satisfying Lemma \ref{extensionmetric}(1-3) with the extension width $\delta_{ex}$. Then \\ (2) \,for sufficiently small $h,\delta_{ex}$ explicitly depending on $K_1,K_2,i_0$, we have $$r_{\textrm{CAT}}(M_h)\geqslant \min \big\{C(n,\|R_M\|_{C^1},\|S\|_{C^1}),r_{\textrm{CAT}}(M) \big\},$$ $$r_{\textrm{CAT}}(\widetilde{M})\geqslant\min \big\{C(n,\|R_M\|_{C^1},\|S\|_{C^1}),\frac{i_0}{4},\frac{r_{\textrm{CAT}}(M)}{2} \big\};$$ (3) \,for sufficiently small $h,\delta_{ex}$, we have $$r_{\textrm{CAT}}(M_h)\geqslant \min\big\{\frac{2}{3}r_{\textrm{CAT}}(M),\frac{\pi}{2K_1}\big\},\quad r_{\textrm{CAT}}(\widetilde{M})\geqslant \min\big\{\frac{2}{3}r_{\textrm{CAT}}(M),\frac{\pi}{2K_1}\big\}.$$ \end{CATradius} \begin{proof} Due to the Characterization Theorem in \cite{ABB2}, any point $x\in M$ has an open ball $U_x$ such that $U_x$ has curvature bounded above by $K_1^2$ in the sense of Alexandrov. In particular, for any points $p,q\in U_x$ satisfying $d_{U_x}(p,q)<\pi/K_1$, there is a unique minimizing geodesic in $U_x$ (not necessarily a minimizer of $M$) connecting $p$ and $q$ (e.g. Theorem 8.2.1 in \cite{AKP}). \smallskip \noindent (1) Suppose $r_{\textrm{CAT}}(M)=0$; then there exist sequences of points $p_i,q_i$ with $d(p_i,q_i)\to 0$, such that each pair $p_i,q_i$ is joined by two minimizing geodesics of $M$. By the compactness of $M$, we can find converging subsequences of points, still denoted by $p_i$ and $q_i$. Let $x$ be their limit point. For sufficiently large $i$, there are two minimizing geodesics of $M$ connecting $p_i,q_i$ and they both lie in $U_x$, contradicting the property of $U_x$. \smallskip \noindent (2) Given an arbitrary point $p\in M_h$, suppose $q\in M_h$ is a point such that there are two minimizing geodesics of $M_h$ connecting $p,q$. Without loss of generality, assume $d_h(p,q)<\min\{\pi/2K_1,r_{\textrm{CAT}}(M)\}$. We choose $h$ sufficiently small such that $\|S_{\partial M_h}\|\leqslant 2\|S\|$ and $\|S_{\partial M_h}\|_{C^1}\leqslant 2\|S\|_{C^1}$. Recall that no conjugate points occur along geodesics (of $M_h$) of length less than $\pi/2K_1$ (Corollary 3 in \cite{ABB2}). Furthermore, we consider $p,q$ to be the closest pair: $d_h(p,q)=r_{\textrm{CAT}}(M_h)$. Then by the first variation formula (e.g. Proposition 3 in \cite{ABB2}), the two geodesics connecting $p,q$ form a closed geodesic of $M_h$. It is known that geodesics on manifolds with smooth boundary are of $C^{1,1}$. Hence their geodesic curvature exists almost everywhere and is bounded by $C(n,\|R_M\|_{C^1},\|S\|_{C^1})$ due to (\ref{acceleration}). Now consider these two geodesics of $M_h$ connecting $p,q$ as a closed $C^{1,1}$-curve of $M$; it lies in the ball of $M$ centered at $p$ of radius $\min\{\pi/2K_1,r_{\textrm{CAT}}(M)\}$, which is CAT$(K_1)$ due to Theorem 4.3 in \cite{AB}. Hence by Corollary 1.2(c) in \cite{AB}, the length of this closed curve is bounded below by $C(n,\|R_M\|_{C^1},\|S\|_{C^1})$, and therefore $d_h(p,q)$ is bounded below by $C(n,\|R_M\|_{C^1},\|S\|_{C^1})$.
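For clarity, we note the elementary step implicit in the last sentence: the closed curve consists of two minimizing geodesics of $M_h$ between $p$ and $q$, so its total length is exactly $2\,d_h(p,q)$, and the length bound $$2\,d_h(p,q)\geqslant C(n,\|R_M\|_{C^1},\|S\|_{C^1})$$ transfers to $d_h(p,q)$ after absorbing the factor $1/2$ into the constant.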
Next we derive a lower bound for $r_{\textrm{CAT}}(\widetilde{M})$. Suppose $p,q\in \widetilde{M}$ are the closest pair of points such that there are two minimizing geodesics of $\widetilde{M}$ joining $p,q$. Assume $\widetilde{d}(p,q)<\min\{\pi/4K_1,i_0/4,$ $r_{\textrm{CAT}}(M)/2\}$. Then we immediately see that at least one of these two geodesics intersects $\widetilde{M}-M$. This implies that both geodesics lie in the boundary normal (tubular) neighborhood of $\partial M$ by assumption. Furthermore, the two geodesics connecting $p,q$ form a closed geodesic of $\widetilde{M}$ by the first variation formula. We move this closed geodesic inwards along the family of geodesics normal to $\partial M$ by the distance $\delta_{ex}<i_0/2$. This process results in a closed $C^{1,1}$-curve of $M$ contained in the boundary normal neighborhood. For sufficiently small $\delta_{ex}$ depending on $K_1,K_2$, this closed $C^{1,1}$-curve of $M$ has length at most $3\widetilde{d}(p,q)$ and its geodesic curvature is bounded by $C(n,\|R_M\|_{C^1},\|S\|_{C^1})$ almost everywhere. Hence this closed curve of $M$ lies in a ball of $M$ of radius $\min\{\pi/2K_1,r_{\textrm{CAT}}(M)\}$ (which is CAT$(K_1)$), and therefore its length is bounded below by $C(n,\|R_M\|_{C^1},\|S\|_{C^1})$ by Corollary 1.2(c) in \cite{AB}. This shows that the length of the original closed geodesic of $\widetilde{M}$ is bounded below by $C(n,\|R_M\|_{C^1},\|S\|_{C^1})$, which gives the lower bound for $\widetilde{d}(p,q)$. \smallskip \noindent (3) Here we only prove the statement for $M_h$; the proof for $\widetilde{M}$ is the same. Suppose not; then we can find $p_i,q_i\in M_{h_i}$ ($h_i\to 0$) such that there are two minimizing geodesics of $M_{h_i}$ connecting each pair $p_i,q_i$ with $d_{h_i}(p_i,q_i)<\min\{2r_{\textrm{CAT}}(M)/3,\pi/2K_1\}$. Moreover, we can assume $q_i$ is the closest point to $p_i$ such that this happens, and therefore the two geodesics connecting $p_i,q_i$ form a closed geodesic of $M_{h_i}$. Thus we have a sequence of closed $C^1$-curves with lengths less than $4r_{\textrm{CAT}}(M)/3$. This sequence of closed curves also has lengths uniformly bounded away from $0$ due to $(2)$. Hence by the Arzel\`a-Ascoli Theorem, we can find a subsequence converging to a limit closed curve in $M$ of nonzero length (not necessarily of $C^1$). Let $p,q\in M$ be the limit points of $p_i,\,q_i$. Since $d_{h_i}$ converges to $d$ (Lemma \ref{distances}), the lower semi-continuity of length yields that the limit closed curve has length at most $2d(p,q)$. Consider the segment of the limit closed curve from $p$ to $q$ and the other segment from $q$ to $p$. Both segments must have lengths at least the distance $d(p,q)$. Since the limit closed curve has length at most $2d(p,q)$, each segment is a minimizing geodesic of $M$. If these two segments do not coincide, then we get two minimizing geodesics of $M$ from $p$ to $q$ of lengths at most $2r_{\textrm{CAT}}(M)/3$, which contradicts the condition for $r_{\textrm{CAT}}(M)$. If the two segments coincide, we pick a point $y\in M$ on the limit curve close to $p$, and consider points $y_1,y_2$ on the closed geodesic of $M_{h_i}$ near the (fixed) point $y$, on opposite sides of $p_i$. For sufficiently large $i$, the points $y_1,y_2$ can be arbitrarily close in $M_{h_i}$ while remaining bounded away from $p_i$. However, the angle between the geodesic segment of $M_{h_i}$ from $p_i$ to $y_1$ and the segment from $p_i$ to $y_2$ is always $\pi$, since the curve in question is a closed $C^1$-curve.
This contradicts the local CAT condition for $M_{h_i}$, combined with $(2)$. \end{proof} \begin{dhs}\label{dhs} Let $h$ be sufficiently small, as determined at the beginning of Section \ref{subsection3.4}. Let $d_h^s(\cdot,z)$ (Definition \ref{definition-dhs}) be the smoothening of the function $d_h(\cdot,z)$ (Definition \ref{Mhdh}) with the smoothening radius $r=a_T h^3$, where $a_T=\min\{1,T^{-1}\}$. Then the following properties are satisfied for $z,z_1,z_2\in M_h$, $x\in M$ and $x_1,x_2\in \widetilde{M}$.\\ (1) \,$|d_h(x_1,z_1)-d_h(x_1,z_2)|\leqslant d_h(z_1,z_2).$\\ (2) \,$|d_h^s(x,z_1)-d_h^s(x,z_2)|\leqslant (1+CnK_1^2 h^6)d_h(z_1,z_2).$ \\ (3) \,For sufficiently small $h$ only depending on $K_1$, we have $$|d_h(x_1,z)-d_h(x_2,z)|<\frac{3}{2}h^{-1}\widetilde{d}(x_1,x_2).$$ (4) \,For sufficiently small $h$ depending on $n,K_1,i_0$, if $d_h(x,z)<i_0$, then $$|d_h^s(x,z)-d_h(x,z)|<2a_T h^2.$$ \end{dhs} \begin{proof} (1) directly follows from the definition of $d_h$.\\ (2) Let $r=a_T h^3$. Observe that the ball of radius $h^3$ centered at any $x\in M$ does not intersect $\partial \widetilde{M}$, and hence the distance function $\widetilde{d}(\cdot,x)$ for $x\in M$ is simply a geodesic distance function. Due to (\ref{curvatureextended}), the Jacobian $J_x(v)$ of the exponential map $\exp_x(v)$ of $\widetilde{M}$ at $v\in \mathcal{B}_{r}(0)\subset T_x\widetilde{M}$ satisfies \begin{equation}\label{Jacobian} |J_x(v)-1| \leqslant CnK_1^2|v|^2 \leqslant CnK_1^2 h^6. \end{equation} Then it follows from (\ref{normalization}) that \begin{eqnarray}\label{normalizationestimate} \int_{\widetilde{M}} k_1\big(\frac{\widetilde{d}(y,x)}{r}\big)dy&=& \int_{\mathcal{B}_{r}(0)\subset T_x{\widetilde{M}}} k_1\big(\frac{|v|}{r}\big)J_x(v)dv \nonumber \\ &\leqslant& (1+CnK_1^2h^6) \int_{\mathbb{R}^n} k_1\big(\frac{|v|}{r}\big)dv. \end{eqnarray} Combining the inequality (\ref{normalizationestimate}) with (\ref{normalization}) and (1) yields (2). \smallskip \noindent (3) Recall that the second fundamental form of $\partial M_h$ is bounded by $2K_1$ due to Lemma \ref{riccati}, and $\widetilde{M}$ can be considered as an extension of $M_h$ by gluing a collar of width $6h$. If $x_1,x_2\in M_h$, then Lemma \ref{distances} applies with $M$ replaced by $M_h$, and we have \begin{equation}\label{dhprojection} |d_h(x_1,z)-d_h(x_2,z)|\leqslant d_h(x_1,x_2)\leqslant (1+36K_1h)\widetilde{d}(x_1,x_2)\, . \end{equation} If $x_1,x_2\in \widetilde{M}-M_h$, then Lemma \ref{distances} yields $$d_h(x_1^{\perp_h},x_2^{\perp_h})\leqslant (1+36K_1h)\widetilde{d}(x_1,x_2).$$ Then by the definition of $d_h$ (\ref{dh}) and (\ref{dhprojection}), we have \begin{eqnarray*} |d_h(x_1,z)-d_h(x_2,z)|&\leqslant& |d_h(x_1^{\perp_h},z)-d_h(x_2^{\perp_h},z)|+ h^{-1}|\widetilde{d}(x_1,x_1^{\perp_h})-\widetilde{d}(x_2,x_2^{\perp_h})| \\ &\leqslant& d_h(x_1^{\perp_h},x_2^{\perp_h}) + h^{-1}|\widetilde{d}(x_1,\partial M_h)-\widetilde{d}(x_2,\partial M_h)| \\ &\leqslant& (1+36K_1h)\widetilde{d}(x_1,x_2)+h^{-1}\widetilde{d}(x_1,x_2). \end{eqnarray*} Thus the desired estimate follows for sufficiently small $h$ only depending on $K_1$. If $x_1\in \widetilde{M}-M_h,\,x_2\in M_h$, then similarly we have \begin{eqnarray*} |d_h(x_1,z)-d_h(x_2,z)|&\leqslant& |d_h(x_1^{\perp_h},z)-d_h(x_2,z)|+ h^{-1}\widetilde{d}(x_1,x_1^{\perp_h}) \\ &\leqslant& d_h(x_1^{\perp_h},x_2) + h^{-1}\widetilde{d}(x_1,\partial M_h) \\ &\leqslant& (1+36K_1h)\widetilde{d}(x_1,x_2) +h^{-1}\widetilde{d}(x_1,x_2), \end{eqnarray*} and the same estimate follows.
\smallskip \noindent (4) In view of (\ref{Jacobian}) and (\ref{normalizationestimate}), the Jacobian only generates error terms of order at least $h^6$. Hence we only need to prove that for any point $y$ in the ball (of $\widetilde{M}$) of the smoothening radius $a_T h^3$ around the center $x\in M$, we have $|d_h(y,z)-d_h(x,z)|<3a_T h^2/2$, which is guaranteed by (3). \end{proof} \begin{closeness-curve}\label{closeness-curve} Let $\gamma_1,\gamma_2: [0,l]\to \mathbb{R}^n$ be two $C^{1,1}$ curves. If $\|\gamma_1-\gamma_2\|_{C^0}\leqslant \epsilon<l^2/4$ and $\|\gamma_i^{\prime\prime}\|_{L^{\infty}}\leqslant \kappa$ for $i=1,2$, then $\|\gamma_1^{\prime}-\gamma_2^{\prime}\|_{C^0}\leqslant C(\kappa)\sqrt{\epsilon}$. \end{closeness-curve} \begin{proof} Since $\gamma_i$ is of $C^{1,1}$, $\gamma_i^{\prime}$ is absolutely continuous. Hence Taylor's theorem with the integral form of the remainder applies: $$\gamma_i (s_2)=\gamma_i (s_1)+\gamma_i^{\prime}(s_1)(s_2-s_1)+\int_{s_1}^{s_2} \gamma_i^{\prime\prime}(\tau)(s_2-\tau)d\tau, \quad \forall \; 0\leqslant s_1<s_2\leqslant l.$$ From $\|\gamma_i^{\prime\prime}\|_{L^{\infty}}\leqslant \kappa$, we have $$\big|\gamma_i (s_2)-\gamma_i (s_1)-\gamma_i^{\prime}(s_1)(s_2-s_1) \big|\leqslant \frac{\kappa}{2}(s_2-s_1)^2.$$ Adding this inequality for $\gamma_1$ and $\gamma_2$ gives $$\big|(\gamma_1 (s_2)-\gamma_2 (s_2))-(\gamma_1 (s_1)-\gamma_2 (s_1))-(\gamma_1^{\prime}(s_1)-\gamma_2^{\prime}(s_1))(s_2-s_1) \big|\leqslant \kappa(s_2-s_1)^2.$$ Then by $\|\gamma_1-\gamma_2\|_{C^0}\leqslant \epsilon$, $$|\gamma_1^{\prime}(s_1)-\gamma_2^{\prime}(s_1)|\leqslant \frac{2\epsilon}{s_2-s_1}+\kappa (s_2-s_1).$$ Taking $s_2-s_1=\sqrt{\epsilon}$, whenever such $s_2$ exists, we have $$|\gamma_1^{\prime}(s_1)-\gamma_2^{\prime}(s_1)|\leqslant (\kappa+2)\sqrt{\epsilon}.$$ Since $\sqrt{\epsilon}<l/2$, we can find $s_2=s_1+\sqrt{\epsilon}$ for any $s_1\in [0,l/2]$. For $s_1\in (l/2,l]$, one can repeat the whole process backwards. Hence the estimate above holds for all $s_1\in[0,l]$, which proves the lemma. \end{proof} \begin{dd}\label{dd} Let $h$ be sufficiently small, as determined at the beginning of Section \ref{subsection3.4}. Let $d_h^s(\cdot,z)$ (Definition \ref{definition-dhs}) be the smoothening of the function $d_h(\cdot,z)$ (Definition \ref{Mhdh}) with the smoothening radius $r=a_T h^3$, where $a_T=\min\{1,T^{-1}\}$. Then for sufficiently small $h$ depending on $n,K_1,K_2$, given any $x\in M$ and $z\in M_h$ satisfying $h/4 \leqslant d_h (x,z)\leqslant \min\{i_0/2,r_0/2,\pi/6K_1\}$, we have $$|\nabla_x d_h^s(x,z)|> 1-2h.$$ \end{dd} \begin{proof} Let $r=a_T h^3$. By the definition (\ref{dhsdef}), we have \begin{eqnarray*} \nabla_x d_h^s(x,z)&=&c_{n}r^{-n}\int_{\widetilde{M}}\nabla_x k_1\big(\frac{\widetilde{d}(y,x)}{r}\big)d_h(y,z)dy \\ &=& c_{n}r^{-n} \int_{\widetilde{B}_r(x)\subset \widetilde{M}} k_1^{\prime}\frac{1}{r} \big(\frac{-\exp_{x}^{-1}(y)}{\widetilde{d}(y,x)} \big) d_h(y,z) dy, \end{eqnarray*} where $\exp_x$ denotes the exponential map of $\widetilde{M}$ at $x\in M$.
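In the second equality we used the chain rule together with the standard identity for the gradient of the distance function: since $\widetilde{d}(\cdot,x)$ is a geodesic distance function on $\widetilde{B}_r(x)$ (see the proof of Lemma \ref{dhs}(2)), we have $$\nabla_x \widetilde{d}(y,x)=-\frac{\exp_x^{-1}(y)}{\widetilde{d}(y,x)}\,,\qquad y\in \widetilde{B}_r(x)\setminus\{x\},$$ the unit vector at $x$ pointing away from $y$.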
Now we change to the geodesic normal coordinate of $\widetilde{M}$ around $x$, and identify vectors in the tangent space $T_x\widetilde{M}$ with points in $\mathbb{R}^n$: \begin{eqnarray*} \nabla_x d_h^s(x,z) &=&c_{n}r^{-n} \int_{\mathcal{B}_{r}(0)\subset T_x \widetilde{M}} k_1^{\prime}\frac{1}{r} \frac{(-v)}{|v|} d_h(\exp_x(v),z) J_x(v) dv \\ &=& c_{n}r^{-n} \int_{\mathcal{B}_{r}(0)\subset T_x \widetilde{M}} -\nabla_v \Big(k_1\big(\frac{|v|}{r}\big)\Big) d_h(\exp_x(v),z) J_x(v) dv \\ &=& c_{n}r^{-n} \int_{\mathcal{B}_{r}(0)\subset T_x \widetilde{M}} k_1\big(\frac{|v|}{r}\big) \nabla_v\big(d_h(\exp_x(v),z) J_x(v)\big) dv, \end{eqnarray*} where $J_x(v)$ denotes the Jacobian of $\exp_x$ at $v$. Here we have used integration by parts in the last equality. It is known that $|\nabla_v J_x(v)|\leqslant C(n,K_1,K_2)|v|\leqslant C(n,K_1,K_2)h^3$ due to the $C^1$-estimate for the metric components (Lemma 8 in \cite{HV}) and Lemma \ref{extensionmetric}(3). Then by (\ref{normalization}), we have \begin{eqnarray*} && \bigg| \, c_{n}r^{-n} \int_{\mathcal{B}_{r}(0)} k_1\big(\frac{|v|}{r}\big) d_h(\exp_x(v),z) \big(\nabla_v J_x(v)\big) dv \, \bigg| \\ &\leqslant& c_{n}r^{-n} \int_{\mathcal{B}_{r}(0)} k_1\big(\frac{|v|}{r}\big) \frac{\pi}{4K_1} C(n,K_1,K_2)h^3 dv \leqslant C(n,K_1,K_2)h^3. \end{eqnarray*} Hence it remains to bound from below the length of the dominating term \begin{equation}\label{ddA0} A_0=c_{n}r^{-n} \int_{\mathcal{B}_{r}(0)\subset T_x \widetilde{M}} k_1\big(\frac{|v|}{r}\big) \big(\nabla_v d_h(\exp_x(v),z)\big) J_x(v) dv. \end{equation} We start by considering the following two simple cases. \smallskip \textbf{Case 1:} $d_h(z,\partial M_h)>\min\{i_0/2,r_0/2,\pi/6K_1\}$. In this case, $x\in M_h$ and no geodesic from $z$ to $x$ intersects $\partial M_h$. Then the distance function $d_h(\cdot,z)$ in the relevant domain is simply a geodesic distance function with the second derivative bounded by $5/h$ for sufficiently small $h$ depending on $K_1$ (e.g. Theorem 27 in \cite{PP}, p175). Since the exponential map and its inverse are uniformly bounded up to $C^2$ in the relevant domain for sufficiently small $h$ depending on $K_1,K_2$, we have $$\big|\nabla_v d_h(\exp_x(v),z)-\nabla_v d_h(\exp_x(v),z)|_{v=0}\big|\leqslant Ch^{-1}|v| \leqslant Ch^2.$$ Note that vectors in $T_v(T_x \widetilde{M})$ are identified with vectors in $T_x \widetilde{M}$. Observe that at $v=0$, we know $$\nabla_v d_h(\exp_x(v),z) \big|_{v=0}=(d\exp_x |_{v=0})^{-1} \nabla_x d_h(x,z)=\nabla_x d_h(x,z).$$ Hence by the Jacobian estimate (\ref{Jacobian}) and the normalization (\ref{normalization}), we obtain \begin{eqnarray*} |\nabla_x d_h^s(x,z)-\nabla_x d_h(x,z)|&\leqslant & |A_0-\nabla_x d_h(x,z)|+C(n,K_1,K_2)h^3 \\ &\leqslant& Ch^2+C(n,K_1,K_2)h^3, \end{eqnarray*} which gives the desired lower bound for $|\nabla_x d_h^s(x,z)|$ for sufficiently small $h$, due to $|\nabla_x d_h(x,z)|=1$. \smallskip \textbf{Case 2:} $x\in M-M_h$ and $\widetilde{d}(x,\partial M_h)>r$. In this case, the gradient $\nabla_x d_h(x,z)=h^{-1}\nabla_x \widetilde{d}(x,\partial M_h)$ by the definition of $d_h$ (\ref{dh}). The second derivative of $\widetilde{d}(\cdot,\partial M_h)$ is bounded by $2K_1$, the bound on the second fundamental forms of the equidistant hypersurfaces from $\partial M$ in the boundary normal neighborhood of $\partial M$ (Lemma \ref{riccati}).
Hence we have \begin{equation}\label{bdhcloseness-0} \big|\nabla_v d_h(\exp_x(v),z)-\nabla_v d_h(\exp_x(v),z)|_{v=0}\big|\leqslant C(K_1)h^{-1}|v| \leqslant C(K_1) h^{2}. \end{equation} Then the same argument as in Case 1 shows that \begin{equation}\label{bdhcloseness} |\nabla_x d_h^s(x,z)-\nabla_x d_h(x,z)|\leqslant C(K_1) h^{2}+C(n,K_1,K_2)h^3, \end{equation} which yields a lower bound considering $|\nabla_x d_h(x,z)|=h^{-1}$. \medskip The general case when $x$ is close to $\partial M_h$ requires more careful treatment. We spend the rest of the proof to address it. \smallskip \textbf{Case 3:} $x\in M-M_h$ with $\widetilde{d}(x,\partial M_h)\leqslant r$ or $x\in M_h$. Since $d_h(x,z)\leqslant \min\{r_0/2,\pi/6K_1\}$ is bounded by the radius of radial uniqueness (\ref{CATchoice}), the gradient $|\nabla_x d_h(x,z)|$ equals $1$ or $h^{-1}$ depending on whether $x$ is in $M_h$. It is known that geodesics of $M_h$ are of $C^{1,1}$ and the second derivative of a geodesic exists except at countably many switch points (switching between interior segments and boundary segments) where both one-sided second derivatives exist (e.g. Section 2 in \cite{ABB}). Furthermore, the second derivative exists and vanishes at intermittent points which are the accumulation points of switch points. It was also proved that if the endpoints of a family of geodesics converge, then the geodesics converge uniformly in $C^1$ (the first Lemma in Section 4 of \cite{ABB}). However, the estimates in that work were done in terms of an extrinsic parameter (depending on how a manifold is embedded in the ambient space), and we show the following modification in terms of intrinsic parameters. The manifold $M_{h}$ has curvature bounded above by $4K_1^2$ locally in the sense of Alexandrov due to the Characterization Theorem in \cite{ABB2}. Furthermore, by Theorem 4.3 in \cite{AB} and (\ref{CATchoice}), for any $z\in M_h$, the ball of $M_h$ around $z$ of the radius $\min\{2r_0/3,\pi/4K_1\}$ is a metric space of curvature bounded above by $4K_1^2$. Denote by $\gamma_{x},\gamma_{y}$ the minimizing geodesics of $M_h$ from $x,y\in M_h$ to $z$. Denote the length of $\gamma_x$ by $L_x$ (i.e. $L_x=d_h(x,z)$). The geodesics $\gamma_x,\gamma_y$ are parametrized by arclength on $[0,L_x],[0,L_y]$ respectively. Without loss of generality, assume $L_x\leqslant L_y$. Hence $$d_h(\gamma_y(L_x),\gamma_x(L_x))=d_h(\gamma_y(L_x),\gamma_y(L_y))=L_y-L_x\leqslant d_h(x,y),$$ where we used $\gamma_x(L_x)=\gamma_y(L_y)=z$. Then Corollary 8.2.6 in \cite{AKP} shows that if $d_h(x,z)\leqslant \pi/6K_1$ and $d_h(x,y)$ is sufficiently small depending on $K_1$, we have $$\|\gamma_{x}-\gamma_{y}\|_{C^0([0,L_x])} < 2d_h(x,y),$$ where the $C^0$-norm is the uniform norm with respect to $d_h$. This leads to $\|\gamma_{x}-\gamma_{y}\|_{C^0([0,L_x])} < C\widetilde{d}(x,y)$ if $\widetilde{d}(x,y)$ is sufficiently small by (\ref{dhprojection}). On the other hand, due to Lemma \ref{riccati} and (\ref{acceleration}), the second derivatives of $\gamma_{x},\gamma_{y}$ are bounded by $C(n,K_1,K_2)$ whenever they exist in the boundary normal coordinate of $\partial M_h$, and both one-sided second derivatives obey the same bound at switch points. We lift the parts of the curves $\gamma_{x},\gamma_{y}$ near $x,y$ onto the tangent space $T_x \widetilde{M}$. Without loss of generality, assume all of $\gamma_x,\gamma_y$ lie in the image of $\exp_x$.
Since the exponential map and its inverse are uniformly bounded up to $C^2$, the properties of $\gamma_x,\gamma_y$ stated above are also satisfied by their lifts: namely, if $\widetilde{d}(x,y)$ is sufficiently small depending on $K_1$, $$\|\exp_x^{-1}\circ \gamma_x-\exp_x^{-1}\circ \gamma_y\|_{C^0([0,L_x])}<C\widetilde{d}(x,y);$$ and the second derivatives of $\exp_x^{-1}\circ \gamma_x,\exp_x^{-1}\circ \gamma_y$ are uniformly bounded by $C(n,K_1,K_2)$ in $L^{\infty}$-norm. Here the $C^0$-norm is the uniform norm with respect to the Euclidean distance in $T_x \widetilde{M}$. Hence Lemma \ref{closeness-curve} applies: \begin{equation}\label{closeness-lift} \|(\exp_x^{-1}\circ \gamma_x)^{\prime}-(\exp_x^{-1}\circ \gamma_y)^{\prime}\|_{C^0([0,L_x])}<C(n,K_1,K_2)\sqrt{\widetilde{d}(x,y)}\, . \end{equation} At the starting point $y=\gamma_y(0)$ of $\gamma_y$, we know $\gamma_y^{\prime}(0)=-\nabla_y d_h(y,z)$ and hence $$(\exp_x^{-1}\circ \gamma_y)^{\prime}(0)=(d\exp_x |_{v})^{-1} \gamma_y^{\prime}(0)=-\nabla_v d_h(\exp_x(v),z), $$ where $v=\exp_x^{-1}(y)$. At the starting point $x=\gamma_x(0)$ of $\gamma_x$, we simply have $(\exp_x^{-1}\circ \gamma_x)^{\prime}(0)=-\nabla_x d_h(x,z)$ by definition. Thus for sufficiently small $h$ depending on $K_1$, if $y\in M_h$ and $\widetilde{d}(x,y)\leqslant h^3$, the estimate (\ref{closeness-lift}) evaluated at the starting points gives \begin{equation}\label{closeness} |\nabla_v d_h(\exp_x(v),z)-\nabla_x d_h(x,z)| < C\sqrt{\widetilde{d}(x,y)} \leqslant C(n,K_1,K_2)h^{\frac{3}{2}}. \end{equation} The difference between this case and Case 1 is that the formula for $\nabla_x d_h^s(x,z)$ (at the beginning of the proof) may split into two parts: the integral over points in $M_h$ and over points in $M-M_h$. The key observation is that in a small neighborhood intersecting $\partial M_h$, the gradient $\nabla_x d_h(x,z)$ for $x\in M-M_h$ is essentially normal to $\partial M_h$, and has almost the same direction as the normal component (with respect to $\partial M_h$) of $\nabla_x d_h(x,z)$ for $x\in M_h$. A precise version of this observation will be shown later. The $h^{-1}$ scaling in the definition of $d_h$ (\ref{dh}) plays a crucial role in obtaining the desired lower bound. Denote the part of the integral $A_0$ (\ref{ddA0}) over points in $M_h$ by $A_1$, and the part of $A_0$ over points in $M-M_h$ by $A_2$. We divide Case 3 into the following three situations depending on where $x$ lies. \smallskip \textbf{Case 3(i):} $x\in M_h$ and $\widetilde{d}(x,\partial M_h)>r$. In this case, the integral $A_0$ only involves points in $M_h$ and $A_0=A_1$. Then the same argument as in Case 1, together with (\ref{closeness}), implies that \begin{equation*}\label{dhslower} |\nabla_x d_h^s(x,z)-\nabla_x d_h(x,z)|<C(n,K_1,K_2)h^{\frac{3}{2}}. \end{equation*} \textbf{Case 3(ii):} $x\in \partial M_h$. Denote by $\textbf{n}_x\in T_x (\widetilde{M})$ the outward-pointing unit vector normal to $\partial M_h$. The estimate (\ref{closeness}) yields the closeness between normal components: $$\big|\langle \nabla_v d_h(\exp_x(v),z),\textbf{n}_x \rangle -\langle \nabla_x d_h(x,z),\textbf{n}_x \rangle \big|< Ch^{\frac{3}{2}}, \textrm{ if }\exp_x(v)\in M_h.$$ Since clearly $\langle \nabla_x d_h(x,z),\textbf{n}_x \rangle\geqslant 0$ for $x\in \partial M_h$, we have \begin{equation}\label{smallnormal} \langle \nabla_v d_h(\exp_x(v),z),\textbf{n}_x \rangle >-Ch^{\frac{3}{2}}, \textrm{ if }\exp_x(v)\in M_h, \end{equation} which implies that $\langle A_1,\textbf{n}_x \rangle > -Ch^{\frac{3}{2}}$.
On the other hand, we replace the evaluation at $v=0$ in the estimate (\ref{bdhcloseness-0}) with $v=\exp_x^{-1}(x^{\prime})$ for an arbitrary point $x^{\prime}\in M-M_h$ close to $x$. Then consider their normal components similarly. Since $\nabla_x d_h(x^{\prime},z)$ can be arbitrarily close to $h^{-1}\textbf{n}_x$ and the exponential map only changes the inner product by a higher order $C(K_1)r^2$ term, we have \begin{equation}\label{largenormal} \langle \nabla_v d_h(\exp_x(v),z),\textbf{n}_x \rangle \geqslant h^{-1}-Ch^{2}, \textrm{ if }\exp_x(v)\in M-M_h. \end{equation} Furthermore by (\ref{bdhcloseness-0}), the tangential component of $\nabla_v d_h(\exp_x(v),z)$ can only have length at most $Ch^{2}$ if $\exp_x(v)\in M-M_h$. This implies that $|A_2-\langle A_2,\textbf{n}_x \rangle \textbf{n}_x|<Ch^{2}$. \smallskip \textbf{(1)} If $c_{n}r^{-n} \int_{\{v\in \mathcal{B}_r(0): \,\exp_x(v)\in M-M_h\}} k_1\big(\frac{|v|}{r}\big) dv \geqslant h$, then (\ref{largenormal}) yields that $\langle A_2,\textbf{n}_x \rangle \geqslant 1-Ch^{3}$. Thus by (\ref{smallnormal}), $$|A_0|\geqslant |\langle A_0,\textbf{n}_x \rangle| =|\langle A_1+A_2,\textbf{n}_x \rangle| > 1-Ch^{\frac{3}{2}}-Ch^{3}.$$ \textbf{(2)} If $c_{n}r^{-n} \int_{\{v\in \mathcal{B}_r(0): \,\exp_x(v)\in M-M_h\}} k_1\big(\frac{|v|}{r}\big) dv < h$, then by (\ref{closeness}) and (\ref{normalization}), we have \begin{eqnarray*} |A_1| &>& \bigg| \, c_{n}r^{-n} \int_{\{v\in \mathcal{B}_r(0): \,\exp_x(v)\in M_h\}} k_1\big(\frac{|v|}{r}\big) \big(\nabla_x d_h(x,z)\big) J_x(v) dv \, \bigg| - Ch^{\frac{3}{2}} \\ &>& 1-h-Ch^{\frac{3}{2}}. \end{eqnarray*} Observe that (\ref{largenormal}) implies that $\langle A_2,\textbf{n}_x \rangle > 0$ for sufficiently small $h$. If $\langle A_1,\textbf{n}_x \rangle \geqslant 0$, then \begin{eqnarray*} |A_0|=|A_1+A_2|&\geqslant& \big|A_1+\langle A_2,\textbf{n}_x \rangle \textbf{n}_x \big|-\big|A_2-\langle A_2,\textbf{n}_x \rangle \textbf{n}_x \big| \\ &>& |A_1|-Ch^{\frac{3}{2}} > 1-h-Ch^{\frac{3}{2}}-Ch^{2}. \end{eqnarray*} If $\langle A_1,\textbf{n}_x \rangle < 0$, then $|\langle A_1,\textbf{n}_x \rangle|<Ch^{\frac{3}{2}}$ by (\ref{smallnormal}). This shows that $\big|A_1-\langle A_1,\textbf{n}_x \rangle \textbf{n}_x \big|>1-h-Ch^{\frac{3}{2}}$. Hence we have \begin{eqnarray*} |A_0|&\geqslant& \big|A_1+A_2-\langle A_1+A_2,\textbf{n}_x \rangle \textbf{n}_x \big| \\ &\geqslant& \big|A_1-\langle A_1,\textbf{n}_x \rangle \textbf{n}_x \big|- \big|A_2-\langle A_2,\textbf{n}_x \rangle \textbf{n}_x \big| \\ &>& 1-h-Ch^{\frac{3}{2}}-Ch^{2}. \end{eqnarray*} \smallskip \textbf{Case 3(iii):} $x\notin \partial M_h$ and $\widetilde{d}(x,\partial M_h)\leqslant r$. In this case, we choose an arbitrary point $x_0\in \partial M_h$ such that $\widetilde{d}(x_0,x)\leqslant r$. By the triangle inequality, (\ref{closeness}) yields that $$\big|\nabla_v d_h(\exp_x(v),z)-\nabla_{v} d_h(\exp_x(v),z)|_{v=v_0}\big| <C(n,K_1,K_2)h^{\frac{3}{2}}, \textrm{ if }\exp_x(v)\in M_h,$$ where $v_0=\exp_{x}^{-1}(x_0)$. Then we consider the normal component with respect to $(d\exp_{x}|_{v_0})^{-1} \textbf{n}_{x_0}$ $\in T_x(\widetilde{M})$ and replace the vector $\textbf{n}_x$ in Case 3(ii) with $(d\exp_{x}\big|_{v_0})^{-1} \textbf{n}_{x_0}$. 
Since $\langle \nabla_x d_h(x,z)|_{x=x_0},$ $\textbf{n}_{x_0} \rangle_{x_0}\geqslant 0$ with respect to the inner product of $T_{x_0} \widetilde{M}$, after lifting the vectors onto $T_x \widetilde{M}$ via the exponential map, we have \begin{eqnarray*} \langle (d\exp_{x} |_{v_0})^{-1} \big(\nabla_x d_h(x,z) \big|_{x=x_0}\big), (d\exp_{x} |_{v_0})^{-1} (\textbf{n}_{x_0}) \rangle_x \geqslant -C(K_1)r^2. \end{eqnarray*} Then the rest of the argument in Case 3(ii) applies up to a higher-order term, as $d\exp_x|_{v_0}$ only changes the inner product by a $C(K_1)r^2$ term. \smallskip Finally, combining all the cases together, we obtain \begin{eqnarray*} |\nabla_x d_h^s(x,z)|\geqslant |A_0|-C(n,K_1,K_2)h^3 >1-h-C(n,K_1,K_2)h^{\frac{3}{2}}, \end{eqnarray*} and therefore the lemma follows. \end{proof} \begin{mindistance}\label{mindistance} For $i\geqslant 1$ and sufficiently small $h$ depending on $n,T,K_1,i_0$, we have $$dist_{\widetilde{M}\times \mathbb{R}}(\partial \Omega_{i,j}^0,\Omega_{i,j}) > \min\{\frac{h^3}{100},\frac{h^2}{20T}\}.$$ For $i=0$, we have $$dist_{\widetilde{M}\times \mathbb{R}}(\partial \Omega_{0,j}^0,\Omega_{0,j}) > \frac{h^3}{6T^2}.$$ \end{mindistance} \begin{proof} There are two types of boundaries involved. The first type is from the level sets of $d_h^s(\cdot,z_{i,j})$. For $i\geqslant 2$, the distance of the first type is between the boundary of the cylinder $\{x: d_h^s(x,z_{i,j}) \leqslant \frac{1}{2}\min\{1,T^{-1}\}h\}\times [-T_i,T_i]$ and the boundary of $\cup_{l=0}^{i-1}\cup_{j}\overline{\Omega}_{l,j}$. Since a larger cylinder $\{x: d_h^s(x,z_{i,j}) \leqslant \min\{1,T^{-1}\}h\}\times [-T_i-h,T_i+h]$ is also contained in $\cup_{l=0}^{i-1}\cup_{j}\overline{\Omega}_{l,j}$ due to (\ref{dinclusion}), the distance of this type is bounded below by the distance between these two cylinders, which is bounded below by $\min\{1,T^{-1}\}h^2/20$ by Lemma \ref{dhs}(4,3) if $h<1/10$. For $i=1$, the distance of the first type is between the boundary of the cylinder $\{x: d_h^s(x,z_{1,j}) \leqslant h/2\}\times [-T_1,T_1]$ and the boundary of $\cup_{j}\Omega_{0,j}$. By (\ref{sublemma0d}) and Sublemma \ref{sublemmainitial2}, the cylinder $\{x: d_h^s(x,z_{1,j}) \leqslant 3h/4\}\times [-T_1,T_1]$ is contained in the open set $\cup_{j}\Omega_{0,j}$, and hence the distance between the boundary of the cylinder and that of $\cup_{j}\Omega_{0,j}$ is bounded away from 0. To obtain an explicit estimate, one can prove a slightly tighter estimate than Sublemma \ref{sublemmainitial2} if $T>10h$: $$\big(\cup_{b\in [0,2h]}\Gamma_b(8h)\big)\times [-T+\frac{11}{2}h,T-\frac{11}{2}h] \subset \cup_j\Omega_{0,j}.$$ With (\ref{sublemma0d}), this shows that a larger cylinder $\{x: d_h^s(x,z_{1,j}) \leqslant 3h/4\}\times [-T_1-h/2,T_1+h/2]$ is contained in $\cup_{j}\Omega_{0,j}$. Then Lemma \ref{dhs}(4,3) yields a lower bound $h^2/40$ if $h<1/20$. For $i\geqslant 1$, the other type of boundaries is generated by the level sets of $\psi_{i,j}$. Suppose boundary points $(x_1,t_1)$ and $(x_2,t_2)$ belong to $\{\psi_{i,j} =9T^2h\}$ and $\{\psi_{i,j} = 8T^2h\}$ respectively. Then by the definition of $\psi_{i,j}$ we have \begin{eqnarray*} &&\Big(\big(1-\xi(d(x_1,\partial M))-\xi(\rho_0-d_h^s(x_1))\big)T_i-d_h^s(x_1)\Big)^2\\ &&-\Big(\big(1-\xi(d(x_2,\partial M))-\xi(\rho_0-d_h^s(x_2))\big)T_i-d_h^s(x_2)\Big)^2-t_1^2+t_2^2 = T^2 h.
\end{eqnarray*} Then, \begin{eqnarray*} &&2T^2 \big|\xi(\rho_0-d_h^s(x_1))-\xi(\rho_0-d_h^s(x_2)) \big|+2T^2 \big|\xi(d(x_1,\partial M)) -\xi(d(x_2,\partial M)) \big| \\ &&+2T\big|d_h^s(x_1)-d_h^s(x_2) \big|+2T|t_1-t_2| >T^2 h. \end{eqnarray*} By the definition of $\xi$, \begin{eqnarray*} &&\frac{6T^2}{h}|d_h^s(x_1,z_{i,j})-d_h^s(x_2,z_{i,j})|+\frac{6T^2}{h}| d(x_1,\partial M)-d(x_2,\partial M)| \\ &&+2T|d_h^s(x_1,z_{i,j})-d_h^s(x_2,z_{i,j})|+2T|t_1-t_2| > T^2 h. \end{eqnarray*} Then it follows that at least one of the four absolute values must be larger than $h^2/24$ if $h<3T$, which implies that at least one of $|d_h(x_1,z_{i,j})-d_h(x_2,z_{i,j})|$, $|d(x_1,\partial M)-d(x_2,\partial M)|$ or $|t_1-t_2|$ is larger than $h^2/50$ by Lemma \ref{dhs}(4). Here we divided the smoothening radius by a constant to keep the error brought by the convolution relatively small. Since $d(x,\partial M)=\widetilde{d}(x,\partial M)$ for $x\in M$, Lemma \ref{dhs}(3) yields that at least one of $\widetilde{d}(x_1,x_2)$ or $|t_1-t_2|$ is larger than $h^3/100$, and hence the lemma follows. Finally, for the initial step $i=0$, the first type of boundary distance is between $\{\rho(x)=-3h/2\}$ and the boundary of $\Upsilon$, which is clearly bounded below by $h/2$. The second type of boundary distance is between level sets of $\psi_{0,j}$. One can follow the same argument as for $i\geqslant 1$ for this type of boundary distance, and obtain a lower bound $h^3/6T^2$. \end{proof} \begin{geodiff}\label{geodiff} Suppose $\gamma(s)$ is a geodesic of $M$ satisfying $\gamma(0)\in \partial M$ and the initial vector $\gamma^{\prime}(0)\in T_{\gamma(0)}\partial M$. Then there exists a constant $\epsilon_0$ explicitly depending on $n,\|R_M\|_{C^1},\|S\|_{C^1},i_0$ such that for any $s\leqslant \epsilon_0$, we have $d(\gamma(s),\partial M) \leqslant C(n,\|R_M\|_{C^1},\|S\|_{C^1})s^2$. \end{geodiff} \begin{proof} Without loss of generality, assume the geodesic $\gamma(s)$ lies entirely in the interior of $M$ except for the initial point. Consider another geodesic of $\partial M$ with the same initial point $\gamma(0)$ and the same initial vector $\gamma^{\prime}(0)$. We claim that the distance between this geodesic of $\partial M$ and $\gamma(s)$ is bounded above by $Cs^2$ for sufficiently small $s$. Clearly this claim yields the lemma. Denote the geodesics of $M,\partial M$ in question with the arclength parametrization by $\gamma_1,\gamma_2$. Take $\epsilon_0<i_0$, and consider the geodesics $\gamma_i(s)$ ($i=1,2$) in a $C^1$ boundary normal coordinate $(x^1,\cdots,x^n)$. Due to Lemma 8 in \cite{HV} and Lemma \ref{riccati}, within a uniform radius explicitly depending on $n,\|R_M\|_{C^1},\|S\|_{C^1},i_0$, the $C^1$-norm of metric components is uniformly bounded by a constant explicitly depending on $n,\|R_M\|_{C^1},\|S\|_{C^1}$. Since $\gamma_1,\gamma_2$ have the same initial point and the same initial vector, we know $\gamma_1^j(0)=\gamma_2^j(0)$ and $\partial_s \gamma_1^j(0)=\partial_s \gamma_2^j(0)$ for all $j=1,\cdots,n$, where $\gamma_i^j$ denotes the $j$-th component of $\gamma_i$ with respect to the coordinate $x^j$. The fact that $|\partial_s \gamma_1(s)|_{M}=|\partial_s \gamma_2(s)|_{\partial M}=1$ yields $|\partial_s \gamma_i^j(s)|\leqslant C$ for any $j$, since the metric is uniformly bounded in $C^0$ as a bilinear form.
Moreover, the geodesic equation in local coordinates has the following form: $$\partial_s^2 \gamma^j+\sum_{k,l}\Gamma_{kl}^j (\partial_s \gamma^k)(\partial_s \gamma^l)=0,$$ and $\gamma_1,\gamma_2$ satisfy this equation with $\Gamma_{kl}^j$ of $M,\partial M$ respectively. Hence by applying the $C^1$ bound for metric components, we have an estimate for the second derivative: \begin{equation}\label{acceleration} |\partial_s^2 \gamma_i^j(s)| \leqslant C(n,\|R_M\|_{C^1},\|S\|_{C^1}), \; \textrm{ for all }j=1,\cdots,n. \end{equation} Since $\gamma_1,\gamma_2$ lie entirely in $\textrm{int}(M),\partial M$ by assumption, they are at least of $C^2$ and hence \begin{eqnarray*} |\gamma_1^j(s)-\gamma_2^j(s)|\leqslant\frac{s^2}{2}\sup_{s^{\prime}\in (0,s)} \big|\partial_s^2 \gamma_1^j(s^{\prime})-\partial_s^2 \gamma_2^j(s^{\prime}) \big| \leqslant C(n,\|R_M\|_{C^1},\|S\|_{C^1})s^2. \end{eqnarray*} This implies $d(\gamma_1(s),\gamma_2(s))\leqslant C(n,\|R_M\|_{C^1},\|S\|_{C^1})s^2$ due to the $C^0$ metric bound. \end{proof} \begin{areaLipschitz}\label{areaLipschitz} Denote $A_t(\epsilon)=\{x\in \Sigma_t: l(x)>\epsilon\}$ and by $U(A_t(\epsilon))$ the set of all points on all minimizing geodesics from $A_t(\epsilon)$ to $\Gamma$. Then for sufficiently small $\epsilon$ explicitly depending on $K_1$ and any $t^{\prime}\in [t-\epsilon/2,t)$, we have $$vol_{n-1}(A_t(\epsilon))<5^{n-1} vol_{n-1}\big(U(A_t(\epsilon))\cap \Sigma_{t^{\prime}}\big).$$ \end{areaLipschitz} \begin{proof} We define a function $F: U(A_t(\epsilon))\cap \Sigma_{t^{\prime}}\to A_t(\epsilon)$ by mapping a point $x\in U(A_t(\epsilon))\cap \Sigma_{t^{\prime}}$ to the initial point of the particular minimizing geodesic containing $x$ from $A_t(\epsilon)$ to $\Gamma$. This function is well-defined since minimizing geodesics cannot intersect at $\Sigma_{t^{\prime}}$; otherwise they would fail to minimize length past an intersection point. To show the measure estimate in question, it suffices to show that $F$ is locally Lipschitz with a Lipschitz constant $5$ for sufficiently small $\epsilon$ depending on $K_1$. Since the measure in question is an $(n-1)$-dimensional Hausdorff measure, the Lipschitz continuity of $F$ implies the measure estimate with the constant $5^{n-1}$ (Section 5.5.2 in \cite{BBI}). Here we show that the function $F$ is locally Lipschitz. For any point $y_0 \in U(A_t(\epsilon))\cap\{x: t-\epsilon/2\leqslant d(x,\Gamma)\leqslant t\}$, there exists $x_0\in U(A_t(\epsilon))\cap \Sigma_{t-\epsilon}$ such that $x_0$ lies on a minimizing geodesic from $y_0$ to $\Gamma$, which implies $d(y_0,\Gamma)=d(y_0,x_0)+d(x_0,\Gamma)$. Observe that the geodesic segment from $y_0$ to $x_0$ does not intersect the boundary. Then there exists a small neighborhood of $y_0$, such that for any $y$ in this neighborhood, the minimizing geodesic from $x_0$ to $y$ does not intersect the boundary. Thus the distance function $d(\cdot,x_0)$ in this small neighborhood of $y_0$ is just a geodesic distance function with the second derivative bounded by $3/\epsilon$ for sufficiently small $\epsilon$ depending on $K_1$ (e.g. Theorem 27 in \cite{PP}, p175). Hence we have \begin{eqnarray*} d(y,\Gamma)&\leqslant& d(x_0,\Gamma)+d(y,x_0) \\ &=& d(y_0,\Gamma)-d(y_0,x_0)+d(y,x_0)\\ &\leqslant& d(y_0,\Gamma)+\nabla_y d(y_0,x_0)\cdot \exp^{-1}_{y_0} (y) + \frac{3}{2\epsilon} d(y,y_0)^2+o\big(d(y,y_0)^2\big).
\end{eqnarray*} This shows that the distance function $d(\cdot,\Gamma)$ is semi-concave in $U(A_t(\epsilon))\cap\{x: t-\epsilon/2\leqslant d(x,\Gamma)\leqslant t\}$ for sufficiently small $\epsilon$, with semi-concavity constant $3/\epsilon$. Now consider the gradient flow of the distance function $d(\cdot,\Gamma)$; the function $F$ is then simply this gradient flow restricted to the region $U(A_t(\epsilon))\cap\{x: t^{\prime}\leqslant d(x,\Gamma)\leqslant t\}$ for $t^{\prime}\in [t-\epsilon/2,t)$. By Lemma 2.1.4(i) in \cite{P}, the restricted gradient flow (that is, $F$) is locally Lipschitz with a Lipschitz constant $e^{3/2}\approx 4.48<5$. \end{proof} \medskip
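\medskip \noindent {\bf Remark.} The quadratic bound in Lemma \ref{geodiff} is easy to probe numerically. The following short script is our own illustration and not part of the proof; it assumes Python with numpy, and the cap angle $\theta_0=\pi/3$ is arbitrary. Take $M$ to be the portion of the round unit sphere with colatitude $\theta\geqslant \theta_0$, so that $\partial M$ is the latitude circle $\{\theta=\theta_0\}$ and $d(q,\partial M)=\theta(q)-\theta_0$ for $q\in M$. A great circle issued tangentially to $\partial M$ stays in $M$ near its initial point, and the ratio $d(\gamma(s),\partial M)/s^2$ tends to $\cot(\theta_0)/2$, i.e.\ half the geodesic curvature of the boundary circle, consistently with the lemma.
\begin{verbatim}
import numpy as np

theta0 = np.pi / 3                                    # colatitude of the boundary circle
p = np.array([np.sin(theta0), 0.0, np.cos(theta0)])   # gamma(0), a boundary point
v = np.array([0.0, 1.0, 0.0])                         # unit tangent to the boundary at p

for s in [0.1, 0.05, 0.025, 0.0125]:
    gamma = np.cos(s) * p + np.sin(s) * v   # unit-speed great circle with gamma(0)=p
    theta = np.arccos(gamma[2])             # colatitude of gamma(s)
    dist = theta - theta0                   # = d(gamma(s), boundary of M)
    print(s, dist / s**2)                   # ratio tends to cot(theta0)/2 ~ 0.2887
\end{verbatim}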
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} In its original form \cite{GinzSiro}, the supernova remnant (SNR) paradigm for the origin of Galactic cosmic rays (CRs) is based on a purely energetic ground: if $\sim 10-20\%$ of the kinetic motion of the expanding shell of a supernova gets converted into accelerated particles, and one accounts for the energy dependent escape time from the Galaxy, SNRs can be the sources of the bulk of Galactic CRs. After the pioneering works on diffusive shock acceleration (DSA, \cite{krim77,bo78,bell78}), it became clear that this mechanism is the most promising acceleration process that can be responsible for energy conversion from bulk kinetic motion of a plasma to kinetic energy of charged particles. DSA naturally leads to spectra of accelerated particles $N(E)\propto E^{-2}$ for strong shocks, not too dissimilar from the ones needed to describe data after accounting for the energy dependent escape time from the Galaxy, with a residence time that scales as $\tau_{esc}(E)\propto E^{-0.6}$. There are, however, two main concerns with this simple picture: first, the required acceleration efficiency is not so small that the dynamical reaction of the accelerated particles on the shock can be neglected. Second, if particle scattering is guaranteed by normal interstellar magnetic turbulence alone, the maximum energy of accelerated particles is exceedingly small and the mechanism cannot account for cosmic rays with energies up to the knee. It was soon understood that this second problem could be mitigated only by requiring CRs to generate the turbulence necessary for their scattering through streaming instability \cite{bell78,lagage}, a mechanism similar to that discussed by \cite{wentzel} in the context of CR propagation in the Galaxy. This latter point intrinsically makes the acceleration process even more non-linear. The modern non-linear (NL) theory of DSA allows us to describe particle acceleration at SNR shocks by taking into account 1) the dynamical reaction of the accelerated particles on the system, 2) the magnetic field amplification due to streaming instability, and 3) the dynamical reaction of the amplified magnetic field on the plasma. These effects are interconnected in a rather complex way, so that reaching the knee and having enough energy channelled into CRs are no longer two independent problems. The situation is in fact even more complex given that the evolution of the SNR in time depends on the environment. A generic prediction of NLDSA is that the spectra of accelerated particles are no longer power laws but rather concave spectra. In the case of extremely modified shocks, the asymptotic shape of the spectrum for $E\gg 1$ GeV is $N(E)\propto E^{-1.2}$ (see e.g. \cite{je91,maldrury} for reviews on CR modified shocks), to be compared with the standard $E^{-2}$ spectrum usually associated with DSA. Instead of clarifying the situation, this bit of information made the picture more puzzling, in that such flat spectra are hard to reconcile with the CR spectrum observed at Earth. In this paper we show how the application of NLDSA to SNRs leads to time-integrated spectra that are very close to power laws at energies below 10-100 TeV, where most measurements of CR spectra are performed with high statistical significance.
The crucial piece of physics to connect the acceleration process inside the sources to the spectrum observed at Earth is the escape flux: during the Sedov-Taylor phase of the evolution (and to a lesser extent also during the ejecta dominated phase) particles can escape from a SNR in the form of a spectrum peaked at the maximum momentum reached at any given time. Particles which do not escape are advected downstream, lose energy adiabatically and eventually escape at later times. We calculate the spectrum injected by a single SNR as the superposition of these two components under different assumptions. Indeed, the semi-analytical method adopted here not only allows for a complete treatment of NLDSA but, being computationally very cheap, also allows for a very wide scan of the parameter space and an unprecedented investigation of the poorly known pieces of physics that enter the problem. For simplicity, here we focus on type Ia supernovae, which occur in the typical interstellar medium (ISM), while qualitative differences between these and type II supernovae are only discussed by considering expansion in a more rarefied, hotter ISM, but totally ignoring any spatial stratification of the circumstellar region, which might be characterized by winds, bubbles and other complex structures. We also limit our attention to the proton component, while the results on nuclei will be presented in an upcoming paper, since the additional issues that appear in that case deserve a detailed discussion. The introduction of nuclei is a fundamental step in the field and is essential to explain the CR spectrum above the knee (see e.g. \cite{kascade} and \cite{bertaina} for a review). \section{A back-of-the-envelope calculation of the escape flux from a SNR}\label{sec:benchmark} The escape of cosmic rays from a SNR is a very difficult problem to tackle, from both the physical and the mathematical point of view. One can envision that at some distance upstream of the shock the particle density (or current) gets sufficiently small that the particles are no longer able to generate the waves that may scatter them and lead to their return to the shock front. These are escaping particles. However, the location of this free escape boundary is not easily calculated from first principles and it is usually assumed to be a given fraction of the radius of the shock. An additional uncertainty is introduced by the fact that the shock dynamics changes in time. The evolution of a SNR is characterized by three phases: an ejecta dominated (ED) phase, in which the mass of material accumulated behind the blast wave is less than the mass of the ejecta; a Sedov-Taylor (ST) phase, that starts when the accumulated mass equals the mass of the ejecta; a radiative phase, when the shock dissipates energy through radiation. The SNR is expected to spend most of the time over which it is active as a CR factory in the ST phase, which typically starts $500-1000$ years after the initial explosion. The maximum momentum of accelerated particles during the ED phase is expected to increase with time \cite{lagage}. As discussed by \cite{escape}, this is due to the fact that magnetic field amplification is rather efficient and the shock speed stays almost constant during this stage. After the beginning of the ST phase, the shock velocity, and thus also the efficiency of magnetic field amplification, decrease with time: as a consequence, the maximum momentum, $p_{max}$, is expected to drop with time as well \cite{escape}.
The process of particle escape from the upstream region then becomes important. At any given time, the system is no longer able to confine the particles that were accelerated to the highest energies at earlier times, so these particles escape from the shock. The instantaneous spectrum of the escaping particles at any given time is very much peaked around $p_{max}(t)$ \cite{freeescape}. This qualitative picture of particle escape is the one that we mimic by assuming the existence of a free escape boundary, but as stressed above, the escape phenomenon is likely to be much more complex than suggested by this simple picture. Before embarking on a detailed calculation including the non-linear effects, it is useful to illustrate the results of a back-of-the-envelope calculation, based on a test-particle approach. Let us consider a SNR shell with a time dependent radius $R_{sh}(t)$ expanding with velocity $V_{sh}(t)$ in a uniform medium with density $\rho_{0}$ and suppose that escaping particles have momentum $p_{max}(t)$ and carry away a fraction $F_{esc}$ of the bulk energy flux $\frac{1}{2}\rho_{0} V_{sh}(t)^{3}$. Let $N_{esc}(p)$ be the overall spectrum of cosmic rays escaping the remnant, so that the energy contained in a range ${\rm d} p$ around $p$ is \begin{equation}\label{eq:dep} {\rm d} \mathcal{E}(p)=4\pi p^{2} N_{esc}(p)pc~{\rm d} p\, . \end{equation} The energy carried away by particles escaping in a time interval ${\rm d} t$ at time $t$ is \begin{equation}\label{eq:det} {\rm d}\mathcal{E}(t)=F_{esc}(t)\frac{1}{2}\rho V_{sh}^{3}(t)4\pi R_{sh}(t)^{2}{\rm d} t\,. \end{equation} In a general way we can write $R_{sh}(t)\propto t^{\nu}$, and thus $V_{sh}(t)\propto t^{\nu-1}$. Using these time-dependencies, and equating the two expressions for ${\rm d}\mathcal{E}$, one obtains \begin{equation}\label{eq:Nesc} N_{esc}(p)\propto t^{5\nu-3}F_{esc}(t)p^{-3}\frac{{\rm d} t}{{\rm d} p}\,. \end{equation} During the ST stage, $p_{max}$ is determined by the finite size of the accelerator; therefore we require that the diffusion length $\lambda(p)$ at $p_{max}(t)$ is a fraction $\chi$ of the SNR radius (free escape boundary): \begin{equation}\label{eq:defchi} \lambda(p_{max})\simeq D(p_{max})/V_{sh}=\chi R_{sh}\, . \end{equation} Assuming for the diffusion coefficient the generic form $D(p)\propto p^\alpha/\delta B^\gamma$ and a magnetic field scaling as $\delta B(t)\propto t^{-\mu}$, we obtain \begin{equation}\label{eq:pt} p(t)^\alpha\propto R_{sh}(t)V_{sh}(t) \delta B(t)^\gamma\propto t^{2\nu-1-\gamma \mu}\, , \end{equation} which implies \begin{equation}\label{eq:dpt} \frac{{\rm d} t}{{\rm d} p}\propto \frac{t}{p}\, . \end{equation} Substituting Eq.~\ref{eq:dpt} into Eq.~\ref{eq:Nesc} one obtains: \begin{equation}\label{eq:p4} N_{esc}(p)\propto p^{-4}t^{5\nu-2} F_{esc}(t);\quad t=t(p). \end{equation} This relation illustrates a striking result: if the fraction of the bulk energy going into escaping particles is roughly constant in time, and if the SNR evolution during the ST stage is adiabatic and self-similar (i.e.\ $\nu=2/5$), the global spectrum of particles escaping the system from the upstream boundary is exactly $p^{-4}$. This means that the diffuse CR spectrum, usually explained by invoking the quasi-universal slope predicted by Fermi's mechanism at strong shocks, may just as well be due to the equally general evolution of a SNR during the ST stage. Possible corrections to Eq.~\ref{eq:p4} might lead to a slightly different spectrum for the escape flux.
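As a sanity check of the exponent bookkeeping leading to Eq.~\ref{eq:p4}, the algebra can be reproduced symbolically. The following minimal sketch is our own illustration (it assumes Python with the sympy package; the variable names simply mirror the symbols defined above) and confirms that the slope of $N_{esc}(p)$ is exactly $-4$ for $\nu=2/5$, independently of the values of $\alpha$, $\gamma$ and $\mu$:
\begin{verbatim}
import sympy as sp

nu, alpha, gamma, mu = sp.symbols('nu alpha gamma mu', positive=True)

# p_max(t) ~ t**beta follows from D(p_max)/V_sh = chi*R_sh, with
# D ~ p**alpha / dB**gamma, R_sh ~ t**nu, V_sh ~ t**(nu-1), dB ~ t**(-mu)
beta = (2*nu - 1 - gamma*mu) / alpha

# N_esc(p) ~ t**(5*nu-3) * (dt/dp) * p**(-3); expressed in powers of t,
# dt/dp ~ t**(1-beta) and p**(-3) ~ t**(-3*beta)
exponent_in_t = (5*nu - 3) + (1 - beta) - 3*beta

# convert to an exponent in p through t ~ p**(1/beta)
slope = exponent_in_t / beta
print(sp.simplify(slope.subs(nu, sp.Rational(2, 5))))   # prints -4
\end{verbatim}
The same bookkeeping also shows how deviations from these idealized assumptions tilt the spectrum, as we now discuss.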
For instance, if the SNR evolution were not perfectly adiabatic, e.g.\ as a consequence of the energy carried away by escaping particles ($\nu< 2/5$), or if $F_{esc}$ decreased with time (corresponding to a reduction of the shock modification), the spectrum of the escaping particles could be as flat as $\sim p^{-3.5}$. Reasonable modifications to the basic prediction for the escaping particle spectrum generally lead to spectra that are somewhat flatter than $p^{-4}$. As we stressed above, the phenomenon of particle escape from the accelerator is very complex: for instance, in general the maximum momentum reached by particles at late stages of the SNR evolution is still high enough that there is a reservoir of CRs downstream that lose energy adiabatically during the expansion of the remnant and are eventually free to escape only when the shock dies out. The spectrum observed at Earth is made of the sum of these two components, released at different times and with very different spectra. \section{NLDSA at SNR shocks} In this work we adopt the semi-analytical formalism for NLDSA developed by \cite{freeescape}, which represents the generalization of the work of \cite{ab05,ab06} to the case in which there is a free escape boundary at some position upstream of the shock. This calculation allows us to describe particle acceleration at a plane non relativistic shock under the assumption of quasi-stationarity, taking into account conservation of mass, momentum and energy, and including the dynamical reaction of cosmic rays and amplified magnetic fields on the shock. The calculation makes use of the injection recipes discussed in \citep{bgv05}. In terms of mechanisms for magnetic field amplification, we only consider the (standard) resonant streaming instability, and the dynamical reaction of the amplified field on the plasma is taken into account as discussed in \cite{jumpkin}. The assumption that only resonantly produced modes are excited in the upstream plasma is clearly rather restrictive, especially in light of recent results such as those by \cite{bell04}, which suggest that non-resonant modes might grow faster and lead to more efficient magnetic field amplification, at least during the early stages of the SNR evolution \cite{ab09}. On the other hand, such modes are typically produced at wavelengths which are much shorter than the gyration radius of the particles and can hardly be responsible for efficient scattering of particles at the highest energies, unless very rapid inverse cascading takes place. Possible damping of the magnetic field is also phenomenologically taken into account in a way that allows us to reproduce the results of \cite{pz03}: the damping efficiency is parametrized as \begin{equation}\label{eq:damp} \zeta(t)=1-\exp\left[-\frac{V_{sh}(t)}{V_{damp}}\right] \end{equation} where $\zeta(t)$ is the ratio between the damping and growth rates. The results we present in the following are obtained with $V_{damp}$=2000 km/s, but we checked that varying $V_{damp}$ between 500 and 5000 km/s leaves the results basically unchanged. The energy associated with damped magnetic turbulence is assumed to go into thermal energy of the background plasma (turbulent heating), as described in \cite{jumpkin} and references therein. As already mentioned, from the point of view of the environment, we focus on SNRs in an ISM with spatially constant density.
The circumstellar environment of type II/Ib,c SNe may be very complicated, depending on the details of the pre-SN stages (e.g.\ the production of Wolf-Rayet and Red Supergiant winds). We do not investigate this possibly very complex structure here, and we qualitatively discuss the difference between type Ia and type II SNe by simply assuming a high density cold gas for the former and a rarefied warmer gas for the latter, just to illustrate the effects of these assumptions on the time integrated CR spectrum from a single remnant. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Hydro.eps} \caption{Time dependence of the shock radius, $R_{sh}$ (in units of pc), shock velocity, $V_{sh}$ (in units of $10^3$ km/s), downstream magnetic field, $B_2$ (in units of $100\mu$G), flux through the free escape boundary placed at $x_{0}=R_{sh}$, $F_{esc}$ (in units of $\rho_0 V_{sh}^3/2$), total compression factor, $R_{tot}$, and damping parameter, $\zeta$ (see Eq.~\ref{eq:damp}). The curves refer to a SNR in a medium with magnetic field $B_{0}=5\mu$G, temperature $T_0=10^{5}\rm{K}$, density $n_0=0.1 \rm{cm}^{-3}$, and injection parameter $\xi_{inj}=3.9$.\label{fig:hydro}} \end{center} \end{figure} The evolution of the forward shock position and velocity is taken as adiabatic and is described according to the analytical approach of \cite{TMK99} (table 7), with $E_{SN}=10^{51}$erg for the SN energy and $M_{ej}=1.4M_\odot$ for the mass of the ejecta. In the case of modified shocks, this kind of solution is expected to hold only approximately, since escaping particles may in principle carry away a non-negligible amount of bulk energy, making the shock behave as partially radiative. The evolution of the remnant is followed until its age is $\sim 10^{5}$ yr: for standard values of the parameters, at this time $p_{max}$ has dropped to values in the range 1-10 GeV/c. At each time-step the quasi-stationary solution for the shock dynamics and the instantaneous spectrum of accelerated particles are calculated. The calculation also returns the escape flux from $x=x_{0}$, the free escape boundary far upstream \cite{freeescape}. The flux of CRs contributed at Earth by a single remnant is the result of the integration over time of the instantaneous escape flux, plus the spectrum of particles advected downstream and escaping at later times. Treatment of this latter part is especially problematic and requires some discussion. If diffusion is neglected behind the shock, in principle, particles that are advected downstream sit within a fluid element in which the strength of the magnetic field, in the absence of damping, is just the result of adiabatic decompression of the field just behind the shock at the time when these particles were accelerated. In this case some fraction of particles, even at the highest energies, may remain confined downstream and lose energy in the expansion of the shell. The escape of these accumulated particles will be possible only at very late times, after the shock has dissipated away. It is important to realize that in this scenario, due to adiabatic losses, none of the advected particles can actually escape at the knee energy. In order to describe adiabatic losses, we assume that the post-shock pressure, dominated by the sum of gas+CR pressure, is nearly uniform: a reasonable assumption, given that the fluid is subsonic. The downstream plasma pressure is proportional to the square of the shock Mach number, hence $\rho^{\gamma}(t)\propto p_{gas}(t)\propto V_{sh}^{2}(t)$.
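Since the energy of a relativistic particle in a fluid element scales adiabatically as $E\propto\rho^{1/3}$, the scaling above fixes the size of the losses through the factor $L(t_{0},t)$ introduced just below. As a rough numerical illustration (our own sketch, assuming Python and a pure Sedov scaling $V_{sh}\propto t^{-3/5}$; the times are the fiducial values used in this section):
\begin{verbatim}
# Adiabatic loss factor L(t0, t) = [V_sh(t0)/V_sh(t)]**(2/(3*gamma)) for a
# particle advected downstream at t0; with V_sh ~ t**(-3/5) this reduces to
# L = (t/t0)**(2/(5*gamma)).  Times in years; numbers are illustrative.
t0, t = 1.0e3, 1.0e5
for gamma in (5.0 / 3.0, 4.0 / 3.0):    # gas- or CR-dominated pressure
    L = (t / t0) ** (2.0 / (5.0 * gamma))
    print(gamma, L)                      # ~3.0 and ~4.0: a factor of a few
\end{verbatim}
A particle advected downstream at the beginning of the ST phase thus loses a factor of a few in energy by the end of the evolution, for either choice of $\gamma$.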
A relativistic particle with energy $E_{0}$ advected downstream at time $t_{0}$ will at a later time $t$ have an energy $E(t)=E_{0}/L(t_{0},t)$, with $L(t_{0},t)=\left[V_{sh}(t_{0})/V_{sh}(t)\right]^{\frac{2}{3\gamma}}$. It is possible to check {\it a posteriori} that choosing $\gamma=5/3$ or 4/3, respectively corresponding to a gas or CR dominated pressure, does not lead to major differences in the results. Other authors have proposed that advected particles stop suffering adiabatic losses only when the pressure of the fluid element they sit in matches the ISM value \cite{bv83}. This recipe leads to very severe losses for the advected particles and, when their spectrum is added to that contributed by the escaping particles, the result is very far from a power-law and incompatible with observations. Either because of magnetic field damping or because of gradients in the magnetic field strength downstream (possibly induced by gradients in the accelerated particle pressure), it could well be that particles of a given maximum energy at a given time cannot be confined downstream at later times. In this case, at any time $t$ all particles with momentum $p\geq p_{esc}(t)$ must escape the system, where $p_{esc}(t)$ is defined so that the corresponding diffusion length in the instantaneous downstream magnetic field is $\lambda(p_{esc},B_{2})\sim x_0$. It is easy to show (and we will do it later) that $p_{esc}(t)\geq p_{max}(t)$ at any time. These two recipes (escape at $p\sim p_{max}(t)$ and escape at $p_{esc}(t)$) lead to different integrated spectra from an individual SNR, and unfortunately they are not the only two conceivable scenarios for particle escape. For instance, large scale instabilities could break the forward shock into smaller shocks that could allow some particle escape sideways. In this case it might make sense to assume that some fraction of the advected particles at any time may escape the system with their instantaneous spectrum. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{spectra.eps} \caption{Escape flux through the free escape boundary (dashed lines) and advected spectra (solid lines) at four different times for the same benchmark SNR used for Fig.~\ref{fig:hydro}.} \label{fig:spectra} \end{center} \end{figure} Given the importance of this physical phenomenon for establishing the spectrum of CRs observed at Earth, in the following we illustrate the time integrated spectra for the three escape scenarios outlined above. An attempt at calculating the cumulative injection spectrum of CRs has been made by \cite{pz05}: in their approach the shock modification was fixed {\it a priori} and kept constant throughout the whole SNR evolution, rather than being self-consistently calculated, and no dynamical feedback of the magnetic field was taken into account. However, the evolution of the shock modification is, in fact, strictly connected with the acceleration efficiency and in turn with the normalization of the spectrum, so that only within a self-consistent non-linear approach is it possible to understand which SNR stages contribute the most to the diffuse galactic CR spectrum. For illustrative purposes, in Fig.~\ref{fig:hydro} we show the time dependence of the shock radius, $R_{sh}$, shock velocity, $V_{sh}$, downstream magnetic field, $B_2$, flux through the free escape boundary, $F_{esc}$, total compression factor, $R_{tot}$, and damping parameter, $\zeta$.
The curves refer to a SNR expanding in a medium with background magnetic field $B_{0}=5\mu$G, temperature $T_0=10^{5}$K, density $n_0$=0.1 cm$^{-3}$; the injection parameter is fixed as $\xi_{inj}=3.9$ and $x_0=R_{sh}$. In order to highlight the need for a non-linear treatment of DSA, it is worth noticing that at the beginning of the ST phase ($\sim 800$~yr) the total compression ratio is $R_{tot}\sim 9$, corresponding to $\sim50\%$ of the bulk pressure channelled into CRs, and $F_{esc}\sim 20\%$. We also show, in Fig.~\ref{fig:spectra}, the escape flux through the free escape boundary (dashed lines) and advected spectra (solid lines) at four different times (as specified on the figure) for the same benchmark SNR used for Fig.~\ref{fig:hydro}. \subsection{Escape of particles around $p_{max}(t)$} Here we focus on the escape recipe in which at any given time particles escape in a narrow region around $p_{max}$, as discussed in \cite{freeescape}, while most of the particles are advected downstream and stay there losing energy adiabatically. In Fig.~\ref{fig:T5xi39} we illustrate the CR spectrum from our benchmark SNR, where we assume that the ISM has density $n_0=0.1 \rm{cm}^{-3}$ and temperature $T_{0}=10^{5}\rm{K}$. The injection is assumed to correspond to $p_{inj}=3.9 p_{th,2}$, where $p_{th,2}$ is the momentum of thermal particles downstream of the shock (see \cite{bgv05}). The left panel refers to the case in which the free escape boundary condition is imposed at a distance from the shock $x_{0}=R_{sh}$, while the right panel refers to $x_{0}=0.15 R_{sh}$. The latter value of $x_0$ approximately corresponds to the position of the contact discontinuity at the beginning of the ST phase. This ratio, however, increases with time and becomes of order 1 before the beginning of the radiative phase. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{plotT5-xi39.eps} \includegraphics[width=0.49\textwidth]{plotT5-xi39-chi015.eps} \caption{CR spectrum injected in the ISM by a SNR expanding in a medium with density $n_0=0.1 \rm{cm}^{-3}$, temperature $T_{0}=10^{5}\rm{K}$ and injection parameter $\xi_{inj}=3.9$. The dashed line is due to the escape of particles from upstream, the dash-dotted line is the spectrum of particles escaping at the end of the evolution. The solid line is the sum of the two. {\it Left}: $x_{0}=R_{sh}$. {\it Right}: $x_{0}=0.15 R_{sh}$.} \label{fig:T5xi39} \end{center} \end{figure} The dashed lines represent the spectrum of particles that escape from the remnant towards upstream infinity at any given time. The peak at high energies corresponds to early times, when the maximum energy is the highest. At later times particles of lower and lower energy escape. The spectrum is somewhat flatter than $p^{-4}$ because the escape flux decreases with time, as discussed in \S \ref{sec:benchmark}. The dash-dotted lines represent the spectrum contributed by the particles trapped inside the remnant and escaping at the end of the evolution, after the effect of adiabatic losses. Here the SNR is assumed to die as a CR factory at an age of $\sim10^{5}$~yr, namely when the amplified magnetic field has dropped to $\delta B/B_{0}<10^{-3}$ and thus $p_{max}\sim$1-10 GeV/c. The solid line, which is the sum of the two contributions, is very close to the canonical power law $p^{-4}$. In this case, as in most cases we will show below, a dip is present in the spectrum.
This dip is found at energies a factor of a few below the maximum one and marks the transition between energies at which the advected particles are the dominant contribution and energies where only escape at the early ST stage is important. The distance between the cutoff in the spectrum of advected particles and the peak at the highest energies provides an estimate of the strength of adiabatic energy losses. A few points are worth noticing: 1) the accelerated particles reach the knee only if one chooses $x_{0}=R_{sh}$ (left panel), while for the more popular choice $x_{0}=0.15 R_{sh}$ (right panel) the maximum energy is appreciably lower. On the other hand, this conclusion depends on the details of the magnetic field generation and scattering properties. We cannot exclude that more efficient magnetic field amplification, on the spatial scales responsible for resonant scattering of particles at $p_{max}$, may change this conclusion. In this case, however, the general trend is to have somewhat flatter spectra, so that the naive expectation is that the time-integrated particle spectrum will resemble the one in the left panel. 2) the spectral concavity which is typical of NLDSA, and which appears very clearly in the instantaneous particle spectra, is almost completely washed out by the temporal evolution. In the case with $x_{0}=0.15 R_{sh}$, for example, the time convolution leads to spectra even slightly steeper than $p^{-4}$. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{plotT5-xi36.eps} \includegraphics[width=0.49\textwidth]{plotT5-xi42.eps} \caption{Injection spectrum for a SNR exploding in a medium with density $n_0=0.1 \rm{cm}^{-3}$, temperature $T_{0}=10^{5}\rm{K}$ and injection parameter $\xi_{inj}=3.6$ ({\it left panel}) and $\xi_{inj}=4.2$ ({\it right panel}). For both panels $x_{0}=R_{sh}$ and the lines are labelled as in Fig.~\ref{fig:T5xi39}.} \label{fig:T5x01} \end{center} \end{figure} The way the spectrum of injected particles is affected by changing the injection efficiency is illustrated in Fig.~\ref{fig:T5x01}: the left panel refers to $\xi_{inj}=3.6$, corresponding to injecting into the acceleration process a fraction $\eta\sim2\times10^{-4}$ of the particles crossing the shock, while the right panel is obtained with $\xi_{inj}=4.2$ ($\eta\sim2\times10^{-6}$). The benchmark case $\xi_{inj}=3.9$ corresponds to $\eta\sim2\times10^{-5}$. One can see that in the less efficient case ($\xi_{inj}=4.2$) the resulting spectrum is steeper, lower values of the maximum momentum are reached and the energy channelled into accelerated particles is very low. It is difficult to notice any appreciable change in the spectral shape between the case $\xi_{inj}=3.6$ and those in Fig.~\ref{fig:T5xi39}, though the most efficient case ($\xi_{inj}=3.6$) leads to a slightly higher particle flux, as one could expect. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{plotT4-xi39.eps} \includegraphics[width=0.49\textwidth]{plotT6-xi39.eps} \caption{Injection spectrum for a SNR exploding in a medium with temperature $T_{0}=10^{4}\rm{K}$ (left panel) and $T_{0}=10^{6}\rm{K}$ (right panel). In both cases the injection parameter is $\xi_{inj}=3.9$ and $x_{0}=R_{sh}$.} \label{fig:T4T6} \end{center} \end{figure} As stressed in the initial discussion, we focus here on SNRs expanding in a spatially homogeneous medium, similar to the environment in which a type Ia SN is expected to occur.
On the other hand, it is interesting to explore the effects of warmer, more tenuous media on the particle acceleration process. In Fig.~\ref{fig:T4T6} we show the injected spectra for a medium with temperature $T_0=10^{4}\rm{K}$ and gas density $n_0=1\rm{cm}^{-3}$ and one with $T_0=10^{6}\rm{K}$ and $n_0=0.01\rm{cm}^{-3}$. These two cases also show a total spectrum which is very close to a power law $p^{-4}$ up to $\sim 10^{5}$ GeV, with a bump close to the maximum energy reached during the SNR evolution. The shape of the escape flux from upstream is different in the two cases because of the very different values of the Mach number. In the case of a hot medium (right panel) the Mach number is systematically lower, and not only is the spectrum somewhat steeper, but, more importantly, the escape flux drops faster when the Mach number drops in time. This leads to spectra of escaping particles which are more concentrated around the highest momenta. On the other hand, the overall shape of the spectrum remains close to a power law, although acceleration is somewhat more efficient (and the maximum momentum is higher) in the lower temperature case (left panel). The cases illustrated so far suggest that the spectrum injected by a SNR, as a result of the integration over time of its injection history, is very close to a power law $p^{-4}$ in the energy region where most high quality measurements are currently available. The good news is that the concavity which follows from the formation of a precursor upstream of the shock is not prominent in the injected spectra. The bad news is that it appears to be very difficult to steepen these injected spectra to the levels that are suggested by naive estimates based on simple diffusion models. We will discuss this point in \S \ref{sec:earth}. An exception to the persistence of a very flat power law appears if one takes into account the finite speed of the waves responsible for particle scattering upstream and downstream. This point was discussed for instance by \cite{bell78}, but it is easy to understand that the conclusions are very much model dependent. The spectrum of accelerated particles (even in the test particle theory of DSA) is determined by the compression factor of the velocities of the {\it scattering centers}. These centers are in fact plasma waves propagating in the upstream and downstream fluids, and their velocity depends on the nature of the waves and on whether they are produced {\it in situ} and/or produced somewhere else and eventually advected. For instance, the standard picture of NLDSA assumes that these waves are backward (i.e.\ moving against the fluid) Alfv\'en modes generated upstream of the shock and then partly reflected and partly transmitted through the shock surface, so that downstream there are only waves that have been advected from upstream \cite{sb71}. In this case one can show that the resulting effect on the spectrum of accelerated particles mainly consists in a flattening (e.g. \cite{schlick}). On the other hand, if gradients in the accelerated particles were present downstream, some level of turbulence could be generated downstream as well, so that there could be waves traveling away from the shock surface. In this scenario, and if the wave velocity is large enough, the spectrum of accelerated particles could be appreciably steeper, as investigated e.g. in \cite{zp08}.
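In the test-particle limit, the size of this effect can be gauged directly: the slope $q$ of $f(p)\propto p^{-q}$ is set by the effective compression ratio of the scattering-center velocities, $q=3\tilde{r}/(\tilde{r}-1)$ with $\tilde{r}=(u_1-v_{A,1})/(u_2+v_{A,2})$ for backward-moving waves upstream and forward-moving waves downstream. The following minimal sketch is our own illustration of this standard test-particle estimate (not the full NLDSA calculation), with purely indicative numbers:
\begin{verbatim}
# Test-particle DSA spectral index q (f ~ p^-q) when the scattering centers
# drift at the Alfven speed: waves move against the flow upstream and with
# it downstream.  Speeds in km/s; the values below are purely illustrative.
def dsa_index(u1, r_gas=4.0, vA1=0.0, vA2=0.0):
    u2 = u1 / r_gas
    r_eff = (u1 - vA1) / (u2 + vA2)     # compression felt by the particles
    return 3.0 * r_eff / (r_eff - 1.0)

print(dsa_index(5000.0))                       # -> 4.0, the standard result
print(dsa_index(5000.0, vA1=50.0, vA2=300.0))  # -> ~4.4, appreciably steeper
\end{verbatim}
With an Alfv\'en speed of a few hundred km/s in the amplified downstream field, the steepening is already substantial.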
In Fig.~\ref{fig:T5VA} we show the injected spectrum in the case in which we assume that the waves downstream move in the forward direction at the Alfv\'en speed as calculated in the amplified field. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{plotT5-xi39-VA.eps} \caption{Injected spectrum with forward moving waves downstream, with the Alfv\'en velocity v$_{A}$ calculated using the amplified magnetic field.} \label{fig:T5VA} \end{center} \end{figure} A byproduct of dealing with steeper instantaneous spectra is that the shocks are less modified, the magnetic field amplification is less efficient and eventually the integrated spectrum is cut off at relatively low energy, $\sim 10^{4}-10^{5}$ GeV. \subsection{Escape of particles at $p>p_{esc}(t)$} Here we discuss the case in which at any given time all the particles with momentum $p>p_{esc}(t)$ escape the SNR, with $p_{esc}(t)$ defined so that $\lambda_{2}(p_{esc})\equiv D(p_{esc},B_{2})/V_{2}=x_{0}$, where $V_{2}=V_{sh}/R_{tot}$ is the downstream velocity. The product $V\delta B$ is constant across the subshock, hence the diffusion length at any given $p$ immediately upstream of the shock is exactly the same as downstream. On the other hand, the local diffusion length in the precursor, $\lambda(x,p)\propto p/[\delta B(x) V(x)]$, would be constant if only adiabatic compression were taken into account, but increases with distance from the shock as soon as CR-induced magnetic field amplification is included. Since $p_{max}$ is determined by an average diffusion length throughout the precursor, the inequality $p_{esc}\geq p_{max}$ follows immediately. This implies that at any given time particles with momentum larger than a given ``escape'' momentum cannot be confined in the system. In Fig.~\ref{fig:escall} we show the spectrum injected by an individual SNR in this scenario. We assume that the SN explosion occurs in a medium with magnetic field $B_{0}=5\mu$G, temperature $T_0=10^{5}\rm{K}$ and density $n_0=0.1 \rm{cm}^{-3}$, and the injection parameter is $\xi_{inj}=3.9$: these are the benchmark parameters already used to obtain the results shown in Fig.~\ref{fig:hydro} and Fig.~\ref{fig:spectra}. The free escape boundary is assumed to be at $x_0=R_{sh}$. The two panels of Fig.~\ref{fig:escall} refer to the case in which all particles with $p>p_{esc}(t)$ escape the accelerator at any given time (right panel) and to the case in which only 10\% of them are allowed to escape the acceleration region (left panel). In this latter case, the particles that are trapped in the shell are advected downstream and lose energy adiabatically. In both panels the dashed line represents the escape flux through the free escape boundary, the dotted line is the flux of particles escaping at $p>p_{esc}(t)$ and the dash-dotted line refers to the particles that remain in the expanding shell and escape at the end of the evolution. The solid line is the sum of all contributions. It is clearly visible that the net effect of the instantaneous escape at $p>p_{esc}$ is to flatten the injected spectrum and possibly wash out the dip-like feature at the highest energies (see right panel). These findings seem to agree with the results of previous calculations presented in \cite{pz05}, where a similar recipe for escape was adopted.
\begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{plotT5-xi39-Pesc01.eps} \includegraphics[width=0.49\textwidth]{plotT5-xi39-Pesc1.eps} \caption{Spectrum injected by our benchmark SNR if a fraction of particles with $p>p_{esc}(t)$ (all particles in the right panel and 10\% of them in the left panel) leave the accelerator at any given time. In both panels the dashed line represents the escape flux through the free escape boundary, the dotted line is the flux of particles escaping at $p>p_{esc}(t)$ and the dash-dotted line refers to the particles that remain in the expanding shell and escape at the end of the evolution. The solid line is the sum of all contributions. } \label{fig:escall} \end{center} \end{figure} Our conclusion is that even in this escape scenario the generic spectrum of injected particles is very close to $p^{-4}$ or flatter. \subsection{Escape from a broken shell} \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{plotT5-xi39-beta0001.eps} \includegraphics[width=0.49\textwidth]{plotT5-xi39-beta001.eps} \includegraphics[width=0.49\textwidth]{plotT5-xi39-beta01.eps} \caption{Spectrum injected by our benchmark SNR if particles escape the acceleration region from a broken shell. The three panels refer to three values of $\beta$ as indicated, where $\beta$ is the fraction of particles that escape the shell from downstream. In all panels the dashed line represents the escape flux through the free escape boundary, the dotted line is the flux of particles escaping from the sides and the dash-dotted line refers to the particles that remain in the expanding shell and escape at the end of the evolution. The solid line is the sum of all contributions. } \label{fig:broken} \end{center} \end{figure} Here we discuss the case in which the expanding shell is broken, possibly due to instabilities and/or inhomogeneities in the circumstellar medium. In this case, at any given time particles can escape the expanding shell from the sides, in addition to escaping from the far upstream region. In Fig.~\ref{fig:broken} we show the spectrum injected by an individual SNR in this scenario. We consider again the benchmark environmental parameters: $B_{0}=5\mu$G, $T_0=10^{5}\rm{K}$, $n_0=0.1 \rm{cm}^{-3}$, $x_0=R_{sh}$ and $\xi_{inj}=3.9$. The three panels refer to different values of the parameter $\beta$ (as indicated), which quantifies the fraction of the particles in the downstream plasma that are allowed to escape the system. For $\beta<1$, the particles that are unable to escape are advected downstream, lose energy adiabatically and are injected at the end of the SNR evolution as usual. As one could easily expect, as $\beta$ increases the gap between the escape flux to upstream infinity (dashed lines) and the advected spectrum is filled, and it eventually disappears for $\beta=0.1$. When this happens, however, the time integrated spectrum injected by the SNR is visibly flatter than $p^{-4}$. \section{The spectrum at Earth} \label{sec:earth} The spectrum of CRs observed at Earth is the result of several complex phenomena occurring during propagation: particles are injected at the sources, for instance in SNRs, then diffuse in the interstellar magnetic field and could possibly be advected in a Galactic wind if one is present \citep{jones79}. Moreover, CRs may be reaccelerated during propagation due to second order Fermi acceleration induced by scattering against Alfv\'en waves in the Galactic magnetic field \citep[e.g.][]{seoptu}.
In principle all these phenomena modify the spectrum with respect to the injected one. For standard values of the parameters, both advection in a wind and reacceleration become of some importance only at relatively low energies, and for the purpose of the present discussion they can be safely disregarded \citep[see e.g.][and references therein]{jones+01}. If the diffusion coefficient has the form $D(E)\propto E^{\delta}$, so that the residence time scales as $\tau_{esc}(E)\propto E^{-\delta}$, the spectrum observed at Earth can be estimated as being $N(E)\propto Q(E) E^{-\delta}$, so that for an injection spectrum $Q(E)\propto E^{-2}$ the observed spectrum requires $\delta\sim 0.65-0.7$. This simple estimate leads to an important consequence, recently reviewed in \cite{hillas}: CRs at Earth should become highly anisotropic at energies much lower than the knee. This anisotropy is not observed, hence posing a very serious problem for simple recipes of CR diffusion in the Galactic magnetic field. In order to alleviate this problem it has been proposed that the injection spectrum could be $Q(E)\propto E^{-2.4}$ and that the diffusion coefficient could be $D(E)\propto E^{1/3}$, but as we discussed above, at least in the case of SNRs or any other source where the acceleration process is based on the first order Fermi process in highly supersonic shocks, this situation is very hard to reproduce since the derived injection spectra are generically harder than $E^{-2.4}$. On the other hand, one should keep in mind that the conclusion on the anisotropy might be due to too simplistic leaky box models or diffusion models \citep[e.g.\ GALPROP,][]{galprop} where the Galactic magnetic field has no structure: the interplay between parallel and perpendicular diffusion and the random walk of magnetic field lines could, for instance, have a crucial influence on the anisotropy of CRs. Moreover, as discussed in \cite{ptuskin2006}, the observed anisotropy could be affected in a non trivial way by the distribution in space and time of local supernovae. The rather disappointing picture that arises from this line of thought is that, even if the basic principles of both acceleration and propagation of CRs are thought to be rather well understood, at the present time neither the injected spectrum nor the propagated spectrum can be reliably calculated. The main obstacle to reaching clear predictions is in the complex nature of the accelerator and of the Galaxy as a medium in which CRs propagate. Progress on the first issue is likely to come from efforts aimed at clarifying the nature of the turbulent magnetic field that is responsible for particle scattering and escape: 1) theoretical investigation is required of the instabilities that are more likely to lead to amplified magnetic fields at scales that are useful for the particle scattering; 2) precious information is still to be gathered from the comparison between observations and models of the multifrequency emission of individual sources \citep[e.g.][]{mor09}, including morphological information \citep[e.g.][]{mor10}. \section{Discussion} Here we discuss the main ingredients and uncertainties that enter the calculation of the spectrum of cosmic rays injected by SNRs. This is determined by the superposition of two contributions: 1) particles that escape the expanding shell from a free escape boundary at some location upstream of the shock; 2) particles that leave the accelerator at some later time, when the shock slows down and liberates the particles trapped behind it.
The latter phenomenon takes place only after the shell has expanded and particles behind the shock have suffered adiabatic energy losses. This is a crucial point because, if indeed SNRs, at some stage of their evolution, are able to accelerate CRs (protons) up to the knee energy, these particles cannot contribute to the CR spectrum around the knee unless they leave the accelerator immediately after production. This short introduction already opens the way to several questions: 1) where is the free escape boundary located? 2) what physical processes regulate its position? 3) which particles escape the accelerator at any given time? As illustrated by the numerous cases considered in this paper, although we have phenomenological tools to calculate what might happen, we are not able at the present time to provide unique answers to the questions above. Let us start the discussion with a comment on the commonly adopted recipe to describe the escape of particles by assuming the existence of a free escape boundary at some location $x_{0}$ upstream of the shock. While from the mathematical point of view this assumption is well posed, from the physical point of view the problem remains, in that the position of this boundary is related to poorly understood details of the problem, especially the ability of particles to self-generate their own scattering centers. The position of the free escape boundary should in principle coincide with a location upstream of the shock where particles are no longer able to scatter effectively and return to the shock. This would lead to an anisotropic distribution function of the accelerated particles, which can no longer be described by the standard diffusion-convection equation. Moreover, while waves can be generated both resonantly \cite{skillinga,bell78} and non-resonantly \cite{bell04}, particles can scatter effectively only with resonant waves. This adds to the complexity of the problem, in that one might have amplified magnetic fields of large strength but on scales which do not imply effective scattering of the highest energy particles. This concept of a free escape boundary which is self-adjusted by the accelerated particles adds to the extreme non-linearity of NLDSA and is currently not included in any of the calculations presented in the literature. This clearly makes the prediction of a maximum energy of accelerated particles very uncertain whenever it is determined by the size of the accelerator (namely by $x_{0}$) rather than by the finite age of the accelerator. What appears to be a rather solid result is that the highest maximum energy throughout the history of the SNR is reached at the beginning of the Sedov-Taylor phase, provided the magnetic field is self-generated by the accelerated particles through streaming instability. However, the nature of the mechanism responsible for the magnetic field amplification is unknown: the bright narrow X-ray rims suggest that the interstellar magnetic field is amplified at the shock, but at the present time it is not possible to say for sure whether the field is induced by CRs or by some type of fluid instability associated with the corrugation of the shock surface due to the propagation in an inhomogeneous environment (see for instance \cite{joki07}). On the other hand, even if the magnetic field is induced by the presence of accelerated particles, the flavor of CR induced instability involved is anything but trivial to identify.
Resonant streaming instability, the only one included in the calculations presented here, has the advantage of producing waves which are at the right wavelengths to scatter particles resonantly, thereby increasing their energy because of multiple shock crossings. However, particles can reach the energy of the knee only if the mechanism is assumed to work efficiently even in the regime in which the field has reached non-linear amplification, $\delta B/B_{0}\gg 1$, which is anything but obvious since the resonance condition becomes ill defined in this regime. Non-resonant magnetic field amplification (e.g. \cite{bell04}) can possibly lead to larger values of the turbulent magnetic field, but typically the field is produced on scales which are minuscule compared with the gyration radius of the highest energy particles, which makes it hard to understand how they reached that energy in the first place and how they can keep increasing their energy, unless a very effective inverse cascade occurs in the precursor, thereby transferring power to larger spatial scales. The general trend of the injection spectra calculated in this paper is to be very close to power laws with index $-4$, with all the difficulties that this implies in terms of connecting SNRs with CRs observed at Earth. On the other hand, it is remarkable that quasi-power-law spectra are obtained by overlapping instantaneous spectra which are characterized by the concavity typical of NLDSA. This important physical point should be kept in mind whenever one tries to infer the spectrum of accelerated particles from that of the radiation observed from a SNR. In general the two spectra are not required to be the same. A notable exception to the rule of injected spectra that are flat power laws is represented by the case in which the waves responsible for the scattering of accelerated particles in the downstream plasma move in the forward direction. This scenario would lead to a time integrated spectrum which is appreciably steeper than $p^{-4}$. However, the instantaneous spectra are also rather steep, which implies that magnetic field amplification is not very efficient and the maximum momentum of accelerated particles is much lower than the knee (see Fig.~\ref{fig:T5VA}). Although the basic physical intuition associated with having a large velocity of scattering waves downstream is to infer that the spectra can become appreciably steeper (or flatter, for that matter: it all depends on the direction of motion of the waves), one should also keep in mind that when $\delta B/B\gg 1$, and the waves are not necessarily Alfv\'en waves, even the form of the transport equation as usually written might be profoundly affected: particles might propagate in a non-diffusive way in the shock proximity. Another case in which we obtained relatively steep spectra injected by a SNR is that of injection with low efficiency (see the case $\xi_{inj}=4.2$ in Fig.~\ref{fig:T5x01}). However, this case also leads to a rather small fraction of energy channelled into accelerated particles and to a rather low maximum momentum, which makes it appear of little interest for the origin of CRs, at least in the context of the standard picture of CR propagation in the Galaxy. A caveat for this type of calculation of the injection history of CRs in SNRs is that the surrounding medium could be much more complicated than assumed here.
For instance, in a type II SN one might expect that the shell propagates first in the magnetized wind of the presupernova star, where the magnetic field should be mainly perpendicular to the shock normal. In this case particle acceleration does not occur in the regime described by the transport equation used here (and in most of the literature on the topic). Drifts in the shock region might make the maximum achievable energy higher than predicted by NLDSA at parallel shocks (see for instance \cite{joki87}). At some time in the evolution one could envision a transition to a mainly parallel field configuration, where our calculations would apply. The time integrated spectrum in this case could be different from those calculated here, which apply to a standard type Ia SN. The case of type II SNe has been mimicked here only by assuming a hot, more rarefied circumstellar medium. On the other hand, it is conceivable that a spread in the acceleration efficiencies (and/or in the maximum achievable momenta) between individual sources might be essential to explain the overall observed spectrum of Galactic CRs. If this is the case, the work presented in this paper is only a first step towards reproducing the observations, a task that can only be accomplished by adding up the contributions due to different populations of SNRs, with different environmental parameters (see e.g.\ Fig.~5 of \cite{hillas}). \section*{Acknowledgments} This work was partially supported by MIUR (under grant PRIN-2006) and by ASI through contract ASI-INAF I/088/06/0. This research was also supported in part by the National Science Foundation under Grant No. PHY05-51164. We wish to acknowledge the KITP in Santa Barbara for the exciting atmosphere during the Program {\it Particle Acceleration in Astrophysical Plasmas}, July 26-October 3, 2009.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} We start with a well-known brainteaser which can be found in literature dating back to 1980 [1, 2, 3], and which was more recently popularized in 2019 on the famous prediction site fivethirtyeight.com [4]: \bigskip \noindent {\it{Four coins lie on the corners of a square table, some heads-up and some tails-up (they may all have the same orientation). Each turn, a blindfolded player can flip some of the coins, after which the table is rotated arbitrarily. If the player's goal is to at any time have all coins heads-up simultaneously, does he have a strategy that guarantees victory in a finite number of turns?}} \bigskip \noindent For the simple case above, there is indeed a strategy that wins within $15$ turns. In particular, label the positions of the table $1, 2, 3, 4$, with these positions fixed from the perspective of the player. Then a {\it{move}} (performed once per turn) will consist of a vector in $\mathbb{Z}_2^4$, with $0$ denoting leaving the coin in the corresponding position as is and $1$ denoting flipping the coin in that position. E.g., the vector $(0, 0, 0, 1)$ denotes only flipping the coin in position $4$. The player's strategy should then be the following sequence of $15$ moves: $$ (1, 1, 1, 1), (0, 1, 0, 1), (1, 1, 1, 1), (0, 0, 1, 1), (1, 1, 1, 1), (0, 1, 0, 1), (1, 1, 1, 1), (0, 0, 0, 1),$$ $$ (1, 1, 1, 1), (0, 1, 0, 1), (1, 1, 1, 1), (0, 0, 1, 1), (1, 1, 1, 1), (0, 1, 0, 1), (1, 1, 1, 1) $$ \bigskip \noindent It is easy to show by case work that these moves guarantee a win for the player. \bigskip \noindent One naturally asks the question, what if instead of four coins there were $n$ coins? Furthermore, viewing a coin as a counter counting$\pmod{2}$ (to which the player adds either $0$ or $1$ each turn), what if instead the counters counted$\pmod{m}$, with the player adding one of $0, 1, \dots m-1$ to each counter each turn? This problem was considered by Bar Yehuda, Etzion, and Moran in [1], who showed: \bigskip \noindent {\bf{Theorem 1.1}}. The player can win if and only if $n = 1$, $m = 1$, or $(n, m) = (p^a, p^b)$ for some prime $p$ and $a, b \in \mathbb{N}$. \bigskip \noindent We independently derive this result, and simplify their argument by providing a clever explicit construction of a winning set of moves. We also generalize the problem as follows: instead of the table simply rotating, one can imagine that the table permutes the counters in positions $1, 2, \dots, n$ based on elements from some subset $S \subseteq S_n$, where $S_n$ denotes the symmetric group and where $S$ contains the identity.\footnote{We may assume without loss of generality that $S$ contains the identity, since the player can pretend a certain permutation $t \in S$ happens every turn by default and then is followed by a permutation from the set $t^{-1} \cdot S$, which contains the identity.} Denote the parameters of this game by the ordered pair $(S, m)$, so that $(\mathbb{Z}_n, m)$ represents the setting in Theorem 1.1. Additionally, let $G \le S_n$ be the subgroup of $S_n$ generated by $S$. Our main result is: \bigskip \noindent {\bf{Theorem 1.2}}. The player can win the $(S, m)$-game if and only if $|G| = 1$, $m = 1$, or $(|G|, m) = (p^a, p^b)$ for some prime $p$ and $a, b \in \mathbb{N}$. \bigskip \noindent Our paper is divided into five parts, where the first four parts consist of a simplification of the proof of Theorem 1.1 in [1]. In Section 2, we show that if $(n, m) = (p, q)$ for distinct primes $p, q$ then the player cannot win. 
In Section 3, we show that if the player cannot win when $(n, m) = (a, b)$, then the player also cannot win for $(n, m) = (a, bk)$ or $(n, m) = (ak, b)$ for any $k \in \mathbb{N}$. In Section 4, we constructively show that the player can win if $(n, m) = (p^a, p)$ for a prime $p$ and any $a \in \mathbb{N}$. In Section 5, we extend the construction to the case where $(n, m) = (p^a, p^b)$ where $b > 1$. Finally, drawing on the methods used in Sections 2 through 5, in Section 6 we prove Theorem 1.2 in full generality. \section{The $(n, m) = (p, q)$ case} We start by extending the notation in the introduction. Instead of denoting ``moves" by vectors in $\mathbb{Z}_2^4$, we will now use vectors in $\mathbb{Z}_m^n$. E.g., the vector $(2, 0, 3)$ will denote adding $2$ to the counter in position $1$ and adding $3$ to the counter in position $3$. We will also use vectors in $\mathbb{Z}_m^n$ to describe the configuration of counters. Furthermore, call a configuration of counters {\it{homogeneous}} if each counter on the table shows the same number, and non-homogeneous otherwise. \bigskip \noindent {\bf{Lemma 2.1}}. The player cannot win if $(n, m) = (p, q)$ for distinct primes $p, q$. \bigskip \noindent {\it{Proof.}} When $(n, m) = (p, q)$ for distinct primes $p, q$, we will show that for any non-homogeneous configuration, there is no move guaranteed to make the configuration homogeneous following an arbitrary rotation of the table. \bigskip \noindent Indeed, suppose the configuration on the table prior to a rotation was $(x_1, x_2, \dots, x_p)$ and consider any move $(y_1, y_2, \dots, y_p)$. For this move to guarantee that the configuration of the table afterwards is homogeneous, the following equalities would have to hold simultaneously: \begin{align*} x_1 + y_1 = x_2 + y_2 = &\dots = x_p + y_p\pmod{q}\\ x_p + y_1 = x_1 + y_2 = &\dots = x_{p-1} + y_p\pmod{q}\\ &\vdots\\ x_2 + y_1 = x_3 + y_2 = &\dots = x_1 + y_p\pmod{q}. \end{align*} This implies that $$ x_1 - x_2 =x_2 - x_3 = \dots = x_p - x_1 = y_2 - y_1\pmod{q}. $$ Thus $$ p(x_1 - x_2) = (x_1 - x_2) + (x_2 - x_3) + \dots + (x_p - x_1) = 0\pmod{q}$$ and so, since $p$ is invertible$\pmod{q}$, we get $x_1 = x_2\pmod{q}$. Similarly we obtain $x_1 = x_2 = \dots = x_p\pmod{q}$, so indeed such a move is only possible if the configuration of the table was already homogeneous. \bigskip \noindent Therefore, if the starting configuration is non-homogeneous, the player can never force the configuration to be homogeneous and so cannot win. $\blacksquare$ \section{The $(n, m) = (ak, b)$ and $(n, m) = (a, bk)$ cases} We handle each case separately: \bigskip \noindent {\bf{Lemma 3.1}}. If the player cannot win when $(n, m) = (a, b)$, then the player cannot win when $(n, m) = (ak, b)$ for any $k \in \mathbb{N}$. \bigskip \noindent {\it{Proof.}} Suppose that the player cannot win if $(n, m) = (a, b)$ for some $(a, b) \in \mathbb{N}^2$, and consider the case where $(n, m) = (ak, b)$ for some $k \in \mathbb{N}$. Suppose for the sake of contradiction that the player had a sequence of moves $y_1, y_2, \dots, y_N$ for some $N \in \mathbb{N}$ that guaranteed a win, where $y_i = (y_{i,1}, y_{i,2}, \dots, y_{i,ak})$ for all $i$. Now let $y'_i =(y_{i,k}, y_{i,2k}, \dots, y_{i,ak})$ for all $i$. Identifying the $a$ counters of the smaller game with positions $k, 2k, \dots, ak$ of the larger table (a rotation by $j$ positions in the smaller game corresponds to a rotation by $jk$ positions in the larger one, and a winning configuration of the larger table is in particular winning on these positions), the sequence of moves $y'_1, y'_2, \dots, y'_N$ must win for $(n, m) = (a, b)$, contradiction. $\blacksquare$ \bigskip \noindent {\bf{Lemma 3.2}}. If the player cannot win when $(n, m) = (a, b)$, then the player cannot win when $(n, m) = (a, bk)$ for any $k \in \mathbb{N}$.
\bigskip \noindent {\it{Proof.}} Suppose that the player cannot win if $(n, m) = (a, b)$ for some $(a, b) \in \mathbb{N}^2$, and consider the case where $(n, m) = (a, bk)$ for some $k \in \mathbb{N}$. Suppose for the sake of contradiction that the player had a sequence of moves $y_1, y_2, \dots, y_N$ for some $N \in \mathbb{N}$ that guaranteed a win, where $y_i =(y_{i,1}, y_{i,2}, \dots, y_{i,a})$ for all $i$. Note that $y_{i,j} \in \mathbb{Z}_{bk}$ for all $i, j$, and define the homomorphism $\phi: \mathbb{Z}_{bk} \rightarrow \mathbb{Z}_b$ where $\phi(x) = x\pmod{b}$. Now let $y'_i = (\phi(y_{i,1}), \phi(y_{i,2}), \dots, \phi(y_{i,a}))$ for all $i$. Since reduction modulo $b$ commutes with both the moves and the rotations of the table, the sequence of moves $y'_1, y'_2, \dots, y'_N$ must win for $(n, m) = (a, b)$, contradiction. $\blacksquare$ \bigskip \noindent {\bf{Corollary 3.3}}. The combination of Lemmas 2.1, 3.1, and 3.2 immediately implies the ``only if" direction in Theorem 1.1. \section{The $(n, m) = (p^a, p)$ case} Here we prove the following lemma constructively: \bigskip \noindent {\bf{Lemma 4.1}}. The player can win if $(n, m) = (p^a, p)$ for some prime $p$ and $a \in \mathbb{N}$. \bigskip \noindent {\it{Proof.}} Let $x_{i,j} = \binom{i}{j}\pmod{p}$ where $x_{i,j} \in \mathbb{Z}_p$ for all $i,j$. Furthermore, let $x_j = (x_{0,j}, x_{1,j}, \dots, x_{p^a-1,j})$ for all $j \in \{0, 1, \dots, p^a - 1\}$. For all $i \in \{1, 2, \dots, p^{p^a} - 1\}$, let $y_i = x_{v_p(i)}$ where $v_p$ denotes $p$-adic valuation. We claim that the sequence of moves $y_1, y_2, \dots, y_{p^{p^a} - 1}$ wins. \bigskip \noindent Since the matrix $[x_0^T\ x_1^T\ \dots\ x_{p^a-1}^T]$ is lower triangular and its main diagonal is identically $1$, its determinant is $1 \ne 0$ and so the moves $x_0, x_1, \dots, x_{p^a-1}$ form a basis of $\mathbb{Z}_p^{p^a}$. Therefore, we can write the starting configuration $s$ as $s = c_0x_0 + c_1x_1 + \dots + c_{p^a-1}x_{p^a-1}$ for some $c_0, c_1, \dots, c_{p^a-1} \in \mathbb{Z}_p$. \bigskip \noindent The following intermediate claim is then the key to the proof: \bigskip \noindent {\bf{Claim 4.2}}. For any $j$, let $x'_j$ be a cyclic permutation of $x_j$. Then $x_j - x'_j = e_0x_0 + e_1x_1 + \dots + e_{j-1}x_{j-1}$ for some $e_0, e_1, \dots, e_{j-1} \in \mathbb{Z}_p$. \bigskip \noindent {\it{Proof.}} We proceed by induction on $j$. When $j = 0$ the result is trivial, since $x_j = x'_j$. Now suppose $j > 0$ and let $x^{(k)}_j = (x_{k,j}, x_{k+1,j}, \dots, x_{k-1,j})$ for all $k \in \{0, 1, \dots, p^a - 1\}$, so that $x^{(0)}_j = x_j$. Repeatedly utilizing the fact that $\binom{i}{j} - \binom{i-1}{j} = \binom{i-1}{j-1}$ and $\binom{p^a}{j} = \binom{0}{j} = 0\pmod{p}$ we have that, working in $\mathbb{Z}_p^{p^a}$, \begin{align*} x^{(k+1)}_j - x^{(k)}_j &= (x_{k+1,j}, x_{k+2,j}, \dots, x_{k,j}) - (x_{k,j}, x_{k+1,j}, \dots, x_{k-1,j})\\ &= (x_{k,j-1}, x_{k+1,j-1}, \dots, x_{k-1,j-1})\\ &= x^{(k)}_{j-1}. \end{align*} Therefore letting $x'_j = x^{(k)}_j$ for some $k$ we have \begin{align*} x'_j - x_j &= \left(x^{(k)}_j - x^{(k-1)}_j\right) + \left(x^{(k-1)}_j - x^{(k-2)}_j\right) + \dots + \left(x^{(1)}_j - x^{(0)}_j\right)\\ &= x^{(k-1)}_{j-1} + x^{(k-2)}_{j-1} + \dots + x^{(0)}_{j-1}. \end{align*} But by the inductive hypothesis we know that each $x^{(i)}_{j-1}$ can be written as a linear combination of $x_0, x_1, \dots, x_{j-1}$ (with a coefficient of $1$ on $x_{j-1}$), which completes the proof.
$\blacksquare$ \bigskip \noindent Returning to the proof of Lemma 4.1, recall that we wrote $s = c_0x_0 + c_1x_1 + \dots + c_{p^a-1}x_{p^a-1}$ for some $c_0, c_1, \dots, c_{p^a-1} \in \mathbb{Z}_p$. For any such starting configuration $s$, let $f(s)$ denote the largest $i$ such that $c_i \ne 0$. We will show by induction on $f(s)$ that the sequence of moves $y_1, y_2, \dots, y_{p^{f(s)+1} - 1}$ wins. Note that the base case where $c_i = 0$ for all $i$ is trivial, since the player immediately wins. \bigskip \noindent The main idea is that as a consequence of Claim 4.2, every time the table rotates and we rewrite the new configuration as a linear combination of $x_0, x_1, \dots, x_{p^a-1}$, the coefficient of $x_{f(s)}$ is invariant. Specifically, suppose $c_{f(s)} = c \ne 0$. Let $s'$ be the configuration on the table after $(p - c)p^{f(s)}$ moves, and write $s' = c'_0x_0 + c'_1x_1 + \dots + c'_{p^a-1}x_{p^a-1}$ for some $c'_0, c'_1, \dots, c'_{p^a-1} \in \mathbb{Z}_p$. By Claim 4.2, each move $y_i$ with $v_p(i) < f(s)$ would not affect any of the coefficients of $x_j$ for any $j \ge f(s)$, and each of the $p - c$ moves with $v_p(i) = f(s)$ would increase the coefficient of $x_{f(s)}$ by $1$ regardless of how the table rotates between moves, so that $c'_{f(s)} = c_{f(s)} + p - c = 0\pmod{p}$. Therefore after $(p - c)p^{f(s)}$ moves we are in a configuration with $f(s') < f(s)$ and since the next $p^{f(s') + 1} - 1$ moves are copies of the first $p^{f(s') + 1} - 1$ moves, we are done by induction. $\blacksquare$ \section{The $(n, m) = (p^a, p^b)$ case} Finally, we expand upon our construction in Section 4 to prove Theorem 1.1 in its entirety. Specifically, we will show by induction on $b$ that there is a sequence of $p^{bp^a} - 1$ moves that wins. The base case of $b = 1$ follows from the proof of Lemma 4.1. \bigskip \noindent Now, suppose $x_1, x_2, \dots, x_{p^{(b - 1)p^a} - 1}$ is a sequence of moves that wins in the $(n, m) = (p^a, p^{b-1})$ case. Additionally, let $y_1, y_2, \dots, y_{p^{p^a} - 1}$ be the sequence of moves that wins in the $(n, m) = (p^a, p)$ case as in the proof of Lemma 4.1. Note that $x_i \in \mathbb{Z}_{p^{b-1}}^{p^a}$ for all $i$ and $y_j \in \mathbb{Z}_p^{p^a}$ for all $j$, but interpret each of these vectors as vectors in $\mathbb{Z}_{p^b}^{p^a}$ (by identifying each residue with its least non-negative representative). For all $i \in \{1, 2, \dots, p^{bp^a} - 1\}$, let $$ z_i = \begin{cases} p\,x_{i \bmod p^{(b-1)p^a}} &\mbox{if } p^{(b-1)p^a} \nmid i, \\ y_{i/p^{(b-1)p^a}} & \mbox{if } p^{(b-1)p^a} \mid i, \end{cases}$$ where $p\,x_j$ denotes element-wise multiplication by $p$. We claim that $z_1, z_2, \dots, z_{p^{bp^a} - 1}$ is a sequence of moves that wins in the $(n, m) = (p^a, p^b)$ case. \bigskip \noindent If we consider the configuration of the table $\pmod{p}$, the moves of the form $p\,x_j$ do not affect it, and the rotations occurring between consecutive moves of the form $y_j$ compose into a single rotation. Therefore we know by the defining property of the sequence $y_1, y_2, \dots, y_{p^{p^a} - 1}$ that after some move of the form $y_j$, every counter will be $0\pmod{p}$. The configuration is then $p$ times a configuration of the $(n, m) = (p^a, p^{b-1})$ game, so after this move $y_j$ the following sequence of $p^{(b -1)p^a} - 1$ moves $px_1, px_2, \dots, px_{p^{(b-1)p^a} - 1}$ will win the game. This completes the proof of Theorem 1.1. $\blacksquare$ \bigskip \noindent We will also prove that our constructions are optimal in terms of the number of moves necessary to win.
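\bigskip \noindent Before doing so, we remark as a sanity check that for $(n, m) = (4, 2)$ (i.e. $p = 2$, $a = 2$) the construction of Lemma 4.1 recovers precisely the strategy given in the introduction: here $$ x_0 = (1, 1, 1, 1),\quad x_1 = (0, 1, 0, 1),\quad x_2 = (0, 0, 1, 1),\quad x_3 = (0, 0, 0, 1), $$ and the sequence $y_i = x_{v_2(i)}$, $i = 1, 2, \dots, 15$, reads $$ x_0, x_1, x_0, x_2, x_0, x_1, x_0, x_3, x_0, x_1, x_0, x_2, x_0, x_1, x_0, $$ which is exactly the $15$-move sequence displayed there.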
\bigskip \noindent {\bf{Theorem 5.1}}. If the player can win for a given $(n, m)$, then any strategy that guarantees a win requires at least $m^n - 1$ moves. \bigskip \noindent {\it{Proof}}. Consider any sequence of moves $y_1, y_2, \dots, y_N \in \mathbb{Z}_m^n$ with $N < m^n - 1$. Let $z_k = \sum_{i=1}^{k}y_i$ for all $k \in \{1, 2, \dots, N\}$ and suppose that the table did not rotate at all after any of the moves. Then this sequence of moves would only win if the starting configuration was equal to $0$ or $-z_k$ (with each element reduced $\pmod{m}$) for some $k$. But there are $N + 1 < m^n$ such winning configurations and $m^n$ possible starting configurations, so there is at least one starting configuration for which this sequence of moves never wins. This implies the desired result. $\blacksquare$ \section{Generalizing to groups} Consider any subset of permutations $S \subseteq S_n$ with the identity ${\bf{1}} \in S$ and suppose that instead of simply rotating each turn, the counters on the table can be acted on by any permutation in $S$. As in the introduction, let $G \le S_n$ be the subgroup of $S_n$ generated by $S$. Here we will prove Theorem 1.2, which states that the player can win the $(S, m)$-game if and only if $|G| = 1$, $m = 1$, or $(|G|, m) = (p^a, p^b)$ for some prime $p$ and $a, b \in \mathbb{N}$. \bigskip \noindent We break the proof into two parts: first we shall show the ``only if" direction, and then we shall show the ``if" direction when $b = 1$. The case where $b > 1$ will then immediately follow from the logic in the proof of Theorem 1.1 in Section 5. \bigskip \noindent {\bf{Lemma 6.2}}. The player can win the $(S, m)$-game only if $|G| = 1$, $m = 1$, or $(|G|, m) = (p^a, p^b)$ for some prime $p$ and $a, b \in \mathbb{N}$. \bigskip \noindent {\it{Proof}}. We mimic the proof of Lemma 2.1. Suppose that there exist distinct primes $p$ and $q$ with $v_p(|G|), v_q(m) > 0$, and without loss of generality assume $m = q$ (the proof of Lemma 3.2 applies verbatim to the $(S, m)$-game). By Cauchy's Theorem there must exist some $c \in G$ with order $p$. Let $g(x)$ denote the position of the counter currently at position $x$ after the permutation $g \in G$ is applied to the counters on the table. Call a configuration of the table {\it{semi-homogeneous}} if the counters in positions $g(1), gc(1),gc^2(1), \dots, gc^{p-1}(1)$ show the same number for all $g \in G$. We will show that for any non-semi-homogeneous configuration on the table, there is no move guaranteed to make the configuration semi-homogeneous following an arbitrary permutation of the table. \bigskip \noindent Indeed, suppose the configuration on the table prior to a permutation was $(x_1, x_2, \dots, x_n)$ and consider any move $(y_1, y_2, \dots, y_n)$. For convenience, let $x_g$ denote $x_{g(1)}$ for all $g \in G$ and define $y_g$ similarly, and let us work in $\mathbb{Z}_q$. Additionally, let $T = \{s^{-1} | s \in S\}$ be the set of inverses of elements in $S$. For this move to guarantee that the configuration of the table was semi-homogeneous after the move, the following $|S||G|$ strings of equalities would have to hold simultaneously: $$ x_{tg} + y_g = x_{tgc} + y_{gc} = \dots = x_{tgc^{p-1}} + y_{gc^{p-1}} $$ for all $t \in T$ and $g \in G$. Now, fix a specific $d \in G$ and write $dcd^{-1} = \prod_{i = 1}^{k}t_i$ for some $k \in \mathbb{N}$ and $t_1, t_2, \dots, t_k \in T$ (this representation is guaranteed to exist since $T$ generates $G$).
Using the first equality in the string with $g = d$ and $t \in \{{\bf{1}}, t_k\}$, we obtain $x_d - x_{dc} = x_{t_kd} - x_{t_kdc} = y_{dc} - y_d$. Using the first equality again with $g = t_kd$ and $t \in \{{\bf{1}}, t_{k-1}\}$ we obtain $x_{t_kd} - x_{t_kdc} = x_{t_{k-1}t_kd} - x_{t_{k-1}t_kdc} = y_{t_kdc} - y_{t_kd}$. Combining equalities we obtain $x_d - x_{dc} = x_{t_{k-1}t_kd} - x_{t_{k-1}t_kdc}$. Continuing in this fashion we obtain $$ x_d - x_{dc} = x_{t_1t_2{\dots}t_kd} - x_{t_1t_2{\dots}t_kdc} = x_{dc} - x_{dc^2}, $$ where the last equality holds since $t_1t_2{\dots}t_kd = dcd^{-1}d = dc$, and repeating the argument we have $$ x_d - x_{dc} = x_{dc} - x_{dc^2} = \dots = x_{dc^{p-1}} - x_{d}, $$ which holds for any $d \in G$. Notice that these $p$ equal expressions sum to $0$, so since $p$ is invertible modulo $q$ each expression must equal $0$, and so the configuration $(x_1, x_2, \dots, x_n)$ must have been semi-homogeneous to begin with. \bigskip \noindent Therefore if the starting configuration is not semi-homogeneous, the player can never force the configuration to be semi-homogeneous and so cannot win. $\blacksquare$ \bigskip \noindent Now we proceed to the second part of the proof of Theorem 1.2: \bigskip \noindent {\bf{Lemma 6.3}}. The player can win the $(G, m)$-game if $(|G|, m) = (p^a, p)$ for some prime $p$ and $a \in \mathbb{N}$. \bigskip \noindent {\it{Proof}}. We mimic the proof of Lemma 4.1. Suppose there exist vectors $x_0, x_1, \dots, x_{n-1}$ that form a basis for $\mathbb{Z}_p^n$ and that have the property that $x_j - g \cdot x_j$ can be written as a linear combination of $x_0, x_1, \dots, x_{j-1}$ for all $j \in \{0, 1, \dots, n-1\}$ and all $g \in G$, where $g \cdot x$ represents the permutation of the coordinates of $x$ corresponding to $g \in S_n$. For all $i \in \{1, 2, \dots, p^{n} - 1\}$, let $y_i = x_{v_p(i)}$ where $v_p$ denotes $p$-adic valuation. Then by the same reasoning as in the proof of Lemma 4.1, the sequence of moves $y_1, y_2, \dots, y_{p^{n} - 1}$ wins. \bigskip \noindent Thus it suffices to show that such a basis $x_0, x_1, \dots, x_{n-1}$ exists. Consider the orbits of the action of $G$ on $\mathbb{Z}_p^n$, which partition $\mathbb{Z}_p^n$. Since $|G| = p^a$, the Orbit-Stabilizer Theorem implies that each orbit has size $p^k$ for some $k \in \{0, 1, 2, \dots, a\}$. Since the number of vectors in $\mathbb{Z}_p^n$ is $p^n$, the number of orbits of size $1$ must be divisible by $p$. But note that $0$ has an orbit of size $1$, so there exists some nonzero $x_0 \in \mathbb{Z}_p^n$ fixed by $G$. \bigskip \noindent We can repeat the argument on the quotient space $\mathbb{Z}_p^n/\langle x_0 \rangle$ to find some nonzero $x_1 \in \mathbb{Z}_p^n/\langle x_0 \rangle$ that is fixed by $G$. Continuing in this fashion we obtain a nested sequence of subspaces $$ \langle x_0 \rangle < \langle x_0, x_1 \rangle < \dots < \langle x_0, x_1, \dots, x_{n-1} \rangle $$ each of which is fixed by $G$, and it is clear that the basis $x_0, x_1, \dots, x_{n-1}$ satisfies the desired condition. This completes the proof. $\blacksquare$ \bigskip \noindent Given a sequence of moves that wins the $(G, p)$-game, we can then induct on $b$ as in Section 5 to construct a sequence of moves that wins the $(G, p^b)$-game for any $b \in \mathbb{N}$. Furthermore, if the player can win the $(G, m)$-game then since $S \subseteq G$ he can use the same sequence of moves to win the $(S, m)$-game, so the combination of Lemmas 6.2 and 6.3 implies Theorem 1.2, as desired.
$\blacksquare$ \section{Acknowledgements} The author would like to thank Dhroova Aiylam and Alexander Katz for their helpful discussions. \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Cross-dispersed spectroscopy makes it possible to acquire information over wide spectral regions in a single exposure, by projecting several dispersion axes on the detector simultaneously. As a consequence, the reduction process required to analyze this kind of data is complicated, since different diffraction orders need to be selected, extracted, calibrated independently and combined in the final step. This difficulty led many authors to develop methods and software packages for the reduction of cross-dispersed and echelle spectra \citep[e.g.][]{moreno1982, rossi1985, piskunov2002, bochanski2009}. In the past decade the near infrared (NIR) has also been explored by cross-dispersed spectrographs, such as SpeX \citep{rayner2003} at the NASA Infrared Telescope Facility (IRTF), with a resolving power of $\sim$ 2000 and coverage from 0.8 to 5.5$\upmu$m. Other examples are TripleSpec \citep{edelstein2007} and the Folded-port Infrared Echellette (FIRE) \citep{simcoe2008}, achieving R $\sim$ 2600 and R $\sim$ 6000 respectively, and covering roughly the same wavelength domain (0.8 - 2.4$\upmu$m). Another instrument of similar capabilities is the Ohio State Infrared Imager/Spectrometer (OSIRIS), currently installed on the 4.1 m telescope of the Southern Astrophysical Research (SOAR) observatory. OSIRIS provides spectral coverage from 1.0$\upmu$m to 2.4$\upmu$m in cross-dispersed mode, with a resolving power of $\sim$ 1200. High resolution (R $\sim$ 3000) long-slit modes are also available, but multi-band spectroscopy of this kind suffers from differences in aperture and seeing. However, reduction of NIR spectra has a complexity of its own, mostly related to telluric spectral features, both in absorption and emission, and blackbody radiation from the telescope itself. A rich literature has been developed on the subject \citep[e.g.][]{maiolino1996,vacca2003,cushing2004}. There are currently no specific software packages available for the reduction of cross-dispersed spectra taken with OSIRIS. Aiming at providing a fast and highly automated task, we developed the {\sc xdspres} (acronym for cross-dispersed spectra reduction script) package. The CL language was chosen due to the availability of almost all of the basic tasks needed to perform the reduction in the Image Reduction and Analysis Facility ({\sc IRAF}) software \citep{IRAF1,IRAF2}. In \S \ref{sec:osiris} we describe the main aspects of the instrument, focusing on its effects on the reduction process. In \S \ref{sec:reduction} we describe the general steps towards a fully reduced spectrum, as well as the approach adopted by the {\sc xdspres} package to each of these steps, and finally in \S \ref{sec:summary} we give a brief summary. \section{OSIRIS} \label{sec:osiris} In this section we discuss the main aspects of the cross-dispersed mode of OSIRIS, with special attention to those characteristics that are relevant to the reduction process. A complete description of the instrument can be found in its online User's Manual\footnote{http://www.ctio.noao.edu/instruments/ir\_instruments/ \\ osiris2soar/manual/}. The detector is a 1024x1024 HAWAII array \citep{hodapp1996}, sensitive to wavelengths of up to 2.5$\upmu$m. Equation \ref{eq:linearity} models the non-linear behavior of the array, which only becomes critical above 28,000 counts. Usually the detector is read only at the end of the integration, but since it can be read non-destructively, different sampling methods could be implemented.
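As a concrete illustration, the sketch below applies the correction of Equation \ref{eq:linearity} to an array of raw counts with NumPy. This is a minimal sketch, not the actual implementation used by the {\sc xdspres} tasks: the function names are ours, and we assume the convention that the measured counts are divided by the ratio $\mbox{ADU}^\prime/\mbox{ADU}$.

\begin{verbatim}
import numpy as np

# Coefficients of the ratio ADU'/ADU from Equation (1).
LIN_COEFFS = (1.00108, -1.015777e-6, 1.548099e-10, -1.945376e-15)

def linearity_ratio(adu):
    """Evaluate the cubic polynomial of Equation (1) at the raw counts."""
    a0, a1, a2, a3 = LIN_COEFFS
    return a0 + a1 * adu + a2 * adu ** 2 + a3 * adu ** 3

def linearize(image):
    """Correct an image for the detector's non-linear response,
    assuming measured counts are divided by the ratio ADU'/ADU."""
    image = np.asarray(image, dtype=float)
    return image / linearity_ratio(image)

# Example: the correction is small at low counts and grows towards
# the critical regime near 28,000 ADU.
print(linearize(np.array([1000.0, 28000.0])))
\end{verbatim}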
A residual image is sometimes seen, especially when bright sources are observed in acquisition mode. This means that eventually some of the first spectra taken after the target acquisition images have to be discarded. Residuals have approximately 2\% of the intensity of the original source, and they should not be a problem for science exposures, which typically have counts below one thousand. {\small \begin{align} \frac{\mbox{ADU}^\prime}{\mbox{ADU}} &= 1.00108 - 1.015777 \times 10^{-6} \mbox{ADU} \notag \\ & + 1.548099 \times 10^{-10}\mbox{ADU}^2 \notag \\ & - 1.945376 \times 10^{-15}\mbox{ADU}^3 \label{eq:linearity} \end{align}} In cross-dispersed mode OSIRIS projects almost six orders onto the detector, of which three are extracted. The wavelength coverage of the extracted orders is 1.2 - 1.5, 1.5 - 1.9 and 1.9 - 2.35$\upmu$m, for the J, H and K bands respectively, all of them with R $\sim$ 1200. Orders that are not extracted include a small portion of the J band (1.0 - 1.2$\upmu$m) and second-order duplicates of the J band, located to the right of the K band. Figure \ref{fig:sky} shows an example of a sky spectrum where the three main orders are evident. Orders that are not extracted are also visible in figure \ref{fig:flats}. From figure \ref{fig:sky} it can also be noted that the dispersion axes are nearly vertical, meaning that within a given aperture each detector line corresponds to a particular wavelength. The misalignment between detector lines and the wavelength coordinate is less than one pixel from one end of the slit to the other, or less than one third of the full width at half maximum (FWHM) of an emission line in the J band. Therefore corrections to the dispersion axis orientation were not attempted, and all extractions assume a vertical dispersion. \begin{figure}[ht] \plotone{f1.eps} \caption{Spectrum showing atmospheric emission lines and identifying the orders that are projected on the chip. The exposure time for this image was 20s. Horizontal lines seen in this and subsequent images, mainly around 140 and 650 pixels, are probably the result of scattered light that reached the grating.} \label{fig:sky} \end{figure} \section{Reduction process} \label{sec:reduction} \subsection{Flat field} \label{sec:flat} In cross-dispersed mode, flat field images are taken with the cross-dispersing grism already positioned, which results in a spectrum of the flat field lamp, rather than an evenly illuminated image. Since the main purpose of a flat field is to identify pixel-to-pixel variations which are intrinsic to the detector, the continuum that corresponds to the spectral energy distribution of the lamp has to be removed. Moreover, two sets of flat fields are needed, one with the flat field lamp on and another with the lamp off. The latter is required because thermal radiation from the telescope becomes appreciable in the low energy end of the spectrum, as can be seen in Fig. \ref{fig:flats}. Typical sets consist of 10 exposures of each kind. The \textit{xdflat} task automates the preparation of a normalized flat field image, which will later be used to correct the science images. First it applies a linearity correction to all flat field images, according to equation \ref{eq:linearity}. Then both sets (flat-on and flat-off) are averaged independently, and the resulting flat-off image is subtracted from the flat-on. We have omitted a figure showing the subtracted flat because it is visually identical to the flat-on.
The only noticeable difference is the suppression of a few hot pixels at the lower portion of the image. To remove the spectrum of the flat field lamp, \textit{xdflat} begins by extracting each order. Apertures are identified by a centering algorithm ({\sc apfind}) that searches for three local maxima in the central lines of the chip. The peaks are assumed to be separated by more than 30 pixels and to have an approximate width of 80 pixels. Aperture sizes are reevaluated by setting the borders at 20\% of the peak intensity of each order. A tracing algorithm ({\sc aptrace}) moves in regular five pixel steps along the dispersion axis, assessing changes in peak location for each order, leading to a two-dimensional description of the aperture position. The aperture tracing function, a second-order Legendre polynomial, is fitted to predefined sample regions of the chip that are less affected by scattered light. Errors in the two-dimensional aperture border definitions are usually below three pixels. \begin{figure}[ht] \centering \plottwo{f2a.eps}{f2b.eps} \caption{\textit{Left:} A sample flat field image with the lamp turned off. Thermal radiation from the telescope can be seen, as the flat field exposure is taken with the grism already positioned. \textit{Right:} A flat field with the lamp turned on, clearly showing the three orders in the center. Also visible are: further J band orders at the lower left and beyond the K band to the right; hot pixels in the lower corners; two groups of cold pixels near the center of the chip; a small portion of an order at the detector's left border, between lines 400 and 800. Both images were taken with 3.2s of exposure.} \label{fig:flats} \end{figure} A 30th-order Legendre polynomial is fitted to the spectrum, which is then normalized. Such a high order polynomial is justified by the complex pattern produced by the flat field lamp as it passes through the spectrometer, as shown by figure \ref{fig:flatj}. Artificial oscillations at the apertures' limits are ignored after extraction. The typical RMS of the fit is below 5000 ADU, which may seem high but actually amounts to roughly 2\% of the average signal. The final flat-field image has all its pixel counts set to 1, except those in the regions occupied by the spectrum, which are replaced by the ratio between the original count and the fitted polynomial. \begin{figure}[ht] \centering \plotone{f3.eps} \caption{Mean flat field spectrum in the J band (\textit{grey}) and fitted function (\textit{black}). The RMS of this fit is $\sim$ 4600 ADU, which corresponds to roughly 2\% of the average signal. Artificial oscillations of the fit have no practical effect on the science spectrum, as these portions are ignored after extraction.} \label{fig:flatj} \end{figure} \subsection{Subtraction Object - Sky} In the NIR spectral region the atmosphere plays an important role. Besides significant telluric absorption, several atmospheric emission lines are entangled with the spectrum of the astronomical source (see figure \ref{fig:sky} for a sample spectrum of the sky, where the J, H and K bands are identified). The process of removing telluric emission lines is commonly known as sky subtraction, or sky chopping, and the angular size of the target dictates whether additional off-source exposures are required. In the case of point sources, which occupy only a small fraction of the slit, one can take exposures with the source in two different positions along the slit, and later subtract subsequent images.
This is the technique employed to obtain the spectra of standard stars, and it makes more efficient use of telescope time. When extended sources are observed, a separate set of exposures taken from a nearby dark region of the sky is needed, a process commonly referred to as nodding. The \textit{doosiris} task was developed to reduce spectra from extended sources; it therefore assumes that a set of sky exposures was taken along with the science exposures, in order to remove the telluric emission lines. There are two ways by which users can inform the software about the nature of each image, namely: interactively identifying them via \textit{SAOImage DS9}, or providing an ASCII file that lists the type of each exposure according to its numerical order. For further details refer to the {\sc xdspres} Manual. No attempt was made to provide a software solution for identifying different types of exposures, as specific criteria regarding the spectrum of the astronomical target would have to be predefined, adding, in our judgement, unnecessary complexity to the code. Nodding patterns that make the best use of telescope time use each sky exposure in more than one subtraction, as in O-S-O or O-S-O-O-S\footnote{Where ``O'' stands for object and ``S'' for sky}. It is thus impractical to simply subtract a combination of sky images from an equivalent combination of target ones. Instead of assessing the relevant physical quantities, a routine searches for the best telluric calibrator based on the file name index, assuming that the files are numbered sequentially in order of exposure time. \subsection{Extraction and Wavelength Calibration} Extraction of the science spectra follows the same procedures that were described in \S \ref{sec:flat}\footnote{Although \textit{doosiris} is prepared to automatically define the borders of each aperture, targets that have complex spatial profiles should be reviewed by the user.}. The sky spectrum is extracted using the same aperture definitions as the target spectrum. Wavelength calibration is based on strong OH lines present in the sky exposures, a sample of which is shown in figure \ref{fig:skyspec}. At the time of publication of this paper, OSIRIS presents what appears to be an illumination problem that produces lines across the detector, in the direction perpendicular to the dispersion axis. Since these lines can lead to confusion in the OH line identification process, a high-order polynomial is used to fit and remove the vertical profile identified between columns 980 and 1024 (see figure \ref{fig:background}). \begin{figure}[ht] \plotone{f4.eps} \caption{Average of chip columns 980 to 1024. This background profile is removed prior to wavelength calibration to avoid confusion with OH emission lines.} \label{fig:background} \end{figure} Interactive line identification is usually the best option, and since the dispersion function is almost linear there is no need to manually identify more than four well-spaced features. If the dispersion function fit is successful, the unidentified features will match those in the line list provided with {\sc xdspres}, which was extracted from \citet{oliva1992}. \textit{Doosiris} also provides an option, based on the \textit{reidentify} task (which requires a previously identified image), to automatically identify OH features in the spectrum of the sky. For $\sim$ 20 identified features, typical residuals are below 2 pixels, which translates into roughly $\pm$ 50 km s$^{-1}$.
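To make the procedure concrete, the following minimal sketch fits a low-order dispersion solution to a handful of identified OH lines with NumPy. The pixel positions and wavelengths below are invented placeholders, not entries of the line list shipped with {\sc xdspres}, and the actual task relies on the {\sc IRAF} \textit{identify}/\textit{reidentify} machinery rather than on this code.

\begin{verbatim}
import numpy as np

# Hypothetical (pixel, wavelength) pairs for four interactively
# identified OH lines; real values come from the bundled line list.
pixels = np.array([112.0, 388.0, 641.0, 903.0])
waves = np.array([15330.0, 16500.0, 17650.0, 18910.0])  # Angstrom

# The dispersion function is almost linear, so a low order suffices.
coeffs = np.polyfit(pixels, waves, deg=2)
dispersion = np.poly1d(coeffs)

# Wavelength of every detector line along the vertical dispersion axis.
wavelength_axis = dispersion(np.arange(1024))

residuals = waves - dispersion(pixels)
print("rms residual: %.3f Angstrom" % residuals.std())
\end{verbatim}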
\begin{figure}[ht] \centering \plotone{f5.eps} \caption{A sample of the sky spectrum in the H band.} \label{fig:skyspec} \end{figure} \subsection{Telluric Removal and Flux calibration} \label{sec:tell} The subtraction of sky exposures from the on-source images can obviously only account for the emission features of the atmosphere. To deal with the more subtle problem of removing atmospheric absorption, \textit{doosiris} by default uses the spectrum of an A0V star, which should be obtained just before or after the science images. If the observed standard star has a different spectral type, the model atmosphere spectra, mentioned below, have to be replaced accordingly. The standard star, being a point source, does not need a separate set of sky exposures, because it occupies only a small fraction of the slit. \textit{Doosiris} is prepared to manage two or three different star positions on the slit. In either case subsequent exposures are subtracted and the resulting images are summed; a sample of this sum can be seen in figure \ref{fig:star_sub}. After division by the normalized flat field image, both spectra are extracted and summed. \begin{figure}[ht] \centering \plotone{f6.eps} \caption{The image resulting from the subtraction of subsequent exposures of the standard star occupying two different positions along the slit.} \label{fig:star_sub} \end{figure} The spectrum of an A0V star is almost devoid of metallic absorption lines, but the H lines that are present need to be eliminated before it can be applied to the science spectrum as a telluric calibrator. The method employed here follows the reasoning of \citet{vacca2003}, but with a different implementation. It basically consists of dividing the spectrum of the standard star by a model atmosphere of Vega, obtained from R. Kurucz\footnote{http://kurucz.harvard.edu/stars.html}. First the model of Vega was smoothed by a Gaussian with $\sigma$ equal to the FWHM measured in a NeAr calibration lamp, to match the resolving power of the standard star. A spline was then fitted to the continuum, leading to a pure absorption spectrum. The latter is provided with the {\sc xdspres} package. The actual division of the reference star is performed by the \textit{telluric} task, which allows for the shifting and scaling of the model. Figure \ref{fig:tell_star} shows a comparison between the observed spectrum of the standard star and a model atmosphere for Vega. \begin{figure}[ht] \plotone{f7.eps} \caption{Comparison between the standard star H band spectrum (top) and a model atmosphere (bottom) for Vega. The resolving power of both spectra is 1200.} \label{fig:tell_star} \end{figure} Once the absorption lines due to the stellar atmosphere have been removed, the spectrum becomes essentially a black body with telluric features. Its normalization by a polynomial that acts as a pseudo-continuum returns a purely telluric spectrum. Unabsorbed regions, which translate into sample regions for continuum fitting, were identified with the aid of NSO/Kitt Peak FTS data produced by NSF/NOAO\footnote{Available at http://www.eso.org/sci/facilities/paranal/ \\ instruments/isaac/tools/spectroscopic\_standards.html}. This division of the science spectrum also allows shifting and scaling. Some of the strongest telluric bands cannot be fully removed. Additionally, the high absorption in these regions causes a significant decrease in S/N.
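The chain of operations described above can be summarized in a short NumPy/SciPy sketch. This is only an illustration of the method under the assumption of a common wavelength grid for all spectra; the actual \textit{doosiris} implementation relies on the {\sc IRAF} \textit{telluric} task, which additionally shifts and scales the model, and all function names below are ours.

\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

def telluric_spectrum(std_flux, vega_model, sigma_pix, cont_deg=5):
    """Derive a purely telluric spectrum from an A0V standard star."""
    # Smooth the Vega model to the instrumental resolution.
    model = gaussian_filter1d(vega_model, sigma_pix)
    # Remove the stellar (hydrogen) lines from the standard star.
    ratio = std_flux / model
    # Normalize by a pseudo-continuum; real sample regions should
    # avoid the strong telluric bands.
    x = np.arange(ratio.size)
    continuum = np.polyval(np.polyfit(x, ratio, cont_deg), x)
    return ratio / continuum

def remove_telluric(sci_flux, telluric):
    """Divide the science spectrum by the telluric spectrum."""
    return sci_flux / telluric
\end{verbatim}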
The same polynomial employed as a pseudo-continuum for the reference star is later used to produce an independent sensitivity function for each aperture, by comparing it to a black body of 9480 K. This procedure restores the correct slope of the spectrum regardless of the accuracy in absolute flux. The latter is estimated from the exposure time and the magnitude of the standard star, which has to be provided by the user. Figure \ref{fig:stages} shows the effects of telluric line removal and flux calibration on a sample spectrum. \begin{figure}[ht] \plotone{f8.eps} \caption{From top to bottom: a) Sample spectrum in the J band just after wavelength calibration. b) Same spectrum after the removal of telluric lines. c) Flux calibrated spectrum. Areas of strong telluric absorption cannot be fully corrected, and even if they could, the signal-to-noise ratio would still be much lower than in the rest of the spectrum.} \label{fig:stages} \end{figure} One would expect that a good flux calibration leads to a perfect alignment of the spectrum between different apertures. Although this is generally true, it has been observed that agreement is harder to achieve where the H and K bands meet. Strong telluric absorption bands near 1.9$\upmu$m hamper the evaluation of the sensitivity function, causing large deviations in the final spectrum. Figure \ref{fig:complete} shows a completely reduced spectrum encompassing the whole spectral range. \begin{figure}[ht] \plotone{f9.eps} \caption{Flux calibrated spectrum in the whole spectral range. Aperture transitions are at 1.5$\upmu$m and 1.9$\upmu$m. Strong telluric absorption bands that dominate the spectrum between 1.8 and 2.0$\upmu$m hamper the alignment between the H and K bands.} \label{fig:complete} \end{figure} \section{Summary} \label{sec:summary} We have presented the {\sc xdspres} CL-based package, consisting of the \textit{xdflat} and \textit{doosiris} tasks, aimed at being a complete reduction facility for cross-dispersed spectra taken with the OSIRIS spectrometer, currently installed at the SOAR telescope. This particular instrument provides a relatively large spectral coverage, being able to project the full range between 1.2$\upmu$m and 2.35$\upmu$m over the detector in a single exposure. The presence of several diffraction orders in the same image adds complexity to the already lengthy reduction of infrared spectroscopy data. {\sc xdspres} automatically performs the more mechanical and time-consuming steps of the reduction, at the same time that it allows considerable user interaction in the more subjective stages. In addition, the possibility of a fast reduction provides the means to make on-site adjustments to the observing strategy. For an example of published data fully reduced with the {\sc xdspres} tasks, see \citet{riffel2011}. The complete software package and its documentation are available to the community at the web site \textit{http://www.if.ufrgs.br/$\sim$ruschel/software}. \subsection*{Acknowledgments} We thank an anonymous referee for very interesting comments that considerably increased the quality of the present paper. DRD acknowledges the support of the Brazilian research funding agency CNPq. OSIRIS is a collaborative project between the Ohio State University and Cerro Tololo Inter-American Observatory (CTIO) and was developed through NSF grants AST 90-16112 and AST 92-18449. CTIO is part of the National Optical Astronomy Observatory (NOAO), based in La Serena, Chile.
NOAO is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under a cooperative agreement with the National Science Foundation. This work is based on observations from the SOAR telescope, a collaboration among the Minist\'erio da Ci\^encia e Tecnologia/Brazil, NOAO, UNC, and MSU.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Consider a system of linear equations \begin{equation} \label{eq:the_system} \Theta\vec x=\vec y \end{equation} with $\vec x\in\R^m$, $\vec y\in\R^n$ and \[ \Theta= \begin{pmatrix} \theta_{11} & \cdots & \theta_{1m} \\ \vdots & \ddots & \vdots \\ \theta_{n1} & \cdots & \theta_{nm} \end{pmatrix},\qquad \theta_{ij}\in\R. \] The classical measure of how well the space of solutions to this system can be approximated by integer points is defined as follows. Let $|\cdot|$ denote the sup-norm in the corresponding space. \begin{definition} \label{def:belpha_1} The supremum of the real numbers $\gamma$, such that there are arbitrarily large values of $t$ for which (resp. such that for every $t$ large enough) the system of inequalities \begin{equation} \label{eq:belpha_1_definition} |\vec x|\leq t,\qquad|\Theta\vec x-\vec y|\leq t^{-\gamma} \end{equation} has a nonzero solution in $(\vec x,\vec y)\in\Z^m\oplus\Z^n$, is called the \emph{regular} (resp. \emph{uniform}) \emph{Diophantine exponent} of $\Theta$ and is denoted by $\beta_1$ (resp. $\alpha_1$). \end{definition} This paper is the result of an attempt to generalize this concept to the problem of approximating the space of solutions to \eqref{eq:the_system} by $p$-dimensional rational subspaces of $\R^{m+n}$. A large amount of work in this direction was done by W.~Schmidt in \cite{schmidt_annals}. Later, in \cite{laurent_up_down}, \cite{bugeaud_laurent_up_down}, a corresponding definition was given by M.~Laurent and Y.~Bugeaud in the case when $m=1$. With their definition they were able to split the classical Khintchine transference principle into a chain of inequalities for intermediate exponents. However, the way we defined $\alpha_1$ and $\beta_1$ naturally suggests a generalization, which appears to be different from Laurent's: \begin{definition} \label{def:belpha_p} The supremum of the real numbers $\gamma$, such that there are arbitrarily large values of $t$ for which (resp. such that for every $t$ large enough) the system of inequalities \begin{equation} \label{eq:belpha_p_definition} |\vec x|\leq t,\qquad|\Theta\vec x-\vec y|\leq t^{-\gamma} \end{equation} has $p$ solutions $\vec z_i=(\vec x_i,\vec y_i)\in\Z^m\oplus\Z^n$, $i=1,\ldots,p$, linearly independent over $\Z$, is called the \emph{$p$-th regular (resp. uniform) Diophantine exponent of the first type} of $\Theta$ and is denoted by $\beta_p$ (resp. $\alpha_p$). \end{definition} In Section \ref{sec:laurexp} we propose a definition of intermediate exponents of the second type, which is consistent with Laurent's. In the subsequent Sections we show the connection between these two generalizations and some exponents that naturally emerge in Schmidt's parametric geometry of numbers developed in \cite{schmidt_summerer}. Then we discuss the properties of these quantities, generalize some of the observations made in \cite{schmidt_summerer}, and split Dyson's transfer inequality into a chain of inequalities for the intermediate exponents of the second type. \section{Laurent's exponents and their generalization} \label{sec:laurexp} Set $d=m+n$. Let us denote by $\pmb\ell_1,\ldots,\pmb\ell_d$ the columns of the matrix \[ \begin{pmatrix} E_m & -\tr\Theta\\ \Theta & E_n \end{pmatrix}, \] where $E_m$ and $E_n$ are the corresponding unity matrices and $\tr\Theta$ is the transpose of $\Theta$. Clearly, $\cL=\spanned_\R(\pmb\ell_1,\ldots,\pmb\ell_m)$ is the space of solutions to the system \eqref{eq:the_system}, and $\cL^\bot=\spanned_\R(\pmb\ell_{m+1},\ldots,\pmb\ell_d)$.
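Indeed, the latter equality is verified directly: for $1\leq i\leq m$ and $1\leq j\leq n$ a straightforward computation gives \[ \langle\pmb\ell_i,\pmb\ell_{m+j}\rangle=-\theta_{ji}+\theta_{ji}=0, \] so the $n$ vectors $\pmb\ell_{m+1},\ldots,\pmb\ell_d$, which are linearly independent (their last $n$ coordinates form the unity matrix $E_n$), are all orthogonal to $\cL$.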
Denote also by $\vec e_1,\ldots,\vec e_d$ the columns of the $d\times d$ unity matrix $E_d$. The following definition is a slight modification of Laurent's. \begin{definition} \label{def:ba_for_m_equal_to_1} Let $m=1$. The supremum of the real numbers $\gamma$, such that there are arbitrarily large values of $t$ for which (resp. such that for every $t$ large enough) the system of inequalities \begin{equation} \label{eq:ba_for_m_equal_to_1} |\vec Z|\leq t,\qquad|\pmb\ell_1\wedge\vec Z|\leq t^{-\gamma} \end{equation} has a nonzero solution in $\vec Z\in\wedge^p(\Z^d)$ is called the \emph{$p$-th regular (resp. uniform) Diophantine exponent of the second type} of $\Theta$ and is denoted by $\gb_p$ (resp. $\ga_p$). \end{definition} Here $\vec Z\in\wedge^p(\R^d)$, $\pmb\ell_1\wedge\vec Z\in\wedge^{p+1}(\R^d)$ and for each $q$ we consider $\wedge^q(\R^d)$ as a $\binom dq$-dimensional Euclidean space with the orthonormal basis consisting of the multivectors \[ \vec e_{i_1}\wedge\ldots\wedge\vec e_{i_q},\qquad 1\leq i_1<\ldots<i_q\leq d, \] and denote by $|\cdot|$ the sup-norm with respect to this basis. Laurent denoted the exponents $\gb_p$, $\ga_p$ as $\omega_{p-1}$, $\hat\omega_{p-1}$, respectively, and showed that for $p=1$ they coincide with $\beta_1$, $\alpha_1$. He also noticed that one does not have to require $\vec Z$ to be decomposable in Definition \ref{def:ba_for_m_equal_to_1}, which substantially simplifies working in $\wedge^p(\R^d)$. In order to generalize Definition \ref{def:ba_for_m_equal_to_1} let us set for each $\sigma=\{i_1,\ldots,i_k\}$, $1\leq i_1<\ldots<i_k\leq d$, \begin{equation} \label{eq:L_sigma} \vec L_\sigma=\pmb\ell_{i_1}\wedge\ldots\wedge\pmb\ell_{i_k}, \end{equation} denote by $\cJ_k$ the set of all the $k$-element subsets of $\{1,\ldots,m\}$, $k=0,\ldots,m$, and set $\vec L_\varnothing=1$. Let us also set $k_0=\max(0,m-p)$. \begin{definition} \label{def:ba} The supremum of the real numbers $\gamma$, such that there are arbitrarily large values of $t$ for which (resp. such that for every $t$ large enough) the system of inequalities \begin{equation} \label{eq:ba} \max_{\sigma\in\cJ_k}|\vec L_\sigma\wedge\vec Z|\leq t^{1-(k-k_0)(1+\gamma)},\qquad k=0,\ldots,m, \end{equation} has a nonzero solution in $\vec Z\in\wedge^p(\Z^d)$ is called the \emph{$p$-th regular (resp. uniform) Diophantine exponent of the second type} of $\Theta$ and is denoted by $\gb_p$ (resp. $\ga_p$). \end{definition} We have tried to make this definition look as simple as possible. However, it will be more convenient to work with in the multilinear algebra setting after it is slightly reformulated. To give the desired reformulation let us set for each $\sigma=\{i_1,\ldots,i_k\}$, $1\leq i_1<\ldots<i_k\leq d$, \begin{equation} \label{eq:E_sigma} \vec E_\sigma=\vec e_{i_1}\wedge\ldots\wedge\vec e_{i_k}, \end{equation} denote by $\cJ'_k$ the set of all the $k$-element subsets of $\{m+1,\ldots,d\}$, $k=0,\ldots,n$, and set $\vec E_\varnothing=1$. Set also $k_1=\min(m,d-p)$. \begin{proposition} \label{prop:ba_substitution} The inequalities \eqref{eq:ba} can be substituted by \begin{equation} \label{eq:ba_modified} \max_{\begin{subarray}{c} \sigma\in\cJ_k \\ \sigma'\in\cJ'_{d-p-k} \end{subarray}} |\vec L_\sigma\wedge\vec E_{\sigma'}\wedge\vec Z|\leq t^{1-(k-k_0)(1+\gamma)},\qquad k=k_0,\ldots,k_1.
\end{equation} \end{proposition} \begin{proof} Since $\pmb\ell_1,\ldots,\pmb\ell_m,\vec e_{m+1},\ldots,\vec e_d$ form a basis of $\R^d$, for each $q=1,\dots,d$ the multivectors \[ \vec L_\rho\wedge\vec E_{\rho'},\qquad\rho\in\cJ_j,\ \rho'\in\cJ'_{q-j},\ \max(0,q-n)\leq j\leq\min(q,m), \] form a basis of $\wedge^q(\R^d)$. Let us denote by $|\cdot|_\Theta$ the sup-norm in each $\wedge^q(\R^d)$ with respect to such a basis. Since any two norms in a Euclidean space are equivalent, and since in Definition \ref{def:ba} we are concerned only about exponents, we can substitute \eqref{eq:ba} by \begin{equation} \label{eq:ba_Thetanized} \max_{\sigma\in\cJ_k}|\vec L_\sigma\wedge\vec Z|_\Theta\leq t^{1-(k-k_0)(1+\gamma)},\qquad k=0,\ldots,m, \end{equation} and \eqref{eq:ba_modified} by \begin{equation} \label{eq:ba_modified_Thetanized} \max_{\begin{subarray}{c} \sigma\in\cJ_k \\ \sigma'\in\cJ'_{d-p-k} \end{subarray}} |\vec L_\sigma\wedge\vec E_{\sigma'}\wedge\vec Z|_\Theta\leq t^{1-(k-k_0)(1+\gamma)},\qquad k=k_0,\ldots,k_1. \end{equation} Writing \[ \vec Z=\sum_{j=\max(0,p-n)}^{\min(p,m)}\sum_{\begin{subarray}{c} \rho\in\cJ_j \\ \rho'\in\cJ'_{p-j} \end{subarray}} Z_{\rho,\rho'}\vec L_\rho\wedge\vec E_{\rho'}, \] we see that \eqref{eq:ba_Thetanized} for each $k$ means exactly that \[ \ Z_{\rho,\rho'}=0,\qquad\qquad\qquad\quad\ \text{ if }\rho\in\cJ_j,\ j>m-k, \] \begin{equation} \label{eq:coordinates_filtered} |Z_{\rho,\rho'}|\leq t^{1-(k-k_0)(1+\gamma)},\qquad\text{ if }\rho\in\cJ_j,\ j\leq m-k. \end{equation} Hence we see that all the inequalities in \eqref{eq:ba_Thetanized} with $k>k_1$ are trivial. Next, since we are concerned about large values of $t$, by Minkowski's first convex body theorem we may confine ourselves to considering only positive values of $1+\gamma$. Then the function $t^{1-(k-k_0)(1+\gamma)}$ is non-increasing with respect to $k$, so for each $\rho\in\cJ_j$ we may keep, among all the inequalities \eqref{eq:coordinates_filtered}, only the one with the largest $k$, i.e. the one with $k=m-j$. Thus, \eqref{eq:ba_Thetanized} becomes equivalent to \begin{equation} \label{eq:coordinates_graduated} |Z_{\rho,\rho'}|\leq t^{1-(k-k_0)(1+\gamma)},\qquad\text{ if }\rho\in\cJ_{m-k},\ k_0\leq k\leq k_1.\phantom{,\ \rho'\in\cJ'_{p-m+k}} \end{equation} On the other hand, \eqref{eq:ba_modified_Thetanized} means that \begin{equation} \label{eq:coordinates_sandwiched} |Z_{\rho,\rho'}|\leq t^{1-(k-k_0)(1+\gamma)},\qquad\text{ if }\rho\in\cJ_{m-k},\ \rho'\in\cJ'_{p-m+k},\ k_0\leq k\leq k_1, \end{equation} which is obviously equivalent to \eqref{eq:coordinates_graduated}. \end{proof} \section{Schmidt's exponents} Let $\La$ be a unimodular $d$-dimensional lattice in $\R^d$. Denote by $\cB_\infty^d$ the unit ball in sup-norm, i.e. the cube with vertices at the points $(\pm1,\ldots,\pm1)$. For each vector $\pmb\tau=(\tau_1,\ldots,\tau_d)\in\R^d$ denote by $D_{\pmb\tau}$ the diagonal $d\times d$ matrix with $e^{\tau_1},\ldots,e^{\tau_d}$ on the main diagonal. Let us also denote by $\lambda_p(M)$ the $p$-th successive minimum of a compact symmetric convex body $M\subset\R^d$ (centered at the origin) with respect to the lattice $\La$. Suppose we have a path $\gT$ in $\R^d$ defined as $\pmb\tau=\pmb\tau(s)$, $s\in\R_+$, such that \begin{equation} \label{eq:sum_of_taus_is_zero} \tau_1(s)+\ldots+\tau_d(s)=0,\quad\text{ for all }s.
\end{equation} In our further applications to Diophantine approximation we shall confine ourselves to a path that is a ray with the endpoint at the origin and all the functions $\tau_1(s),\ldots,\tau_d(s)$ being linear. However, in this Section, as well as in the next one, all the definitions and statements are given for arbitrary paths and lattices. Set $\cB(s)=D_{\pmb\tau(s)}\cB_\infty^d$. Consider the functions \[ \psi_p(\La,\gT,s)=\frac{\ln(\lambda_p(\cB(s)))}{s},\qquad p=1,\ldots,d. \] \begin{definition} \label{def:schmidt_psi} We call the quantities \[ \bpsi_p(\La,\gT)=\liminf_{s\to+\infty}\psi_p(\La,\gT,s),\qquad \apsi_p(\La,\gT)=\limsup_{s\to+\infty}\psi_p(\La,\gT,s) \] \emph{the $p$-th lower} and \emph{upper Schmidt's exponents of the first type}, respectively. \end{definition} \begin{definition} \label{def:schmidt_Psi} We call the quantities \[ \bPsi_p(\La,\gT)=\liminf_{s\to+\infty}\bigg(\sum_{i=1}^p\psi_i(\La,\gT,s)\bigg)\,,\qquad \aPsi_p(\La,\gT)=\limsup_{s\to+\infty}\bigg(\sum_{i=1}^p\psi_i(\La,\gT,s)\bigg) \] \emph{the $p$-th lower} and \emph{upper Schmidt's exponents of the second type}, respectively. \end{definition} Sometimes, when it is clear from the context what lattice and what path are under consideration, we shall write simply $\psi_p(s)$, $\bpsi_p$, $\apsi_p$, $\bPsi_p$, and $\aPsi_p$. The following Proposition and its Corollaries generalize some of the observations made in \cite{schmidt_summerer} and \cite{bugeaud_laurent_up_down}. \begin{proposition} \label{prop:mink} For any $\La$ and $\gT$ we have \begin{equation} \label{eq:mink} 0\leq-\sum_{i=1}^d\psi_i(s)=O(s^{-1}). \end{equation} Particularly, \begin{equation} \label{eq:mink_0} \bPsi_d=\aPsi_d=\lim_{s\to\infty}\sum_{i=1}^d\psi_i(s)=0. \end{equation} \end{proposition} \begin{proof} Due to \eqref{eq:sum_of_taus_is_zero} the volumes of all the parallelepipeds $\cB(s)$ are equal to $2^d$, so by Minkowski's second theorem we have \[ \frac{1}{d!}\leq\prod_{i=1}^d\lambda_i(\cB(s))\leq1. \] Hence \[ -\frac{\ln(d!)}{s}\leq\sum_{i=1}^d\psi_i(s)\leq0, \] which immediately implies \eqref{eq:mink}; \eqref{eq:mink_0} then follows by letting $s\to\infty$. \end{proof} \begin{corollary} \label{cor:Psi_inter_dyson} For any $\La$ and $\gT$ and any $p$ within the range $1\leq p\leq d-2$ we have \begin{equation} \label{eq:Psi_inter_dyson} \frac{\bPsi_{p+1}}{d-p-1}\leq\frac{\bPsi_p}{d-p}\qquad\text{ and }\qquad\frac{\aPsi_{p+1}}{d-p-1}\leq\frac{\aPsi_p}{d-p}\,. \end{equation} \end{corollary} \begin{proof} Since $\psi_{p+1}(s)\leq\psi_{p+2}(s)\leq\ldots\leq\psi_d(s)$, it follows from \eqref{eq:mink} that \[ \psi_{p+1}(s)\leq\frac{-1}{d-p}\sum_{i=1}^p\psi_i(s), \] whence \[ \sum_{i=1}^{p+1}\psi_i(s)\leq\left(1-\frac{1}{d-p}\right)\sum_{i=1}^p\psi_i(s). \] It remains to take the $\liminf$ and the $\limsup$ of both sides as $s\to\infty$. \end{proof} Applying \eqref{eq:Psi_inter_dyson} successively we get the following statement. \begin{corollary} \label{cor:Psi_dyson} For any $\La$ and $\gT$ we have \begin{equation} \label{eq:Psi_dyson} (d-1)\bPsi_{d-1}\leq\bPsi_1\qquad\text{ and }\qquad(d-1)\aPsi_{d-1}\leq\aPsi_1. \end{equation} \end{corollary} Another simple corollary to Proposition \ref{prop:mink} is the following statement. \begin{corollary} \label{cor:Psi_and_psi} For any $\La$ and $\gT$ we have \begin{equation} \label{eq:Psi_and_psi} \bPsi_{d-1}=-\apsi_d\qquad\text{ and }\qquad\aPsi_{d-1}=-\bpsi_d. \end{equation} \end{corollary} As we shall see later, the first of the inequalities \eqref{eq:Psi_dyson} generalizes Khintchine's and Dyson's transference inequalities.
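For completeness, we record the short computation behind Corollary \ref{cor:Psi_and_psi}: by \eqref{eq:mink}, \[ \sum_{i=1}^{d-1}\psi_i(s)=-\psi_d(s)+O(s^{-1}), \] so taking the lower and upper limits as $s\to+\infty$ gives $\bPsi_{d-1}=-\apsi_d$ and $\aPsi_{d-1}=-\bpsi_d$.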
\section{Schmidt's exponents of the second type from the point of view of multilinear algebra} As before, let us consider the space $\wedge^p(\R^d)$ as the $\binom dp$-dimensional Euclidean space with the orthonormal basis consisting of the multivectors \[ \vec e_{i_1}\wedge\ldots\wedge\vec e_{i_p},\qquad 1\leq i_1<\ldots<i_p\leq d. \] Let us order the set of the $p$-element subsets of $\{1,\ldots,d\}$ lexicographically and denote the $j$-th subset by $\sigma_j$. To each vector $\pmb\tau=(\tau_1,\ldots,\tau_d)$ let us associate the vector \begin{equation} \label{eq:T_hat} \widehat{\pmb\tau}=\Big(\widehat\tau_1,\ldots,\widehat\tau_r\Big),\qquad\widehat\tau_j=\sum_{i\in\sigma_j}\tau_i,\qquad r={\binom dp}. \end{equation} Thus, a path $\gT:s\to\pmb\tau(s)$ leads us by \eqref{eq:T_hat} to the path $\widehat\gT:s\to\widehat{\pmb\tau}(s)$ also satisfying the condition \[ \widehat\tau_1(s)+\ldots+\widehat\tau_r(s)=0,\quad\text{ for all }s. \] Finally, given a lattice $\La\subset\R^d$, let us associate to it the lattice $\widehat\La=\wedge^p(\La)$. \begin{proposition} \label{prop:Psi_p_is_Psi_1} For any $\La$ and $\gT$ we have \[ \bPsi_p(\La,\gT)=\bPsi_1(\widehat\La,\widehat\gT)=\bpsi_1(\widehat\La,\widehat\gT) \quad\text{ and }\quad \aPsi_p(\La,\gT)=\aPsi_1(\widehat\La,\widehat\gT)=\apsi_1(\widehat\La,\widehat\gT). \] \end{proposition} \begin{proof} Let us denote by $\lambda_i(M)$ the $i$-th successive minimum of a body $M$ with respect to $\La$ if $M\subset\R^d$ and with respect to $\widehat\La$ if $M\subset\wedge^p(\R^d)$. The matrix $D_{\widehat{\pmb\tau}}$ is the $p$-th compound of $D_{\pmb\tau}$: \[ D_{\widehat{\pmb\tau}}=D_{\pmb\tau}^{(p)}. \] This means that $D_{\widehat{\pmb\tau}}\cB_\infty^r$ is comparable to Mahler's $p$-th compound convex body of $D_{\pmb\tau}\cB_\infty^d$ (see \cite{mahler_compound_I}), i.e. there is a positive constant $c$ depending only on $d$, such that \begin{equation} \label{eq:comparable_to_pseudo_compound} c^{-1}D_{\widehat{\pmb\tau}}\cB_\infty^r\subset[D_{\pmb\tau}\cB_\infty^d]^{(p)}\subset cD_{\widehat{\pmb\tau}}\cB_\infty^r. \end{equation} In \cite{schmidt_DA} the set $D_{\widehat{\pmb\tau}}\cB_\infty^r$ is called the $p$-th pseudo-compound parallelepiped for $D_{\pmb\tau}\cB_\infty^d$. It follows from Mahler's theory of compound bodies that \begin{equation} \label{eq:first_minimum_vs_product} \lambda_1\left([D_{\pmb\tau}\cB_\infty^d]^{(p)}\right)\asymp\prod_{i=1}^p\lambda_i\left(D_{\pmb\tau}\cB_\infty^d\right) \end{equation} with the implied constants depending only on $d$. Combining \eqref{eq:comparable_to_pseudo_compound} and \eqref{eq:first_minimum_vs_product} we get \[ \ln\left(\lambda_1\left(D_{\widehat{\pmb\tau}(s)}\cB_\infty^r\right)\right)= \sum_{i=1}^p\ln\left(\lambda_i\left(D_{\pmb\tau(s)}\cB_\infty^d\right)\right)+O(1), \] whence \[ \psi_1(\widehat\La,\widehat\gT,s)=\sum_{i=1}^p\psi_i(\La,\gT,s)+o(1). \] It remains to take the $\liminf$ and the $\limsup$ of both sides as $s\to\infty$. \end{proof} \section{Diophantine exponents in terms of Schmidt's exponents} Let $\pmb\ell_1,\ldots,\pmb\ell_d$, $\vec e_1,\ldots,\vec e_d$ be as in Section \ref{sec:laurexp}. Set \[ T= \begin{pmatrix} E_m & 0 \\ \Theta & E_n \end{pmatrix}. \] Then \[ \tr{(T^{-1})}= \begin{pmatrix} E_m & -\tr\Theta \\ 0 & E_n \end{pmatrix}, \] so the bases $\pmb\ell_1,\ldots,\pmb\ell_m,\vec e_{m+1},\ldots,\vec e_d$ and $\vec e_1,\ldots,\vec e_m,\pmb\ell_{m+1},\ldots,\pmb\ell_d$ are dual. Let us specify a lattice $\La$ and a path $\gT$ as follows.
Set \begin{equation} \label{eq:La} \La=T^{-1}\Z^d=\Big\{ \Big(\langle\vec e_1,\vec z\rangle,\ldots,\langle\vec e_m,\vec z\rangle,\langle\pmb\ell_{m+1},\vec z\rangle,\ldots,\langle\pmb\ell_d,\vec z\rangle\Big)\in\R^d \,\Big|\, \vec z\in\Z^d \Big\} \end{equation} and define $\gT:s\mapsto\pmb\tau(s)$ by \begin{equation} \label{eq:path} \tau_1(s)=\ldots=\tau_m(s)=s,\quad\tau_{m+1}(s)=\ldots=\tau_d(s)=-ms/n. \end{equation} Schmidt's exponents $\bpsi_p$, $\apsi_p$ corresponding to such $\La$ and $\gT$ and the exponents $\beta_p$, $\alpha_p$ are but two different points of view on the same phenomenon. The same can be said about $\bPsi_p$, $\aPsi_p$ and $\gb_p$, $\ga_p$. This is made precise in the following two Propositions. \begin{proposition} \label{prop:belpha_via_psis} We have \begin{equation} \label{eq:belpha_via_psis} (1+\beta_p)(1+\bpsi_p)=(1+\alpha_p)(1+\apsi_p)=d/n. \end{equation} \end{proposition} \begin{proof} The parallelepiped in $\R^d$ defined by \eqref{eq:belpha_1_definition} can be written as \begin{equation*} M_\gamma(t)=\Big\{ \vec z\in\R^d \,\Big|\, \max_{1\leq j\leq m}|\langle\vec e_j,\vec z\rangle|\leq t,\ \max_{1\leq i\leq n}|\langle\pmb\ell_{m+i},\vec z\rangle|\leq t^{-\gamma} \Big\}, \end{equation*} where $\langle\,\cdot\,,\cdot\,\rangle$ denotes the inner product in $\R^d$. Therefore, $\beta_p$ (resp. $\alpha_p$) equals the supremum of the real numbers $\gamma$, such that there are arbitrarily large values of $t$ for which (resp. such that for every $t$ large enough) the parallelepiped $M_\gamma(t)$ contains $p$ linearly independent integer points. Hence, considering the parallelepipeds \begin{equation} \label{eq:P_gamma} P_\gamma(t)=T^{-1}M_\gamma(t)=\Big\{ \vec z\in\R^d \,\Big|\, \max_{1\leq j\leq m}|\langle\vec e_j,\vec z\rangle|\leq t,\ \max_{1\leq i\leq n}|\langle\vec e_{m+i},\vec z\rangle|\leq t^{-\gamma} \Big\}, \end{equation} we see that \begin{equation} \label{eq:belpha_p_via_parallelepipeds} \beta_p=\limsup_{t\to+\infty}\big\{ \gamma\in\R \,\big|\, \lambda_p(P_\gamma(t))=1 \big\}\,,\qquad\alpha_p=\liminf_{t\to+\infty}\big\{ \gamma\in\R \,\big|\, \lambda_p(P_\gamma(t))=1 \big\}, \end{equation} where $\lambda_p(P_\gamma(t))$ is the $p$-th minimum of $P_\gamma(t)$ with respect to $\La$. But $P_{m/n}(t)=D_{\pmb\tau(\ln t)}\cB_\infty^d$, so \begin{equation} \label{eq:psis_via_parallelepipeds} \bpsi_p(\La,\gT)=\liminf_{t\to+\infty}\frac{\ln(\lambda_p(P_{m/n}(t)))}{\ln t}\,,\qquad\apsi_p(\La,\gT)=\limsup_{t\to+\infty}\frac{\ln(\lambda_p(P_{m/n}(t)))}{\ln t}\,. \end{equation} A simple calculation shows that \[ P_\gamma(t)=t^{\frac{m-n\gamma}{d}}P_{m/n}\big(t^{\frac{n+n\gamma}{d}}\big), \] i.e. \[ \lambda_p(P_\gamma(t))=(t')^{\frac{-m+n\gamma}{n+n\gamma}}\lambda_p\big(P_{m/n}(t')\big) \] with $t'=t^{\frac{n+n\gamma}{d}}$. Therefore, the equality \[ \lambda_p(P_\gamma(t))=1 \] holds if and only if \[ 1-\frac{d}{n+n\gamma}+\frac{\ln(\lambda_p(P_{m/n}(t')))}{\ln t'}=0. \] Hence, in view of \eqref{eq:belpha_p_via_parallelepipeds}, \eqref{eq:psis_via_parallelepipeds}, we get \[ \beta_p=\limsup_{t\to+\infty}\left\{ \frac dn\left( 1+\frac{\ln(\lambda_p(P_{m/n}(t)))}{\ln t} \right)^{-1}-1 \right\}=\frac dn\left(1+\bpsi_p\right)^{-1}-1 \] and \[ \alpha_p=\liminf_{t\to+\infty}\left\{ \frac dn\left( 1+\frac{\ln(\lambda_p(P_{m/n}(t)))}{\ln t} \right)^{-1}-1 \right\}=\frac dn\left(1+\apsi_p\right)^{-1}-1, \] which immediately implies \eqref{eq:belpha_via_psis}. \end{proof} \begin{proposition} \label{prop:ba_via_Psis} Set $\varkappa=\min(p,\frac mn(d-p))$.
Then \begin{equation} \label{eq:ba_via_Psis} (1+\gb_p)(\varkappa+\bPsi_p)=(1+\ga_p)(\varkappa+\aPsi_p)=d/n. \end{equation} \end{proposition} \begin{proof} Let $\vec L_\sigma$, $\vec E_\sigma$, $\cJ_k$, $\cJ'_k$ be as in Section \ref{sec:laurexp}. Since $T^{-1}\pmb\ell_i=\vec e_i$ for $1\leq i\leq m$ and $T^{-1}\vec e_j=\vec e_j$ for $m+1\leq j\leq d$, we have \begin{equation} \label{eq:T_deapplied_to_LE} (T^{-1})^{(k+k')}(\vec L_\sigma\wedge\vec E_{\sigma'})=\vec E_\sigma\wedge\vec E_{\sigma'},\qquad\text{ for each } \sigma\in\cJ_k,\ \sigma'\in\cJ'_{k'}, \end{equation} where $(T^{-1})^{(k+k')}$ is the $(k+k')$-th compound of $T^{-1}$. Furthermore, since $\La=T^{-1}\Z^d$, we have \begin{equation} \label{eq:T_and_La} \widehat\La=\wedge^p(\La)=(T^{-1})^{(p)}(\wedge^p(\Z^d)). \end{equation} Hence for each $\vec Z\in\wedge^p(\Z^d)$ and each $\sigma\in\cJ_k$, $\sigma'\in\cJ'_{d-p-k}$ (with $k$ satisfying $k_0\leq k\leq k_1$) we get \begin{equation} \label{eq:T_deapplied_to_LEZ} |\vec L_\sigma\wedge\vec E_{\sigma'}\wedge\vec Z|= |(T^{-1})^{(d-p)}(\vec L_\sigma\wedge\vec E_{\sigma'})\wedge(T^{-1})^{(p)}\vec Z|= |\vec E_\sigma\wedge\vec E_{\sigma'}\wedge\vec Z'|, \end{equation} where $\vec Z'\in\widehat\La$. Here, besides \eqref{eq:T_deapplied_to_LE}, \eqref{eq:T_and_La}, we have made use of the fact that for every $\vec V\in\wedge^p(\R^d)$, $\vec W\in\wedge^{d-p}(\R^d)$ the wedge product $\vec V\wedge\vec W$ is a real number and \[ |\vec V\wedge\vec W|=|T^{(p)}\vec V\wedge T^{(d-p)}\vec W|, \] provided $\det T=1$. Taking into account that any two norms in a Euclidean space are equivalent, we conclude from \eqref{eq:T_deapplied_to_LEZ} and Proposition \ref{prop:ba_substitution} that $\gb_p$ (resp. $\ga_p$) equals the supremum of the real numbers $\gamma$, such that there are arbitrarily large values of $t$ for which (resp. such that for every $t$ large enough) the system of inequalities \begin{equation} \label{eq:ba_modified_with_T_applied} \max_{\begin{subarray}{c} \sigma\in\cJ_k \\ \sigma'\in\cJ'_{d-p-k} \end{subarray}} |\vec E_\sigma\wedge\vec E_{\sigma'}\wedge\vec Z|\leq t^{1-(k-k_0)(1+\gamma)},\qquad k=k_0,\ldots,k_1, \end{equation} has a nonzero solution in $\vec Z\in\widehat\La$. The inequalities \eqref{eq:ba_modified_with_T_applied} define the parallelepiped \begin{equation} \label{eq:P_gamma_hat} \widehat P_\gamma(t)=\Big\{ \vec Z\in\wedge^p(\R^d) \,\Big|\, \max_{\begin{subarray}{c} \sigma\in\cJ_{m-k} \\ \sigma'\in\cJ'_{p-m+k} \end{subarray}} |\langle\vec E_\sigma\wedge\vec E_{\sigma'},\vec Z\rangle|\leq t^{1-(k-k_0)(1+\gamma)},\ k=k_0,\ldots,k_1 \Big\}. \end{equation} By analogy with \eqref{eq:belpha_p_via_parallelepipeds} we can write \begin{equation} \label{eq:ba_via_parallelepipeds} \gb_p=\limsup_{t\to+\infty}\Big\{ \gamma\in\R \ \Big|\, \lambda_1\big(\widehat P_\gamma(t)\big)=1 \Big\}\,,\qquad \ga_p=\liminf_{t\to+\infty}\Big\{ \gamma\in\R \ \Big|\, \lambda_1\big(\widehat P_\gamma(t)\big)=1 \Big\}, \end{equation} where $\lambda_1\big(\widehat P_\gamma(t)\big)$ is the first minimum of $\widehat P_\gamma(t)$ with respect to $\widehat\La$. Consider the path $\widehat\gT$ defined by \eqref{eq:T_hat} for $\gT$. Then \[ \widehat\tau_j(s)=\sum_{i\in\sigma_j}\tau_i(s), \] and if $\sigma_j\cap\{1,\ldots,m\}\in\cJ_{m-k}$, we have \[ \widehat\tau_j(s)=(m-k)s-\frac{(p-(m-k))m}{n}s=\left(\frac dn(k_0-k)+\varkappa\right)s=(1-(k-k_0)(1+\gamma_0))\ln t, \] where \[ t=e^{\varkappa s},\qquad\gamma_0=\frac{d}{n\varkappa}-1.
\]
Hence
\[
\widehat P_{\gamma_0}(t)=D_{\widehat{\pmb\tau}(s)}\cB_\infty^r,
\]
where, as before, $r=\binom dp$. Thus, similar to \eqref{eq:psis_via_parallelepipeds}, we get
\begin{equation} \label{eq:hat_psis_via_parallelepipeds}
\bpsi_1(\widehat\La,\widehat\gT)=\liminf_{t\to+\infty}\frac{\varkappa\ln(\lambda_1(\widehat P_{\gamma_0}(t)))}{\ln t}\,,\qquad \apsi_1(\widehat\La,\widehat\gT)=\limsup_{t\to+\infty}\frac{\varkappa\ln(\lambda_1(\widehat P_{\gamma_0}(t)))}{\ln t}\,.
\end{equation}
The rest of the argument is very much the same as the corresponding part of the proof of Proposition \ref{prop:belpha_via_psis}. Let us observe that
\[
\widehat P_\gamma(t)=t^{1-\frac{1+\gamma}{1+\gamma_0}}\widehat P_{\gamma_0}\big(t^{\frac{1+\gamma}{1+\gamma_0}}\big).
\]
This implies that
\[
\lambda_1\big(\widehat P_\gamma(t)\big)=(t')^{1-\frac{1+\gamma_0}{1+\gamma}}\lambda_1\big(\widehat P_{\gamma_0}(t')\big)
\]
with $t'=t^{\frac{1+\gamma}{1+\gamma_0}}$. Therefore, the equality
\[
\lambda_1\big(\widehat P_\gamma(t)\big)=1
\]
holds if and only if
\[
1-\frac{1+\gamma_0}{1+\gamma}+\frac{\ln(\lambda_1(\widehat P_{\gamma_0}(t')))}{\ln t'}=0.
\]
Hence, in view of \eqref{eq:ba_via_parallelepipeds}, \eqref{eq:hat_psis_via_parallelepipeds}, we get
\[
\gb_p=\limsup_{t\to+\infty}\left\{ (1+\gamma_0)\left( 1+\frac{\ln(\lambda_1(\widehat P_{\gamma_0}(t)))}{\ln t} \right)^{-1}-1 \right\}= (1+\gamma_0)\left(1+\varkappa^{-1}\bpsi_1(\widehat\La,\widehat\gT)\right)^{-1}-1
\]
and
\[
\ga_p=\liminf_{t\to+\infty}\left\{ (1+\gamma_0)\left( 1+\frac{\ln(\lambda_1(\widehat P_{\gamma_0}(t)))}{\ln t} \right)^{-1}-1 \right\}= (1+\gamma_0)\left(1+\varkappa^{-1}\apsi_1(\widehat\La,\widehat\gT)\right)^{-1}-1.
\]
Thus,
\[
(1+\gb_p)(\varkappa+\bpsi_1(\widehat\La,\widehat\gT))=(1+\ga_p)(\varkappa+\apsi_1(\widehat\La,\widehat\gT))=d/n.
\]
It remains to apply Proposition \ref{prop:Psi_p_is_Psi_1}.
\end{proof}
\section{Transposed system}
The subspace spanned by $\pmb\ell_{m+1},\ldots,\pmb\ell_d$ is the space of solutions to the system
\[
-\tr\Theta\vec y=\vec x.
\]
As we noticed in Section \ref{sec:laurexp}, it coincides with the orthogonal complement $\cL^\bot$ of $\cL$. Denote by $\beta_p^\ast$, $\alpha_p^\ast$, $\gb_p^\ast$, $\ga_p^\ast$ the corresponding $p$-th regular and uniform Diophantine exponents of the first and of the second types for the matrix $\tr\Theta$. Obviously, they coincide with the ones corresponding to $-\tr\Theta$. The lattice constructed for $-\tr\Theta$ in the very same way $\La$ was constructed for $\Theta$ would be
\[
\begin{pmatrix} E_n & 0 \\ \tr\Theta & E_m \end{pmatrix}\Z^d.
\]
But transposing the first $n$ and the last $m$ coordinates turns this lattice into
\[
\begin{pmatrix} E_m & \tr\Theta \\ 0 & E_n \end{pmatrix}\Z^d=\tr T\Z^d=\La^\ast,
\]
which is the lattice dual to $\La$. For this reason, we shall associate $\La^\ast$ with $\tr\Theta$. Now, the most natural way to specify the path determining Schmidt's exponents associated to $\tr\Theta$ is to take into account the coordinate permutation just mentioned and consider the path $\gT^\ast:s\to\pmb\tau^\ast(s)$ defined by
\begin{equation} \label{eq:path_ast}
\tau^\ast_1(s)=\ldots=\tau^\ast_m(s)=-ns/m,\quad\tau^\ast_{m+1}(s)=\ldots=\tau^\ast_d(s)=s.
\end{equation}
Denoting
\[
\bpsi_p^\ast=\bpsi_p(\La^\ast,\gT^\ast),\quad \apsi_p^\ast=\apsi_p(\La^\ast,\gT^\ast),
\]
\[
\bPsi_p^\ast=\bPsi_p(\La^\ast,\gT^\ast),\quad \aPsi_p^\ast=\aPsi_p(\La^\ast,\gT^\ast),
\]
we see that any statement proved for an arbitrary $\Theta$ concerning the quantities $\beta_p$, $\alpha_p$, $\bpsi_p$, $\apsi_p$, $\bPsi_p$, $\aPsi_p$ remains valid if $\Theta$ is substituted by $\tr\Theta$, and the quantities $n$, $m$, $\beta_p$, $\alpha_p$, $\bpsi_p$, $\apsi_p$, $\bPsi_p$, $\aPsi_p$ are substituted by $m$, $n$, $\beta_p^\ast$, $\alpha_p^\ast$, $\bpsi_p^\ast$, $\apsi_p^\ast$, $\bPsi_p^\ast$, $\aPsi_p^\ast$, respectively. Particularly, the analogues of Propositions \ref{prop:belpha_via_psis}, \ref{prop:ba_via_Psis} hold:
\begin{proposition} \label{prop:starred_belpha_via_starred_psis}
We have
\begin{equation} \label{eq:starred_belpha_via_starred_psis}
(1+\beta_p^\ast)(1+\bpsi_p^\ast)=(1+\alpha_p^\ast)(1+\apsi_p^\ast)=d/m.
\end{equation}
\end{proposition}
\begin{proposition} \label{prop:starred_ba_via_starred_Psis}
Set $\varkappa^\ast=\min(p,\frac nm(d-p))$. Then
\begin{equation} \label{eq:starred_ba_via_starred_Psis}
(1+\gb_p^\ast)(\varkappa^\ast+\bPsi_p^\ast)=(1+\ga_p^\ast)(\varkappa^\ast+\aPsi_p^\ast)=d/m.
\end{equation}
\end{proposition}
Further, in the same way as \eqref{eq:psis_via_parallelepipeds}, we get
\begin{equation} \label{eq:starred_psis_via_parallelepipeds}
\bpsi_p^\ast=\liminf_{t\to+\infty}\frac{\ln(\lambda_p^\ast(P_{m/n}(t^{-n/m})))}{\ln t}\,,\qquad\apsi_p^\ast=\limsup_{t\to+\infty}\frac{\ln(\lambda_p^\ast(P_{m/n}(t^{-n/m})))}{\ln t}\,,
\end{equation}
where $\lambda_p^\ast$ denotes the $p$-th minimum with respect to $\La^\ast$. Let us show that $\bpsi_p^\ast$, $\apsi_p^\ast$ are closely connected with $\bpsi_{d+1-p}$, $\apsi_{d+1-p}$ (which, as before, are related to $\La$ and the path $\gT$ defined by \eqref{eq:path}). It follows from the definition of $P_\gamma(t)$ that there is a positive constant $c$ depending only on $\Theta$, such that
\[
c^{-1}P_\gamma(t^{-1})\subseteq P_\gamma(t)^\ast\subseteq cP_\gamma(t^{-1}),
\]
where $P_\gamma(t)^\ast$ is the polar reciprocal body for $P_\gamma(t)$. Furthermore, it follows from Mahler's theory that
\[
\lambda_p^\ast(P_\gamma(t)^\ast)\lambda_{d+1-p}(P_\gamma(t))\asymp1
\]
with the implied constants depending only on $d$. Hence
\begin{equation} \label{eq:mahler_with_no_ast}
\lambda_p^\ast(P_\gamma(t^{-1}))\lambda_{d+1-p}(P_\gamma(t))\asymp1.
\end{equation}
Combining \eqref{eq:starred_psis_via_parallelepipeds}, \eqref{eq:mahler_with_no_ast} and \eqref{eq:psis_via_parallelepipeds} with $p$ substituted by $d+1-p$, we get
\begin{proposition} \label{prop:starred_psis_via_psis}
We have
\[
\bpsi_p^\ast=-\dfrac nm\apsi_{d+1-p}\quad\text{ and }\quad\apsi_p^\ast=-\dfrac nm\bpsi_{d+1-p}\,.
\]
\end{proposition}
\begin{corollary} \label{cor:starred_belpha_via_psis}
We have
\[
(1+\beta_p^\ast)(m-n\apsi_{d+1-p})=(1+\alpha_p^\ast)(m-n\bpsi_{d+1-p})=d.
\]
\end{corollary}
\begin{proof}
Follows from Propositions \ref{prop:starred_belpha_via_starred_psis} and \ref{prop:starred_psis_via_psis}.
\end{proof}
\begin{corollary} \label{cor:starred_times_nonstarred_equals_one}
We have
\[
\alpha_{d+1-p}\beta_p^\ast=1\quad\text{ and }\quad\alpha_{d+1-p}^\ast\beta_p=1.
\]
\end{corollary}
\begin{proof}
Follows from Proposition \ref{prop:belpha_via_psis} and Corollary \ref{cor:starred_belpha_via_psis}.
\end{proof}
In order to obtain the corresponding relations between the exponents of the second type, let us go in the opposite direction and prove
\begin{proposition} \label{prop:starred_equals_nonstarred}
We have
\[
\gb_p=\gb_{d-p}^\ast\quad\text{ and }\quad\ga_p=\ga_{d-p}^\ast.
\]
\end{proposition}
\begin{proof}
Let $\vec L_\sigma$, $\vec E_\sigma$, $\cJ_k$, $\cJ'_k$ be as in Section \ref{sec:laurexp}. We recall that the bases $\pmb\ell_1,\ldots,\pmb\ell_m,\vec e_{m+1},\ldots,\vec e_d$ and $\vec e_1,\ldots,\vec e_m,\pmb\ell_{m+1},\ldots,\pmb\ell_d$ are dual. So, if $\sigma\in\cJ_k$, $\sigma'\in\cJ'_{k'}$, then
\[
\ast(\vec L_\sigma\wedge\vec E_{\sigma'})=\pm\vec E_{\overline\sigma}\wedge\vec L_{\overline\sigma'},
\]
where $\ast$ denotes the Hodge star operator,
\[
\overline\sigma=\{1,\ldots,m\}\backslash\sigma,\qquad\overline\sigma'=\{m+1,\ldots,d\}\backslash\sigma',
\]
and the sign depends on the parity of the corresponding permutation. Hence for any $\sigma\in\cJ_k$, $\sigma'\in\cJ'_{d-p-k}$, and any $\vec Z\in\wedge^p(\Z^d)$ we have
\[
|\vec L_\sigma\wedge\vec E_{\sigma'}\wedge\vec Z|=|\vec E_{\overline\sigma}\wedge\vec L_{\overline\sigma'}\wedge\ast\vec Z|.
\]
Thus,
\begin{equation} \label{eq:max_equals_hodged_max}
\max_{\begin{subarray}{c} \sigma\in\cJ_k \\ \sigma'\in\cJ'_{d-p-k} \end{subarray}} |\vec L_\sigma\wedge\vec E_{\sigma'}\wedge\vec Z|= \max_{\begin{subarray}{c} \sigma'\in\cJ'_{p-m+k} \\ \sigma\in\cJ_{m-k} \end{subarray}} |\vec L_{\sigma'}\wedge\vec E_\sigma\wedge\ast\vec Z|,
\end{equation}
for each $\vec Z\in\wedge^p(\Z^d)$. Set $k_0^\ast=\max(0,n-(d-p))$, $k_1^\ast=\min(n,p)$. Then $k_0^\ast=k_0+p-m$, $k_1^\ast=k_1+p-m$, and the inequality $k_0\leq k\leq k_1$ is equivalent to $k_0^\ast\leq p-m+k\leq k_1^\ast$. Therefore, it follows from \eqref{eq:max_equals_hodged_max} that \eqref{eq:ba_modified} is equivalent to
\begin{equation} \label{eq:ba_hodged}
\max_{\begin{subarray}{c} \sigma'\in\cJ'_k \\ \sigma\in\cJ_{p-k} \end{subarray}} |\vec L_{\sigma'}\wedge\vec E_\sigma\wedge\ast\vec Z|\leq t^{1-(k-k_0^\ast)(1+\gamma)},\qquad k=k_0^\ast,\ldots,k_1^\ast.
\end{equation}
It remains to apply Proposition \ref{prop:ba_substitution} and the fact that $\ast(\wedge^p(\Z^d))=\wedge^{d-p}(\Z^d)$.
\end{proof}
\begin{corollary} \label{cor:starred_ba_via_Psis}
Set $\varkappa^{\ast\ast}=\min(d-p,\frac mnp)=\frac mn\varkappa^\ast$. Then
\[
(1+\gb_p^\ast)(\varkappa^{\ast\ast}+\bPsi_{d-p})=(1+\ga_p^\ast)(\varkappa^{\ast\ast}+\aPsi_{d-p})=d/n.
\]
\end{corollary}
\begin{proof}
Follows from Propositions \ref{prop:ba_via_Psis} and \ref{prop:starred_equals_nonstarred}.
\end{proof}
\begin{corollary} \label{cor:starred_equals_nonstarred}
We have
\[
\bPsi_p^\ast=\dfrac nm\bPsi_{d-p}\quad\text{ and }\quad\aPsi_p^\ast=\dfrac nm\aPsi_{d-p}\,.
\]
\end{corollary}
\begin{proof}
Follows from Proposition \ref{prop:starred_ba_via_starred_Psis} and Corollary \ref{cor:starred_ba_via_Psis}.
\end{proof}
\section{Transference inequalities}
For $p=1$ we have $\beta_1=\gb_1$, $\alpha_1=\ga_1$, which was shown in \cite{bugeaud_laurent_up_down}, or which can also be seen from our Propositions \ref{prop:belpha_via_psis}, \ref{prop:ba_via_Psis} and the obvious fact that $\bpsi_1=\bPsi_1$ and $\apsi_1=\aPsi_1$. In \cite{khintchine_palermo} A.~Khintchine proved for $m=1$ his famous transference inequalities
\begin{equation} \label{eq:khintchine_transference}
\gb_1^\ast\geq n\gb_1+n-1,\qquad \gb_1\geq\frac{\gb_1^\ast}{(n-1)\gb_1^\ast+n}\,.
\end{equation}
As we mentioned in the Introduction, M.~Laurent and Y.~Bugeaud in their paper \cite{bugeaud_laurent_up_down} split \eqref{eq:khintchine_transference} into a chain of inequalities for intermediate exponents. They proved that for $m=1$ and every $p=1,\ldots,n-1$
\begin{equation} \label{eq:khintchine_transference_split}
\gb_{p+1}\geq\frac{(n-p+1)\gb_{p}+1}{n-p}\,,\qquad \gb_{p}\geq\frac{p\gb_{p+1}}{\gb_{p+1}+p+1}\,.
\end{equation}
By Proposition \ref{prop:starred_equals_nonstarred} we have $\gb_1^\ast=\gb_{d-1}$. Therefore, \eqref{eq:khintchine_transference} can be easily obtained by iterating \eqref{eq:khintchine_transference_split}. In \cite{dyson} F.~Dyson generalized \eqref{eq:khintchine_transference} to the case of arbitrary $n$, $m$ by proving that
\begin{equation} \label{eq:dyson_transference}
\gb_1^\ast\geq\frac{n\gb_1+n-1}{(m-1)\gb_1+m}\,.
\end{equation}
It is interesting to rewrite \eqref{eq:dyson_transference} in terms of Schmidt's exponents. By Propositions \ref{prop:starred_equals_nonstarred} and \ref{prop:ba_via_Psis} it becomes simply
\begin{equation} \label{eq:Psi_very_dyson}
(d-1)\bPsi_{d-1}\leq\bPsi_1,
\end{equation}
which coincides with the first statement of Corollary \ref{cor:Psi_dyson}. But we already have an intermediate variant of this inequality! It is
\begin{equation} \label{eq:Psi_inter_very_dyson}
\frac{\bPsi_{p+1}}{d-p-1}\leq\frac{\bPsi_p}{d-p}\,,
\end{equation}
the first statement of Corollary \ref{cor:Psi_inter_dyson}. Rewriting it in terms of Diophantine exponents, we get
\begin{theorem} \label{t:inter_dyson}
For each $p=1,\ldots,d-2$ the following statements hold. If $p\geq m$, then
\begin{equation} \label{eq:inter_dyson_p_geq}
(d-p-1)(1+\gb_{p+1})\geq(d-p)(1+\gb_p).
\end{equation}
If $p\leq m-1$, then
\begin{equation} \label{eq:inter_dyson_p_leq}
(d-p-1)(1+\gb_p)^{-1}\geq(d-p)(1+\gb_{p+1})^{-1}-n.
\end{equation}
\end{theorem}
If $m=1$, then $p\geq m$ and \eqref{eq:inter_dyson_p_geq} gives the first inequality of \eqref{eq:khintchine_transference_split}. If $n=1$, then $p+1\leq m$ and \eqref{eq:inter_dyson_p_leq} in view of Proposition \ref{prop:starred_equals_nonstarred} gives the second inequality of \eqref{eq:khintchine_transference_split}. As we see, the description of the discussed phenomenon in terms of Schmidt's exponents given by \eqref{eq:Psi_inter_very_dyson} is much more elegant. Another attraction is its universality for all values of $n$, $m$ whose sum equals $d$. Moreover, the second statements of Corollaries \ref{cor:Psi_inter_dyson}, \ref{cor:Psi_dyson} are the analogues of \eqref{eq:Psi_inter_very_dyson} and \eqref{eq:Psi_very_dyson} for the upper Schmidt's exponents, so rewriting them with the help of Proposition \ref{prop:ba_via_Psis} gives us the analogue of Theorem \ref{t:inter_dyson} for the uniform Diophantine exponents, splitting the inequality
\begin{equation} \label{eq:apfel_transference}
\ga_1^\ast\geq\frac{n\ga_1+n-1}{(m-1)\ga_1+m}
\end{equation}
proved by A.~Apfelbeck in \cite{apfelbeck} into a chain of inequalities for intermediate exponents:
\begin{theorem} \label{t:inter_apfel}
For each $p=1,\ldots,d-2$ the following statements hold. If $p\geq m$, then
\begin{equation} \label{eq:inter_apfel_p_geq}
(d-p-1)(1+\ga_{p+1})\geq(d-p)(1+\ga_p).
\end{equation}
If $p\leq m-1$, then
\begin{equation} \label{eq:inter_apfel_p_leq}
(d-p-1)(1+\ga_p)^{-1}\geq(d-p)(1+\ga_{p+1})^{-1}-n.
\end{equation}
\end{theorem}
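For the reader's convenience, here is the short computation behind the specialization to $m=1$ mentioned after Theorem \ref{t:inter_dyson}. If $m=1$, then $d=n+1$, and \eqref{eq:inter_dyson_p_geq} reads
\[
(n-p)(1+\gb_{p+1})\geq(n-p+1)(1+\gb_p),
\]
i.e.
\[
\gb_{p+1}\geq\frac{(n-p+1)(1+\gb_p)}{n-p}-1=\frac{(n-p+1)\gb_p+1}{n-p}\,,
\]
which is exactly the first inequality of \eqref{eq:khintchine_transference_split}. The very same computation applied to \eqref{eq:inter_apfel_p_geq} yields the corresponding inequality for the uniform exponents.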
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}
\subsection{Games and framework}
Game theory is the study of systematic and strategic decision-making in interactive situations of conflict and cooperation. The models are widely used in economics, political science, biology and computer science to capture the behavior of individual participants in terms of responses to strategies of the rest. The field attempts to describe how decision makers do and should interact within a well-defined system of rules to maximize their satisfaction with the outcome \cite{GT-Critical,GT-fudenberg,Course in GT}. A game is a model of the strategies of these decision makers, or players as we will call them, in terms of the choices made by each of them. They are assumed to all have individual preference profiles $\sigma_{x_1}\succeq \sigma_{x_2}\succeq \cdots \succeq \sigma_{x_m}$ over a set of $m$ outcomes $\{\sigma_{x_j}\}$, where $'\succeq\,'$ should be interpreted as "preferred to", and the $x_j$'s as indices for possible outcomes. An outcome or strategy profile $\sigma \in S_n \times S_{n-1} \times \cdots \times S_1 = S$ is equivalent to the combination of the strategies $s^i_j \in S_i$ of the participants, where $s^i_j$ is the $j$'th strategy of player $i$, $S_i$ the set of strategies or choices available to that player and $S$ the set of all possible strategy profiles. In order to evaluate the profit or satisfaction of player $i$ with regard to a strategy profile, we need to define for each player a payoff function $\$_i$ that takes a strategy profile $\sigma$ as input and outputs a real numerical value as a measure of desirability. We have $\$_i:S\rightarrow \mathbb{R}$ and $\$_i(\sigma_k)\geq\$_i(\sigma_l) \Leftrightarrow \sigma_k \succeq_i \sigma_l$. The question to answer is: what should rational players choose to do, given that they have partial or complete information about the content of $S$ and the payoff functions $\$_i$? The main approach is to find a \emph{solution concept}, with the most famous one being the \emph{Nash Equilibrium}, where all players simply make the choice $s^i_j$ that is the best possible response to any configuration in $S/S_i$, i.e., to any combination of strategies by their counterparties. In situations where such an equilibrium does not exist, one needs to extend the game to allow for mixed (probabilistic) strategies, where the players extend the sets $S_i$ to $\Delta(S_i)$, i.e., the sets of convex combinations of the $s^i_j$'s, to acquire one. Just as classical probability distributions extend pure strategy games to mixed ones, quantum probabilities, operations and entanglement can extend the framework to outperform any classical setup.
\subsection{Quantum games}
A quantum game is defined by a set $\Gamma$ of objects and the relationships between them:
\begin{equation}\label{qg}
\Gamma=\{\rho_{\mathcal{Q}},\mathcal{H}_{\mathcal{Q}}, n, S_{i}, \$_{i}\} \;\; \textrm{for}\;\; i=1, \cdots ,n
\end{equation}
where $\mathcal{H}_{\mathcal{Q}}$ is the Hilbert space of the composite quantum system, $\rho_{\mathcal{Q}}$ is the initial state of the game defined on $\mathcal{H}_{\mathcal{Q}}$, $n$ is the number of players, $S_{i}$ the set of available strategies of player $i$ and $\$_{i}$ the payoffs available to player $i$ for each game outcome. In our quantum game protocol the $m_i$ different pure strategies available to a player $i$ will be encoded in the basis states of an $m_i$-level quantum system $\rho_{\mathcal{Q}_{i}} \in \mathcal{H}_{{\mathcal{Q}_{i}}}$.
With $n$ players we'll end up needing an initial quantum state $\rho_{\mathcal{Q}} \in \mathcal{H}_{\mathcal{Q}}=\mathcal{H}_{\mathcal{Q}_n}\otimes \mathcal{H}_{\mathcal{Q}_{n-1}}\cdots \otimes \mathcal{H}_{\mathcal{Q}_1}$ with $\textrm{dim}(\mathcal{H}_{\mathcal{Q}})= \prod_{i=1}^n \textrm{dim}(\mathcal{H}_{\mathcal{Q}_i})$ to accommodate all possible game outcomes \cite{review}. The strategies are chosen and played by each player through the application of a unitary operator $U_i \in S_i = S(m_i)$ on their own sub-systems, where the set of allowed quantum operations $S(m_i)$ is some subset of the special unitary group $\textrm{SU}(m_i)$. The general procedure of a quantum game consists of a transformation of the composite initial state through local unitary operations by the players, $U_n\otimes U_{n-1} \otimes \cdots \otimes U_1:\mathcal{H}_{\mathcal{Q}} \rightarrow \mathcal{H}_{\mathcal{Q}}$, followed by a measurement outcome, or in terms of pre-measurement reasoning, an expectation value: $\$: \mathcal{H}_{\mathcal{Q}} \rightarrow \mathbb{R}$.
\section{Quantum Kolkata restaurant problem}
This is a general form of a minority game \cite{kolkata1,Hayden,Chen}, where $n$ non-communicating agents (players) have to choose among $m$ choices. A payoff of $\$=1$ is paid out to the players that make \emph{unique} choices. Players making the same choice receive $\$=0$. The challenge is to come up with a strategy profile that maximizes the expected payoffs $E_i(\$)$ of all players $i$, and has the property of being a Nash equilibrium. In the absence of communication, in a classical framework, there is nothing else to do but to randomize.
\subsection{Collective aim in the quantum case}
It has been shown for the case of three players and three choices in the quantum setting, starting with a GHZ-type state $|\psi_{in}\rangle=\frac{1}{\sqrt{3}}\left(|000\rangle+|111\rangle+|222\rangle\right) $, that shared entanglement and local SU(3) operations (by the players on their own subsystems) will lead to an expected payoff $E(\$)=\frac{2}{3}$. This is a 50\% increase compared to the classical payoff of $\frac{4}{9}$ reachable through randomization. Although the details of the protocol can be found in \cite{puya}, it is instructive to jump back a couple of steps. Since we have three players with three allowed pure choices, the Hilbert space we are dealing with is the space of three-qutrit states, with a basis $B=\{|ijk\rangle \};\; i,j,k \in \{0,1,2\}$, each element representing a post-measurement outcome of the game, where $i,j,k$ denote the final choices of players $1,2,3$, respectively. We have $\textrm{span}(B) = \{\sum_{i,j,k =0}^2 a_{ijk}|ijk\rangle: i,j,k = 0,1,2 \; \textrm{and} \; a_{ijk} \in \mathbb{C}\}$, which with a normalization condition gives us the complete Hilbert space of the game. We can divide $B$ into subsets that are interesting from the point of view of the possible outcomes:
\begin{eqnarray}
L\, &=& \{|000\rangle,|111\rangle,|222\rangle\}, \\
G\, &=& \{|012\rangle,|120\rangle,|201\rangle,|021\rangle,|102\rangle,|210\rangle\},\\
D_1 &=& \{|011\rangle,|022\rangle,|100\rangle,|122\rangle,|200\rangle,|211\rangle\},\\
D_2 &=& \{|101\rangle,|202\rangle,|010\rangle,|212\rangle,|020\rangle,|121\rangle\},\\
D_3 &=& \{|110\rangle,|220\rangle,|001\rangle,|221\rangle,|002\rangle,|112\rangle\},
\end{eqnarray}
where $L$ contains all states for which none of the players 1,2,3 receive any payoff. It is thus a collective objective to avoid these states.
$G$ contains all those states that return a payoff $\$=1$ to the three of them, and the sets $D_i$ contain the post-measurement states that lead to a payoff $\$=1$ for player $i$ and $\$=0$ to the players $j \neq i$. Thus the general goal of each player $i$ is to maximize the probability of the post-measurement outcome being a state in $G_i=G \cup D_i$. Starting with an initial state $|\psi_{in}\rangle$, each player $i \in \{1,2,3\}$ applies an operator from its set of allowed strategies $S_i \subseteq \textrm{SU}(3)$, transforming it to its final state $|\psi_{fin}\rangle = U_1 \otimes U_2 \otimes U_3 |\psi_{in}\rangle$. The expected payoff $E(\$_i)$ of player $i$ is the probability that the post-measurement outcome is a state in $G_i$:
\begin{equation}\label{payoff}
E(\$_i) = \sum_{|\xi\rangle \in G_i} \left|\langle \psi_{fin} |\xi\rangle \right|^2.
\end{equation}
Given that we have an initial state in $\textrm{span}(L)$ containing all states of the form $|\psi_{in}\rangle=\alpha|000\rangle+\beta|111\rangle+\gamma|222\rangle $ with $\alpha,\beta,\gamma \in \mathbb{C}$ (we will assume $0 \leq \alpha,\beta,\gamma \in \mathbb{R}$ later for simplicity), what is the rational aim of player $i$? First, note that all states in $\textrm{span}(L)$ are unbiased with regard to a change in player positions. We can assume that they don't even know which qutrit they control, since that knowledge doesn't add any useful information for the choice of $U_i \in S_i$. Second, any choice of $U_i$ aimed at increasing the probability of the post-measurement state ending up in $D_i$ must, due to the symmetry of the setup, increase the probability of the state being in $D_j \cup D_k$ ($j,k \neq i$) at a ratio of 2:1. Third, an outcome in $G = G_1 \cap G_2 \cap G_3$ is as favorable as any outcome in $D_i$ for player $i$. It therefore follows that the players should aim for producing a state in $\textrm{span}(G)$ to the extent this is possible. It was, however, shown in \cite{puya} that they fail to fully depart from $\textrm{span}(L)$, whereby they reach a maximum payoff of $E(\$)=\frac{2}{3}$ rather than $E(\$)=1$ for $\alpha = \beta = \gamma = \frac{1}{\sqrt{3}},$ with a final state of the following form:
\begin{multline}
\mid\psi_{fin}\rangle=\frac{1}{3}\left(|000\rangle+|012\rangle+|021\rangle+|102\rangle\right.+\\\left.|111\rangle+|120\rangle+|201\rangle+|210\rangle+|222\rangle\right).
\end{multline}
\subsection{Expected payoffs for initial states in span(\emph{L}) }
Due to the symmetries mentioned in the previous section, the three players will have to choose a unitary operator $U$ that takes states from $\textrm{span}(L)$ to $\textrm{span}(L \cup G)$, without the possibility of favoring any subset of $G$ (even if that possibility existed, it would only put a ceiling on the individual payoffs, and any choice of subset other than the whole would decrease the expected payoff due to the lack of coordination in the choice of subset). This leads to the conclusion that there exists a $U$ (fixed up to a global phase) for any initial state $|\psi_{in}\rangle=\alpha|000\rangle+\beta|111\rangle+\gamma|222\rangle $, that maximizes the individual expected payoffs and is a Nash equilibrium solution, since any departure from this strategy will lead to a lower payoff. We have:
\begin{equation}\label{payoffmax}
E_{max}(\$) = \sum_{|\xi\rangle \in G} \left|\langle \psi_{fin}| U^{\dagger} \otimes U^{\dagger} \otimes U^{\dagger} |\xi\rangle \right|^2,
\end{equation}
simultaneously for all three of them.
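As a quick numerical sanity check of the figures above, one can evaluate \eqref{payoff} directly by enumerating the basis states. The following is a minimal Python sketch; the \texttt{payoff} helper encoding the Kolkata rule is our own naming and not part of any established library:
\begin{verbatim}
import itertools

# Basis states |ijk> of three qutrits, ordered lexicographically.
basis = list(itertools.product(range(3), repeat=3))

# Kolkata restaurant rule: a player receives $1 iff their choice is unique.
def payoff(outcome, player):
    return 1.0 if outcome.count(outcome[player]) == 1 else 0.0

# The final state quoted above: amplitude 1/3 on each of nine basis kets.
kept = {(0, 0, 0), (0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 1, 1),
        (1, 2, 0), (2, 0, 1), (2, 1, 0), (2, 2, 2)}
amplitude = {b: (1 / 3 if b in kept else 0.0) for b in basis}

# E($_i) = sum over outcomes of |amplitude|^2 times the payoff of player i.
for i in range(3):
    E = sum(abs(a) ** 2 * payoff(b, i) for b, a in amplitude.items())
    print(i, E)  # prints 0.666... = 2/3 for every player
\end{verbatim}
Each of the six states in $G$ contributes $1/9$ to every player's expected payoff, while $|000\rangle$, $|111\rangle$ and $|222\rangle$ contribute nothing, giving $6/9=\frac{2}{3}$.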
Figure 1 shows numerically calculated expected payoffs $E(\$)$ for $\alpha=\sin\vartheta\cos\varphi; \, \beta=\sin\vartheta\sin\varphi; \, \gamma=\cos\vartheta$ where $\varphi= \frac{\pi}{40}M,\vartheta= \frac{\pi}{40}N$ and $M,N = 1,2, \cdots, 20$. This amounts to a total of 400 optimizations, with equally many different associated operators $U_{MN}$. We see that the expected payoff is maximized for $\varphi=\frac{\pi}{4}$ and $\vartheta=\cos^{-1}\frac{1}{\sqrt{3}}$, where the initial state is maximally entangled, and falls off towards the classical expected payoff as the entanglement decreases.
\begin{figure}
\includegraphics[scale=0.75]{optpt400.jpg}\\
\caption{Payoffs associated with optimal strategies in a three-player game with a variable initial state. The expected payoff $E(\$)$ decreases as the level of entanglement decreases.}\label{b}
\end{figure}
\section{Conclusions}
The ambition of self-maximization in the studied Kolkata restaurant problem leads individual players to act in such a way that the collective good is maximized. The game is symmetric with regard to permutations of player positions, which guides the participants to aim for a set of outcomes that favors them all. The expected payoff reachable through local operations changes with the level of entanglement in the initial state.
\subsection*{Acknowledgements:}
The work was supported by the Swedish Research Council (VR).
\bibliographystyle{aipproc}
\IfFileExists{\jobname.bbl}{}
{\typeout{}
\typeout{******************************************}
\typeout{** Please run "bibtex \jobname" to obtain}
\typeout{** the bibliography and then re-run LaTeX}
\typeout{** twice to fix the references!}
\typeout{******************************************}
\typeout{}
}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Three-stage Place Recognition}
\label{sec:alg}
Our place recognition algorithm consists of three parts: (\textit{i}) place retrieval using a \textit{retrieval key}, (\textit{ii}) semi-metric localization via pre-alignment using an \textit{aligning key}, and (\textit{iii}) full SCD comparison for potential refinement and localization-quality assessment.
\subsection{Place Retrieval using a Retrieval Key}
\label{sec:pr}
Existing, widely adopted solutions leverage the past trajectory or motion uncertainties to reduce the search space \cite{dube2017segmatch, Chen2019OverlapNetLC}. Differing from them, we pursue global localization without prior knowledge. We rely solely on the descriptor itself, while minimizing the computational cost of global search by introducing sub-descriptors. Using all extracted retrieval keys in the map, we construct a k-d tree for fast search and retrieve the closest place in terms of the retrieval key. Potentially, the top $k$ candidate indexes may then be retrieved to be verified in the full \ac{SCD} comparison phase. Interestingly, we empirically found that using only the best candidate ($k=1$) yields meaningful performance, outperforming the case using multiple candidates. A discussion on the candidate set size ($k$) will be presented in \secref{sec:topk}. As a result of the tree search, we topologically retrieve the corresponding map place for the query.
\input{src/fig_scd_aug.tex}
\subsection{Semi-metric Localization using an Aligning Key}
\label{sec:prealign}
Given a retrieved candidate place, the typical \ac{SLAM} framework would proceed to metric-level localization by finding the relative pose between the query and the candidate place recognized by the place retrieval module. Well-known approaches include ICP and its variants, which compare two scans to find the optimal pose minimizing an alignment cost. Despite their popularity, these metric localization methods may suffer from local minima and require a good initial guess. In the second phase of our place recognition algorithm, we exploit the aligning key and determine the partial relative pose through a pre-aligning step. The naive brute-force version of the alignment \equref{eqn:minImgDist1} is computationally proportional to the number of columns $N_\text{A}$, which is heavier than the simple and frequently used $L2$ norm. We propose conducting the brute-force alignment using the query and target aligning keys instead of the full \ac{SCD}s. The pre-alignment procedure using the aligning key is formalized as
\begin{equation}
\hat{n}^{*} = \argmin_{n \in [N_{\text{inv}}]} d_{\textbf{w}}(\textbf{w}_{Q, n}, \textbf{w}_{M}) \ ,
\label{eq:prealign1}
\end{equation}
where $\hat{n}^{*}$ is the estimated shift for the best alignment between the query and target \ac{SCD}s. We simply propose using $d_{\textbf{w}}$ as the $L2$ distance between two vectors. This computed column shift $\hat{n}^{*}$ can serve as a good initial value for further localization refinement such as \ac{ICP}. The evaluation of this initial guess is given in \secref{sec:icp}.
\subsection{Full Descriptor\bl{-based False Positive Rejection}}
\label{sec:fullscd}
The final step of place recognition is to compare the full \ac{SCD} \bl{to reject the potential false positive.
As will be shown in \secref{sec:topk}, using a full descriptor may deteriorate the spatial discernibility.} Using the previously computed initial column shift $\hat{n}^{*}$, the original search space in \equref{eqn:minImgDist1} is shrunk to only the neighborhood $\mathcal{N}(\hat{n}^{*})$ of the \textit{pre-aligned} shift:
\begin{equation}
D(\textbf{f}_{Q}, \textbf{f}_{M}) = \min_{n \in \mathcal{N}(\hat{n}^{*})} d(\textbf{f}_{Q, n}, \textbf{f}_{M}) \ .
\label{eq:prealign2}
\end{equation}
This reduced search space may be insecure when the variation over columns is poor; for example, when the upper vertical \ac{FOV} is low \cite{geiger2012we}, our maximum-height bin encoding function hardly produces a diversified distribution. This can be overcome by developing a more discerning bin encoding function. However, as will be shown in the experiments (see \secref{sec:exp}), we found that even an extremely tight choice of neighborhood, $\mathcal{N}(\hat{n}^{*}) = \{\hat{n}^{*}\}$ (i.e., taking the pre-alignment as the best alignment), is empirically sufficient and outperforms other methods. Finally, we go over the $k$ candidates proposed by the k-d tree and search for a candidate satisfying an acceptance threshold to select it as the revisited place:
\begin{equation}
{c}^{*} = \argmin_{{c}_{k} \in \mathcal{C}} { D(\textbf{f}_{Q}, \textbf{f}_{M}^{c_k}) }, \ \text{s.t.} \ D < \tau \ ,
\label{equ:LoopFound}
\end{equation}
where $\mathcal{C}$ is the candidate index set extracted from the k-d tree, $\tau$ is the acceptance threshold, and $c^{*}$ is the index of the recognized place. Because we use $k=1$, this full-descriptor similarity score acts as a validity check, confirming that $D < \tau$ before the candidate is accepted as the correct match.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we presented a global place recognition module combining topological and metric localization. As a global localizer, the proposed method can be a solution to the kidnapped-robot problem, serving as a place recognizer in a \textit{wake-up} phase. We also showed the invariance of \textit{Scan Context++} in both the rotational and lateral directions. Via the evaluation, we validated that the proposed localizer achieves discriminability and real-time performance without requiring prior knowledge.
\section{Requirements for Structural Place Recognition}
\label{sec:probdefmain}
\input{src/definition1.tex}
\input{src/tab_taxonomy.tex}
\input{src/definition23.tex}
\subsection{Terminology and Problem Definition}
\label{sec:probdef}
\bl { We first define our \textit{place recognition} problem. As a robot traverses an environment, a set of range sensor measurements is streamed with increasing timestamps. We consider every single sensor measurement $z_t$ acquired at a certain spatial location $l_t$ at time $t$ as a \textit{place}. A \textit{map} is a database: the set of all measurements streamed since the robot started its mission. Then, our place recognition can be defined as finding a revisited place within a map for a query place. It is also important to reliably decide when there is no revisited place in the map. \textit{Revisitedness} is satisfied for two places $a$ and $b$ that are temporally separated by more than a certain window size (i.e., $|{t_b} - {t_a}| > \delta_t $) if the Euclidean distance between the two places' spatial locations is less than a certain threshold (i.e., $|l_{b} - l_{a}| < \delta_l $). }
\bl { To construct such a place recognition system, two submodules are required.
The first is the description function $f(\cdot)$. To ease handling noisy or heavy raw measurements, a raw measurement $z_t$ is encoded into a more compact form called a descriptor, $\textbf{f}_t = f(z_t)$. The second is retrieval, which defines a similarity function ${sim}(\cdot, \cdot)$ or distance function $D(\cdot, \cdot)$; it takes two descriptors and returns a scalar-valued similarity or distance in the descriptor space. Then, given a query measurement $z$ and a map, place recognition reduces to a nearest-neighbor search problem using the description and similarity functions. One can conclude that two places $a$ and $b$ are the same if the descriptor distance $D(\textbf{f}_a, \textbf{f}_b)$ is lower than a threshold $\tau$. }
\subsection{Invariance}
\label{seq:reqinv}
Most LiDAR place recognition methods \cite{he2016m2dp, dube2020segmap, yin2018locnet, Chen2019OverlapNetLC} have been tested over less complex environments \cite{geiger2012we, fordcampus} with few dynamic objects or viewpoint changes. The existing research has mostly focused on increasing the discriminability of a descriptor, rather than on defining and overcoming structural diversity. We provide a taxonomical analysis of the potential \bl{nuisances} for structural place recognition, as shown in \tabref{tab:taxonomy}. We categorize each invariance by comparison to the corresponding invariance type in the more widely studied visual place recognition problem.
\subsubsection{Internal Factors}
Variation in the measurement can derive from the robot itself; we call these \textit{internal factors}. They include rotation, translation, and scale changes of the sensor coordinate frame, mostly induced by ego-motion (R, T, and SP in \tabref{tab:taxonomy}). \figref{fig:exampletax} illustrates the sample measurement discrepancy under rotational and translational variance. In terms of scale, the same object looks very different due to the variation in point cloud density caused by the sensing distance.
\subsubsection{External Factors}
Analogous to illumination changes (short-term variance) and weather changes (long-term variance) in the visual domain, structures may undergo variance in the short term through occlusions by dynamic objects and in the long term through permanent structural changes from construction or demolition. This external factor becomes critical as we deploy robots for long-term navigation.
\subsubsection{Sensor Characteristics}
The last factor, sensor characteristics, may be more range-sensor specific. Unlike the highly structured sensor data obtained by cameras, LiDAR point clouds are unstructured, and sensing changes dramatically depending on the sensor's specifications (e.g., range, number of rays, and point cloud resolution depending on \ac{FOV}). Thus, a generic place recognition system should be invariant to sensor specifications.
\input{src/fig_taxonomy.tex}
\input{src/fig_pipeline.tex}
\subsection{Overview}
\label{sec:overview}
The proposed method consists of two parts: (\textit{i}) place description and (\textit{ii}) place recognition. The overall pipeline is illustrated in \figref{fig:pipeline}. The place recognition module consists of place retrieval, semi-metric localization, and verification. In the next two sections, we will introduce each module in detail.
\section{Discussion}
\label{sec:discussion}
Beyond the evaluation of the proposed global localization method, we provide ablation studies and interpretations.
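Before turning to the ablations, it may help to fix ideas with a compact sketch of the three-stage matching loop of \secref{sec:alg}. This is a simplified Python illustration rather than our released C\texttt{++} implementation: the row-wise and column-wise means merely stand in for the retrieval and aligning keys, and the $L2$ norm stands in for the column-wise distance $d(\cdot,\cdot)$ of the full \ac{SCD} comparison.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def recognize(query_scd, map_scds, tau):
    # Stage 1: retrieval key (here a row-wise summary) + k-d tree, k = 1.
    # In practice the tree is built once and only rebuilt periodically.
    keys = np.array([scd.mean(axis=1) for scd in map_scds])
    _, c = cKDTree(keys).query(query_scd.mean(axis=1), k=1)
    target = map_scds[c]

    # Stage 2: pre-alignment: brute-force search for the column shift
    # minimizing the L2 distance between the two aligning keys.
    vq, vm = query_scd.mean(axis=0), target.mean(axis=0)
    n_star = min(range(len(vq)),
                 key=lambda n: np.linalg.norm(np.roll(vq, n) - vm))

    # Stage 3: full-SCD distance evaluated only at the pre-aligned shift
    # (the tight neighborhood {n*}); accept only below the threshold.
    D = np.linalg.norm(np.roll(query_scd, n_star, axis=1) - target)
    return (c, n_star) if D < tau else None
\end{verbatim}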
\subsection{Descriptor Resolution}
\label{sec:res}
We examined the descriptor resolution and the corresponding performance. As shown in \tabref{tab:abl_res}, the lower resolution yielded better performance. \bl{Therefore, we used the baseline resolution for the following subsections.}
\input{src/tab_eval_resolution}
\subsection{Analysis on Retrieval Key Performance}
\label{sec:topk}
\textbf{Candidate numbers.} In \secref{sec:pr}, we leveraged the k-d tree to propose $k$ candidates for retrieval, and \bl{only a single answer is selected after the full descriptor-based false positive rejection (\secref{sec:fullscd})}. In this subsection, we examine the effect of $k$ on performance. \bl{We first note that increasing $k$ does not relax the success criteria; it merely increases the number of candidates considered in the first step of our algorithm.} Interestingly, \bl{as in \tabref{tab:topk},} the best results on all statistics were obtained when we only chose the best candidate. \bl{The full descriptor may suffer from confusion, showing the best performance at $k=1$. Though this may seem contrary to the general belief that more candidates yield better performance, the result indicates the reduced spatial discernibility of the full descriptor. Based on this investigation, we used $k=1$ for all experiments conducted earlier.}
\input{src/tab_eval_topk}
\bl{\textbf{Retrieval key vs. full descriptor brute-force search.} Additionally, we analyzed how the performance varies if the entire database is compared (i.e., brute force) using the full descriptor-based distance \equref{eq:prealign2}. Through the multiple tests in \tabref{tab:bf}, the performance difference between the retrieval key-based and the brute-force search is negligible, although the brute-force search requires heavier computation, scaling as $\mathcal{O}(n)$ (e.g., almost 1 second for 4500 frames of \texttt{KITTI 00}). }
\input{src/tab_eval_bf}
\textbf{Full descriptor's effect.} \bl{Despite the confusion of the full descriptor-based similarity shown in \tabref{tab:topk}, this additional similarity validation enhanced the precision for the augmentation cases as well as the semi-metric localization capability.} In \figref{fig:keyeffect}, the augmented Scan Context's precision was improved by eliminating less accurate matches via the supplemental similarity verification using a full descriptor.
\input{src/fig_fulldesc_role.tex}
\subsection{Correctness Criteria}
\label{sec:thres}
\bl{ The performance tends to improve when tighter criteria are applied, as in \tabref{tab:thres}. This phenomenon occurs because laterally displaced queries, which are generally difficult to recognize, are counted as correct rejections when they are missed. However, precise localization was more difficult in reversed revisits (i.e., \texttt{KITTI 08} in \tabref{tab:thres}) because previously correctly recognized queries (e.g., within \unit{4}{m}$-$\unit{8}{m}) are considered false alarms. From these findings, \unit{8}{m} was used as the criterion for correctly recognized places during our main evaluations in \secref{sec:exp} and the ablations in \secref{sec:discussion}, to successfully cope with laterally translated revisits. Even if a place is recognized from a slightly distant place (e.g., \unit{4}{m}$-$\unit{8}{m} apart), the proposed method can close a loop successfully because it provides a semi-metric localization result. }
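For reference, this correctness decision boils down to a ground-truth distance test; the following minimal Python sketch (the helper name is our own, and the \unit{8}{m} default reflects the criterion adopted above) makes it explicit:
\begin{verbatim}
import numpy as np

def is_true_positive(query_xy, match_xy, thresh=8.0):
    # A retrieved match counts as correct iff the ground-truth positions
    # of the query and the matched place are within `thresh` meters.
    return np.linalg.norm(np.asarray(query_xy) - np.asarray(match_xy)) < thresh
\end{verbatim}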
\input{src/tab_eval_thres}
\subsection{Robustness to Roll-Pitch and Height Perturbations}
\label{sec:rollpitch}
\bl{ The previously used datasets are mainly from wheeled platforms with little roll-pitch and height perturbation. However, rotational and height variation can occur between two scans. In this regard, we added experiments on roll-pitch-perturbed simulations and a real-world hand-held LiDAR sequence. For the simulation, we randomly pre-rotated an input scan with respect to both roll and pitch. In the real-world hand-held LiDAR dataset, the height of the measurement origins varies slightly while a human navigator walks. }
\bl{ \textbf{Simulations.} The degree of pre-rotation is divided into three levels: [\unit{-5}{\degree}, \unit{5}{\degree}], [\unit{-10}{\degree}, \unit{10}{\degree}], and [\unit{-15}{\degree}, \unit{15}{\degree}]. The simulations were conducted for the two sequences \texttt{KAIST 03} and \texttt{Riverside 02}, and performance losses are clearly observed for all methods. For the \texttt{KAIST 03} sequence, M2DP showed smaller performance drops than ours. However, in \texttt{Riverside 02}, the performance degradation became clear with respect to the degree of perturbation for all three methods. We believe that robust place recognition under severe roll-pitch variations has still not been studied much, and it could be a valuable topic for future research. }
\bl{ \textbf{Hand-held data.} Second, the result on real-world hand-held LiDAR data is given in \figref{fig:rollpitch3}. We used the \texttt{KA Urban Campus 1} sequence provided in LiLi-OM \cite{li2021towards}, which was acquired from a slowly walking human navigator. It has same-direction revisits and a narrow front horizontal FOV ($\sim$\unit{70}{\degree}). On this real-world data, ours outperformed M2DP by a large margin and showed that mild (e.g., human walking) roll-pitch and height perturbations are acceptable. Hence, the proposed method may not be restricted to wheeled platforms and can work for a hand-held traverse under mild roll-pitch motions. }
\input{src/fig_eval_rollpitch}
\subsection{Comparison to Deep Learning-based Methods}
\label{sec:dl}
\bl{ We also provide comparisons to recent deep learning-based approaches, SegMap \cite{dube2020segmap} and PointNetVLAD \cite{angelina2018pointnetvlad}\footnote{\bl{For the input processing, we follow \cite{kim2019}. An input is 4096 ground-removed, zero-centered points within a [\unit{-25}{m}, \unit{25}{m}] cubic region. }}. For both methods, we used the pre-trained weights the authors released. SegMap revealed hampered performance compared to SegMatch. This could be due to a limitation in generalization capability over unseen environments. PointNetVLAD showed comparable performance in the environment with little rotational and translational variation (\figref{fig:dl1}) but failed when the variation increased (\figref{fig:dl2} and \figref{fig:dl3}, respectively).}
\input{src/fig_eval_dl}
\subsection{Failure Cases}
\label{sec:summary}
We illustrate sample cases where the proposed method succeeded and failed to localize against the map. As shown in \figref{fig:succ}, the proposed method overcomes lateral and/or rotational discrepancies between map and query scans. The \ac{SCD} is successfully localized to the map even with many dynamic objects (e.g., cars).
However, when a tall and large object (e.g., a bus) appears very close to the sensor in both the query and map scans, the localization may fail, as in \figref{fig:fail2}. The other failure case was found when the vehicle was moving along a corridor-like place (\figref{fig:fail1}).
\input{src/fig_disscussion_failures.tex}
\subsection{Which SCD to Use?}
\label{sec:whichscd}
The final question to answer is which SCD to use and in what case. Based on the evaluation, A-CC generally yielded the best performance, even under composite variance (i.e., \textit{Rot + Lat}), as in the case of \texttt{Oxford} (\figref{fig:exp_oxford}) and \texttt{Pangyo} (\figref{fig:exp_pangyo}). \bl{Therefore, CC and A-CC are preferable when the target environment is an urban road. We recommend using PC or A-PC for more general environments and when the semi-metric localization capability is more critical. Because classic ICP is much more sensitive to the rotational component of the initialization, PC would be a better choice despite a small sacrifice in precision relative to CC (but still comparable performance). For patrolling robots and shuttles that repeat the same route with minimum variance, PC would exhibit more meaningful performance, as proved in the multi-session scenarios (\figref{fig:exp_multisession}).}
\subsection{Limitations and Potential Extension}
\label{sec:limitation}
\subsubsection{Invariance in One Direction}
\label{sec:limit}
The proposed method is natively invariant in one direction, and we chose the rotational and lateral directions as invariance axes. This limitation was overcome by a robust search scheme and augmentation.
\subsubsection{Leveraging Deep Learning for Scan Context Descriptor}
\label{sec:deep}
The proposed descriptor itself is in an ordered 2D format, and inputting it into a deep network is very straightforward. As reported in \cite{kim2019, xu2021disco}, the descriptor is learnable and provides meaningful performance using only a small network. This type of approach would be particularly beneficial when a GPU is available and the localization is almost a memorization problem.
\subsubsection{Application to Non-urban Environment}
\label{sec:indoor}
The proposed method is most powerful in an urban environment, where the descriptor can encode the nearby structural variance. The proposed descriptor has a single channel (height value) but is easily expandable. For example, \cite{wang2020intensity} considered the LiDAR intensity value as an additional channel to operate successfully in an indoor environment. Combining deep learning with the indoor application yielded meaningful performance under dense pedestrian traffic \cite{spoxel2020}. We think incorporating the point cloud distribution or semantic labels as additional channels would further enhance Scan Context beyond the urban environment, e.g., in indoor and natural environments (a sketch of such a multi-channel extension is given at the end of this subsection).
\subsubsection{Application to Other Range Sensors}
\label{sec:radarmisc}
The proposed descriptor is not limited to LiDAR sensors but is also applicable to general range sensors, including radars. As we reported in \cite{kim2020mulran}, the descriptor can potentially be extended to radar sensors.
\subsubsection{\bl{Generalizability over Measurement Variation}}
\label{sec:generalizability}
\bl{Future studies examining the sensor difference between the mapping and localization phases would also be meaningful. LiDAR measurement varies depending on the hardware choice and mounting configuration. Achieving generalizability over measurement variation would be needed.}
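As a concrete illustration of the channel extension suggested in \secref{sec:indoor}, per-bin statistics can simply be stacked over the same polar grid. The following Python sketch is a simplified, assumption-laden illustration (our own bin layout and parameter choices, not our released implementation); it stacks a max-height channel with a mean-intensity channel:
\begin{verbatim}
import numpy as np

def two_channel_descriptor(points, intensity,
                           num_rings=20, num_sectors=60, max_range=80.0):
    # Polar binning: ring index from range, sector index from azimuth.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r, theta = np.hypot(x, y), np.arctan2(y, x) + np.pi
    ring = np.minimum((r / max_range * num_rings).astype(int), num_rings - 1)
    sector = np.minimum((theta / (2 * np.pi) * num_sectors).astype(int),
                        num_sectors - 1)

    desc = np.zeros((num_rings, num_sectors, 2))  # ch0: max z, ch1: intensity
    count = np.zeros((num_rings, num_sectors))
    for i in range(len(points)):
        if r[i] >= max_range:
            continue                              # discard out-of-range points
        desc[ring[i], sector[i], 0] = max(desc[ring[i], sector[i], 0], z[i])
        desc[ring[i], sector[i], 1] += intensity[i]
        count[ring[i], sector[i]] += 1
    desc[..., 1] /= np.maximum(count, 1)          # mean intensity per bin
    return desc
\end{verbatim}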
\section{Experimental Evaluation}
\label{sec:exp}
Next, we validated our spatial descriptor and place recognition algorithm on various datasets. As addressed in \figref{fig:exampletax} and \tabref{tab:taxonomy}, coping with multiple variations of a place is crucial for loop detection and global localization. To clearly state the associated invariance, we color-coded the routes depending on the revisit types.
\subsection{Revisit with Small Variance}
\label{sec:easy}
Among the eight sequences in \tabref{tab:dataset}, \texttt{KITTI 00} and \texttt{MulRan KAIST 03} are relatively \textit{easy} sequences, containing small rotational/translational variance and few dynamic objects. For \texttt{KITTI 00} (\figref{fig:kitti00_pr}), M2DP showed the highest performance with respect to both precision and recall. SegMatch revealed considerably lower recall than the others; however, the distribution of the recognitions was sufficient to construct a globally consistent map. In particular, SegMatch successfully recognized the loop at the middle crossroad, where a composite change (both \textit{rotational and lateral}) existed (see \figref{fig:matchedkitti1}), while the other methods failed to do so. Both \ac{PC} and \ac{CC} showed similar performance because \texttt{KITTI 00} barely has any rotations or lane changes at the loops. The \ac{PC} matched pairs at 100\% precision are visualized in \figref{fig:matchedkitti2}. In \texttt{MulRan KAIST 03} (\figref{fig:kaist03_pr}), all of the methods successfully recognized the loops because this sequence covers a campus environment with almost no lane changes and few dynamic objects.
\subsection{Revisit with Rotational or Lateral Variance}
\label{sec:rot_or_lat}
Next, we examined sequences showing dominant variance in either the rotational or the lateral direction. The performance is summarized in \figref{fig:exp_rot_or_lat}.
\subsubsection{\texttt{KITTI 08}}
This sequence only contains reverse revisits, with half of them further including simultaneous lane changes. This appears as the concentrated distribution of revisit events in \figref{fig:kitti08_traj}. M2DP and \ac{CC} failed due to the severe rotational variance, while \ac{PC} showed substantially better precision. SegMatch yielded enough precision, but its recall was limited. For this sequence with rotational variance, we examined \ac{A-CC} to see the improvement from the augmentation.
\subsubsection{\texttt{MulRan Riverside 02}}
In this sequence, the vehicle revisits a place with multiple lane changes but in the same direction. This variance is clearly captured in \figref{fig:riverside02_traj}. In terms of precision-recall, \ac{CC} outperformed the other techniques by a large margin (\figref{fig:riverside02_pr}). The time-elevation graph in \figref{fig:matchedriver} shows true/false matches for the sequence. \ac{CC} outperformed the others in challenging regions with few false positives (red), which can potentially be treated using existing robust back-ends \cite{sunderhauf2012switchable, agarwal2013robust, rosinol2019kimera}. As with the improvement of A-CC in \texttt{KITTI 08}, the augmentation (\ac{A-PC}) improved \ac{PC} under lateral variance.
\subsection{Concurrent Rotational and Lateral Variance}
\label{sec:rotlatinv}
The more complex case includes concurrent rotational and lateral variance. We used \texttt{Oxford} and \texttt{Pangyo} to evaluate performance under composite variance.
We excluded SegMatch for the composite cases because of its high dependency on odometry. Enhancing other prior modules for place recognition is beyond the scope of this paper.
\subsubsection{\texttt{Oxford}}
\label{sec:rotlatinvoxford}
As shown in \figref{fig:exp_oxford}, the performances of the original \ac{PC} and \ac{CC} without augmentation are steeply limited at a certain recall, even with increased thresholds. Interestingly, the unrecognized recalls at this steep point matched the ratio of non-same-direction revisits (\unit{43}{\%}) shown in \tabref{tab:dataset}. Applying the associated augmentation to \ac{PC} and \ac{CC} showed improved precision at the higher recalls, with large margins for both descriptors. Overall, \ac{A-CC} generally showed higher precision than \ac{A-PC} did. In \figref{fig:matchedoxford}, true/false matches for each method are visualized. For a fair comparison, we pinned the recall at 50\% for all methods to measure each method's accuracy and effectiveness quantitatively. Note the importance of the distribution shown in \figref{fig:exp_oxford}. According to this plot, CC outperforms PC in terms of precision and maximum F1 score, except for the distribution score. This indicates that the increased precision of CC is concentrated in easy regions, while PC can detect difficult loops that may critically contribute to SLAM performance (see \figref{fig:matchedoxford_pc}). However, the restricted performance of CC was alleviated by A-CC, as seen in the improved distribution score in \figref{fig:oxfordjan11_dr}. This improvement is also depicted in \figref{fig:matchedoxford_acc}, in which A-CC detects well-distributed loop-closures.
\subsubsection{\texttt{NAVER LABS Pangyo}}
\label{sec:latinvpangyo}
This \texttt{Pangyo} sequence includes sporadic lane changes during revisits, accompanied by rotational change. This composite variance (both \textit{rotational and lateral}) is inevitable in an urban environment when the reverse route necessarily involves a lane change. The \texttt{Pangyo} sequence encompasses abundant types of variance, as can be seen in \figref{fig:exp_pangyo}. Overall, augmentation yielded substantial improvement when the revisit underwent composite variance. A-PC showed the best performance for rotational change (\figref{fig:pangyo_dr3}) and M2DP was meaningful for lateral variance. However, under concurrent rotational and lateral variance, A-CC proved its validity over the other methods.
\input{src/fig_exp_multisession.tex}
\input{src/fig_eval_match_sejong02to01recall50.tex}
\subsection{Multi-session Capability}
\label{sec:evalmultisession}
\input{src/fig_exp_semiloc.tex}
So far, we have investigated revisits within a single session. Here, we consider place recognition in multi-session scenarios toward long-term autonomy. To validate our methods in multi-session scenarios, we chose two pairs of sequences with sufficient temporal differences. The first pair was \texttt{Oxford 2019-01-15-13-06-37} to \texttt{Oxford 2019-01-11-13-24-51}. We used \texttt{Oxford 2019-01-11-13-24-51} as a map and tested the loop-closure performance of \texttt{Oxford 2019-01-15-13-06-37} as a query sequence. The other pair used for testing multi-session loop-closure showed a larger temporal gap of two months. We chose \texttt{Sejong 01} in the \texttt{MulRan} dataset as a map, using \texttt{Sejong 02} as a query. As can be seen in \figref{fig:exp_multisession}, these revisits mostly included lateral variance but with a temporal gap.
For the \texttt{Oxford} pair, all of the methods successfully detected loops. The \texttt{Sejong} pair was more challenging because the lateral change included multiple lane changes. The loop-closure results for the \texttt{Sejong} pair are further visualized in \figref{fig:matchedsejong}. M2DP seemed to show meaningful performance but included wrong loop-closures. CC showed the best performance for the multi-session scenarios. Obvious improvements could be made via augmentation, although augmentation was excluded from the multi-session scenarios.
\subsection{Metric Localization Evaluation and Quality Assessment}
\label{sec:icp}
Together with the retrieved place, the proposed method is capable of estimating the relative 1D pose between the query and map places. This is important when the topological place retrieval is combined with metric localization, because this initial estimate can be exploited in further metric refinement. From the aligning key registration, we estimate a 1D relative pose (i.e., rotation for PC and lateral displacement for CC). Using the ground truth pose provided in the \texttt{Pangyo} sequence, we plot the estimated 1D relative pose against the true relative pose from the ground truth. As can be seen in \figref{fig:semilocrot1} and \figref{fig:semiloclat1}, the estimation yielded meaningful relative pose inference, with average deviations of \unit{1.03}{\degree} for A-PC and \unit{0.84}{m} for A-CC. We can further examine the quality of this metric localization using the full descriptor similarity score. In the proposed method, we utilized this similarity score as a second barometer to exclude retrievals with a large SCD distance (i.e., a small similarity). To assess the metric evaluation quality, we present a scatter plot between the RMSE of the relative estimation from ICP and the SCD distance in \figref{fig:semiloc}.
\input{src/tab_evo.tex}
\label{sec:slam}
\subsection{External Module Dependence and SLAM Integration}
Being lightweight and independent of external modules is desirable in a global localizer. We aimed to develop a stand-alone module that does not require prior information such as odometry. During the evaluation, we found that SegMatch's place recognition performance is affected by the odometry quality. We also empirically discovered that SegMatch hardly made recalls in harsh environments such as the \texttt{Riverside 02 (MulRan)} sequence, where good frame-to-frame odometry is barely obtainable due to the many dynamic objects. In the previous evaluations, although we fed the ground truth as the odometry to ensure its best performance, the performance was restricted in less structured and repetitive environments. The proposed implementation is lightweight, provided as a single C++ source and header file pair. Thus, ours is easy to combine with any keyframe-based pose-graph \ac{SLAM} system, because the atomic element of Scan Context-based place recognition is a single keyframe measurement. Alongside our open-source place recognition module\footnote{https://github.com/gisbi-kim/scancontext}, we also made a real-time LiDAR SLAM system publicly available. It is written in C\texttt{++} and named SC-LeGO-LOAM\footnote{https://github.com/gisbi-kim/SC-LeGO-LOAM}, integrated with LeGO-LOAM \cite{shan2018lego}. \bl{As in \tabref{tab:evo}, the Scan Context-based loop detection and pose-graph optimization with iSAM2 \cite{kaess2012isam2} successfully alleviated the drift of the odometry trajectory.
For a detailed demonstration, we refer to the attached multimedia file.} \subsection{Computational Cost} \label{sec:timecost} The proposed place descriptor \bl{generation} and recognition modules are both fast. The per-module computational costs are visualized in \figref{fig:timecost} for two sequences. Only \ac{PC}'s computation costs are reported in \figref{fig:timecost} because the computational costs for PC and CC are almost the same under a similar resolution; only the coordinate selections differ. These timings were measured while running the Scan Context-integrated real-time LiDAR \ac{SLAM} (\secref{sec:slam}) on an Intel i9-9900 CPU (3.10GHz) with 64GB RAM. As can be seen in \figref{fig:timecost2}, the mean computational time is less than \unit{10}{ms}. The most time-consuming task is the k-d tree reconstruction, which is performed periodically in batches. Note that the graph plots a conservative case in which we repeatedly rebuild the tree every \unit{10}{secs}. This interval could be elongated, depending on the application, to reduce the total cost, and rebuilding is not even required in the multi-session scenario. The mean execution time is even shorter on \texttt{Pangyo} despite its large scale because \texttt{Pangyo} used a 32-ray \ac{LiDAR} with fewer points than \texttt{KITTI 00}. This also indicates that the per-query computational complexity is effectively $\mathcal{O}(1)$, although the periodic batch tree rebuilding scales linearly with the map size, $\mathcal{O}(N)$, where $N$ is the number of nodes in the map. \input{src/fig_timecost.tex} \bl{ The time cost comparisons with the other methods are given in \tabref{tab:times}. The timings for ours and M2DP were measured using Matlab, while SegMatch's were copied from \cite{dube2017segmatch}. SegMatch spent most of its time on segmentation. M2DP was the most lightweight. A-PC only requires extra description time during the augmentation phase, as no extra cost for managing retrieval keys is incurred. Despite requiring a GPU (GTX 1080 Ti), PointNetVLAD (\secref{sec:dl}) was more expensive than ours. The retrievals were fast for M2DP and PointNetVLAD because comparing fixed-length vectors by Euclidean distance is very lightweight. } \input{src/tab_times.tex} \section{Dataset and Evaluation Criteria} \label{sec:expsetup} For the evaluation, we chose trajectories that cover broad revisit types, including rotational and lateral changes. We describe the datasets and evaluation criteria below. \subsection{Datasets} \label{sec:dataset} In total, eight sequences were selected from four publicly available datasets covering diverse environments: the \texttt{KITTI Odometry} \cite{geiger2012we}, \texttt{MulRan} \cite{kim2020mulran}, \texttt{Oxford Radar RobotCar} \cite{RadarRobotCarDatasetICRA2020}, and \texttt{NAVER LABS}\footnote{https://hdmap.naverlabs.com/ and https://challenge.naverlabs.com/} datasets. The detailed characteristics of each sequence and environment are provided in the following subsections. The trajectories overlaid on the aerial maps, as shown in \figref{fig:datasetviz}, illustrate the trajectory shape, scale, and surrounding environments (excluding the well-known \texttt{KITTI} sequences). The details of the four datasets are summarized in \tabref{tab:dataset}. \subsubsection{KITTI} {KITTI Odometry}\footnote{http://www.cvlibs.net/datasets/kitti/eval\_odometry.php} \cite{geiger2012we} is the most widely used dataset for LiDAR place recognition \cite{he2016m2dp, dube2017segmatch, yin2018locnet, oreos19iros, Chen2019OverlapNetLC}.
This dataset provides 64-ray LiDAR scans (Velodyne HDL-64E). We selected two sequences, \texttt{00} and \texttt{08}, with a sufficient number of loops. Note that sequence \texttt{08} is only composed of reverse loops. \subsubsection{MulRan} The Multimodal Range Dataset (MulRan) \cite{kim2020mulran} was specifically designed to support place recognition evaluation and contains a large number of loop events. This dataset provides 64-ray LiDAR scans (Ouster OS1-64) in 12 sequences covering environments ranging from a campus to a planned city. We chose three sequences: \texttt{KAIST}, \texttt{Riverside}, and \texttt{Sejong}. \texttt{KAIST 03} is a campus environment with few dynamic objects and multiple well-distributed buildings. \texttt{Riverside 02} involves travel on roads along a riverside. This sequence includes few surrounding structures and many perceptually similar unstructured objects such as roadside trees, which are frequently repeated throughout the sequence. More critically, this sequence has multiple lane changes at the revisit phase (the blue parts in \figref{fig:riverside02_traj}), which enable us to quantitatively assess the methods' robustness under lateral changes. The third environment in MulRan, the \texttt{Sejong} sequence, encompasses the long circular route of a master-planned city called Sejong \cite{alicha-2014}. Being a planned city, its environment reveals slowly varying structural changes even within a relatively short period of time. We chose \texttt{Sejong 01} and \texttt{Sejong 02} and examined the multi-session loop-closure capability and the robustness over a temporal gap (between June 2019 and August 2019). \subsubsection{Oxford Radar RobotCar} The \texttt{Oxford Radar RobotCar} \cite{RadarRobotCarDatasetICRA2020} dataset, which we simply call \texttt{Oxford}, is a radar extension of the Oxford RobotCar dataset \cite{RobotCarDatasetIJRR}. This extension provides range data from an \ac{FMCW} radar and two 32-ray 3D LiDARs (Velodyne HDL-32E) mounted at the left and right sides of the radar. For each place, we constructed a single point cloud by concatenating the scans from the left and right LiDARs (their center defines the new sensor coordinate frame) and used this newly generated scan for the evaluation. The sites of \texttt{Oxford} mostly have at most two lanes, so heavy lateral displacement is not expected. Instead, the sequences contained reverse revisits occurring simultaneously with small lane changes (i.e., the red places in \figref{fig:oxfordjan11_traj}). This dataset enabled us to evaluate the robustness to concurrent rotation-and-lateral changes. Among the repeatedly recorded sequences over the same site, we selected two sequences (\texttt{2019-01-11-13-24-51} and \texttt{2019-01-15-13-06-37}) whose INS and GPS signals were secured over the entire trajectory. The sequence \texttt{2019-01-11-13-24-51} was used for an intra-session place recognition validation, as shown in \figref{fig:exp_oxford}. The selected sequences were also used to validate the inter-session place recognition performance, which is named \texttt{2019-01-15-13-06-37 to 2019-01-11-13-24-51} and is visualized in \figref{fig:oxfordmulti_traj}. We can see that all global relocalizations (i.e., revisits) arose in the same direction. \subsubsection{NAVER LABS} The last evaluation sequence is a long single trajectory through highly urbanized environments, named \texttt{Pangyo}, from the \texttt{NAVER LABS} dataset\footnote{https://hdmap.naverlabs.com/ and https://challenge.naverlabs.com/}.
The long \unit{31}{km} sequence includes tall buildings, wide roads (the magenta boxes in \figref{fig:datasetviz}), and multiple revisits per place. More than half of the same-direction revisits occurred in different lanes, accompanied by rotational changes. We used \texttt{Pangyo} to validate a method's comprehensive performance and scalability. \subsection{Correctness Criteria} \label{sec:criteria} The correctness measure for a recognized place strongly depends on the application and the target environment (e.g., indoors or outdoors). In this evaluation, we aimed to include changes of up to three lanes (approximately \unit{8}{m}), which frequently occur in complex urban sites. By doing so, the robot recognizes a place even when the revisit occurs at a laterally separated location. Second, in \ac{SLAM} applications, coarse global loop detection is typically followed by a pose regression module, which generates a metric constraint between the query and the map. If the loop candidate is detected too broadly (e.g., \unit{25}{m} in \cite{suaftescu2020kidnapped}), then the accompanying fine localization module may fail. Considering these two aspects, we count a detected place as correct if the query place and the detected loop candidate place are less than \unit{8}{m} apart. We prepared measurements sampled equidistantly at 1$-$\unit{1.5}{m} intervals to avoid redundant frames during stop sections and to let each place contribute equally. The numbers of nodes for each sequence used for the evaluation are reported in \tabref{tab:dataset}. \subsection{Evaluation Metrics} \label{sec:evalmetric} \input{src/fig_exp_toy.tex} \subsubsection{Precision-Recall Curve} We used the precision-recall curve as our main evaluation metric \cite{lowry2015visual}. As argued in \cite{lowry2015visual}, for a place recognition system, increasing the number of potential matches is important, even if a few false predictions occur \cite{sunderhauf2012switchable}. We also examined the maximum F1 score \cite{schutze2008introduction}, the harmonic mean of precision and recall, as an additional evaluation metric. \subsubsection{Recall Distribution} We would like to note that the precision-recall curve may not fully reveal the performance toward loop-closure in a \ac{SLAM} framework. The spatial and temporal distributions of loop-closures are essential for \ac{SLAM}, while the precision-recall curve is limited in measuring this distribution. \textit{Not all recalls should be credited equally} from the point of view of \ac{SLAM} loop-closure. To value more distributed loop detections, we formulated the true revisits as the reference loop distribution and measured the \ac{KL} divergence against it. As illustrated in \figref{fig:exp_toy_gt}, we constructed a histogram of loop-closure events with respect to the translational and rotational variance between a query pose and its nearest pose in the map. The sample revisit events collected from \texttt{Oxford 2019-01-15-13-06-37} contain two major groups. In this toy example, we simulated three algorithms showing different recall distributions and measured the \ac{KL} divergence with respect to the ground-truth recall distribution. In \figref{fig:exp_toy_cases}, few loop-closures are found from group 2 in the leftmost case. The other two showed better-distributed loop-closure detections with respect to the internal factor variation, providing spatially unbiased localization performance. Even with fewer detected revisits, the middle case yielded a better distribution, as indicated by its lower KL-D value.
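For concreteness, the following minimal sketch (our illustration, not part of the released implementation) computes this KL-D between the ground-truth revisit histogram and a method's recall histogram; the smoothing constant and the function name are assumptions of the sketch, introduced only to keep the divergence finite for empty bins.
\begin{verbatim}
// Illustrative sketch: KL divergence D_KL(P_gt || P_detected) between the
// ground-truth revisit histogram and a method's recall histogram, both
// binned over the translational/rotational variance described above.
#include <cassert>
#include <cmath>
#include <vector>

double RecallKLD(const std::vector<double>& gt_hist,
                 const std::vector<double>& det_hist) {
  assert(gt_hist.size() == det_hist.size());
  const double kEps = 1e-9;  // smoothing for empty bins (assumption)
  double gt_sum = 0.0, det_sum = 0.0;
  for (std::size_t i = 0; i < gt_hist.size(); ++i) {
    gt_sum += gt_hist[i];
    det_sum += det_hist[i] + kEps;
  }
  double kld = 0.0;  // lower is better: detections follow the GT spread
  for (std::size_t i = 0; i < gt_hist.size(); ++i) {
    const double p = gt_hist[i] / gt_sum;
    if (p <= 0.0) continue;  // 0 * log(0 / q) = 0 by convention
    const double q = (det_hist[i] + kEps) / det_sum;
    kld += p * std::log(p / q);
  }
  return kld;
}
\end{verbatim}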
During the evaluation, we show arrows to indicate that higher precision ($\uparrow$), a higher F1 score ($\uparrow$), and lower KL-D ($\downarrow$) imply better performance. Potentially, the Wasserstein distance (a.k.a. the earth mover's distance) or the Jensen–Shannon divergence could serve as the measure, as also discussed in detail in \cite{wgan}. However, we chose \ac{KL}-D because we need to compare the relative distances between methods while having the GT distribution as the reference; measuring relative information is favored over symmetry here. Thus, we used the ground-truth loop-closures as the reference distribution and measured the relative entropy against this reference. \input{src/fig_exp_easy.tex} \subsection{Comparison Targets} We compared the proposed methods against two other methods: \textit{M2DP} and \textit{SegMatch}. All of the comparison targets are agnostic to the sensor type (e.g., the number of rays) and run on a CPU. \subsubsection{SCD} We present the performance of \ac{PC} \cite{kim2018scan}, \ac{CC}, \ac{A-PC}, and \ac{A-CC}. For the proposed methods, we only retrieved a single candidate from the k-d tree ($k=1$). A point cloud downsampled with a $\unit{0.5}{m}^3$ voxel grid is used to make an \ac{SCD} (\tabref{tab:impl}). The evaluation curves were acquired by changing the threshold on the \ac{SCD} distance. \subsubsection{M2DP} Like our methods, M2DP \cite{he2016m2dp} only requires a point cloud from a single scan as input. We followed the code and the parameters provided\footnote{https://github.com/LiHeUA/M2DP}, with one difference. We empirically found that applying \unit{0.1}{m} cubic voxel downsampling a priori boosts M2DP's performance, and we adopted this modification to secure its best performance. The query descriptor is compared to all of the map descriptors in terms of the Euclidean distance, which was used as the threshold. \subsubsection{SegMatch} Among the three options in SegMatch \cite{dube2017segmatch}, we used the eigenvalue-based segment descriptor, which is the same as the authors' configuration designed for the \texttt{KITTI} dataset. We excluded the learning-based version, SegMap \cite{dube2020segmap}, because our method works on a CPU and we sought a fair comparison. The evaluation curves for SegMatch were acquired by changing the segment feature distance threshold. Unlike the other global localization methods (ours and M2DP), SegMatch requires odometry information. During the evaluation, we leveraged the ground truth to provide odometry. As will be seen, despite the exploitation of highly accurate odometry, SegMatch failed to overcome severe variance, while our method reliably localized without requiring any geometric prior. Not being a global descriptor as M2DP and SCD are, SegMatch only had a short range in its PR and DR curves. This is because the parameters in SegMatch are tuned for local segmentation and do not substantially affect recall. \input{src/fig_eval_match_kitti.tex} \section{Introduction} \label{sec:intro} Recognizing a previously visited place is important for various robot missions (e.g., loop detection in \ac{SLAM} \cite{kim2018scan}, global localization for a kidnapped robot \cite{kim2019}, or multi-robot mapping \cite{saeedi2016multiple}). Describing a place with a set of compact representations has been tackled in depth within the computer vision and robotics communities, yielding many state-of-the-art visual place recognition methods \cite{cummins2011appearance, milford2012seqslam,lowry2015visual,galvez2012bags}.
In contrast to the flourishing studies on visual place recognition, studies on range sensors are still missing a solid solution to this global localization problem. Recent studies have reported \cite{kim2018scan, kim2019, cieslewski2016point, kim2018stereo, mo2020place, oertel2020augmenting} that structural information could be more effective than appearance, particularly in outdoor environments. \bl{These studies attempted to overcome the major bottlenecks resulting from unstructured, unordered, and sparse range sensor data, which make the input data harder to consume than pixelated image data. Existing methods have focused on compactly summarizing a place, but they have rarely achieved invariances in structural place recognition.} The preliminary version of this paper \cite{kim2018scan} tried to establish this compact representation by capturing the highest structural points \bl{when the level of roll-pitch disturbance is not severe (e.g., under \unit{10}{\degree}), as for a wheeled robot or a slowly walking hand-held system}. This strategy allowed us to achieve robustness to underlying structural variance (e.g., dynamic objects and seasonal changes) in incoming \ac{LiDAR} measurements. Although our previous Scan Context showed meaningful performance, the algorithm failed to achieve invariance in the lateral direction and relied on an inefficient brute-force search. Overcoming these limitations of \cite{kim2018scan}, we complete the algorithm to include both rotational and lateral \bl{robustness}, thereby introducing a generic \textit{structural place recognition} method for a range sensor. Second, the modified algorithm replaces the previous brute-force search with one based on sub-descriptors, expediting the process by an order of magnitude. \bl{In summary, our new contributions are:} \begin{itemize} \item \bl{\textbf{Robustness to Lateral/Rotational Changes: }} Missing lateral invariance may be a critical issue in an urban environment where lane-level change is inevitable. To resolve this limitation, we generalized the previous descriptor \bl{to include both lateral and rotational robustness simultaneously}. \bl{This is achieved via \textit{Scan Context augmentation} based on an urban road assumption.} \item \textbf{Semi-metric Localization: } Combining place retrieval and metric localization, our global place recognition method bridges the gap between topological and metric localization. The proposed method provides not only the retrieved map place index but also a 1-DOF (yaw or lateral) initial guess for metric refinement such as \ac{ICP}. \item \textbf{Lightweight and Module Independence: } As a global localizer, the proposed method does not require prior knowledge or any geometric constraints (e.g., odometry). The implementation is lightweight, provided as a single C++ source and header pair, and readily integrable into existing \ac{SLAM} frameworks. \item \textbf{Real-time Performance on CPU: } By introducing compact summarizing sub-descriptors, \textit{keys}, we achieved a substantial cost reduction. The \textit{retrieval key}-based tree search eliminates the naive pixel-wise comparison and is followed by \textit{aligning key}-based pre-alignment. Our method runs in real time, supporting up to \unit{100}{Hz} (e.g., an average of \unit{7.4}{ms} on \texttt{KITTI 00} \cite{geiger2012we}), without requiring a GPU. \item \textbf{Extensive Validation: } We evaluate the proposed method \bl{across diverse and challenging test scenarios} to validate both in-session and multi-session operation.
We note that the existing precision-recall curve may not fully capture the loop-closure performance for \ac{SLAM} research, as it misses an evaluation of the match distribution. We propose to use DR (distribution-recall) curves to measure not only the recalls but also their diversity for meaningful loop-closure. \end{itemize} \section{Related Works} \label{sec:related} In this section, we provide a literature review on place recognition in both its visual and structural aspects. We briefly review recent place recognition works, focusing on the sensor modality as well as global and local descriptions. \subsection{Place Recognition for Visual Sensing} \label{sec:litvis} For visual recognition, both the local and global aspects of place summarization were examined. The local description-based methods relied on detecting and describing handcrafted local keypoints (i.e., a small patch) \cite{bay2008speeded, rublee2011orb}. Using these local descriptors, Bayesian inference \cite{cummins2011appearance} or a bag-of-words vocabulary tree \cite{galvez2012bags} was applied for place recognition. \citeauthor{cadena2012robust} \cite{cadena2012robust} proposed fusing the bag-of-words and a \ac{CRF} matching of 3D geometry for a stereo camera system. Compared to local descriptors, global descriptors are more compact in representation and robust to local noise. The entire image is encapsulated by a single condensed representation (e.g., a fixed-size vector \cite{sunderhauf2015performance, arandjelovic2016netvlad} or a downsized image \cite{milford2012seqslam}) without maintaining a set of local keypoint descriptors. As with local descriptors, recent studies on global descriptors enhanced performance by exploiting structural information. \citeauthor{oertel2020augmenting} \cite{oertel2020augmenting} reported that the use of structural cues when making a global descriptor yields higher performance than appearance-only methods. \citeauthor{mo2020place} \cite{mo2020place} fed reconstructed 3D sparse points into a \ac{LiDAR} descriptor pipeline, which outperformed appearance-only global descriptors. \subsection{Place Recognition for Range Sensing} \label{sec:litlidar} \subsubsection{LiDAR} The early phase of LiDAR-based place recognition focused on 2D range data \cite{tipaldi2010flirt, tipaldi2013geometrical}. \bl{\citeauthor{olson2009real} proposed correlative scan matching-based loop closure detection for 2D LiDAR \cite{olson2009real, olson2009recognizing}}. As 3D LiDAR appeared, 3D point cloud summarization drew attention. The initial 3D LiDAR place recognition methods \cite{steder2010robust, rusu2010fast, bosse2013place} used local keypoint-based approaches, mirroring the early history of the visual domain described above. A point cloud from a 3D LiDAR poses challenges of a different kind. First, the data is unstructured, lacking a constant and consistent grid density. Second, the data sparsity grows as the range increases, varying the target object density depending on the sensing range. These sensor characteristics make local descriptions unstable; thus, a coarser summarization unit that is robust to local noise and inconsistent point density is preferred. M2DP \cite{he2016m2dp} compressed a single LiDAR scan into a global descriptor (i.e., a 192D vector) that is robust to noisy input. PointNetVLAD \cite{angelina2018pointnetvlad} leveraged a learning-based approach to summarize a place into a single vector representation.
However, despite the performance and robustness of global descriptors, one drawback is that they do not secure invariance, in contrast to local-based methods. As reported in \cite{kim2019}, these global descriptors were less invariant to transformations (e.g., heading changes) because transformed local point coordinates may produce a different embedding and cause failure in place recognition (\figref{fig:exampletax}). \bl{Recently, similar to our approach, a semi-handcrafted heading-invariant feature learning approach named LocNet \cite{yin20193d} was proposed. However, compared to LocNet}, achieving \bl{not only} rotational \bl{but also} translational invariance is required while maintaining the performance of the current state-of-the-art global point cloud descriptors. In this line of study, local characteristics such as segments or heights were examined. For example, \citeauthor{dube2020segmap} proposed a segment-based global localization method using a handcrafted segment descriptor \cite{dube2017segmatch} and learned segment embeddings \cite{dube2020segmap}. They recovered the relative transformation between two matched frames through geometric consistency checks, even under severe viewpoint changes such as reverse revisits. Our preliminary work, \textit{Scan Context} \cite{kim2018scan}, proposed making a 2D descriptor based on the height of the surrounding structures. This descriptor obtained rotational invariance and yielded the relative yaw as a by-product. Stemming from this work, some authors \cite{oreos19iros, Chen2019OverlapNetLC} tried to simultaneously estimate the relative yaw between two scans and their similarity. Learning-based approaches included semi-learned \cite{yin2018locnet,kim2019} and fully learning-based \cite{oreos19iros, Chen2019OverlapNetLC} methods. \subsubsection{Radar} \label{sec:litradar} More recently, long-range perceptible \ac{FMCW} radar has been highlighted in robotics applications \cite{RadarRobotCarDatasetICRA2020, kim2020mulran}. Radars provide far longer range and greater robustness compared to cameras and LiDARs; however, radar place recognition methods are still not mature. Exploiting the image-like format of radar data, some studies leveraged computer vision techniques to describe a radar image at the local \cite{barnes2020under} and global \cite{suaftescu2020kidnapped, gadd2020look} description levels. However, the projection model of the radar image inevitably eliminates height information by generating a top-down view. To handle this elevation loss, \citeauthor{hong2020radarslam} \cite{hong2020radarslam} used the LiDAR descriptor M2DP \cite{he2016m2dp}, but with the intensity of a pixel in lieu of a point's height. Similarly, \cite{kim2020mulran} showed the feasibility of Scan Context by replacing the height with the intensity. \subsection{Augmentation of the Scan Context Descriptor} \label{sec:scdaug} Because we construct a descriptor from the \ac{BEV}, the dominant motion complexity is reduced to 3-\ac{DOF}, which is then summarized in a 2D descriptor. This indicates that both descriptors are deficient in a certain \ac{DOF}. For example, \ac{PC} is written in polar coordinates and loses the translational component; \ac{CC} is described in Cartesian coordinates and lacks the rotational component. This deficiency is critical when a revisit occurs with combined motion. A typical example would be revisiting via a reversed route from the opposite lane.
To overcome this limitation and impose robustness along the fixed axis, we created virtual \ac{SCD}s to augment a place, thereby achieving pseudo-invariance along the deficient direction. \subsubsection{Augmented PC (A-PC)} We aimed to cover lane changes (\unit{2}{m} spaced lanes) and a reversed route (\unit{180}{\degree} heading change). \bl{During this augmentation process}, a \ac{PC} is synthetically duplicated by assuming virtual lateral displacements. Our particular interest is lane changes, so we synthetically considered two virtual vehicle positions that are laterally \unit{2}{m} apart. Two additional \ac{A-PC}s are generated with respect to these virtual vehicle poses from root-shifted point clouds. This \textit{root shifting} is done in the same way as in our previous work \cite{kim2018scan}. \subsubsection{Augmented CC (A-CC)} For \ac{CC}, the augmentation is as simple as a double flip. The missing rotational component should mainly encompass reverse revisits, which correspond to a flip of the descriptor along both axes; hence, we flip the descriptor on both axes to create the \ac{A-CC}. Both the \ac{A-PC} and the \ac{A-CC} are illustrated in \figref{fig:scdaug}. Each augmented descriptor is assigned the same place index as its original. For matching, we empirically found that maintaining a single k-d tree containing both the original and augmented keys outperforms using multiple k-d trees. \input{src/tab_impl} \input{src/fig_dataset_vis.tex} \input{src/tab_dataset} \subsection{Computational Complexity} \label{sec:complexity} Among all of the introduced modules, the neighbor search is the most computationally demanding. Tree construction consumes resources periodically, and the add-on augmentation step requires additional computation proportional to the number of augmentations. As will be shown in \secref{sec:timecost}, the costs of augmentation and periodic tree maintenance are negligible. Even the main computational bottleneck, the retrieval module, is extremely lightweight. Naive descriptor comparison, as described in \equref{eq:imgDist} and \equref{eq:minImgDist}, requires $\mathcal{O}(N_A \cdot N_R \cdot N_A)$ computation. This cost is substantially reduced by pre-alignment, as described in \equref{eq:prealign1} and \equref{eq:prealign2}, which eliminates the linear search through $N_A$ elements. The reduced computational cost becomes $\mathcal{O}(N_A \cdot N_R \cdot 1)$. Approximating $N_A \sim N_R \sim N$, this can be regarded as a reduction from $\mathcal{O}(N^3)$ to $\mathcal{O}(N^2)$, with $N$ the descriptor dimension. For example, the \ac{CC} in \figref{fig:scdexample} is a square matrix with $N_A = N_R = N$. \subsection{Implementation Details} \label{sec:impl_detail} The parameters used are listed in \tabref{tab:impl}. Here, the ROI and grid size determine the resolution. For example, $20\times60$ for PC indicates $80/20=\unit{4}{m}$ and $360/60 = 6^\circ$ resolution. Similarly, $40\times40$ for CC indicates $200/40=\unit{5}{m}$ and $80/40 =\unit{2}{m}$ resolution for the \ac{R-axis} and the \ac{A-axis}, respectively. The discussion on parameter selection will be given in \secref{sec:discussion}. \section{Scan Context Descriptor (SCD)} \label{sec:sc2} In this section, we describe a novel spatial descriptor named the \textit{\ac{SCD}}. The pipeline begins with partitioning the raw measurements and projecting them into discretized bins using the \ac{BEV}. When dividing into the \ac{BEV} bins, two types of perpendicular bases (polar and Cartesian) are considered.
After partitioning and coordinate selection, each subset of the measurements is encoded into its associated discretized bin using the bin encoding function. \bl{As we present, the invariance of the proposed place recognition module arises from the bin encoding function and the distance function.} \subsection{Motivation} \label{sec:scdmotiv} Our descriptor and search engine were strongly motivated by the revisit patterns in urban environments. We found typical patterns due to the nonholonomic vehicle motion following traffic rules (e.g., lane-keeping). The dominant motion is locally two-dimensional and occurs along at most two directions that are likely to be disjoint. These typical patterns motivated the choice of two coordinate frames, polar and Cartesian, and the associated matching algorithm. \subsection{Descriptor Axes and Resolution} \label{sec:roipart} \bl{We assume that the input is a single scan of a 3D LiDAR.} The first phase of generating the descriptor is to partition a \bl{downsampled point cloud} within a \ac{ROI}. The upper bound of the \ac{ROI} and the partitioning resolution decide the shape of an \ac{SCD}. Given the partitioned raw measurements, we project them onto a 2D descriptor space; \bl{namely, the approach first (i) projects each 3D point to a 2D point, (ii) parametrizes the 2D point in polar or Cartesian coordinates, and (iii) obtains a scalar value (details in \secref{sec:bef}) for each bin by discretizing the 2D space.} As shown in \figref{fig:pipeline}, we name the horizontal axis the \textit{\ac{A-axis}} and the vertical axis the \textit{\ac{R-axis}}. A change along the \ac{A-axis} corresponds to a column-wise shift; thus, pre-alignment along the \ac{A-axis} will allow us to infer a rough metric-level relative pose, overcoming changes in the associated direction. The choice of aligning/retrieval axes determines the type of \ac{SCD}, as either \ac{PC} or \ac{CC}. \subsubsection{Polar Coordinates} As introduced in our earlier work \cite{kim2018scan}, the \ac{PC} adopts polar coordinates, using the azimuth $\theta$ as the \ac{A-axis} and the radius $r$ as the \ac{R-axis}. Because the azimuth is on the \ac{A-axis}, the \ac{PC} is robust to rotational variance. \subsubsection{Cartesian Coordinates} The \ac{CC} leverages Cartesian coordinates and uses the lateral direction ($y$) as the \ac{A-axis}. The longitudinal direction (or travel direction, $x$) becomes the \ac{R-axis}. Naturally, the descriptor is invariant to translation in the lateral direction. \subsubsection{Descriptor Resolution} The resolution of the axes determines the resolution of the descriptor, which is set by the user parameters of the proposed method. The user parameters are denoted as \begin{equation} \label{eq:param} \small (\Delta_{R}, \Delta_{A}, [R_{min},R_{max}], [A_{min},A_{max}]), \end{equation} where the components indicate the resolution of the \ac{R-axis}, the resolution of the \ac{A-axis}, the range of the \ac{R-axis}, and the range of the \ac{A-axis}, respectively. Sample parameter sets and their \ac{SCD}s are given in \figref{fig:scdexample}. As will be discussed in \secref{sec:res}, coarse discretization implicitly reduces the influence of dynamic objects and noisy local structures, as well as the computational cost. \subsubsection{Independence of the Input Modality} \label{sec:inputdata} The partitioning is independent of the measurement's distribution or data type (e.g., a \ac{BEV} image, voxels, or 3D points).
Therefore, our descriptor is generic with regard to any range measurement. The descriptor representation covers not only 3D point clouds but also other range sensors such as radar \cite{kim2020mulran} by selecting a proper bin encoding function in \secref{sec:bef}. \input{src/fig_scd_viz.tex} \subsection{Bin Encoding Function} \label{sec:bef} We denote a single disjoint section partitioned by the aligning and retrieval axes as a \textit{bin}. A single bin includes a subset of a robot sensor measurement ($Z_{ij} \subseteq Z$), where $i$ and $j$ indicate the \ac{R-axis} and \ac{A-axis} indexes, respectively. The bin may be empty, $Z_{ij} = \emptyset$, when no range data falls into the bin, in which case we assign a value of $0$ to that bin. For each subset of the measurement $Z_{ij}$ for bin $(i,j)$, we assign a representative value using a \textit{bin encoding function} $\psi(\cdot)$. The bin encoding function should be able to encapsulate the subset of the raw data in order to make the descriptor discernible and robust to the \bl{nuisances} (\tabref{tab:taxonomy}). \begin{definition}{ } A \textit{bin encoding function}, $\psi : Z_{ij} \rightarrow \mathbb{R}$, is invariant to the internal factors and independent of sensor specifications. \label{req:bef} \end{definition} Following our previous work \cite{kim2018scan}, we propose assigning the maximum height of the 3D points within a bin. The intuition behind this selection stems from an urban planning concept called \textit{isovist} \cite{benedikt1979take, KIM201974}. In this concept, the maximally visible structure and the polygon shape of its visible volume decide the use of a place and make a place discernible. Focusing on the maximum height instead of the structural shape eliminates the sparsity variation caused by the sensing resolution, range, and object size. Notably, any other function that meets the above requirement could be used as the encoding function. In the example of an \ac{FMCW} radar \cite{kim2020mulran}, the raw radar intensity value was adopted. Some follow-up studies of our previous work \cite{kim2018scan} leveraged LiDAR intensity \cite{wang2020intensity}, interpolated intensity \cite{kim2020mulran}, and the height difference of 3D points \cite{mo2020place}. \bl{We note that heterogeneous LiDAR place recognition in situations where the mapper and the localizer differ (e.g., the LiDAR's mounting height varies) is beyond this paper's scope because it does not obey the above requirement.} \subsection{Scan Context Descriptor} \label{sec:scd} After the \ac{ROI} partitioning (\secref{sec:roipart}) and bin encoding (\secref{sec:bef}), each bin contains a representative feature summarizing the data within the bin (i.e., the maximum height for an \ac{SCD}). We accumulate these bin values into a matrix form to complete a 2D descriptor for a place; the rows and columns of the matrix correspond to the retrieval axis and the aligning axis. The resulting descriptor can be understood as the contour of the skyline of the surrounding structures. Depending on the coordinate selection, we name the resulting 2D descriptor the \textit{Polar Context (PC)} or the \textit{Cart Context (CC)}. \subsubsection{Polar Context (PC)} \label{sec:pc} When the polar-coordinate \ac{ROI} is used, we name the resulting \ac{SCD} a \textit{\ac{PC}}. The \ac{PC} is designed for rotation-invariant place recognition (e.g., a revisit in the reversed direction) because the rotational variation corresponds to column-wise shifts.
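As a concrete reference, the following minimal sketch (our illustration, not the released implementation) builds a \ac{PC} from a single scan under the parameters of \equref{eq:param}; the downsampling step and any sensor-mount height offsetting are omitted, and heights are assumed non-negative so that empty bins keep the value $0$.
\begin{verbatim}
// Illustrative Polar Context construction: each 3D point is projected to
// (radius, azimuth), discretized into a bin, and the bin keeps the maximum
// height seen so far (the bin encoding function psi of Sec. sec:bef).
#include <algorithm>
#include <cmath>
#include <vector>

struct Point { double x, y, z; };
const double kPi = 3.14159265358979323846;

std::vector<std::vector<double>> MakePolarContext(
    const std::vector<Point>& scan,
    int num_rings,     // N_R: R-axis (radial) resolution
    int num_sectors,   // N_A: A-axis (azimuthal) resolution
    double r_max) {    // upper bound of the ROI
  std::vector<std::vector<double>> pc(
      num_rings, std::vector<double>(num_sectors, 0.0));  // empty bins = 0
  for (const Point& p : scan) {
    const double r = std::hypot(p.x, p.y);
    if (r >= r_max) continue;  // outside the ROI
    const double theta = std::atan2(p.y, p.x) + kPi;  // in [0, 2*pi]
    const int ring = std::min(
        static_cast<int>(r / r_max * num_rings), num_rings - 1);
    const int sector = std::min(
        static_cast<int>(theta / (2.0 * kPi) * num_sectors), num_sectors - 1);
    pc[ring][sector] = std::max(pc[ring][sector], p.z);  // max height
  }
  return pc;
}
\end{verbatim}
A \ac{CC} follows the same pattern, with $(x, y)$ discretized directly instead of $(r, \theta)$.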
\subsubsection{Cart Context (CC)} \label{sec:cc} Similarly, using Cartesian \ac{ROI} partitioning yields an \ac{SCD} called a \textit{\ac{CC}}. In a \ac{CC}, lateral translation is reflected as column-wise shifts; thus, the \ac{CC} can handle lateral variation, including a revisit with lane changes. Each \ac{SCD} has its own invariance for tackling the internal factors. \ac{PC} and \ac{CC} allow only one dimension for the \ac{A-axis} and may be limited when rotation and translation occur simultaneously. To cope with this, we propose hallucinating the \ac{R-axis} to achieve robustness in both directions (\secref{sec:scdaug}). \subsection{Distance between \ac{SCD}s} \label{sec:dist} Next, we define the proximity between two places by the similarity score of the associated \ac{SCD}s. \subsubsection{Alignment Score} \label{sec:alignscore} As illustrated in \figref{fig:shiftcc}, if two \ac{SCD}s are acquired from the same place, then the two descriptors should contain consistent contents but may reveal a difference in column order. To measure similarity, therefore, we examine the sum of the column-wise co-occurrences using the cosine similarity between the two descriptors. This column-wise comparison is particularly effective against dynamic objects or partial noise. A cosine distance is used to compute the distance between two column vectors, $c_{Q}^{j}$ and $c_{M}^{j}$, at the same column index $j$. The distance between two descriptors is \begin{equation} d(\textbf{f}_{Q}, \textbf{f}_{M}) = \frac{1}{N_{\text{A}}} \sum_{j=1}^{N_{\text{A}}} \left( 1 - \frac{c_{Q}^{j} \cdot c_{M}^{j}} {\Vert c^{j}_{Q} \Vert \Vert c^{j}_{M} \Vert } \right). \label{eq:imgDist} \end{equation} The subscripts $Q$ and $M$ indicate the query and map places, where the descriptor's dimensions are $\textbf{f} \in \mathbb{R}^{N_R \times N_A}$. In addition, we divide the summation by the number of columns for normalization. \subsubsection{Naive Column Alignment} \label{sec:bestalign} However, the columns of the query \ac{SCD}, $\textbf{f}_{Q}$, may be shifted even at the same place (\figref{fig:shiftcc}). By simply shifting the column order of the query descriptor while $\textbf{f}_{M}$ is fixed, we can calculate the distances for all possible column-shifted versions of $\textbf{f}_{Q}$ and find the minimum distance. Then, the minimum distance of \equref{eq:imgDist} becomes our desired distance function $D(\cdot, \cdot)$ as \begin{eqnarray} \label{eq:minImgDist} D(\textbf{f}_{Q}, \textbf{f}_{M}) &=& \min_{n \in [N_{\text{A}}]} d(\textbf{f}_{Q, n}, \textbf{f}_{M}) \label{eqn:minImgDist1} \ , \\ \nonumber n^{*} &=& \argmin_{n \in [N_{\text{A}}]} d(\textbf{f}_{Q, n}, \textbf{f}_{M}) \label{eqn:minImgDist2} \ , \end{eqnarray} where $[N_{\text{A}}]$ indicates the set $\{1, 2, \ldots, N_{\text{A}}-1, N_{\text{A}}\}$ and $\textbf{f}_{Q, n}$ is an \ac{SCD} whose columns are shifted from the original by an amount $n$. The column-shift process aligns the rotational variance for a \ac{PC} and the lateral displacement for a \ac{CC}. \input{src/fig_scd_cc.tex} \subsection{Sub-descriptors} \label{sec:subdesc} The abovementioned naive comparison over the full 2D descriptor is computationally expensive (a minimal sketch of it is given below). To alleviate this cost, we introduce two sub-descriptors. From the full 2D \ac{SCD}, we extract 1D vectors by summarizing the descriptor along the row and column directions. Each sub-descriptor plays a major role in place recognition and semi-metric localization.
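The following minimal sketch (our illustration) spells out \equref{eq:imgDist} and \equref{eq:minImgDist}; the circular column shift shown matches the rotational case (\ac{PC}), while a bounded, non-circular shift would be the more faithful choice for the lateral case (\ac{CC}), and the handling of all-zero columns is an assumption of the sketch.
\begin{verbatim}
// Illustrative SCD distance: column-wise cosine distance (eq:imgDist),
// minimized over all column shifts n of the query (eq:minImgDist).
#include <cmath>
#include <limits>
#include <vector>

using Desc = std::vector<std::vector<double>>;  // N_R rows x N_A columns

double Distance(const Desc& q, const Desc& m, int shift) {
  const int nr = static_cast<int>(q.size());
  const int na = static_cast<int>(q[0].size());
  double sum = 0.0;
  for (int j = 0; j < na; ++j) {
    const int jq = (j + shift) % na;  // column-shifted query index
    double dot = 0.0, nq = 0.0, nm = 0.0;
    for (int i = 0; i < nr; ++i) {
      dot += q[i][jq] * m[i][j];
      nq += q[i][jq] * q[i][jq];
      nm += m[i][j] * m[i][j];
    }
    // Assumption: a pair involving an all-zero column contributes 1.
    sum += (nq > 0.0 && nm > 0.0) ? 1.0 - dot / std::sqrt(nq * nm) : 1.0;
  }
  return sum / na;  // normalized by the number of columns N_A
}

double MinAlignedDistance(const Desc& q, const Desc& m, int* n_star) {
  const int na = static_cast<int>(q[0].size());
  double best = std::numeric_limits<double>::max();
  for (int n = 0; n < na; ++n) {  // naive O(N_A) search over all shifts
    const double d = Distance(q, m, n);
    if (d < best) { best = d; *n_star = n; }
  }
  return best;  // n_star yields the 1-DOF semi-metric initial guess
}
\end{verbatim}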
\subsubsection{Retrieval key} \label{sec:retrievalkey} The first sub-descriptor introduced is the \textit{retrieval key}, $\textbf{v} \in \mathbb{R}^{N_R}$, a vector whose dimension is equal to the number of \ac{SCD} rows, $N_\text{R}$. Given any function $f_{R}(\cdot)$ that maps a row of an \ac{SCD} to a single real number, we \textit{squeeze} the column dimension of an \ac{SCD} by applying $f_{R}(\cdot)$ to each row of the \ac{SCD}. Additionally, the following condition is required. For a given row $r$ of the \ac{SCD}, \begin{definition}{ } A \textit{retrieval key function} $f_{R} : r \rightarrow \mathbb{R}$ is permutation invariant. \end{definition} With this requirement, we can create a sub-descriptor that is unaffected by the column order; this means we can produce a consistent sub-descriptor independent of the internal nuisance factors (e.g., rotation or lane changes). Practically, we used the $L1$ norm in our experiments, but any other function that maps a vector to a single real number and obeys the above requirement can be used. The $L0$ norm was used in our previous work \cite{kim2018scan}. \subsubsection{Aligning Key} \label{sec:alignkey} As with the retrieval key, we introduce the \textit{aligning key} $\textbf{w} \in \mathbb{R}^{N_A}$ as another sub-descriptor of the \ac{SCD}, which is a vector whose dimension is equal to the number of \ac{SCD} columns, $N_\text{A}$. Although no requirement is needed for the aligning key, we adopted the same $L1$ norm when summarizing a column.
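Both keys admit a one-line implementation per element; the sketch below (ours, using the $L1$ norm as above) makes the contrast explicit: reordering the columns of $\textbf{f}$ leaves $\textbf{v}$ unchanged, while the same reordering merely shifts $\textbf{w}$.
\begin{verbatim}
// Illustrative sub-descriptor extraction with the L1 norm.
#include <cmath>
#include <vector>

using Desc = std::vector<std::vector<double>>;  // N_R rows x N_A columns

// Retrieval key v in R^{N_R}: squeeze each row with a permutation-invariant
// function f_R (here the L1 norm), so v is unaffected by column shifts.
std::vector<double> RetrievalKey(const Desc& f) {
  std::vector<double> v(f.size(), 0.0);
  for (std::size_t i = 0; i < f.size(); ++i)
    for (double x : f[i]) v[i] += std::abs(x);
  return v;
}

// Aligning key w in R^{N_A}: squeeze each column; a column shift of the SCD
// appears as the same shift of w, enabling cheap pre-alignment.
std::vector<double> AligningKey(const Desc& f) {
  std::vector<double> w(f[0].size(), 0.0);
  for (const std::vector<double>& row : f)
    for (std::size_t j = 0; j < row.size(); ++j) w[j] += std::abs(row[j]);
  return w;
}
\end{verbatim}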
\section{Introduction} \noindent{\em Traditional to quantitative verification.} While traditional formal verification focused on Boolean properties of systems, such as ``every request is eventually granted'', recently significant attention has shifted to quantitative aspects such as expressing properties like ``the long-run average success rate of an operation is at least one half'' or ``the long-run average (or the maximal, or the accumulated) resource consumption is below a threshold.'' Quantitative properties are essential for performance-related properties and for resource-constrained systems, such as embedded systems. \smallskip\noindent{\em Overview.} The first natural way to express quantitative properties is to consider automata with counters. However, computational analysis of such models quickly leads to undecidability, and a classical way to limit expressiveness for decidability is to consider {\em monitor counters}, i.e., the counter values do not influence the control. The second approach is to consider automata with weights (or weighted automata). However, weighted automata have limited expressiveness, and they have been extended to nested weighted automata~\cite{nested} (nesting of weighted automata) for expressiveness. We establish that for a well-studied and wide class of quantitative functions, automata with monitor counters and nested weighted automata are equivalent, i.e., they represent a robust class of quantitative specifications. We study for the first time such quantitative automata under probabilistic semantics. Quite surprisingly, we show that several problems that are undecidable for the classical questions of emptiness and universality become decidable under the probabilistic semantics. We present a complete picture of decidability for nested weighted automata and automata with monitor counters under probabilistic semantics. \smallskip\noindent{\em Automata with monitor counters.} A natural extension of automata is automata with monitor counters, which are automata equipped with counters. At each transition, a counter can be started, terminated, or the value of the counter can be increased or decreased. However, the transitions do not depend on the counter values, and hence these are referred to as monitor counters. The values of the counters when they are terminated give rise to the sequence of weights. A value function aggregates the sequence into a single value. For example, for words over $\{a,\#\}$, such automata can express the maximal length of blocks of $a$'s that appear infinitely often. Automata with monitor counters are similar in spirit to the class of register automata of~\cite{DBLP:conf/lics/AlurDDRY13}, and we consider them over infinite words. \smallskip\noindent{\em Weighted automata.} Weighted automata extend finite automata in that every transition is assigned a rational number called a weight. Hence every run gives rise to a sequence of weights, which is aggregated into a single value by a value function. For non-deterministic weighted automata, the value of a word $w$ is the infimum of the values of all runs over~$w$. Weighted automata provide a natural and flexible framework for expressing quantitative\footnote{We use the term ``quantitative'' in a non-probabilistic sense, which assigns a quantitative value to each infinite run of a system, representing long-run average or maximal response time, or power consumption, or the like, rather than taking a probabilistic average over different runs.} properties~\cite{Chatterjee08quantitativelanguages}.
First, weighted automata were studied over finite words with weights from a semiring, and ring multiplication as the value function~\cite{Droste:2009:HWA:1667106}, and later extended to infinite words with limit averaging or supremum as the value function~\cite{Chatterjee08quantitativelanguages,DBLP:journals/corr/abs-1007-4018,Chatterjee:2009:AWA:1789494.1789497}. While weighted automata over semirings can express several quantitative properties~\cite{DBLP:journals/jalc/Mohri02}, they cannot express the long-run average properties that weighted automata with limit averaging can~\cite{Chatterjee08quantitativelanguages}. However, even weighted automata with limit averaging cannot express the following basic quantitative property (the example is from~\cite{nested}). \begin{example}\label{ex:intro} Consider infinite words over $\{r,g,i\}$, where $r$ represents requests, $g$ represents grants, and $i$ represents idle. A basic and interesting property is the average number of $i$'s between a request and the corresponding grant, which represents the long-run average response time of the system. \end{example} \smallskip\noindent{\em Nested weighted automata.} To enrich expressiveness, weighted automata were extended to \emph{nested weighted automata (NWA)}~\cite{nested}. A nested weighted automaton consists of a master automaton and a set of slave automata. The master automaton runs over infinite input words. At every transition the master can invoke a slave automaton that runs over a finite subword of the infinite word, starting at the position where the slave automaton is invoked. Each slave automaton terminates after a finite number of steps and returns a value to the master automaton. Each slave automaton is equipped with a value function for finite words, and the master automaton aggregates the returned values from slave automata using a value function for infinite words. For Boolean finite automata, nested automata are equivalent to their non-nested counterpart, whereas nested weighted automata are strictly more expressive than non-nested weighted automata~\cite{nested}; for example, nested weighted automata can express the long-run average response time property (see~\cite[Example~5]{nested}). It has been shown in~\cite{nested} that nested weighted automata provide a specification framework in which many basic quantitative properties that cannot be expressed by weighted automata can be expressed easily, and they provide a natural framework to study quantitative run-time verification. \smallskip\noindent{\em Classical questions.} The classical questions for automata are {\em emptiness} (resp., {\em universality}), which ask for the existence (resp., non-existence) of words that are accepted. Their natural extensions have been studied in the quantitative setting as well (such as for weighted automata, NWA, etc.)~\cite{Chatterjee08quantitativelanguages,nested}. \smallskip\noindent{\em Motivation for probabilistic questions.} One of the key reasons for quantitative specification is to express performance-related properties. While the classical emptiness and universality questions express the best/worst-case scenarios (such as the best/worst-case trace of a system for average response time), they cannot express the average-case average response time, where the average case represents a probability distribution over the traces.
Performance-related properties are of prime interest for probabilistic systems, and quite surprisingly, quantitative automata have not been studied in a probabilistic setting, which we consider in this work. \smallskip\noindent{\em Probabilistic questions.} Weighted automata and their extensions, nested weighted automata and automata with monitor counters, all define measurable functions from infinite words to real numbers. We consider probability distributions over infinite words, and as a finite representation of probability spaces we consider the classical model of finite-state Markov chains. Moreover, Markov chains are a canonical model for probabilistic systems~\cite{PRISM,BaierBook}. Given a measurable function (or equivalently a random variable), the classical quantities w.r.t. a probability distribution are: (a)~the expected value; and (b)~the cumulative distribution below a threshold. We consider the computation of the above quantities when the function is given by a nested weighted automaton or an automaton with monitor counters, and the probability distribution is given by a finite-state Markov chain. We also consider the approximate variants that ask to approximate the above quantities within a tolerance term $\epsilon>0$. Moreover, for the cumulative distribution we consider the special case of {\em almost-sure} acceptance, which asks whether the probability is~1. \smallskip\noindent{\em Our contributions.} In this work we consider several classical value functions, namely, $\textsc{Sup},\textsc{Inf},\textsc{LimSup},\textsc{LimInf},\textsc{LimAvg}$ for infinite words, and $\textsc{Max},\textsc{Min},\textsc{Sum},\fBsum{B},\textsc{Sum}^+$ (where $\fBsum{B}$ is the sum bounded by $B$, and $\textsc{Sum}^+$ is the sum of absolute values) for finite words. First, we establish translations (in both directions) between automata with monitor counters and a special class of nested weighted automata, in which at any point only a bounded number of slave automata can be active. However, in general, in nested weighted automata an unbounded number of slave automata can be active. We describe our main results for nested weighted automata. \begin{itemize} \item {\em $\textsc{LimSup}$ and $\textsc{LimInf}$ functions.} We consider deterministic nested weighted automata with $\textsc{LimSup}$ and $\textsc{LimInf}$ functions for the master automaton, and show that for all value functions for finite words that we consider, all probabilistic questions can be answered in polynomial time. This is in contrast with the classical questions, where the problems are $\textsc{PSpace}{}$-complete or undecidable (see Remark~\ref{remark:LimInf-classical-vs-probabilistic} for further details). \item {\em $\textsc{Inf}$ and $\textsc{Sup}$ functions.} We consider deterministic nested weighted automata with $\textsc{Sup}$ and $\textsc{Inf}$ functions for the master automaton, and show the following: the approximation problems for all value functions for finite words that we consider are $\#P$-hard and can be solved in $\textsc{ExpTime}{}$; other than for the $\textsc{Sum}$ function, the expected value, the distribution, and the almost-sure problems are $\textsc{PSpace}{}$-hard and can be solved in $\textsc{ExpTime}{}$; and for the $\textsc{Sum}$ function, the above problems are uncomputable. Again we establish a sharp contrast w.r.t.
the classical questions as follows: for the classical questions, the complexities of the $\textsc{LimSup}$ and $\textsc{Sup}$ functions always coincide, whereas we show a substantial complexity gap for the probabilistic questions (see Remark~\ref{remark:Inf-classical-vs-probabilistic} and Remark~\ref{remark:LimInf-vs-Inf} for further details). \item {\em $\textsc{LimAvg}$ function.} We consider deterministic nested weighted automata with the $\textsc{LimAvg}$ function for the master automaton, and show that for all value functions for finite words that we consider, all probabilistic questions can be answered in polynomial time. Again our results are in contrast to the classical questions (see Remark~\ref{remark:LimAvg-classical-vs-probabilistic}). \item {\em Non-deterministic automata.} For non-deterministic automata we show two results: first, we present an example to illustrate the conceptual difficulty of evaluating a non-deterministic (even non-nested) weighted automaton w.r.t. a Markov chain, and we also show that for nested weighted automata with the $\textsc{LimSup}$ value function for the master automaton and the $\textsc{Sum}$ value function for slave automata, all probabilistic questions are undecidable (in contrast to the deterministic case, where we present polynomial-time algorithms). \end{itemize} Note that all the decidability results we establish above carry over to automata with monitor counters, and we show that all our undecidability (or uncomputability) results also hold for automata with monitor counters. Note that the decidability results for nested weighted automata are more interesting than those for automata with monitor counters, as an unbounded number of slave automata can be active. Our results are summarized in Table~\ref{tab:compLimInf} (in Section~\ref{s:liminf}), Table~\ref{tab:compInf} (in Section~\ref{s:inf}), and Table~\ref{tab:compLimAvg} (in Section~\ref{s:limavg}). In summary, we present a complete picture of decidability of the basic probabilistic questions for nested weighted automata (and automata with monitor counters). \smallskip\noindent{\em Technical contributions.} We call a nested weighted automaton $\mathbb{A}$ an \emph{$(f;g)$-automaton} if its master automaton's value function is $f$ and the value function of all slave automata is $g$. We present the key details of our main technical contributions, and for the sake of simplicity we explain them here for the case of the uniform distribution over infinite words. Our technical results are more general, though (for distributions given by Markov chains). \begin{itemize} \item We show that in a deterministic $(\textsc{LimInf};\textsc{Sum})$-automaton $\mathbb{A}$, whose master automaton is strongly connected as a graph, almost all words have the same value, which is the infimum over the values of any slave automaton from $\mathbb{A}$ over all finite words. \item For a deterministic $(\textsc{Inf};\textsc{Sum})$-automaton $\mathbb{A}$ and $C>0$ we define $\mathbb{A}^C$ as the deterministic $(\textsc{Inf};\textsc{Sum})$-automaton obtained from $\mathbb{A}$ by stopping every slave automaton if it exceeds $C$ steps. We show that for every deterministic $(\textsc{Inf};\textsc{Sum})$-automaton $\mathbb{A}$ and $\epsilon >0$, there exists $C$ exponential in $|\mathbb{A}|$ and polynomial in $\epsilon$ such that the expected values of $\mathbb{A}$ and $\mathbb{A}^C$ differ by at most $\epsilon$.
\item We show that the expected value of a deterministic $(\textsc{LimAvg};\textsc{Sum})$-automaton $\mathbb{A}$ coincides with the expected value of the following deterministic (non-nested) $\textsc{LimAvg}$-automaton ${\cal A}$. The automaton ${\cal A}$ is obtained from $\mathbb{A}$ by replacing, in every transition, an invocation of a slave automaton ${\mathfrak{B}}$ by the weight equal to the expected value of ${\mathfrak{B}}$. \end{itemize} \smallskip\noindent{\em Related works.} Quantitative automata and logics have been extensively and intensively studied in recent years. The book~\cite{Droste:2009:HWA:1667106} presents an excellent collection of results on weighted automata over finite words. Weighted automata on infinite words have been studied in~\cite{Chatterjee08quantitativelanguages,DBLP:journals/corr/abs-1007-4018,DrosteR06}. The extension to weighted automata with monitor counters over finite words has been considered (under the name of cost register automata) in~\cite{DBLP:conf/lics/AlurDDRY13}. A version of nested weighted automata over finite words has been studied in~\cite{bollig2010pebble}, and nested weighted automata over infinite words have been studied in~\cite{nested}. Several quantitative logics have also been studied, such as~\cite{BokerCHK14,BouyerMM14,AlmagorBK14}. While substantial work has been done on quantitative automata and logics, quite surprisingly none of the above works consider the automata (or the logics) under the probabilistic semantics that we consider in this work. Probabilistic models (such as Markov decision processes) with quantitative properties (such as limit-average or discounted-sum) have also been extensively studied for single objectives~\cite{filar,Puterman}, and for multiple objectives and their combinations~\cite{CMH06,Cha07,CFW13,BBCFK11,CKK15,Forejt,FKN11,CD11,Baier-CSL-LICS-1,Baier-CSL-LICS-2}. However, these works do not consider properties that are expressible by nested weighted automata (such as average response time) or automata with monitor counters. In the main paper, we present the key ideas and main intuitions of the proofs of selected results; detailed proofs are relegated to the appendix. \section{Preliminaries} \Paragraph{Words}. We consider a finite \emph{alphabet} of letters $\Sigma$. A \emph{word} over $\Sigma$ is a (finite or infinite) sequence of letters from $\Sigma$. We denote the $i$-th letter of a word $w$ by $w[i]$. The length of a finite word $w$ is denoted by $|w|$, and the length of an infinite word $w$ is $|w| = \infty$. \medskip \Paragraph{Labeled automata}. For a set $X$, an \emph{$X$-labeled automaton} ${\cal A}$ is a tuple $\tuple{\Sigma, Q, Q_0, \delta, F, {C}}$, where (1)~$\Sigma$ is the alphabet, (2)~$Q$ is a finite set of states, (3)~$Q_0 \subseteq Q$ is the set of initial states, (4)~$\delta \subseteq Q \times \Sigma \times Q$ is a transition relation, (5)~$F$ is a set of accepting states, and (6)~${C} : \delta \mapsto X$ is a labeling function. A labeled automaton $\tuple{\Sigma, Q, q_0, \delta, F, {C}}$ is \emph{deterministic} if and only if $\delta$ is a function from $Q \times \Sigma$ into $Q$ and $Q_0$ is a singleton.
In definitions of deterministic labeled automata we omit curly brackets in the description of $Q_0$ and write $\tuple{\Sigma, Q, q_0, \delta, F, {C}}$. \medskip \Paragraph{Semantics of (labeled) automata}. A \emph{run} $\pi$ of a (labeled) automaton ${\cal A}$ on a word $w$ is a sequence of states of ${\cal A}$ of length $|w|+1$ such that $\pi[0]$ belongs to the initial states of ${\cal A}$ and for every $1 \leq i \leq |w|$ the triple $(\pi[i-1], w[i], \pi[i])$ is a transition of ${\cal A}$. A run $\pi$ on a finite word $w$ is \emph{accepting} iff the last state $\pi[|w|]$ of the run is an accepting state of ${\cal A}$. A run $\pi$ on an infinite word $w$ is \emph{accepting} iff some accepting state of ${\cal A}$ occurs infinitely often in $\pi$. For an automaton ${\cal A}$ and a word $w$, we define $\mathsf{Acc}(w)$ as the set of accepting runs on $w$. Note that for deterministic automata, every word $w$ has at most one accepting run ($|\mathsf{Acc}(w)| \leq 1$). \medskip \Paragraph{Weighted automata}. A \emph{weighted automaton} is a $\mathbb{Z}$-labeled automaton, where $\mathbb{Z}$ is the set of integers. The labels are called \emph{weights}. We assume that weights are given in unary notation; hence, the values of weights are linearly bounded in the size of weighted automata. \medskip \Paragraph{Semantics of weighted automata}. We define the semantics of weighted automata in two steps. First, we define the value of a run. Second, we define the value of a word based on the values of its runs. To define values of runs, we will consider \emph{value functions} $f$ that assign real numbers to sequences of rationals. Given a non-empty word $w$, every run $\pi$ of ${\cal A}$ on $w$ defines a sequence of weights of successive transitions of ${\cal A}$, i.e., ${C}(\pi)=({C}(\pi[i-1], w[i], \pi[i]))_{1\leq i \leq |w|}$; and the value $f(\pi)$ of the run $\pi$ is defined as $f({C}(\pi))$. We denote by $({C}(\pi))[i]$ the weight of the $i$-th transition, i.e., ${C}(\pi[i-1], w[i], \pi[i])$. The value of a non-empty word $w$ assigned by the automaton ${\cal A}$, denoted by $\valueL{{\cal A}}(w)$, is the infimum of the set of values of all {\em accepting} runs, i.e., $\inf_{\pi \in \mathsf{Acc}(w)} f(\pi)$; we adopt the usual convention that the infimum of the empty set is infinite, i.e., the value of a word that has no accepting run is infinite. Every run $\pi$ on the empty word has length $1$ and the sequence ${C}(\pi)$ is empty, hence we define the value $f(\pi)$ as an external (not a real number) value $\bot$. Thus, the value of the empty word is either $\bot$, if the empty word is accepted by ${\cal A}$, or $\infty$ otherwise. To indicate a particular value function $f$ that defines the semantics, we will call a weighted automaton ${\cal A}$ an $f$-automaton. \medskip \Paragraph{Value functions}. We will consider the classical functions and their natural variants as value functions. For finite runs we consider the following value functions: for runs of length $n+1$ we have \begin{enumerate} \item {\em Max and min:} $\textsc{Max}(\pi) = \max_{i=1}^n ({C}(\pi))[i]$ and $\textsc{Min}(\pi) = \min_{i=1}^n ({C}(\pi))[i]$.
\item \emph{Sum, absolute sum and bounded sum:} the sum function $\textsc{Sum}(\pi) = \sum_{i=1}^{n} ({C}(\pi))[i]$, the absolute sum $\textsc{Sum}^+(\pi) = \sum_{i=1}^{n} \mathop{\mathsf{Abs}}(({C}(\pi))[i])$, where $\mathop{\mathsf{Abs}}(x)$ is the absolute value of $x$, and the bounded sum value function, which returns the sum if all the partial absolute sums are bounded by $B$, and otherwise returns the exceeded bound $-B$ or $B$; formally, $\fBsum{B}(\pi) = \textsc{Sum}(\pi)$ if for all prefixes $\pi'$ of $\pi$ we have $\mathop{\mathsf{Abs}}(\textsc{Sum}(\pi')) \leq B$, and otherwise $\fBsum{B}(\pi) = \textrm{sgn} \cdot B$, where $\textrm{sgn}$ is the sign of the sum of the shortest prefix whose sum is outside $[-B,B]$. \end{enumerate} We denote the above class of value functions for finite words as $\mathsf{FinVal}=\{\textsc{Max},\textsc{Min},\fBsum{B},\textsc{Sum}^+,\textsc{Sum}\}$. For infinite runs we consider: \begin{enumerate} \item {\em Supremum and Infimum, and Limit supremum and Limit infimum}: $\textsc{Sup}(\pi) = \sup \{ ({C}(\pi))[i] : i > 0 \}$, $\textsc{Inf}(\pi) = \inf \{ ({C}(\pi))[i] : i > 0 \}$, $\textsc{LimSup}(\pi) = \limsup_{i \rightarrow \infty} ({C}(\pi))[i]$, and $\textsc{LimInf}(\pi) = \liminf_{i \rightarrow \infty} ({C}(\pi))[i]$. \item {\em Limit average:} $\textsc{LimAvg}(\pi) = \limsup\limits_{k \rightarrow \infty} \frac{1}{k} \cdot \sum_{i=1}^{k} ({C}(\pi))[i]$. \end{enumerate} We denote the above class of value functions for infinite words as $\mathsf{InfVal}=\{\textsc{Sup},\textsc{Inf},\textsc{LimSup},\textsc{LimInf},\textsc{LimAvg}\}$. \medskip \Paragraph{Silent moves}. Consider a $(\mathbb{Z} \cup \{ \bot\})$-labeled automaton. We can consider such an automaton as an extension of a weighted automaton in which transitions labeled by $\bot$ are \emph{silent}, i.e., they do not contribute to the value of a run. Formally, for every function $f \in \mathsf{InfVal}$ we define $\silent{f}$ as the value function that applies $f$ to sequences after the removal of $\bot$ symbols. The significance of silent moves is as follows: they allow us to ignore selected transitions, and thus provide robustness, as properties can be specified based on desired events rather than steps. \section{Extensions of weighted automata} In this section we consider two extensions of weighted automata, namely, automata with monitor counters and nested weighted automata. \subsection{Automata with monitor counters} Automata with monitor counters are, intuitively, an extension of weighted automata with counters, where the transitions do not depend on the counter values. We define them formally below. \smallskip \Paragraph{Automata with monitor counters.} An \emph{automaton with $n$ monitor counters} ${\cal A}^{\textrm{m-c}}$ is a tuple $\tuple{ \Sigma, Q, Q_0, \delta, F}$ where (1)~$\Sigma$ is the alphabet, (2)~$Q$ is a finite set of states, (3)~$Q_0 \subseteq Q$ is the set of initial states, (4)~$\delta$ is a finite subset of $Q \times \Sigma \times Q \times (\mathbb{Z} \cup \{ s,t \})^n$, called a transition relation (each component refers to one monitor counter, where the letters $s,t$ refer to starting and terminating the counter, respectively, and a value from $\mathbb{Z}$ is the value that is added to the counter), and (5)~$F$ is the set of accepting states. Moreover, we assume that for every $(q,a,q',\vec{u}) \in \delta$, at most one component in $\vec{u}$ contains $s$, i.e., at most one counter is activated at each position.
Intuitively, the automaton ${\cal A}^{\textrm{m-c}}$ is equipped with $n$ counters. The transitions of ${\cal A}^{\textrm{m-c}}$ do not depend on the values of the counters (hence, we call them monitor counters); every transition is of the form $(q,a,q',\vec{v})$, which means that if ${\cal A}^{\textrm{m-c}}$ is in the state $q$ and the current letter is $a$, then it can move to the state $q'$ and update the counters according to $\vec{v}$. Each counter is initially inactive. It is activated by the instruction $s$, and from then on it changes its value at every step by adding the integer given by the corresponding component of the transition, until it is terminated by the instruction $t$. The value of the counter at the time it is terminated is then assigned to the position where it has been activated. An automaton with monitor counters ${\cal A}^{\textrm{m-c}}$ is \emph{deterministic} if and only if $Q_0$ is a singleton and $\delta$ is a function from $Q \times \Sigma$ into $Q \times (\mathbb{Z} \cup \{ s,t \})^n$. \smallskip \Paragraph{Semantics of automata with monitor counters.} A sequence $\pi$ of elements from $Q \times (\mathbb{Z} \cup \{\bot\})^n$ is a \emph{run} of ${\cal A}^{\textrm{m-c}}$ on a word $w$ if (1)~$\pi[0] = \tuple{q_0, \vec{\bot}}$ and $q_0 \in Q_0$ and (2)~for every $i > 0$, if $\pi[i-1] = \tuple{q,\vec{u}}$ and $\pi[i] = \tuple{q', \vec{u}'}$, then ${\cal A}^{\textrm{m-c}}$ has a transition $(q,w[i],q',\vec{v})$ and for every $j \in [1,n]$ we have (a)~if $v[j] = s$, then $u[j] = \bot$ and $u'[j] = 0$, (b)~if $v[j] = t$, then $u[j] \in \mathbb{Z}$ and $u'[j] = \bot$, and (c)~if $v[j] \in \mathbb{Z}$, then $u'[j] = u[j] + v[j]$. A run $\pi$ is \emph{accepting} if some state from $F$ occurs infinitely often in the first component of $\pi$, some counter is activated infinitely often, and every activated counter is eventually terminated. An accepting run $\pi$ defines a sequence $\pi^W$ of integers and $\bot$ symbols as follows: if a counter $j$ is activated at position $i$, then $\pi^W[i]$ is the value of counter $j$ at the time it is terminated; if no counter is activated at position $i$, then $\pi^W[i] = \bot$. The semantics of automata with monitor counters is given, similarly to weighted automata, by applying the value function to $\pi^W$. \begin{remark} Automata with monitor counters are very similar in spirit to the register automata considered in~\cite{DBLP:conf/lics/AlurDDRY13}. The key difference is that we consider infinite words and value functions associated with them, whereas previous works consider finite words. Another key difference is that in this work we consider probabilistic semantics, which has not been considered for register automata before. \end{remark} \begin{example}[Blocks difference] \label{ex:AMC} Consider an alphabet $\Sigma = \{a,\#\}$ and a language ${\cal L}$ of words $(\#^2 a^* \# a^* \#)^{\omega}$. On the words from ${\cal L}$ we consider the quantitative property ``the maximal block-length difference between odd and even positions'', i.e., the value of the word $\#^2 a^{n[1]} \# a^{n[2]} \#^3 \ldots $ is $\sup_{i \geq 0} |n[2i+1] - n[2i+2]|$. This property can be expressed by a $\textsc{Sup}$-automaton ${\cal A}_{\textrm{diff}}$ with two monitor counters depicted in Figure~\ref{fig:autDiff}.
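The semantics can also be checked mechanically. The following small Python sketch is our own illustration (the dictionary encoding of transitions is an assumption for this sketch, not part of the formal model); it simulates ${\cal A}_{\textrm{diff}}$ on a finite prefix and computes the values that $\pi^W$ assigns to activation positions.
\begin{verbatim}
# Sketch: simulating the automaton A_diff with two monitor counters.
# Transition encoding (ours): (state, letter) -> (state, (op1, op2)),
# where an op is 's' (start), 't' (terminate), or an integer to add.
DELTA = {
    ('q0', '#'): ('q1', ('s', 0)),
    ('q1', '#'): ('q2', (0, 's')),
    ('q2', 'a'): ('q2', (1, -1)),
    ('q2', '#'): ('q3', (0, 0)),
    ('q3', 'a'): ('q3', (-1, 1)),
    ('q3', '#'): ('q0', ('t', 't')),
}

def run(word, state='q0'):
    """Return the values of pi^W assigned to activation positions."""
    counters = [None, None]      # None plays the role of bot (inactive)
    started_at = [None, None]    # position where each counter was activated
    values = {}                  # activation position -> returned value
    for pos, letter in enumerate(word):
        state, ops = DELTA[(state, letter)]
        for j, op in enumerate(ops):
            if op == 's':
                counters[j], started_at[j] = 0, pos
            elif op == 't':
                values[started_at[j]] = counters[j]
                counters[j], started_at[j] = None, None
            elif counters[j] is not None:
                counters[j] += op
    return values

vals = run('##aaa#a#' '##a#aaaa#')   # blocks (k,m) = (3,1) and (1,4)
print(vals)                # {0: 2, 1: -2, 8: -3, 9: 3}
print(max(vals.values()))  # Sup over this prefix: 3 = |1 - 4|
\end{verbatim}
On each block the two counters return $k-m$ and $m-k$, whose maximum is $|k-m|$, matching the walkthrough that follows.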
\begin{figure} \centering \begin{tikzpicture} \tikzstyle{state}=[draw,circle,minimum size=0.8cm] \tikzstyle{Astate}=[draw,circle,minimum size=0.70cm] \node[state] (q1) at (0,0) {$q_0$}; \node[Astate] at (0,0) {}; \node[state] (q2) at (1.9,0) {$q_1$}; \node[state] (q5) at (2*1.9,0) {$q_2$}; \node[state] (q6) at (3*1.9,0) {$q_3$}; \node (null) at (-1.0,0.3) {}; \draw[->] (q1) to node[above] {(\#,s,0)} (q2); \draw[->] (q2) to node[above] {(\#,0,s)} (q5); \draw[->, loop above] (q5) to node[left] {(a,1,-1)} (q5); \draw[->] (q5) to node[above] {(\#,0,0)} (q6); \draw[->, loop above] (q6) to node[left] {(a,-1,1)} (q6); \draw[->, bend left] (q6) to node[above] {(\#,t,t)} (q1); \draw[->] (null) to (q1); \end{tikzpicture} \caption{The automaton ${\cal A}_{\textrm{diff}}$ computing the maximal difference between the lengths of blocks of $a$'s at odd and the following even positions.} \label{fig:autDiff} \end{figure} The automaton ${\cal A}_{\textrm{diff}}$ has a single initial state $q_0$, which is also the only accepting state. It processes the word $w$ in subwords $\#^2 a^{k} \# a^{m} \#$ in the following way. First, it reads $\#^2$, upon which it takes the transition from $q_0$ to $q_1$, starting counter $1$, and the transition from $q_1$ to $q_2$, starting counter $2$. Next, in the state $q_2$ it counts letters $a$, incrementing counter $1$ and decrementing counter $2$. Then, upon reading $\#$, it moves to $q_3$, where it again counts letters $a$, but decrements counter $1$ and increments counter $2$. After reading $\#^2 a^{k} \# a^{m}$ the value of counter $1$ is $k-m$ and the value of counter $2$ is $m-k$. In the following transition from $q_3$ to $q_0$, the automaton terminates both counters. The aggregating function of ${\cal A}_{\textrm{diff}}$ is $\textsc{Sup}$, so the automaton discards the lower of the two values, i.e., the value of $\#^2 a^{k} \# a^{m} \#$ is $|k-m|$, and the automaton computes the supremum over the values of all blocks. It follows that the value of $\#^2 a^{n[1]} \# a^{n[2]} \#^3 \ldots $ is $\sup_{i \geq 0} |n[2i+1] - n[2i+2]|$. \end{example} \subsection{Nested weighted automata} In this section we describe nested weighted automata introduced in~\cite{nested}, and closely follow the description of~\cite{nested}. For more details and illustrations of such automata we refer the reader to~\cite{nested}. We start with an informal description. \smallskip\noindent{\em Informal description.} A \emph{nested weighted automaton} consists of a labeled automaton over infinite words, called the \emph{master automaton}, a value function $f$ for infinite words, and a set of weighted automata over finite words, called \emph{slave automata}. A nested weighted automaton can be viewed as follows: given a word, we consider the run of the master automaton on the word, but the weight of each transition is determined by dynamically running slave automata; and then the value of a run is obtained using the value function $f$. That is, the master automaton proceeds on an input word as a usual automaton, except that before it takes a transition, it can start a slave automaton corresponding to the label of the current transition. The slave automaton starts at the current position of the word of the master automaton and works on some finite part of the input word. Once a slave automaton finishes, it returns its value to the master automaton, which treats the returned value as the weight of the current transition that is being executed.
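To make this informal description concrete, the following Python sketch (ours; all names are illustrative, and the bookkeeping for acceptance and prefix-freeness is deliberately omitted) evaluates an NWA on a finite prefix: the master automaton is a transition table that names the slave to invoke, each slave is a function from a suffix to its value, and \texttt{None} plays the role of the silent value $\bot$.
\begin{verbatim}
# Sketch: evaluating a nested weighted automaton on a finite prefix.
def evaluate_prefix(word, master, slaves, q0, f):
    q, collected = q0, []
    for i, a in enumerate(word):
        q, slave_idx = master[(q, a)]
        value = slaves[slave_idx](word[i:])  # slave starts at position i
        if value is not None:                # None stands for bot (silent)
            collected.append(value)
    return f(collected)  # master value function, e.g. max for Sup

# Toy instance: on 'a' invoke a Sum-slave measuring the a-block length,
# on 'b' invoke a dummy slave (silent transition).
MASTER = {('m0', 'a'): ('m0', 1), ('m0', 'b'): ('m0', 0)}
SLAVES = {0: lambda s: None,
          1: lambda s: len(s) - len(s.lstrip('a'))}
print(evaluate_prefix('aaabab', MASTER, SLAVES, 'm0', max))  # 3,2,1,1 -> 3
\end{verbatim}
The formal definitions, which also handle infinite words and acceptance, follow.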
Note that for some transitions the master automaton might not invoke any slave automaton, which corresponds to \emph{silent} transitions. If one of the slave automata rejects, the nested weighted automaton rejects. We define this formally as follows. \medskip \Paragraph{Nested weighted automata}. A \emph{nested weighted automaton} (NWA) $\mathbb{A}$ is a tuple $\tuple{{\cal A}_{mas}; f; {\mathfrak{B}}_1, \ldots, {\mathfrak{B}}_k}$, where (1)~${\cal A}_{mas}$, called the \emph{master automaton}, is a $\{1, \ldots, k\}$-labeled automaton over infinite words (the labels are the indexes of the automata ${\mathfrak{B}}_1, \ldots, {\mathfrak{B}}_k$), (2)~$f$ is a value function on infinite words, called the \emph{master value function}, and (3)~${\mathfrak{B}}_1, \ldots, {\mathfrak{B}}_k$ are weighted automata over finite words, called \emph{slave automata}. Intuitively, an NWA can be regarded as an $f$-automaton whose weights are dynamically computed at every step by the corresponding slave automaton. We define an \emph{$(f;g)$-automaton} as an NWA where the master value function is $f$ and all slave automata are $g$-automata. \medskip \Paragraph{Semantics: runs and values}. A \emph{run} of an NWA $\mathbb{A}$ on an infinite word $w$ is an infinite sequence $(\Pi, \pi_1, \pi_2, \ldots)$ such that (1)~$\Pi$ is a run of ${\cal A}_{mas}$ on $w$; (2)~for every $i>0$, $\pi_i$ is a run of the automaton ${\mathfrak{B}}_{{C}(\Pi[i-1], w[i], \Pi[i])}$, referenced by the label ${C}(\Pi[i-1], w[i], \Pi[i])$ of the master automaton, on some finite subword $w[i,j]$ of $w$. The run $(\Pi, \pi_1, \pi_2, \ldots)$ is \emph{accepting} if all runs $\Pi, \pi_1, \pi_2, \ldots$ are accepting (i.e., $\Pi$ satisfies its acceptance condition and each of $\pi_1,\pi_2, \ldots$ ends in an accepting state) and infinitely many runs of slave automata have length greater than $1$ (i.e., the master automaton takes infinitely many non-silent transitions). The value of the run $(\Pi, \pi_1, \pi_2, \ldots)$ is defined as $\silent{f}( v(\pi_1) v(\pi_2) \ldots)$, where $v(\pi_i)$ is the value of the run $\pi_i$ in the corresponding slave automaton. The value of a word $w$ assigned by the automaton $\mathbb{A}$, denoted by $\valueL{\mathbb{A}}(w)$, is the infimum of the set of values of all {\em accepting} runs. We require accepting runs to contain infinitely many non-silent transitions because $f$ is a value function over infinite sequences, so we need the sequence $v(\pi_1) v(\pi_2) \ldots$ with $\bot$ symbols removed to be infinite. \medskip \Paragraph{Deterministic nested weighted automata}. An NWA $\mathbb{A}$ is \emph{deterministic} if (1)~the master automaton and all slave automata are deterministic, and (2)~slave automata recognize prefix-free languages, i.e., languages ${\cal L}$ such that if $w \in {\cal L}$, then no proper extension of $w$ belongs to ${\cal L}$. Condition (2) implies that no accepting run of a slave automaton visits an accepting state twice. Intuitively, slave automata have to accept the first time they encounter an accepting state, as they will not see an accepting state again. \medskip \Paragraph{Bounded width.} An NWA has \emph{bounded width} if and only if there exists a bound $C$ such that in every run, at every position, at most $C$ slave automata are active. \begin{example}[Average response time with bounded requests] \label{ex:NWA} Consider an alphabet $\Sigma$ consisting of requests $r$, grants $g$ and null instructions $\#$.
The average response time (ART) property asks for the average number of instructions between any request and the following grant. It has been shown in~\cite{nested} that NWA can express ART. However, the automaton from~\cite{nested} does not have bounded width. To express the ART property with an NWA of bounded width, we consider only words such that between any two grants there are at most $k$ requests. Over such words, ART can be expressed by a $(\textsc{LimAvg};\textsc{Sum})$-automaton $\mathbb{A}$. Such an automaton $\mathbb{A} = ({\cal A}_{mas}; \textsc{LimAvg}; {\mathfrak{B}}_1, {\mathfrak{B}}_2)$ is depicted in Fig.~\ref{fig:ART-k}. The master automaton of $\mathbb{A}$ accepts only words with an infinite number of requests and grants, where every grant is followed by a request and there are at most $k$ requests between any two grants. On letters $\#$ and $g$, the master automaton invokes a dummy automaton ${\mathfrak{B}}_1$, which immediately accepts; the result of invoking such an automaton is equivalent to taking a silent transition, as the automaton ${\mathfrak{B}}_1$ returns $\bot$, the empty value. On letters $r$, denoting requests, the master automaton invokes ${\mathfrak{B}}_2$, which counts the number of letters up to the first occurrence of the letter $g$, i.e., the automaton ${\mathfrak{B}}_2$ computes the response time for the request at the position where it is invoked. The automaton $\mathbb{A}$ computes the limit average of all returned values, which is precisely ART (on the accepted words). Note that the width of $\mathbb{A}$ is bounded by $k$. \begin{figure} \begin{tikzpicture} \tikzstyle{state}=[draw,circle,minimum size=0.8cm] \tikzstyle{Astate}=[draw,circle,minimum size=0.7cm] \node[state] (q0) at (-1,0) {$q_0$}; \node[Astate] at (-1,0) {$q_0$}; \node[state] (q1) at (2.0,0) {$q_1$}; \node[state, inner sep =1pt] (q2) at (2*2.0,0) {$q_{k-1}$}; \node[state] (q3) at (3*2.0,0) {$q_k$}; \node at (1.5*2.0,0) {$\ldots$}; \draw[->, loop above] (q1) to node[left] (e2) {$(\#,1)$} (q1); \draw[->, loop above] (q2) to node[left] (e3) {$(\#,1)$} (q2); \draw[->, loop above] (q3) to node[left] (e4) {$(\#,1)$} (q3); \draw[->,bend left=10] (q1) to node[below] (e5) {$(g,1)$} (q0); \draw[->,bend left=45] (q2) to node[below] (e6) {$(g,1)$} (q0); \draw[->,bend left=65] (q3) to node[below] (e7) {$(g,1)$} (q0); \draw[->] (q0) to node[above] (e8) {$(r,2)$} (q1); \draw[->] (q2) to node[above] (e9) {$(r,2)$} (q3); \node at (5.5,-1.5) {${\cal A}_{mas}$}; \begin{scope}[yshift=3cm] \node[state] (q00) at (0,0) {$q_0^1$}; \node[Astate] at (0,0) {}; \node (A1) at (0,-0.8) {${\mathfrak{B}}_1$}; \node[state] (q10) at (4,0) {$q_0^2$}; \node[state] (q11) at (6,0) {$q_1^2$}; \node[Astate] at (6,0) {}; \node (A2) at (5,-0.8) {${\mathfrak{B}}_2$}; \draw[->] (q10) to node[above] {$(g,0)$} (q11); \draw[->, loop above] (q10) to node[left] {$(\#,1)$} (q10); \draw[->, loop left] (q10) to node[left] {$(r,1)$} (q10); \end{scope} \draw[->, dashed] (e2) to (A1); \draw[->, dashed] (e3) to (A1); \draw[->, dashed] (e4) to (A1); \draw[->, dashed, bend left = 15] (e5) to (A1); \draw[->, dashed, bend left = 45] (e6) to (A1); \draw[->, dashed, bend left = 60] (e7) to (A1); \draw[->, dotted] (e8) to (A2); \draw[->, dotted, bend left = 10] (e9) to (A2); \end{tikzpicture} \caption{The $(\textsc{LimAvg};\textsc{Sum})$-automaton computing the average response time over words with an infinite number of requests and grants such that between any two grants
there are at most $k$ requests.} \label{fig:ART-k} \end{figure} \end{example} \subsection{Translation} We now present translations from NWA to automata with monitor counters and vice versa. \begin{restatable}{lemma}{MCvsNested}[Translation Lemma] \label{l:mc-vs-nested} For every value function $f \in \mathsf{InfVal}$ on infinite words we have the following: (1)~Every deterministic $f$-automaton with monitor counters ${\cal A}^{\textrm{m-c}}$ can be transformed in polynomial time into an equivalent deterministic $(f;\textsc{Sum})$-automaton of bounded width. (2)~Every non-deterministic (resp., deterministic) $(f;\textsc{Sum})$-automaton of bounded width can be transformed in exponential time into an equivalent non-deterministic (resp., deterministic) $f$-automaton with monitor counters. \end{restatable} We illustrate below the key ideas of the translations of Lemma~\ref{l:mc-vs-nested} on the automata from Examples~\ref{ex:AMC}~and~\ref{ex:NWA}. The detailed technical proof is in the appendix. \begin{example}[Translation of automata with monitor counters to nested weighted automata] Consider a deterministic automaton ${\cal A}$ with $k$ monitor counters. We construct an NWA $\mathbb{A}$ equivalent to ${\cal A}$. The automaton $\mathbb{A}$ uses $k$ slave automata to track the values of the $k$ monitor counters in the following way. The master automaton of $\mathbb{A}$ simulates ${\cal A}$; it invokes slave automata whenever ${\cal A}$ starts monitor counters. Slave automata simulate ${\cal A}$ as well. Each slave automaton is associated with some counter $i$; it starts in the state (of ${\cal A}$) in which counter $i$ is initialized, simulates the value of counter $i$, and terminates when counter $i$ is terminated. Figure~\ref{fig:AMCtoNWA} presents the result of the translation of the automaton ${\cal A}_{\textrm{diff}}$ from Example~\ref{ex:AMC} to a $(\textsc{Sup};\textsc{Sum})$-automaton of width bounded by $3$.
\begin{figure} \begin{tikzpicture} \tikzstyle{state}=[draw,circle,minimum size=0.8cm] \tikzstyle{Astate}=[draw,circle,minimum size=0.70cm] \begin{scope}[yshift=4cm,xshift=-2cm] \node[state] (q11) at (0,-0.5) {$q_3$}; \node[Astate] at (0,-0.5) {}; \node[state] (q12) at (2*0.8,-0.5) {$q_0$}; \node[state] (q15) at (2*0.8,-2) {$q_1$}; \node[state] (q16) at (0,-2) {$q_2$}; \node (null) at (0.8,0.0) {}; \draw[->] (q12) to node[right] {(\#,0)} (q15); \draw[->, loop right] (q15) to node[above] {(a,1)} (q15); \draw[->] (q15) to node[above] {(\#,0)} (q16); \draw[->, loop left] (q16) to node[above] {(a,-1)} (q16); \draw[->] (null) to (q12); \draw[->] (q16) to node[left] {(\#,0)} (q11); \node (A1) at (1,-2.7) {${\mathfrak{B}}_1$}; \end{scope} \begin{scope}[yshift=3.8cm,xshift=1.8cm] \node[state] (q21) at (0,-0.5) {$q_2$}; \node[Astate] at (0,-0.5) {}; \node[state] (q25) at (2*0.8,-2) {$q_0$}; \node[state] (q26) at (0,-2) {$q_1$}; \node (null) at (1.0,-1.0) {}; \draw[->, loop right] (q25) to node[below] {(a,1)} (q25); \draw[->] (q25) to node[below] {(\#,0)} (q26); \draw[->, loop left] (q26) to node[below] {(a,-1)} (q26); \draw[->] (null) to (q25); \draw[->] (q26) to node[left] {(\#,0)} (q21); \node (A2) at (1,-2.7) {${\mathfrak{B}}_2$}; \end{scope} \node[state] at (-2,0) {$q_0$}; \node[Astate] at (-2,0) {}; \node (A3) at (-2,-0.8) {${\mathfrak{B}}_3$}; \node[state] (q1) at (0,0) {$q_0$}; \node[Astate] at (0,0) {}; \node[state] (q2) at (2*0.8,0) {$q_1$}; \node[state] (q5) at (2*0.8,-2) {$q_2$}; \node[state] (q6) at (0,-2) {$q_3$}; \node (null) at (-1.0,0.3) {}; \draw[->] (q1) to node[above] (e1) {(\#,1)} (q2); \draw[->] (q2) to node[right] (e2) {(\#,2)} (q5); \draw[->, loop right] (q5) to node[right] (e3) {(a,3)} (q5); \draw[->] (q5) to node[above] (e4) {(\#,3)} (q6); \draw[->, loop left] (q6) to node[left] (e5) {(a,3)} (q6); \draw[->] (q6) to node[left] (e6) {(\#,3)} (q1); \draw[->] (null) to (q1); \draw[->,dotted] (e1) to (A1); \draw[->,dotted] (e2) to (A2); \end{tikzpicture} \caption{A nested weighted automaton resulting from the translation of the automaton ${\cal A}_{\textrm{diff}}$ from Example~\ref{ex:AMC}.} \label{fig:AMCtoNWA} \end{figure} \end{example} \begin{example}[Translation of nested weighted automata of bounded width to automata with monitor counters] Consider an $(f;\textsc{Sum})$-automaton $\mathbb{A}$ of width bounded by $k$. The automaton $\mathbb{A}$ can be simulated by an automaton with monitor counters which simulates the master automaton and up to $k$ slave automata running in parallel. To simulate the values of the slave automata it uses monitor counters, one counter for each simulated slave automaton. Figure~\ref{fig:NWAtoAMC} shows the result of the translation of the automaton $\mathbb{A}$ from Example~\ref{ex:NWA} to the automaton with monitor counters ${\cal A}_{\mathbb{A}}$. The set of states of ${\cal A}_{\mathbb{A}}$ is $\{q_0, \ldots, q_k\} \times (\{q_0^2, \bot\})^k$, i.e., the states of the master automaton and all non-accepting states of slave automata (in deterministic NWA accepting states are sink states, hence storing them is redundant). Now, observe that the only reachable states of ${\cal A}_{\mathbb{A}}$ are $(q_0, \bot, \ldots, \bot), (q_1, q_0^2, \bot, \ldots, \bot), \ldots, (q_k, q_0^2, \ldots, q_0^2)$, i.e., the reachable part of ${\cal A}_{\mathbb{A}}$ is isomorphic (in the sense of graphs) to the master automaton of $\mathbb{A}$.
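This observation can be verified mechanically. In the following Python sketch (ours, for $k=2$), since every active copy of ${\mathfrak{B}}_2$ is in its single working state $q_0^2$, we encode the vector of slave states simply by the number of active copies, and enumerate the reachable states of ${\cal A}_{\mathbb{A}}$ by breadth-first search.
\begin{verbatim}
# Sketch: reachable states of the translated automaton for k = 2.
from collections import deque

M = {('q0','r'): 'q1', ('q1','#'): 'q1', ('q1','g'): 'q0',
     ('q1','r'): 'q2', ('q2','#'): 'q2', ('q2','g'): 'q0'}

def step(state, a):
    q, active = state              # active = number of running slaves
    q2 = M.get((q, a))
    if q2 is None:
        return None
    if a == 'r':                   # a fresh copy of B_2 is invoked
        active += 1
    if a == 'g':                   # every pending copy of B_2 accepts on g
        active = 0
    return (q2, active)

seen, todo = {('q0', 0)}, deque([('q0', 0)])
while todo:
    s = todo.popleft()
    for a in 'rg#':
        t = step(s, a)
        if t is not None and t not in seen:
            seen.add(t)
            todo.append(t)
print(sorted(seen))  # [('q0', 0), ('q1', 1), ('q2', 2)]
\end{verbatim}
There is exactly one reachable state per state of the master automaton, as claimed.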
\begin{figure} \begin{tikzpicture} \tikzstyle{state}=[draw,circle,minimum size=0.8cm] \tikzstyle{Astate}=[draw,circle,minimum size=0.7cm] \node[state] (q0) at (-1,0) {$q_0$}; \node[Astate] at (-1,0) {$q_0$}; \node[state] (q1) at (2.0,0) {$q_1$}; \node[state, inner sep =1pt] (q2) at (2*2.0,0) {$q_{k-1}$}; \node[state] (q3) at (3*2.0,0) {$q_k$}; \node at (1.5*2.0,0) {$\ldots$}; \draw[->, loop above] (q1) to node[left] (e2) {$(\#,\vec{0})$} (q1); \draw[->, loop above] (q2) to node[left] (e3) {$(\#,\vec{0})$} (q2); \draw[->, loop above] (q3) to node[left] (e4) {$(\#,\vec{0})$} (q3); \draw[->,bend left=10] (q1) to node[below] (e5) {$(g,\vec{t})$} (q0); \draw[->,bend left=45] (q2) to node[below] (e6) {$(g,\vec{t})$} (q0); \draw[->,bend left=65] (q3) to node[below] (e7) {$(g,\vec{t})$} (q0); \draw[->] (q0) to node[above] (e8) {$(r,\vec{s_1})$} (q1); \draw[->] (q2) to node[above] (e9) {$(r,\vec{s}_k)$} (q3); \node at (5.5,-1.5) {${\cal A}_{\mathbb{A}}$}; \end{tikzpicture} \caption{The (reduced) result of the translation of the automaton $\mathbb{A}$ from Example~\ref{ex:NWA} to an automaton with monitor counters. The vector $\vec{0}$ (resp., $\vec{t}$) denotes the $k$-dimensional vector all of whose components equal $0$ (resp., $t$). The vector $\vec{s}_i$ denotes the $k$-dimensional vector whose $i$-th component is $s$ and whose other components are $0$. } \label{fig:NWAtoAMC} \end{figure} \end{example} \begin{remark}[Discussion] Lemma~\ref{l:mc-vs-nested} states that deterministic automata with monitor counters have the same expressive power as deterministic NWA of bounded width. However, the latter may be exponentially more succinct. In consequence, lower bounds for deterministic automata with monitor counters imply lower bounds for NWA of bounded width. Conversely, deterministic NWA can be considered as automata with an infinite number of monitor counters, and therefore upper bounds for deterministic NWA imply upper bounds for deterministic automata with monitor counters. \end{remark} \section{Problems} \subsection{Classical questions} The classical questions in automata theory are \emph{emptiness} and \emph{universality} (of a language). These problems have their counterparts in the quantitative setting of weighted automata and their extensions. The (quantitative) emptiness and universality problems are defined in the same way for weighted automata, NWA and automata with monitor counters, i.e., in the following definitions the automaton ${\cal A}$ can be a weighted automaton, an NWA or an automaton with monitor counters. \begin{itemize} \item \textbf{Emptiness}: Given an automaton ${\cal A}$ and a threshold $\lambda$, decide whether there exists a word $w$ with ${\cal L}_{\cal A}(w) \leq \lambda$. \item \textbf{Universality}: Given an automaton ${\cal A}$ and a threshold $\lambda$, decide whether for every word $w$ we have ${\cal L}_{\cal A}(w) \leq \lambda$. \end{itemize} The universality question asks for the \emph{non-existence} of a word $w$ such that ${\cal L}_{\cal A}(w) > \lambda$. \subsection{Probabilistic questions} The classical questions ask for the existence (or non-existence) of words for input automata, whereas in the probabilistic setting, input automata are analyzed w.r.t. a probability distribution. We consider probability distributions over infinite words $\Sigma^{\omega}$, and as a finite representation we consider the classical model of Markov chains. \medskip \Paragraph{Labeled Markov chains}.
A \emph{(labeled) Markov chain} is a tuple $\tuple{\Sigma,S,s_0,E}$, where $\Sigma$ is the alphabet of letters, $S$ is a finite set of states, $s_0$ is an initial state, and $E : S \times \Sigma \times S \mapsto [0,1]$ is the edge probability function, which for every $s \in S$ satisfies $\sum_{a \in \Sigma, s' \in S} E(s,a,s') = 1$. \medskip \Paragraph{Distributions given by Markov chains}. Consider a Markov chain $\mathcal{M}$. For every finite word $u$, the probability of $u$ w.r.t. the Markov chain $\mathcal{M}$, denoted $\mathbb{P}_{\mathcal{M}}(u)$, is the sum of the probabilities of the paths labeled by $u$, where the probability of a path is the product of the probabilities of its edges. For basic open sets $u\cdot \Sigma^\omega = \{ uw : w \in \Sigma^{\omega} \}$, we have $\mathbb{P}_{\mathcal{M}}(u\cdot \Sigma^\omega)=\mathbb{P}_{\mathcal{M}}(u)$, and the probability measure over infinite words defined by $\mathcal{M}$ is the unique extension of the above measure (by Carath\'{e}odory's extension theorem~\cite{feller}). We will denote the unique probability measure defined by $\mathcal{M}$ as $\mathbb{P}_{\mathcal{M}}$, and the associated expectation measure as $\mathbb{E}_{\mathcal{M}}$. \medskip \Paragraph{Automata as random variables.} Note that weighted automata, NWA, and automata with monitor counters all define measurable functions $f: \Sigma^\omega \mapsto \mathbb{R}$, and thus can be interpreted as random variables w.r.t. the probabilistic space we consider. Hence, given an automaton ${\cal A}$ and a Markov chain $\mathcal{M}$, we consider the following fundamental quantities: \begin{enumerate} \item \textbf{Expected value}: $\mathbb{E}_{\mathcal{M}}({\cal A})$ is the expected value of the random variable defined by the automaton ${\cal A}$ w.r.t. the probability measure defined by the Markov chain $\mathcal{M}$. \item \textbf{(Cumulative) distribution}: $\mathbb{D}_{\mathcal{M}, {\cal A}}(\lambda) = \mathbb{P}_{\mathcal{M}}(\{w : \valueL{{\cal A}}(w) \leq \lambda \})$ is the cumulative distribution function of the random variable defined by the automaton ${\cal A}$ w.r.t. the probability measure defined by the Markov chain $\mathcal{M}$. \end{enumerate} \medskip \Paragraph{Computational questions.} Given an automaton ${\cal A}$ and a Markov chain $\mathcal{M}$, we consider the following basic computational questions: (Q1)~the \emph{expected question} asks to compute $\mathbb{E}_{\mathcal{M}}({\cal A})$; (Q2)~the \emph{distribution question} asks, given a threshold $\lambda$, to compute $\mathbb{D}_{\mathcal{M}, {\cal A}}(\lambda)$. Questions (Q1) and (Q2) have their approximate variants, which, given an additional input $\epsilon > 0$, ask to compute values that are $\epsilon$-close to $\mathbb{E}_{\mathcal{M}}({\cal A})$ or $\mathbb{D}_{\mathcal{M}, {\cal A}}(\lambda)$: (Q3)~the \emph{approximate expected question} asks to compute a value $\eta$ such that $|\eta - \mathbb{E}_{\mathcal{M}}({\cal A})| \leq \epsilon$; and (Q4)~the \emph{approximate distribution question} asks to compute a value $\eta$ such that $|\eta - \mathbb{D}_{\mathcal{M}, {\cal A}}(\lambda)| \leq \epsilon$. Additionally, a special important case of the distribution question is (Q5)~the \emph{almost-sure distribution question}, which asks whether for a given $\lambda$ the probability $\mathbb{D}_{\mathcal{M}, {\cal A}}(\lambda)$ is exactly $1$. We refer to questions (Q1)-(Q5) as \emph{probabilistic questions}.
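As an illustration of these definitions (not of our algorithms, which are exact), the following Python sketch approximates the expected question (Q1), in the spirit of its approximate variant (Q3), for a hypothetical one-state deterministic $\textsc{LimAvg}$-automaton by averaging weights along a long prefix sampled from a small labeled Markov chain; both the chain and the automaton are our own toy examples.
\begin{verbatim}
import random

E = {'s0': [(0.5, 'a', 's0'), (0.5, 'b', 's1')],  # state -> (prob, letter, succ)
     's1': [(1.0, 'a', 's0')]}
DELTA = {('q0', 'a'): 'q0', ('q0', 'b'): 'q0'}    # deterministic automaton
WEIGHT = {('q0', 'a'): 1, ('q0', 'b'): -1}

def estimate_expected_limavg(steps=200_000, seed=0):
    random.seed(seed)
    s, q, total = 's0', 'q0', 0
    for _ in range(steps):
        edges = E[s]
        _, a, s = random.choices(edges, weights=[e[0] for e in edges])[0]
        total += WEIGHT[(q, a)]
        q = DELTA[(q, a)]
    return total / steps

print(estimate_expected_limavg())  # close to 1/3 for this chain
\end{verbatim}
For this chain the stationary probabilities of the letters are $\mathbb{P}(a)=\frac{2}{3}$ and $\mathbb{P}(b)=\frac{1}{3}$, so almost every word has limit average $\frac{1}{3}$, which is the expected value being estimated.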
Note that an upper bound on the complexity of the expected and distribution questions implies the same upper bound on all probabilistic questions, as the approximate and almost-sure variants are special cases. \begin{example}[Expected average response time] Consider the NWA $\mathbb{A}$ from Example~\ref{ex:NWA}. Recall that it computes ART on the words it accepts (words with a bounded number of requests between any two grants). Next, consider a Markov chain $\mathcal{M}$ which gives a distribution on words over $\{r,g,\#\}$. In such a case, the value $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$ is the expected ART. \end{example} \section{Results on classical questions} \smallskip\noindent{\bf Existing results.} The complexity of the classical decision problems for NWA has been established in~\cite{nested}; it is summarized in Table~\ref{tab1}. \begin{table}[t] \centering \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{}& $\textsc{Inf}$& $\textsc{Sup}$ & \multirow{2}{*}{$\textsc{LimAvg}$} \\ \multicolumn{2}{|c|}{}& $\textsc{LimInf}$ & $\textsc{LimSup}$ & \\ \hline $\textsc{Min}, \textsc{Max}$ &Empt.& \multicolumn{3}{|c|}{ \multirow{2}{*}{$\textsc{PSp.}{}$-c }} \\ \cline{2-2} $\fBsum{B}$ &Univ.& \multicolumn{3}{|c|}{} \\ \hline \multirow{2}{*}{$\textsc{Sum}$} & Empt. & $\textsc{PSp.}{}$-c & Undec. & \multirow{2}{*}{Open} \\ \cline{2-4} & Univ. & Undec. & $\textsc{PSp.}{}$-c & \\ \hline \multirow{2}{*}{$\textsc{Sum}^+$} & Empt. & \multicolumn{2}{|c|}{ \multirow{2}{*}{$\textsc{PSp.}{}$-c }} & \multirow{2}{*}{$\textsc{ExpSp.}{}$ } \\ &Univ. & \multicolumn{2}{|c|}{} & \\ \hline \end{tabular} \caption{Decidability and complexity of emptiness and universality for deterministic $(f;g)$-automata. Functions $f$ are listed in the first row and functions $g$ are in the first column. $\textsc{PSp.}{}$ denotes $\textsc{PSpace}{}$, $\textsc{ExpSp.}{}$ denotes $\textsc{ExpSpace}{}$, and Undec.\ denotes undecidability.} \label{tab1} \end{table} \smallskip\noindent{\bf New results.} Due to Lemma~\ref{l:mc-vs-nested}, decidability for deterministic $(f;\textsc{Sum})$-automata implies decidability for deterministic automata with monitor counters with the value function $f$. However, the undecidability results for NWA do not imply undecidability for automata with monitor counters. Our following result completes the decidability picture for automata with monitor counters (i.e., the decidability results coincide with the $\textsc{Sum}$ row of Table~\ref{tab1}). \begin{restatable}{theorem}{EigthCountersUndecidable} (1)~The emptiness problem is undecidable for deterministic $\textsc{Sup}$-automata (resp., $\textsc{LimSup}$-automata) with $8$ monitor counters. (2)~The universality problem is undecidable for deterministic $\textsc{Inf}$-automata (resp., $\textsc{LimInf}$-automata) with $8$ monitor counters. \label{th:undecidable-limsup} \end{restatable} \section{Results on probabilistic questions} In this section we present our results on the probabilistic questions for deterministic NWA. First we present some basic properties, then some basic facts about Markov chains, and then our results organized by the value function of the master automaton. \medskip \noindent{\em Property about almost-sure acceptance.} Observe that if the probability of the set of words rejected by an automaton ${\cal A}$ is strictly greater than $0$, then the expected value of such an automaton is infinite or undefined.
In the next lemma we show that, given a deterministic NWA $\mathbb{A}$ and a Markov chain $\mathcal{M}$, it can be decided in polynomial time whether the set of accepted words has probability~1. In the sequel, for all the computational problems we consider, we assume that the set of accepted words has probability~1. This assumption does not influence the complexity of the computational questions related to the expected value, but it has an influence on the complexity of the distribution questions, which we discuss in the appendix. \begin{restatable}{proposition}{AcceptAlmostAllInP} \label{prop:almostAll} Given a deterministic NWA $\mathbb{A}$ and a Markov chain $\mathcal{M}$, we can decide in polynomial time whether $\mathbb{P}_{\mathcal{M}}(\{w : \mathsf{Acc}(w) \neq \emptyset\})=1$. \end{restatable} \subsection{Basic facts about Markov chains} \Paragraph{Labeled Markov chains with weights.} A labeled Markov chain with weights is a (labeled) Markov chain $\mathcal{M}$ with a function $r$ which associates integers with the edges of $\mathcal{M}$. Formally, a \emph{(labeled) Markov chain with weights} is a tuple $\tuple{\Sigma,S,s_0,E,r}$, where $\tuple{\Sigma,S,s_0,E}$ is a labeled Markov chain and $r : S \times \Sigma \times S \mapsto \mathbb{Z}$. \medskip \Paragraph{Graph properties on Markov chains}. Standard graph notions have their counterparts on Markov chains, obtained by considering edges with strictly positive probability as present and edges with probability $0$ as absent. For example, we consider the following graph notions: \begin{itemize} \item \textbf{(reachability)}: A state $s$ is {\em reachable} from $s'$ in a Markov chain if there exists a sequence of edges with positive probability starting in $s'$ and ending in $s$. \item \textbf{(SCCs)}: A subset of states $Q$ of a Markov chain is a \emph{strongly connected component} (SCC) if and only if from any state of $Q$ all states in $Q$ are reachable. \item \textbf{(end SCCs)}: An SCC $Q$ is an \emph{end} SCC if and only if there are no edges leaving $Q$. \end{itemize} \medskip \Paragraph{The product of an automaton and a Markov chain.} Let ${\cal A} = \tuple{ \Sigma, Q, q_0, \delta, F, {C}}$ be a deterministic weighted automaton and let $\mathcal{M} = \tuple{\Sigma,S,s_0,E, r}$ be a labeled Markov chain with weights. We define the product of ${\cal A}$ and $\mathcal{M}$, denoted by ${\cal A} \times \mathcal{M}$, as the Markov chain $\tuple{ \Sigma, Q \times S, (q_0, s_0), E', r'}$, where (1)~$E'(\tuple{q_1, s_1}, a, \tuple{q_2, s_2}) = E(s_1, a, s_2)$ if $(q_1, a, q_2) \in \delta$ and $E'(\tuple{q_1, s_1}, a, \tuple{q_2, s_2}) = 0$ otherwise, and (2)~$r'(\tuple{q_1, s_1}, a, \tuple{q_2, s_2}) = {C}(q_1, a, q_2) + r(s_1, a, s_2)$. The expected value and distribution questions can be answered in polynomial time for deterministic weighted automata with value functions from $\mathsf{InfVal}$~\cite{ChatterjeeDH09LimInf}. \begin{fact} \label{t:weighted-inf-expected} \label{t:weighted-limavg-expected} Let $f \in \mathsf{InfVal}$. Given a Markov chain $\mathcal{M}$, a deterministic $f$-automaton ${\cal A}$ and a value $\lambda$, the values $\mathbb{E}_{\mathcal{M}}({\cal A})$ and $\mathbb{D}_{\mathcal{M}, {\cal A}}(\lambda)$ can be computed in polynomial time. \end{fact} \subsection{$\textsc{LimInf}$ and $\textsc{LimSup}$ value functions} \label{s:liminf} In this section we study NWA with the $\textsc{LimInf}$ and $\textsc{LimSup}$ value functions for the master automaton. We establish polynomial-time algorithms for all probabilistic questions.
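The algorithms in this and the following sections repeatedly use the product construction ${\cal A} \times \mathcal{M}$ defined above; for concreteness, here is a small dictionary-based Python sketch of it (ours; edges not listed implicitly have probability $0$).
\begin{verbatim}
# Sketch: the product of a deterministic weighted automaton and a
# labeled Markov chain with weights, following the definition above.
def product(delta, C, E, r):
    """delta: (q,a) -> q'; C: (q,a) -> weight of A;
    E: (s,a,s') -> probability; r: (s,a,s') -> weight of M."""
    E_prod, r_prod = {}, {}
    for (s, a, s2), p in E.items():
        for (q, b), q2 in delta.items():
            if a == b and p > 0:
                edge = ((q, s), a, (q2, s2))
                E_prod[edge] = p                          # E' = E(s,a,s')
                r_prod[edge] = C[(q, a)] + r[(s, a, s2)]  # r' = C + r
    return E_prod, r_prod
\end{verbatim}
Reachability probabilities and end SCCs of the resulting chain then yield the quantities referred to in Fact~\ref{t:weighted-inf-expected}.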
We start with a result for the special case when the master automaton is strongly connected w.r.t.\ the Markov chain. \medskip \Paragraph{An automaton strongly connected on a Markov chain.} We say that a deterministic automaton ${\cal A}$ is \emph{strongly connected on} a Markov chain $\mathcal{M}$ if and only if the states reachable (with positive probability) in ${\cal A} \times \mathcal{M}$ from the initial state form an SCC. \begin{restatable}{lemma}{stronglyConnectedComponenets} Let $g \in \mathsf{FinVal}$, let $\mathcal{M}$ be a Markov chain, and let $\mathbb{A}$ be a deterministic $(\textsc{Inf};g)$-automaton (resp., $(\textsc{LimInf}; g)$-automaton). If the master automaton of $\mathbb{A}$ is strongly connected on $\mathcal{M}$, then there exists a unique value $\lambda$, with $|\lambda| \leq |\mathbb{A}|$ or $\lambda = -\infty$ (or $\lambda =-B$ for $g = \fBsum{B}$), such that $\mathbb{P}_{\mathcal{M}}(\{ w : \mathbb{A}(w) = \lambda \}) = 1$. Moreover, given $\mathcal{M}$ and $\mathbb{A}$, the value $\lambda$ can be computed in time polynomial in $|\mathcal{M}| + |\mathbb{A}|$. \label{l:in-scc-all-equal} \end{restatable} Intuitively, in the probabilistic setting, for the condition saying that a given infimal value appears infinitely often, we establish in Lemma~\ref{l:in-scc-all-equal} a sort of 0-1 law for SCCs, which shows that a given infimum appears infinitely often either on almost all words or only on a set of words of probability zero. Consider the product of $\mathcal{M}$ and the master automaton of $\mathbb{A}$, and consider a state $(s,q)$ in the product. Consider a slave automaton ${\mathfrak{B}}_i$ that can be invoked in $q$, and let $\lambda_{s,q,i}$ be the minimal value that ${\mathfrak{B}}_i$ achieves with positive probability over finite words, given that the Markov chain starts in the state $s$. Then we establish that $\lambda=\min_{s,q,i} \lambda_{s,q,i}$, i.e., it is the minimum over all such triples. Lemma~\ref{l:in-scc-all-equal} implies the following main lemma of this section. \begin{restatable}{lemma}{liminfIsPolynomial} Let $g \in \mathsf{FinVal}$. For a deterministic $(\textsc{LimInf};g)$-automaton (resp., $(\textsc{LimSup}; g)$-automaton) $\mathbb{A}$ and a Markov chain $\mathcal{M}$, given a threshold $\lambda$, both $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$ and $\mathbb{D}_{\mathcal{M},\mathbb{A}}(\lambda)$ can be computed in polynomial time. \label{th:liminfIsPoly} \end{restatable} \smallskip\noindent{\em The key ideas.} Consider a $(\textsc{LimInf}; g)$-automaton (resp., $(\textsc{LimSup}; g)$-automaton) $\mathbb{A}$. The value $\mathbb{A}(w)$ depends only on the infinite behavior of the (unique) run of $\mathbb{A}$ on $w$. Almost all runs of $\mathbb{A}$ end up in one of the end SCCs of ${\cal A}_{mas} \times \mathcal{M}$, where, by Lemma~\ref{l:in-scc-all-equal}, almost all words have the same value, which can be computed in polynomial time. Thus, to compute $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$, it suffices to compute the probabilities of reaching the end SCCs and the values of $\mathbb{A}$ in these components. In a similar way, we can compute $\mathbb{D}_{\mathcal{M},\mathbb{A}}(\lambda)$. \begin{theorem} Let $g \in \mathsf{FinVal}$. The complexity results for the probabilistic questions for $(\textsc{LimInf}; g)$-automata (resp., $(\textsc{LimSup}; g)$-automata) are summarized in Table~\ref{tab:compLimInf}.
\label{th:compLimInf} \end{theorem} \begin{table}[h] \begin{tabular}{|c|c|} \hline & $\textsc{Min}, \textsc{Max}, \fBsum{B},\textsc{Sum}^+, \textsc{Sum}$ \\ \hline All probabilistic & \textsc{PTime}{} \\ questions & (Lemma~\ref{th:liminfIsPoly}) \\ \hline \end{tabular} \caption{The complexity results for various problems for deterministic NWA with the $\textsc{LimSup}$ and $\textsc{LimInf}$ value functions.} \label{tab:compLimInf} \end{table} \begin{remark}[Contrast with classical questions] \label{remark:LimInf-classical-vs-probabilistic} Consider the results on classical questions shown in Table~\ref{tab1} and the results for probabilistic questions we establish in Table~\ref{tab:compLimInf}. Quite contrastingly, while the classical questions are $\textsc{PSpace}{}$-complete or undecidable, we establish polynomial-time algorithms for all probabilistic questions. \end{remark} \subsection{$\textsc{Inf}$ and $\textsc{Sup}$ value functions} \label{s:inf} In contrast to the $\textsc{LimInf}$ and $\textsc{LimSup}$ value functions, for which all probabilistic questions can be answered in polynomial time (Theorem~\ref{th:compLimInf}), we first present several hardness results for the $\textsc{Inf}$ and $\textsc{Sup}$ value functions for NWA. \begin{restatable}{lemma}{InfimumIsHard}[Hardness results] \label{l:hardness-for-det-inf} Let $g \in \mathsf{FinVal}$ be a value function, and let $\mathcal{U}$ denote the uniform distribution over infinite words. \begin{enumerate} \item The following problems are $\textsc{PSpace}{}$-hard: Given a deterministic $(\textsc{Inf};g)$-automaton (resp., $(\textsc{Sup}; g)$-automaton) $\mathbb{A}$, decide whether $\mathbb{E}_{\mathcal{U}}(\mathbb{A})=0$; and decide whether $\mathbb{D}_{\mathcal{U},\mathbb{A}}(0)=1$. \item The following problems are \#P-hard: Given $\epsilon > 0$ and a deterministic $(\textsc{Inf};g)$-automaton (resp., $(\textsc{Sup}; g)$-automaton) $\mathbb{A}$, compute $\mathbb{E}_{\mathcal{U}}(\mathbb{A})$ up to precision $\epsilon$; and compute $\mathbb{D}_{\mathcal{U}, \mathbb{A}}(0)$ up to precision $\epsilon$. \end{enumerate} \end{restatable} \smallskip\noindent{\em The key ideas.} We present the key ideas of the two hardness proofs. \smallskip\noindent{\em $\textsc{PSpace}{}$-hardness.} NWA have the ability to invoke multiple slave automata which independently work over the same word. In particular, one can express that the intersection of the languages of finite-word automata ${\cal A}_1, \ldots, {\cal A}_k$ is non-empty by turning these automata into slave automata that return $1$ if the original automaton accepts and $0$ otherwise. Then, the infimum over all values of slave automata is $1$ if and only if the intersection is non-empty. Note, however, that the shortest words in the intersection can have exponential length. The probability of such a word can be doubly-exponentially small in the size of $\mathbb{A}$, and thus the $\textsc{PSpace}{}$-hardness does not apply to the approximation problems (for which we establish $\#P$-hardness below). \smallskip\noindent{\em $\#P$-hardness.} We show $\#P$-hardness of the approximate variants by a reduction from $\#\mathsf{SAT}$, which is $\#P$-complete~\cite{valiant1979complexity,papadimitriou2003computational}. The $\#\mathsf{SAT}$ problem asks, given a CNF formula $\varphi$, for the number of assignments satisfying $\varphi$. In the proof, (the prefix of) the input word gives an assignment, which is processed by the slave automata. Each slave automaton checks the satisfaction of one clause and returns $1$ if it is satisfied and $0$ otherwise.
Thus, all slave automata return $1$ if and only if all clauses are satisfied. In such a case, one can compute from $\mathbb{E}_{\mathcal{U}}(\mathbb{A})$ and $\mathbb{D}_{\mathcal{U}, \mathbb{A}}(0)$ the number of satisfying assignments of $\varphi$. \medskip \Paragraph{Upper bounds for $g\in \mathsf{FinVal} \setminus \{ \textsc{Sum}^+,\textsc{Sum} \}$.} We now present upper bounds for value functions $g \in \mathsf{FinVal} \setminus \{ \textsc{Sum}^+,\textsc{Sum} \}$ of the slave automata. First we show an exponential-time upper bound for general NWA with the $\textsc{Inf}$ and $\textsc{Sup}$ value functions (cf.\ the $\textsc{PSpace}{}$-hardness from Lemma~\ref{l:hardness-for-det-inf}). \begin{restatable}{lemma}{InfExpectedSolution} \label{l:infSolutions} Let $g \in \mathsf{FinVal} \setminus \{ \textsc{Sum}^+, \textsc{Sum} \}$ be a value function. Given a Markov chain $\mathcal{M}$, a deterministic $(\textsc{Inf};g)$-automaton (resp., $(\textsc{Sup}; g)$-automaton) $\mathbb{A}$, and a threshold $\lambda$ in binary, both $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$ and $\mathbb{D}_{\mathcal{M},\mathbb{A}}(\lambda)$ can be computed in exponential time. Moreover, if $\mathbb{A}$ has bounded width, then the above quantities can be computed in polynomial time. \end{restatable} \begin{remark} We show in Lemma~\ref{l:infSolutions} a polynomial-time upper bound for NWA of bounded width, which gives a polynomial-time upper bound for automata with monitor counters. \end{remark} \smallskip\noindent{\em Key ideas.} For $g \in \mathsf{FinVal} \setminus \{ \textsc{Sum}^+, \textsc{Sum} \}$, it has been shown in~\cite{nested} that $(\textsc{Inf};g)$-automata (resp., $(\textsc{Sup}; g)$-automata) can be transformed into exponential-size $\textsc{Inf}$-automata (resp., $\textsc{Sup}$-automata). We observe that the transformation preserves determinism. Then, using Fact~\ref{t:weighted-inf-expected}, both $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$ and $\mathbb{D}_{\mathcal{M},\mathbb{A}}(\lambda)$ can be computed in exponential time. \medskip \Paragraph{The $\textsc{Sum}^+$ and $\textsc{Sum}$ value functions for slave automata.} We now establish the results for $g=\textsc{Sum},\textsc{Sum}^+$. First we establish decidability of the approximation problems, and then undecidability of the exact questions. \begin{restatable}{lemma}{InfSumSolution} \label{l:infSumSolution} Let $g \in \{\textsc{Sum}^+, \textsc{Sum}\}$. Given $\epsilon > 0$, a Markov chain $\mathcal{M}$, a deterministic $(\textsc{Inf};g)$-automaton (resp., $(\textsc{Sup}; g)$-automaton) $\mathbb{A}$, and a threshold $\lambda$, both $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$ and $\mathbb{D}_{\mathcal{M}, \mathbb{A}}(\lambda)$ can be computed up to precision $\epsilon$ in exponential time; the dependency on $\epsilon$ is linear in the size of the binary representation of $\epsilon$. \end{restatable} \smallskip\noindent{\em Key ideas.} The main difference between the $\textsc{Inf}$ and $\textsc{LimInf}$ value functions is that the latter discards all values encountered before the master automaton reaches an end SCC, in which the infimum of the values of slave automata is easy to compute (Lemma~\ref{l:in-scc-all-equal}). We show that for some $B$, exponential in $|\mathbb{A}|$ and polynomial in the binary representation of $\epsilon$, the probability that any slave automaton returns a value $\lambda$ with $|\lambda| > B$ is smaller than $\epsilon$.
Therefore, to approximate $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$ and $\mathbb{D}_{\mathcal{M}, \mathbb{A}}(\lambda)$ up to precision $\epsilon$, we can regard a given $(\textsc{Inf}; \textsc{Sum})$-automaton (resp., $(\textsc{Sup}; \textsc{Sum})$-automaton) as an $(\textsc{Inf}; \fBsum{B})$-automaton (resp., $(\textsc{Sup}; \fBsum{B})$-automaton) and use Lemma~\ref{l:infSolutions}. \begin{restatable}{lemma}{InfSumUndecidable} \label{l:inf-prob-undec} Let $\mathcal{U}$ denote the uniform distribution over infinite words. The following problems are undecidable: (1)~Given a deterministic $(\textsc{Inf}; \textsc{Sum})$-automaton (resp., $(\textsc{Sup}; \textsc{Sum})$-automaton) $\mathbb{A}$ of width bounded by $8$, decide whether $\mathbb{D}_{\mathcal{U}, \mathbb{A}}(-1) = 1$. (2)~Given two deterministic $(\textsc{Inf}; \textsc{Sum})$-automata (resp., $(\textsc{Sup}; \textsc{Sum})$-automata) $\mathbb{A}_1, \mathbb{A}_2$ of width bounded by $8$, decide whether $\mathbb{E}_{\mathcal{U}}(\mathbb{A}_1) = \mathbb{E}_{\mathcal{U}}(\mathbb{A}_2)$. \end{restatable} \smallskip\noindent{\em Key ideas.} Observe that $\mathbb{D}_{\mathcal{U}, \mathbb{A}}(-1) = 1$ holds if and only if almost every word has a value not exceeding $-1$, i.e., the distribution question is closely related to the universality problem. We observe that for the automata from the proof of Theorem~\ref{th:undecidable-limsup} such an equivalence indeed holds, i.e., there exists a word with a value exceeding $-1$ if and only if $\mathbb{D}_{\mathcal{U}, \mathbb{A}}(-1) < 1$. To show (2), we consider the automaton $\mathbb{A}$ from (1) and its copy $\mathbb{A}'$ that at the first transition invokes a slave automaton that returns $-1$. On every word $w$, we have $\mathbb{A}'(w) = \min(-1, \mathbb{A}(w))$. Thus, the expected values are equal if and only if $\mathbb{D}_{\mathcal{U}, \mathbb{A}}(-1) = 1$. Finally, we have the following result for the absolute sum value function. \begin{restatable}{lemma}{PositiveSum} \label{positivesum} (1)~Given a Markov chain $\mathcal{M}$, a deterministic $(\textsc{Inf};\textsc{Sum}^+)$-automaton $\mathbb{A}$, and a threshold $\lambda$ in binary, both $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$ and $\mathbb{D}_{\mathcal{M},\mathbb{A}}(\lambda)$ can be computed in exponential time. (2)~Given a Markov chain $\mathcal{M}$, a deterministic $(\textsc{Sup};\textsc{Sum}^+)$-automaton $\mathbb{A}$, and a threshold $\lambda$ in binary, the value $\mathbb{D}_{\mathcal{M},\mathbb{A}}(\lambda)$ can be computed in exponential time. \end{restatable} The problem of computing $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$ for deterministic $(\textsc{Sup};\textsc{Sum}^+)$-automata $\mathbb{A}$ remains open. \begin{theorem} Let $g \in \mathsf{FinVal}$. The complexity results for the probabilistic questions for $(\textsc{Inf}; g)$-automata and $(\textsc{Sup}; g)$-automata are summarized in Table~\ref{tab:compInf}, with the exception of the expected question for $(\textsc{Sup}; \textsc{Sum}^+)$-automata.
\label{th:compInf} \end{theorem} \begin{table}[h] \begin{tabular}{|c|c|c|} \hline & $\textsc{Min}, \textsc{Max},$ & \multirow{2}{*}{$\textsc{Sum}$} \\ & $\fBsum{B},\textsc{Sum}^+$ & \\ \hline Expected value & \multirow{3}{*}{$\textsc{ExpTime}{}$~(L.~\ref{l:infSolutions},\ref{l:infSumSolution})} & \multirow{4}{*}{Uncomputable}\\ \cline{1-1} {Distribution} & \multirow{3}{*}{$\textsc{PSpace}{}$-hard~(L.~\ref{l:hardness-for-det-inf})} & \multirow{4}{*}{(L.~\ref{l:inf-prob-undec})}\\ \cline{1-1} Almost sure & & \\ distribution & & \\ \hline Approximate: & \multicolumn{2}{|c|}{\multirow{2}{*}{$\textsc{ExpTime}{}$~(L.~\ref{l:infSolutions},\ref{l:infSumSolution})}} \\ (a)~expected value & \multicolumn{2}{|c|}{\multirow{2}{*}{\#P-hard~(L.~\ref{l:hardness-for-det-inf})}} \\ (b)~distribution & \multicolumn{2}{|c|}{} \\ \hline \end{tabular} \caption{The complexity results for various problems for deterministic NWA with the $\textsc{Inf}$ and $\textsc{Sup}$ value functions, with the exception of the expected question for $(\textsc{Sup};\textsc{Sum}^+)$-automata, which is open. Columns represent slave-automata value functions, rows represent probabilistic questions. } \label{tab:compInf} \end{table} \smallskip\noindent{\em Open question.} The decidability of the expected question for $(\textsc{Sup}; \textsc{Sum}^+)$-automata is open. This open problem is related to the language inclusion problem for deterministic $(\textsc{Sup}; \textsc{Sum}^+)$-automata, which is also open. \begin{remark}[Contrast with classical questions] \label{remark:Inf-classical-vs-probabilistic} Consider Table~\ref{tab1} for the classical questions and our results established in Table~\ref{tab:compInf} for the probabilistic questions. There are some contrasting results, such as: while for $(\textsc{Sup};\textsc{Sum})$-automata the emptiness problem is undecidable, the approximation problems are decidable. \end{remark} \begin{remark}[Contrast of $\textsc{LimInf}$ vs $\textsc{Inf}$] \label{remark:LimInf-vs-Inf} We remark on the contrast between the $\textsc{LimInf}$ and $\textsc{Inf}$ value functions. For the classical questions of emptiness and universality, the complexity and decidability always coincide for the $\textsc{LimInf}$ and $\textsc{Inf}$ value functions for NWA (see Table~\ref{tab1}). Surprisingly, we establish that for the probabilistic questions there is a substantial complexity gap: while the $\textsc{LimInf}$ problems can be solved in polynomial time, the $\textsc{Inf}$ problems are $\textsc{PSpace}{}$-hard (and for $g=\textsc{Sum}$ even undecidable), and even the approximate variants are $\#P$-hard. \end{remark} \subsection{$\textsc{LimAvg}$ value function} \label{s:limavg} \begin{lemma} \label{l:limavg-poly} Let $g \in \mathsf{FinVal}$. Given a Markov chain $\mathcal{M}$ and a deterministic $(\textsc{LimAvg};g)$-automaton $\mathbb{A}$, the value $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$ can be computed in polynomial time. \end{lemma} \begin{proof}[Proof sketch] We present the most interesting case, namely $g=\textsc{Sum}$. Let $\mathbb{A}$ be a $(\textsc{LimAvg};\textsc{Sum})$-automaton and let $\mathcal{M}$ be a Markov chain. We define a weighted Markov chain $\mathcal{M}^{\mathbb{A}}$ as the product ${\cal A}_{mas} \times \mathcal{M}$, where ${\cal A}_{mas}$ is the master automaton of $\mathbb{A}$. The weights of $\mathcal{M}^{\mathbb{A}}$ are the expected values of the invoked slave automata, i.e., the weight of the transition $\tuple{(q,s),a,(q',s')}$ is the expected value of ${\mathfrak{B}}_i$, the slave automaton started by ${\cal A}_{mas}$ in the state $q$ upon reading $a$, w.r.t.
the distribution given by $\mathcal{M}$ starting in $s$. One can show that the expected value of $\mathbb{A}$ w.r.t. $\mathcal{M}$, denoted by $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$, and the expected value of $\mathcal{M}^{\mathbb{A}}$ coincide. The Markov chain $\mathcal{M}^{\mathbb{A}}$ can be computed in polynomial time and has polynomial size in $|\mathbb{A}| + |\mathcal{M}|$. Thus, we can compute the expected value of $\mathcal{M}^{\mathbb{A}}$, and in turn $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$, in time polynomial in $|\mathbb{A}| + |\mathcal{M}|$. \end{proof} \begin{restatable}{lemma}{LimAvgDistribution} \label{l:limavg-dist-poly} Let $g \in \mathsf{FinVal}$. Given a Markov chain $\mathcal{M}$, a deterministic $(\textsc{LimAvg}; g)$-automaton $\mathbb{A}$ and a value $\lambda$, the value $\mathbb{D}_{\mathcal{M},\mathbb{A}}(\lambda)$ can be computed in polynomial time. \end{restatable} \smallskip\noindent{\em Key ideas.} We show that the distribution is discrete. More precisely, let ${\cal A}$ be the product of the Markov chain $\mathcal{M}$ and the master automaton of $\mathbb{A}$. We show that almost all words whose runs end up in the same end SCC of ${\cal A}$ have the same value, which is equal to the expected value over the words that end up in that SCC. Thus, to answer the distribution question, we have to compute, for every end SCC $C$ of ${\cal A}$, the expected value over the words that end up in $C$ and the probability of reaching $C$. Both values can be computed in polynomial time (see Lemma~\ref{l:limavg-poly}). \begin{theorem} Let $g \in \mathsf{FinVal}$. The complexity results for the probabilistic questions for $(\textsc{LimAvg}; g)$-automata are summarized in Table~\ref{tab:compLimAvg}. \label{th:compLimAvg} \end{theorem} \begin{table}[h] \begin{tabular}{|c|c|} \hline & $\textsc{Min}, \textsc{Max}, \fBsum{B},\textsc{Sum}^+, \textsc{Sum}$ \\ \hline All probabilistic & \textsc{PTime}{} \\ questions & (Lemmas~\ref{l:limavg-poly}~and~\ref{l:limavg-dist-poly}) \\ \hline \end{tabular} \caption{The complexity results for various problems for deterministic NWA with the $\textsc{LimAvg}$ value function.} \label{tab:compLimAvg} \vspace{-1em} \end{table} \begin{remark}[Contrast with classical questions] \label{remark:LimAvg-classical-vs-probabilistic} Our results summarized in Table~\ref{tab:compLimAvg} contrast with the results on classical questions shown in Table~\ref{tab1}. While the classical questions are $\textsc{PSpace}{}$-complete, in $\textsc{ExpSpace}{}$, or open, we establish polynomial-time algorithms for all probabilistic questions. \end{remark} \section{Results on non-deterministic automata} \label{s:nondeterminism} In this section, we briefly discuss non-deterministic NWA evaluated on Markov chains. We present two negative results. \smallskip\noindent{\em Conceptual difficulty.} The evaluation of a non-deterministic (even non-nested) weighted automaton over a Markov chain is conceptually different from the standard model of Markov decision processes (MDPs). Indeed, in an MDP, probabilistic transitions are interleaved with non-deterministic transitions, whereas an automaton runs over a word that has already been generated by the Markov chain. In MDPs, the strategy resolving non-determinism can only rely on the past, whereas in the automaton model the whole future is available (i.e., there is a crucial distinction between online and offline processing of the word). Below we present an example to illustrate the conceptual problem.
\begin{example} Consider a non-deterministic $\textsc{LimAvg}$-automaton ${\cal A}$, depicted in Figure~\ref{fig:nondet-vs-MDP}. The automaton ${\cal A}$ has two states $q_0, q_1$ and it works over the alphabet $\Sigma = \{a,b,\#\}$. The state $q_0$ has transitions to itself labeled by $a,b,\#$ of weights $1,-1,0$, respectively. The state $q_1$ has the same self-loops as $q_0$, except that the weight of $a$ is $-1$ and the weight of $b$ is $1$. Also, there are transitions from $q_0$ to $q_1$ and back, labeled by $\#$, of weight $0$. Intuitively, the automaton processes a given word in blocks of letters $a,b$ separated by letters $\#$. At the beginning of every block it decides whether the value of this block is the number of $a$ letters $n_a$ minus the number of $b$ letters $n_b$, divided by $n_a + n_b$ (i.e., $\frac{n_a - n_b}{n_a + n_b}$), or the opposite (i.e., $\frac{n_b - n_a}{n_a + n_b}$). \begin{figure} \centering \begin{tikzpicture} \tikzstyle{state}=[draw,circle,minimum size=0.8cm] \tikzstyle{Astate}=[draw,circle,minimum size=0.70cm] \node[state] (Q0) at (0,0) {$q_0$}; \node[Astate] at (0,0) {}; \node[state] (Q1) at (3,0) {$q_1$}; \node[Astate] at (3,0) {}; \draw[->,loop left] (Q0) to node[left]{$(\#,0)$} (Q0); \draw[->,loop above] (Q0) to node[left]{$(a,1)$} (Q0); \draw[->,loop below] (Q0) to node[left]{$(b,-1)$} (Q0); \draw[->,loop right] (Q1) to node[right]{$(\#,0)$} (Q1); \draw[->,loop above] (Q1) to node[right]{$(a,-1)$} (Q1); \draw[->,loop below] (Q1) to node[right]{$(b,1)$} (Q1); \draw[->,bend left] (Q0) to node[above]{$(\#,0)$} (Q1); \draw[->,bend left] (Q1) to node[below]{$(\#,0)$} (Q0); \end{tikzpicture} \caption{An example of a non-deterministic automaton in which the non-deterministic choices have to ``depend on the future'' in order to obtain the infimum.} \label{fig:nondet-vs-MDP} \vspace{-1em} \end{figure} Let $\mathcal{U}$ be the uniform distribution on infinite words over $\Sigma$. Suppose that the expected value of ${\cal A}$ w.r.t. $\mathcal{U}$ is evaluated as in the MDP case, i.e., non-deterministic choices depend only on the part of the word read so far. Then, since the distribution is uniform, any strategy results in the same expected value, which is equal to $0$. Now, consider $\mathbb{E}_{\mathcal{U}}({\cal A})$. The value of every block is at most $0$, as the automaton works over the fully generated word and at the beginning of each block it can guess whether the number of $a$'s or the number of $b$'s is greater. Also, the blocks $a\#, b\#$, with the average $-\frac{1}{2}$, appear with probability $\frac{2}{9}$; hence $\mathbb{E}_{\mathcal{U}}({\cal A}) < -\frac{1}{9}$. Thus, the result of evaluating a non-deterministic weighted automaton over a Markov chain differs from evaluating it as an MDP. \end{example}
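\smallskip\noindent{\em An illustrative simulation.} To make the online/offline contrast concrete, the following Python sketch (our informal illustration; the block-wise evaluation of ${\cal A}$ is hard-wired rather than derived from the transition table) estimates both semantics on a long uniformly random word. Resolving the choices online yields an empirical average close to $0$, whereas the offline semantics, which may pick the sign of every block after reading it, yields a clearly negative average, consistent with $\mathbb{E}_{\mathcal{U}}({\cal A}) < -\frac{1}{9}$.
\begin{verbatim}
import random

random.seed(0)
N = 10**6
letters = random.choices("ab#", k=N)

# Offline (automaton) semantics: the run may switch between q0 and q1
# at every '#', so the best run contributes -|#a - #b| per block.
offline_total, na, nb = 0, 0, 0
for c in letters:
    if c == '#':
        offline_total -= abs(na - nb)
        na = nb = 0
    elif c == 'a':
        na += 1
    else:
        nb += 1
offline_total -= abs(na - nb)  # the last, possibly unfinished, block

# Online (MDP-style) semantics: choices cannot depend on the future;
# by symmetry every strategy has the same expectation, e.g. stay in q0.
online_total = sum(1 if c == 'a' else -1 if c == 'b' else 0
                   for c in letters)

print("offline average:", offline_total / N)  # clearly below -1/9
print("online  average:", online_total / N)   # close to 0
\end{verbatim}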
\smallskip\noindent{\em Computational difficulty.} In contrast to our polynomial-time algorithms for the probabilistic questions for deterministic $(\textsc{LimSup};\textsc{Sum})$-automata, we establish the following undecidability result for non-deterministic automata of width~1. Lemma~\ref{th:nondeterminism-is-hard} implies Theorem~\ref{c:nondet-undecidable}. \begin{restatable}{lemma}{NondeterminismLemma} The following problem is undecidable: given a non-deterministic $(\textsc{LimSup}; \textsc{Sum})$-automaton $\mathbb{A}_M$ of width $1$, decide whether $\mathbb{P}(\{ w: \mathbb{A}_M(w) = 0 \}) = 1$ or $\mathbb{P}(\{ w: \mathbb{A}_M(w) = -1 \}) = 1$ w.r.t. the uniform distribution over $\{0,1\}$. \label{th:nondeterminism-is-hard} \end{restatable} \begin{restatable}{theorem}{NondeterministicUncomputable} All probabilistic questions (Q1-Q5) are undecidable for non-deterministic $(\textsc{LimSup},\textsc{Sum})$-automata of width~1. \label{c:nondet-undecidable} \end{restatable} \section{Discussion and Conclusion} In this work we study the probabilistic questions related to NWA and automata with monitor counters. We establish the relationship between NWA and automata with monitor counters, and present a complete picture of decidability for all the probabilistic questions we consider. Our results establish a sharp contrast between the decidability and complexity of the classical questions (of emptiness and universality) and of the probabilistic questions for deterministic automata (see Tables~\ref{tab1},~\ref{tab:compLimInf},~\ref{tab:compInf},~and~\ref{tab:compLimAvg}). In addition, there is also a sharp contrast between deterministic and non-deterministic automata. For example, for $(\textsc{LimSup},\textsc{Sum})$-automata, the classical questions are undecidable for deterministic and non-deterministic automata, while the probabilistic questions are decidable for deterministic automata but remain undecidable for non-deterministic automata (see Table~\ref{tab2}). Some complexity gaps remain (e.g., $\textsc{ExpTime}{}$ vs $\textsc{PSpace}{}$-hard); they are due to the fact that the computational questions we consider for Markov chains are in $\textsc{PTime}{}$ (as compared to $\textsc{NLogSpace}{}$ for graphs), and we need to evaluate exponential-size Markov chains. Closing these complexity gaps is an interesting open question. \begin{table} \begin{tabular}{|c|c|c|} \hline & Det. & Non-det. \\ \hline Emptiness & \multicolumn{2}{|c|}{Undec.~(from~\cite{nested})} \\ \hline Probabilistic & \textsc{PTime}{}~(L.~\ref{l:limavg-poly}) & Uncomputable~(Th.~\ref{c:nondet-undecidable}) \\ questions & & \\ \hline \end{tabular} \caption{Decidability and complexity status of the classical and probabilistic questions for $(\textsc{LimSup};\textsc{Sum})$-automata. The negative results hold also for NWA of bounded width and for automata with monitor counters.} \label{tab2} \vspace{-1em} \end{table} \section{Proofs} \section{Equivalence of nested weighted automata and automata with monitor counters} \begin{comment} \begin{example}[Translation of automata with monitor counters to nested weighted automata] Consider a deterministic automaton ${\cal A}$ with $k$ monitor counters. We construct an NWA $\mathbb{A}$ equivalent to ${\cal A}$. The automaton $\mathbb{A}$ uses $k$ slave automata to track the values of the $k$ monitor counters in the following way. The master automaton of $\mathbb{A}$ simulates ${\cal A}$; it invokes slave automata whenever ${\cal A}$ starts monitor counters. The slave automata simulate ${\cal A}$ as well. Each slave automaton is associated with some counter $i$; it starts in the state (of ${\cal A}$) in which the counter $i$ is initialized, simulates the value of counter $i$, and terminates when counter $i$ is terminated. Figure~\ref{fig:AMCtoNWA} presents the result of the translation of the automaton $\aut_{\textrm{diff}}$ from Example~\ref{ex:AMC} to a $(\textsc{Sup};\textsc{Sum})$-automaton of width bounded by $3$.
\begin{figure} \begin{tikzpicture} \tikzstyle{state}=[draw,circle,minimum size=0.8cm] \tikzstyle{Astate}=[draw,circle,minimum size=0.70cm] \newcommand{\gapA}{0.8} \begin{scope}[yshift=4cm,xshift=-2cm] \node[state] (q11) at (0,-0.5) {$q_3$}; \node[Astate] at (0,-0.5) {}; \node[state] (q12) at (2*\gapA,-0.5) {$q_0$}; \node[state] (q15) at (2*\gapA,-2) {$q_1$}; \node[state] (q16) at (0,-2) {$q_2$}; \node (null) at (0.8,0.0) {}; \draw[->] (q12) to node[right] {(\#,0)} (q15); \draw[->, loop right] (q15) to node[above] {(a,1)} (q15); \draw[->] (q15) to node[above] {(\#,0)} (q16); \draw[->, loop left] (q16) to node[above] {(a,-1)} (q16); \draw[->] (null) to (q12); \draw[->] (q16) to node[left] {(\#,0)} (q11); \node (A1) at (1,-2.7) {${\mathfrak{B}}_1$}; \end{scope} \begin{scope}[yshift=3.8cm,xshift=1.8cm] \node[state] (q21) at (0,-0.5) {$q_2$}; \node[Astate] at (0,-0.5) {}; \node[state] (q25) at (2*\gapA,-2) {$q_0$}; \node[state] (q26) at (0,-2) {$q_1$}; \node (null) at (1.0,-1.0) {}; \draw[->, loop right] (q25) to node[below] {(a,1)} (q25); \draw[->] (q25) to node[below] {(\#,0)} (q26); \draw[->, loop left] (q26) to node[below] {(a,-1)} (q26); \draw[->] (null) to (q25); \draw[->] (q26) to node[left] {(\#,0)} (q21); \node (A2) at (1,-2.7) {${\mathfrak{B}}_2$}; \end{scope} \node[state] at (-2,0) {$q_0$}; \node[Astate] at (-2,0) {}; \node (A3) at (-2,-0.8) {${\mathfrak{B}}_3$}; \node[state] (q1) at (0,0) {$q_0$}; \node[Astate] at (0,0) {}; \node[state] (q2) at (2*\gapA,0) {$q_1$}; \node[state] (q5) at (2*\gapA,-2) {$q_2$}; \node[state] (q6) at (0,-2) {$q_3$}; \node (null) at (-1.0,0.3) {}; \draw[->] (q1) to node[above] (e1) {(\#,1)} (q2); \draw[->] (q2) to node[right] (e2) {(\#,2)} (q5); \draw[->, loop right] (q5) to node[right] (e3) {(a,3)} (q5); \draw[->] (q5) to node[above] (e4) {(\#,3)} (q6); \draw[->, loop left] (q6) to node[left] (e5) {(a,3)} (q6); \draw[->] (q6) to node[left] (e6) {(\#,3)} (q1); \draw[->] (null) to (q1); \draw[->,dotted] (e1) to (A1); \draw[->,dotted] (e2) to (A2); \end{tikzpicture} \caption{A nested weighted automaton resulting from the translation of the automaton $\aut_{\textrm{diff}}$ from Example~\ref{ex:AMC}.} \label{fig:AMCtoNWA} \end{figure} \end{example} \begin{example}[Translation of nested weighted automata of bounded width to automata with monitor counters] Consider an $(f;\textsc{Sum})$-automaton $\mathbb{A}$ of width bounded by $k$. The automaton $\mathbb{A}$ can be simulated by an automaton with monitor counters which simulates the master automaton and up to $k$ slave automata running in parallel. To simulate the values of the slave automata, it uses monitor counters, a separate counter for each slave automaton. Figure~\ref{fig:NWAtoAMC} shows the result of the translation of the automaton $\mathbb{A}$ from Example~\ref{ex:NWA} to the automaton with monitor counters ${\cal A}_{\mathbb{A}}$. The set of states of ${\cal A}_{\mathbb{A}}$ is $\{q_0, \ldots, q_k\} \times (\{q_0^2, \bot\})^k$, i.e., the states of the master automaton and all the non-accepting states of the slave automata (in deterministic NWA the accepting states are sink states, hence storing them is redundant). Now, observe that the only reachable states of ${\cal A}_{\mathbb{A}}$ are $(q_0, \bot, \ldots, \bot), (q_1, q_0^2, \bot, \ldots, \bot), \ldots, (q_k, q_0^2, \ldots, q_0^2)$, i.e., the reachable part of ${\cal A}_{\mathbb{A}}$ is isomorphic (in the sense of graphs) to the master automaton of $\mathbb{A}$.
\begin{figure} \begin{tikzpicture} \tikzstyle{state}=[draw,circle,minimum size=0.8cm] \tikzstyle{Astate}=[draw,circle,minimum size=0.7cm] \newcommand{\gapB}{2.0} \node[state] (q0) at (-1,0) {$q_0$}; \node[Astate] at (-1,0) {$q_0$}; \node[state] (q1) at (\gapB,0) {$q_1$}; \node[state, inner sep =1pt] (q2) at (2*\gapB,0) {$q_{k-1}$}; \node[state] (q3) at (3*\gapB,0) {$q_k$}; \node at (1.5*\gapB,0) {$\ldots$}; \draw[->, loop above] (q1) to node[left] (e2) {$(\#,\vec{0})$} (q1); \draw[->, loop above] (q2) to node[left] (e3) {$(\#,\vec{0})$} (q2); \draw[->, loop above] (q3) to node[left] (e4) {$(\#,\vec{0})$} (q3); \draw[->,bend left=10] (q1) to node[below] (e5) {$(g,\vec{t})$} (q0); \draw[->,bend left=45] (q2) to node[below] (e6) {$(g,\vec{t})$} (q0); \draw[->,bend left=65] (q3) to node[below] (e7) {$(g,\vec{t})$} (q0); \draw[->] (q0) to node[above] (e8) {$(r,\vec{s_1})$} (q1); \draw[->] (q2) to node[above] (e9) {$(r,\vec{s}_k)$} (q3); \node at (5.5,-1.5) {${\cal A}_{\mathbb{A}}$}; \end{tikzpicture} \caption{The (reduced) result of the translation of the automaton $\mathbb{A}$ from Example~\ref{ex:NWA} to an automaton with monitor counters. The vector $\vec{0}$ (resp., $\vec{t}$) denotes the $k$-dimensional vector all of whose components equal $0$ (resp., $t$). The vector $\vec{s}_i$ denotes the $k$-dimensional vector whose $i$-th component is $s$ and whose other components are $0$. } \label{fig:NWAtoAMC} \end{figure} \end{example} \end{comment} \MCvsNested* \begin{proof} \Paragraph{(Translation of automata with monitor counters to NWA)}: Consider a deterministic $f$-automaton ${\cal A}^{\textrm{m-c}}$ with $k$ monitor counters. We define an $(f;\textsc{Sum})$-automaton $\mathbb{A}$, which consists of a master automaton ${\cal A}_{mas}$ and slave automata ${\mathfrak{B}}_1, \ldots, {\mathfrak{B}}_{k+1}$, defined as follows. The slave automaton ${\mathfrak{B}}_{k+1}$ is a dummy automaton, i.e., it has only a single state, which is both initial and accepting; invoking such an automaton is equivalent to taking a silent transition (with no weight). Next, the master automaton ${\cal A}_{mas}$ and the slave automata ${\mathfrak{B}}_1, \ldots, {\mathfrak{B}}_k$ are variants of ${\cal A}^{\textrm{m-c}}$, i.e., they share its underlying transition structure. The automaton ${\cal A}_{mas}$ simulates ${\cal A}^{\textrm{m-c}}$, i.e., it has the same states and the same transitions among these states as ${\cal A}^{\textrm{m-c}}$. However, whenever ${\cal A}^{\textrm{m-c}}$ activates counter $i$, the master automaton invokes the slave automaton ${\mathfrak{B}}_i$. The accepting condition of ${\cal A}_{mas}$ is the same as that of ${\cal A}^{\textrm{m-c}}$. The slave automata ${\mathfrak{B}}_1,\ldots, {\mathfrak{B}}_k$ keep track of the counters $1, \ldots, k$, i.e., for every $i \in \{1,\ldots, k\}$, the slave automaton ${\mathfrak{B}}_i$ simulates ${\cal A}^{\textrm{m-c}}$ and applies the instructions of ${\cal A}^{\textrm{m-c}}$ for counter $i$ to its own value. That is, whenever ${\cal A}^{\textrm{m-c}}$ changes the value of counter $i$ by $m$, the automaton ${\mathfrak{B}}_i$ takes a transition of weight $m$. Finally, ${\mathfrak{B}}_i$ terminates precisely when ${\cal A}^{\textrm{m-c}}$ terminates counter $i$. The semantics of automata with monitor counters implies that $\mathbb{A}$ accepts if and only if ${\cal A}^{\textrm{m-c}}$ accepts and that, for every word, the sequences of weights produced by the runs of $\mathbb{A}$ and ${\cal A}^{\textrm{m-c}}$ on that word coincide.
Therefore, the values of $\mathbb{A}$ and ${\cal A}^{\textrm{m-c}}$ coincide on every word. \Paragraph{(Translation of NWA of bounded width to automata with monitor counters)}: We show that non-deterministic (resp., deterministic) $f$-automata with monitor counters subsume non-deterministic (resp., deterministic) $(f;\textsc{Sum})$-automata of bounded width. Consider a non-deterministic $(f;\textsc{Sum})$-automaton $\mathbb{A}$ with width bounded by $k$. We define an $f$-automaton ${\cal A}^{\textrm{m-c}}$ with $k$ monitor counters that works as follows. Let $Q_{mas}$ be the set of states of the master automaton of $\mathbb{A}$ and let $Q_s$ be the union of the sets of states of the slave automata of $\mathbb{A}$. The set of states of ${\cal A}^{\textrm{m-c}}$ is $Q_{mas} \times Q_{s} \times \ldots \times Q_s = Q_{mas} \times (Q_s)^k$. The automaton ${\cal A}^{\textrm{m-c}}$ simulates the runs of the master automaton and the slave automata by keeping track of the state of the master automaton and the states of up to $k$ active slave automata. Moreover, it uses counters to simulate the values of the slave automata, i.e., whenever a slave automaton is activated, ${\cal A}^{\textrm{m-c}}$ simulates the execution of this automaton and assigns some counter $i$ to it. Next, when the simulated slave automaton takes a transition of weight $m$, the automaton ${\cal A}^{\textrm{m-c}}$ changes the value of counter $i$ by $m$. Finally, ${\cal A}^{\textrm{m-c}}$ terminates counter $i$ when the corresponding slave automaton terminates. Since $\mathbb{A}$ has width bounded by $k$, the simulating automaton ${\cal A}^{\textrm{m-c}}$ never runs out of counters to simulate slave automata. Moreover, as it simulates the runs of the master automaton and the slave automata of $\mathbb{A}$, there is a one-to-one correspondence between runs of ${\cal A}^{\textrm{m-c}}$ and runs of $\mathbb{A}$, and accepting runs of $\mathbb{A}$ correspond to accepting runs of ${\cal A}^{\textrm{m-c}}$. Finally, the sequence of weights for the master automaton determined by a given run of $\mathbb{A}$ coincides with the sequence of weights of ${\cal A}^{\textrm{m-c}}$ on the corresponding run. Therefore, the values of $\mathbb{A}$ and ${\cal A}^{\textrm{m-c}}$ coincide on every word. Thus, non-deterministic $f$-automata with monitor counters subsume non-deterministic $(f;\textsc{Sum})$-automata of bounded width. Moreover, the one-to-one correspondence between the runs of $\mathbb{A}$ and ${\cal A}^{\textrm{m-c}}$ implies that if $\mathbb{A}$ is deterministic, then ${\cal A}^{\textrm{m-c}}$ is deterministic. Therefore, deterministic $f$-automata with monitor counters subsume deterministic $(f;\textsc{Sum})$-automata of bounded width. This completes the proof. \end{proof} \section{Basic properties} \Paragraph{Almost-sure acceptance for deterministic NWA}. We present the proof of Proposition~\ref{prop:almostAll}. \AcceptAlmostAllInP* \begin{proof} The master automaton has to accept almost all words. For all pairs $(q,s)$, where $q$ is the initial state of some slave automaton ${\mathfrak{B}}_i$ and $s$ is a state of the Markov chain $\mathcal{M}$, we check that either ${\mathfrak{B}}_i$ is almost-surely not invoked while $\mathcal{M}$ is in the state $s$, or ${\mathfrak{B}}_i$ almost-surely accepts (w.r.t. the distribution given by $\mathcal{M}$ started in $s$). One can easily check that this condition is necessary and sufficient, and that it can be checked in polynomial time.
\end{proof} \begin{remark}[Almost-sure acceptance] In the main article we assume almost-sure acceptance. As mentioned (in the paragraph ``Property of almost-sure acceptance'' before Section~\ref{s:liminf}), the answer to the expected value problem does not change even without this assumption. We show next that without the almost-sure acceptance condition, the distribution questions become as hard as for the $\textsc{Inf}$ and $\textsc{Sup}$ value functions. Hence in the main article we adopt the almost-sure acceptance property and present the conceptually interesting results. Moreover, classically, weighted automata have been considered without any acceptance condition (i.e., all words are accepted), in which case almost-sure acceptance is trivially ensured. \end{remark} \smallskip\noindent{\bf Non almost-sure acceptance.} We show that removing the restriction that almost all words are accepted changes the complexity of the distribution questions. The intuition behind this is that the condition ``all slave automata accept'' allows us to simulate (in a restricted way) the $\textsc{Inf}$ value function. This also indicates that the assumption that almost all words are accepted is justified in the probabilistic framework. \begin{lemma} Assume that the set of rejected words can have non-zero probability. Then, for all $f \in \mathsf{InfVal}$ and $g \in \mathsf{FinVal}$, we have \begin{compactenum} \item The distribution question for deterministic $(f;g)$-automata is $\textsc{PSpace}{}$-hard. \item The approximate distribution question for deterministic $(f;g)$-automata is $\#P$-hard. \end{compactenum} \end{lemma} \begin{proof} The distribution (resp., approximate distribution) question for deterministic $(\textsc{Inf};g)$-automata whose slave automata return only the values $0,1$ reduces to the distribution (resp., approximate distribution) question for $(f;g)$-automata. Given a deterministic $(\textsc{Inf};g)$-automaton $\mathbb{A}$, we modify it to $\mathbb{A}^f$ by modifying all slave automata so that each slave automaton rejects instead of returning $1$. Observe that if a deterministic $g$-automaton returns only the values $0,1$, then the set of words with value $1$ is regular. Now, for every word $w$, the following conditions are equivalent: \begin{compactitem} \item $\mathbb{A}^f$ accepts $w$, \item $\mathbb{A}^f(w) = 0$, and \item $\mathbb{A}(w) = 0$. \end{compactitem} By Lemma~\ref{l:hardness-for-det-inf}, the distribution question for deterministic $(\textsc{Inf};g)$-automata is $\textsc{PSpace}{}$-hard and the approximate distribution question for deterministic $(\textsc{Inf};g)$-automata is $\#P$-hard. Observe that the slave automata in the proof of Lemma~\ref{l:hardness-for-det-inf} return the values $0,1$. Hence, the result follows. \end{proof} In the remainder of the appendix, we present the conceptually interesting results under the almost-sure acceptance condition. As mentioned above, in the classical setting of weighted automata, which has no acceptance condition, almost-sure acceptance is trivially satisfied. Before the other results, we present a technical duality result, which will be used in the proofs. \Paragraph{Duality property between infimum and supremum}. In the sequel, when we consider the expected value and the distribution, in most cases we consider only the $\textsc{Inf}$ and $\textsc{LimInf}$ value functions; by duality, we obtain results for the $\textsc{Sup}$ and $\textsc{LimSup}$ value functions, respectively.
The only exceptions are $(\textsc{Inf}, \textsc{Sum}^+)$-automata and $(\textsc{Sup}, \textsc{Sum}^+)$-automata, which have to be considered separately. For every value function $g \in \mathsf{FinVal} \setminus \{\textsc{Sum}^+\}$ we define $-g$ as follows: $-{\textsc{Min}} = \textsc{Max}$, $-{\textsc{Max}} = \textsc{Min}$, and $-{g} = g$ for $g \in \{\fBsum{B}, \textsc{Sum}\}$. \begin{restatable}{lemma}{} For every $g \in \mathsf{FinVal} \setminus \{\textsc{Sum}^+\}$, every deterministic $(\textsc{Sup}; g)$-automaton (resp., $(\textsc{LimSup}; g)$-automaton) $\mathbb{A}_1$ can be transformed into a deterministic $(\textsc{Inf}; -{g})$-automaton (resp., $(\textsc{LimInf}; -{g})$-automaton) $\mathbb{A}_2$ of the same size such that for every word $w$ we have $\mathbb{A}_1(w) = -\mathbb{A}_2(w)$. \label{l:sup-to-inf} \end{restatable} \begin{proof} The automaton $\mathbb{A}_2$ is obtained from $\mathbb{A}_1$ by multiplying all the weights by $-1$. \end{proof} \section{Undecidability results} In this section we prove Theorem~\ref{th:undecidable-limsup}, Lemma~\ref{l:inf-prob-undec} and Theorem~\ref{th:nondeterminism-is-hard}. They have similar proofs; therefore, we give the proof of Theorem~\ref{th:undecidable-limsup} in detail, and for the remaining two results we only discuss the changes needed to adapt that proof. \newcommand{\mathcal{M}}{\mathcal{M}} \EigthCountersUndecidable* \begin{proof} We show undecidability of the emptiness problem for deterministic $\textsc{LimSup}$-automata with $8$ monitor counters. The proof for deterministic $\textsc{Sup}$-automata is virtually the same. Then, the reduction of the universality problem for deterministic $\textsc{Inf}$-automata (resp., $\textsc{LimInf}$-automata) to the emptiness problem for deterministic $\textsc{Sup}$-automata (resp., $\textsc{LimSup}$-automata) follows from the converse of Lemma~\ref{l:sup-to-inf}. Intuitively, it suffices to multiply all weights in a deterministic $\textsc{Inf}$-automaton (resp., $\textsc{LimInf}$-automaton) by $-1$. The halting problem for deterministic two-counter machines is undecidable~\cite{minsky1961recursive}. Let $\mathcal{M}$ be a deterministic two-counter machine and let $Q$ be the set of states of $\mathcal{M}$. We define a deterministic $\textsc{LimSup}$-automaton ${\cal A}$ with $8$ monitor counters such that ${\cal A}$ has a run of value not exceeding $0$ if and only if $\mathcal{M}$ has an accepting computation. Consider the alphabet $\Sigma = Q \cup \{ 1,2,\#,\$\}$. We encode computations of $\mathcal{M}$ as sequences of configurations separated by $\#$. A single configuration of $\mathcal{M}$, in which the machine is in the state $q$, the first counter has the value $x$, and the second counter has the value $y$, is encoded by the word $q 1^{x} 2^{y}$. Finally, computations of $\mathcal{M}$ are separated by $\$$. We define the automaton ${\cal A}$ so that for a word $w \in \Sigma^{\omega}$ it returns the value $0$ if (some infinite suffix of) $w$ encodes a sequence of valid accepting computations of $\mathcal{M}$; otherwise, ${\cal A}$ returns a value of at least $1$. The automaton ${\cal A}$ works as follows. On a single computation, i.e., between symbols $\$$, it checks consistency of the transitions by checking two conditions: (C1)~Boolean consistency, and (C2)~counter consistency.
The condition (C1) states that consecutive encoded configurations, which are represented by subwords $q 1^x 2^y \# q' 1^{x'} 2^{y'}$, are consistent with the transition function of $\mathcal{M}$ modulo counter values, i.e., under the abstraction of counter values to $0$ and ``strictly positive''. Observe that a finite automaton can check this. The conditions that need to be checked are as follows: (C1-1)~The Boolean parts of transitions are consistent; the automaton checks only emptiness/non-emptiness of the counters and, based on that, verifies whether a subword $q 1^x 2^y \# q'$ is valid w.r.t. the transitions of $\mathcal{M}$. For example, consider a transition $(q,\bot, +, q', +1, -1)$ of $\mathcal{M}$ stating that ``if $\mathcal{M}$ is in state $q$, the first counter is $0$, and the second counter is positive, then change the state to $q'$, increment the first counter, and decrement the second one''. This transition corresponds to the regular expression $q 2^+ \# q'$. (C1-2)~The initial and final configurations in each computation (between $\$$ symbols) are $q_I 1^0 2^0$ and $q_f 1^0 2^0$, respectively. (C1-3)~The word encodes infinitely many computations, i.e., the word contains infinitely many $\$$ symbols. The last condition rejects words encoding non-terminating computations. To check the condition (C2), ${\cal A}$ uses monitor counters. It uses $4$ monitor counters to check the transitions between even and odd positions, and the remaining $4$ to check the validity of the remaining transitions. Between even and odd positions, it uses $2$ monitor counters for each counter of $\mathcal{M}$. These monitor counters track the absolute differences between the intended values of the counters (i.e., assuming that the counter values are consistent with the instructions) and the actual values. For example, for a subword $q 1^x 2^y \# q' 1^{x'} 2^{y'}$, the automaton ${\cal A}$ checks whether the value of counter $1$ is consistent with the transition $(q,\bot, +, q', +1, -1)$ in the following way. The first monitor counter ignores the letters $2$ and initially decrements its value at every letter $1$ until it reads the letter $\#$ (where its value is $-x$). Next, it switches its mode and increments its value at the letters $1$, still ignoring the letters $2$. In that way, its value upon reading $q 1^x 2^y \# q' 1^{x'} 2^{y'}$ equals $-x + x'$. Finally, it decrements its value once, to account for the increment instruction. Thus, the value of the first monitor counter is $-x + x' - 1$. The second monitor counter works in a similar way, but it decrements whenever the first one increments and vice versa. At the end, the value of the second monitor counter is $x - x' + 1$. Observe that the supremum of the two is $|x' - (x+1)|$, which is $0$ if and only if the value of counter $1$ is consistent with the transition $(q,\bot, +, q', +1, -1)$. It follows that the supremum over the values of all counters is $0$ only if all counter values are consistent with the transitions. Therefore, the $\textsc{LimSup}$-value of the whole word is $0$ if and only if, from some point on, all computations are valid and accepting. The latter is possible only if $\mathcal{M}$ has at least one such computation. Otherwise, the $\textsc{LimSup}$-value is at least $1$. \end{proof}
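\smallskip\noindent{\em An illustrative check of the gadget.} As a small sanity check of the counter-consistency gadget above (our own illustration: the encoding is hard-wired and only the increment instruction for counter $1$ is considered), the following Python sketch computes the values of the two monitor counters over a subword $q 1^x 2^y \# q' 1^{x'} 2^{y'}$; their maximum is $0$ exactly when $x' = x + 1$.
\begin{verbatim}
def monitor_values(x, xp):
    """Monitor counters run over q 1^x 2^y # q' 1^xp 2^yp, checking
    counter 1 against a transition that increments it."""
    c1 = 0
    for _ in range(x):    # before '#': decrement on every letter 1
        c1 -= 1
    for _ in range(xp):   # after '#': increment on every letter 1
        c1 += 1
    c1 -= 1               # account for the increment instruction
    return c1, -c1        # the second counter mirrors the first

for x, xp in [(3, 4), (3, 3), (0, 1), (5, 2)]:
    v = max(monitor_values(x, xp))
    print(x, xp, v, "consistent" if v == 0 else "inconsistent")
\end{verbatim}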
\InfSumUndecidable* \begin{proof} \Paragraph{(1)}: In the following, we discuss how to adapt the proof of Theorem~\ref{th:undecidable-limsup} to prove this lemma. Given a deterministic two-counter machine $\mathcal{M}$, we construct a deterministic $(\textsc{Inf}; \textsc{Sum})$-automaton $\mathbb{A}_M$ such that for a word $w$ of the form $\$u\$ w'$ it returns $0$ if $u$ is a valid accepting computation of $\mathcal{M}$, and a negative value otherwise. We use $\Sigma = Q \cup \{ 1,2,\#,\$\}$ for convenience; one can encode the letters of $\Sigma$ using the two-letter alphabet $\{0,1\}$. On words that are not of the form $\$u\$ w'$, the automaton $\mathbb{A}_M$ returns $-1$. Basically, the automaton $\mathbb{A}_M$ simulates on $u$ the execution of ${\cal A}$ (as defined in the proof of Theorem~\ref{th:undecidable-limsup}) with the opposite values of the counters, i.e., if a counter of ${\cal A}$ has the value $k$, the corresponding counter in the simulated automaton stores $-k$. The simulation is virtually the same as in Lemma~\ref{l:mc-vs-nested}. Recall that the supremum of the monitor counters at a subword $\$u\$$ is $0$ if and only if $u$ encodes a valid accepting computation of $\mathcal{M}$; otherwise, the supremum is at least $1$. Thus, in our case, the infimum over the values of the slave automata is $0$ if and only if $u$ encodes a valid accepting computation of $\mathcal{M}$. Otherwise, the value of $\textsc{Inf}$ is at most $-1$. Therefore, $\mathbb{D}_{\mathcal{U},\mathbb{A}}(-1) = 1$ if and only if $\mathcal{M}$ does not have an accepting computation. \Paragraph{(2)}: We show that if we could compute the expected value of deterministic $(\textsc{Inf}; \textsc{Sum})$-automata, then we could decide equality in the distribution question. Let $\mathbb{A}$ be an automaton for which we ask whether $\mathbb{D}_{\mathcal{U},\mathbb{A}}(-1) = 1$. We construct another automaton $\mathbb{A}'$ that simulates $\mathbb{A}$ but at the first transition invokes a slave automaton that returns the value $-1$. The values of the automata $\mathbb{A}$ and $\mathbb{A}'$ differ precisely on the words whose values (assigned by $\mathbb{A}$) are greater than $-1$. Thus, the expected values $\mathbb{E}_{\mathcal{U}}(\mathbb{A})$ and $\mathbb{E}_{\mathcal{U}}(\mathbb{A}')$ differ if and only if $\mathbb{D}_{\mathcal{U},\mathbb{A}}(-1)$ is different from $1$. Due to the undecidability of the latter problem, there is no terminating Turing machine that computes the expected value of $(\textsc{Inf};\textsc{Sum})$-automata over the uniform distribution. \end{proof} \NondeterminismLemma* \begin{proof} In the following, we discuss how to adapt the proof of Theorem~\ref{th:undecidable-limsup} to prove this lemma. First, observe that we can encode any alphabet $\Sigma$ using the two-letter alphabet $\{0,1\}$; therefore, we present our argument for a multi-letter alphabet, as it is more convenient. Given a two-counter machine $\mathcal{M}$, we construct a non-deterministic $(\textsc{LimSup}; \textsc{Sum})$-automaton $\mathbb{A}_M$ such that $\mathbb{A}_M(w) = 0$ if and only if $w$ contains infinitely many subwords that correspond to valid accepting computations of $\mathcal{M}$. As in (1), for every subword $\$ u \$$, where $u$ does not contain $\$$, we check whether $u$ is an encoding of a valid accepting computation of $\mathcal{M}$. To do that, we check the conditions (C1) and (C2) as in the proof of Theorem~\ref{th:undecidable-limsup}, but using slave automata. At the letter $\$$, the master automaton non-deterministically decides whether $u$ violates (C1) or (C2), and starts a slave automaton checking (C1) or a slave automaton checking (C2), respectively.
The slave automaton checking (C1) works as in the proof of Theorem~\ref{th:undecidable-limsup}. It returns $-1$ if (C1) is violated and $0$ otherwise. The slave automaton checking (C2) non-deterministically picks the position of the inconsistency and one of the monitor counters from the proof of Theorem~\ref{th:undecidable-limsup} that would return a negative value, and simulates this monitor counter. Finally, at the letter $\$$ following $u$, the master automaton starts the slave automaton that returns the value $-1$. It follows that the supremum of all values of the slave automata started at $u\$$ is either $-1$ or $0$. By the construction, there is a (sub)run on $u\$$ such that the supremum of the values of all slave automata is $-1$ if and only if $u$ does not encode a valid accepting computation of $\mathcal{M}$; otherwise, this supremum is $0$. Therefore, the value of the word $w$ is $0$ if and only if $w$ contains infinitely many subwords that correspond to valid accepting computations of $\mathcal{M}$. Now, if $\mathcal{M}$ has at least one valid accepting computation $u$, then almost all words contain infinitely many occurrences of $u$, and hence almost all words have the value $0$. Otherwise, all words have the value $-1$. This implies that there is no terminating Turing machine that answers any of the probabilistic questions. \end{proof} \section{Proofs from Section~\ref{s:liminf}} \stronglyConnectedComponenets* \begin{proof} In a nutshell, we exploit the fact that the set of words in which all finite subwords occur infinitely often has probability $1$. Therefore, every slave automaton (which is invoked infinitely often) runs on every finite word infinitely often; in particular, every slave automaton runs infinitely often on the words that correspond to runs of the minimal value. More precisely, since $\mathbb{A}$ is deterministic, all runs of $\mathbb{A}$ on the distribution given by $\mathcal{M}$ correspond to the runs in ${\cal A}_{mas} \times \mathcal{M}$, where ${\cal A}_{mas}$ is the master automaton of $\mathbb{A}$. Among the runs in ${\cal A}_{mas} \times \mathcal{M}$, the set of runs in which a given finite sequence of states occurs finitely many times has probability $0$. In particular, for every state $(q,s)$, we consider a sequence that corresponds to the master automaton invoking some slave automaton ${\mathfrak{B}}_i$, followed by a word on which ${\mathfrak{B}}_i$ returns its minimal value. Therefore, almost all runs contain the considered sequence infinitely often. It follows that the value of almost all runs is the minimum, over the reachable states $(q,s)$ of ${\cal A}_{mas} \times \mathcal{M}$ and the transitions $(s,a,s')$ of $\mathcal{M}$, of the minimal value that the slave automaton invoked by ${\cal A}_{mas}$ in the state $q$ upon reading $a$ can achieve on words generated by $\mathcal{M}$ starting with the transition $(s,a,s')$. Such values can be computed in polynomial time in $|\mathcal{M}| + |\mathbb{A}|$. Each of these values is either $-\infty$ (or $-B$ for $g = \fBsum{B}$) or the sum of some subset of the weights of some slave automaton. Since we consider weights given in unary, the sum of such a subset is bounded by the size of the automaton. Thus, $|\lambda| \leq |\mathbb{A}|$ or $\lambda = -\infty$ (resp., $-B$ for $g = \fBsum{B}$). \end{proof}
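\smallskip\noindent{\em An illustrative computation.} The minimal achievable slave value used in the proof above can be computed as a shortest-path problem, as in the following Python sketch (our simplification: we run Bellman--Ford on the product of a slave $\textsc{Sum}$-automaton with the support of the Markov chain, and we gloss over the refinement that a negative cycle must additionally be able to reach an accepting state).
\begin{verbatim}
def min_slave_value(product_edges, source, accepting):
    """product_edges: (u, v, weight) triples over pairs of a slave state
    and a chain state, restricted to transitions of positive probability;
    returns the minimal accumulated value reachable from `source`."""
    nodes = ({source} | {u for u, _, _ in product_edges}
                      | {v for _, v, _ in product_edges})
    INF = float("inf")
    dist = {n: INF for n in nodes}
    dist[source] = 0.0
    for _ in range(len(nodes) - 1):   # standard relaxation rounds
        for u, v, w in product_edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in product_edges:     # extra round: negative cycle found
        if dist[u] + w < dist[v]:
            return -INF
    return min((dist[n] for n in accepting), default=INF)
\end{verbatim}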
\liminfIsPolynomial* \begin{proof} First, we discuss how to compute the answers to the expected and the distribution questions for a deterministic $(\textsc{LimInf}; \textsc{Sum})$-automaton $\mathbb{A}$. The value of a $(\textsc{LimInf}; \textsc{Sum})$-automaton $\mathbb{A}$ on a word depends only on the weights that appear infinitely often. Since $\mathbb{A}$ reaches some end SCC with probability $1$, we can neglect the values returned by slave automata before the master automaton ${\cal A}_{mas}$ (of $\mathbb{A}$) reaches an end SCC of ${\cal A}_{mas} \times \mathcal{M}$. Thus, the expected value of the $(\textsc{LimInf}; \textsc{Sum})$-automaton $\mathbb{A}$ w.r.t. a Markov chain $\mathcal{M}$ can be computed in the following way. Let $S_1, \ldots, S_l$ be all the end SCCs of ${\cal A}_{mas} \times \mathcal{M}$. We compute the probabilities $p_1, \ldots, p_l$ of reaching the components $S_1, \ldots, S_l$, respectively. These probabilities can be computed in polynomial time. Next, for every component $S_i$ we compute in polynomial time the unique value $m_i$ that $\mathbb{A}$ returns on almost every word whose run ends up in $S_i$ (Lemma~\ref{l:in-scc-all-equal}). The expected value $\mathbb{E}_{\mathcal{M},\mathbb{A}}$ is equal to $p_1 \cdot m_1 + \ldots + p_l \cdot m_l$. Observe that, given a value $\lambda$, the distribution $\mathbb{D}_{\mathcal{M}, \mathbb{A}}(\lambda)$ is equal to the sum of the probabilities $p_i$ over all $i$ such that $m_i \leq \lambda$. Hence, the answers to the expected and the distribution questions can be computed in polynomial time. The remaining probabilistic questions are special cases of the expected and the distribution questions. Due to Lemma~\ref{l:sup-to-inf}, the case of $\textsc{LimSup}$ reduces to the case of $\textsc{LimInf}$. All value functions from $\mathsf{FinVal}$ are special cases of $\textsc{Sum}$. This concludes the proof. \end{proof}
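\smallskip\noindent{\em An illustrative computation.} To illustrate the decomposition $\mathbb{E}_{\mathcal{M},\mathbb{A}} = p_1 \cdot m_1 + \ldots + p_l \cdot m_l$, the following Python sketch (our illustration; it assumes that the per-SCC values $m_i$ have already been obtained, e.g., as in Lemma~\ref{l:in-scc-all-equal}) computes the reaching probabilities by solving a single linear system over the transient states.
\begin{verbatim}
import numpy as np

def expected_value(P, end_scc_values, initial=0):
    """P: transition matrix of the product A_mas x M.  end_scc_values
    maps every state of an end SCC to the common value m_i of its SCC.
    Returns sum_i p_i * m_i via h = P_tt h + P_ta v on transient states."""
    if initial in end_scc_values:
        return end_scc_values[initial]
    absorbed = sorted(end_scc_values)
    transient = [s for s in range(P.shape[0]) if s not in end_scc_values]
    v = np.array([end_scc_values[s] for s in absorbed], dtype=float)
    P_tt = P[np.ix_(transient, transient)]
    P_ta = P[np.ix_(transient, absorbed)]
    h = np.linalg.solve(np.eye(len(transient)) - P_tt, P_ta @ v)
    return h[transient.index(initial)]

# toy chain: state 0 is transient; 1 and 2 form end SCCs with values 5, -1
P = np.array([[0.0, 0.5, 0.5], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(expected_value(P, {1: 5.0, 2: -1.0}))   # 0.5*5 + 0.5*(-1) = 2.0
\end{verbatim}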
\section{The proofs from Section~\ref{s:inf}} \subsection{Hardness results} \InfimumIsHard* \begin{proof} We present the following argument for $g = \textsc{Min}$; the same proof works for $g = \textsc{Max}$. Lemma~\ref{l:sup-to-inf} implies that the problems in (i) and (ii) for nested weighted automata with the $\textsc{Sup}$ value function reduce to the corresponding problems for nested weighted automata with the $\textsc{Inf}$ value function. Since $\textsc{Min}$ can be regarded as a special case of $\fBsum{B}$, $\textsc{Sum}^+$ or $\textsc{Sum}$, the result holds for these functions as well. Hence, we consider only the case of $(\textsc{Inf}, \textsc{Min})$-automata. \Paragraph{$\textsc{PSpace}{}$-hardness}: We show $\textsc{PSpace}{}$-hardness by reduction from the emptiness problem for the intersection of regular languages. Let ${\cal L}_1, \ldots, {\cal L}_n \subseteq \{a,b\}^*$ be regular languages recognized by deterministic finite automata ${\cal A}_1, \ldots, {\cal A}_n$. We define a deterministic $(\textsc{Inf};\textsc{Min})$-automaton $\mathbb{A}$ that in the first $n$ steps starts the slave automata ${\mathfrak{B}}_1, \ldots, {\mathfrak{B}}_n$ and then invokes only a dummy slave automaton that returns $1$ after a single step. For every $i$, the slave automaton ${\mathfrak{B}}_i$ first reads $n-i$ letters, which it ignores; then it simulates ${\cal A}_i$ until the first $\#$, at which point it terminates. It returns $1$ if the simulated automaton ${\cal A}_i$ accepts and $0$ otherwise. More precisely, ${\mathfrak{B}}_i$ works on subwords $uv \#$, where $u \in \{a,b,\#\}^{n-i}$ and $v \in \{a,b \}^*$, and returns $1$ if $v \in {\cal L}_i$ and $0$ otherwise. Observe that on a word $w = u v \# w'$, where $u \in \{a,b,\# \}^{n}$, $v \in \{a,b\}^*$ and $w' \in \{a,b,\# \}^{\omega}$, the automaton $\mathbb{A}$ returns $1$ if and only if all the automata ${\cal A}_1, \ldots, {\cal A}_n$ accept $v$; otherwise, $\mathbb{A}$ assigns the value $0$ to $w$. In consequence, the following conditions are equivalent: (1)~the intersection ${\cal L}_1 \cap \ldots \cap {\cal L}_n$ is empty, (2)~the expected value $\mathbb{E}_{\mathcal{U}}(\mathbb{A})$ is $0$, and (3)~the distribution satisfies $\mathbb{D}_{\mathcal{U}, \mathbb{A}}(0) = 1$. Note that the almost-sure distribution question is $\textsc{PSpace}{}$-hard as well. Observe that if the intersection ${\cal L}_1 \cap \ldots \cap {\cal L}_n$ is non-empty, the shortest word in the intersection may still be of exponential length. In such a case, the values $\mathbb{E}_{\mathcal{U}}(\mathbb{A})$ and $|1-\mathbb{D}_{\mathcal{U}, \mathbb{A}}(0)|$ are non-zero but doubly-exponentially small. Therefore, we cannot use this reduction to show hardness of the approximate versions of the probabilistic problems. \Paragraph{$\#P$-hardness}: We show \#P-hardness by reduction from \#SAT, which, given a propositional formula $\varphi$ in conjunctive normal form, asks for the number of valuations that satisfy $\varphi$. Let $n$ be the number of variables of $\varphi$ and let $C_1, \ldots, C_m$ be the clauses of $\varphi$. For every $i \in [1,m]$, we define a slave automaton ${\mathfrak{B}}_i$ (associated with $C_i$) that ignores the first $m-i$ letters, next interprets the following $n$ letters $0,1$ as a valuation of the successive variables, and checks whether this valuation satisfies the clause $C_i$. If it does, the slave automaton returns $1$; otherwise it returns $0$. The master automaton first invokes the slave automata ${\mathfrak{B}}_1, \ldots, {\mathfrak{B}}_m$ and then invokes a dummy slave automaton that returns $1$ after a single step. Observe that for $w = uvw'$, where $u \in \{0,1\}^m$, $v \in \{0,1\}^n$ and $w' \in \{0,1\}^{\omega}$, the automaton $\mathbb{A}$ returns $1$ on $w$ if and only if the valuation given by $v$ satisfies all the clauses $C_1, \ldots, C_m$, i.e., it satisfies $\varphi$; otherwise, $\mathbb{A}$ returns $0$ on $w$. Therefore, the values $\mathbb{E}_{\mathcal{U}}(\mathbb{A})$ and $1 - \mathbb{D}_{\mathcal{U}, \mathbb{A}}(0)$ are equal, and multiplied by $2^{n}$ they give the number of valuations satisfying $\varphi$. Therefore, all approximate probabilistic questions are $\#P$-hard. \end{proof} \subsection{The upper bound for $g \in \mathsf{FinVal} \setminus \{\textsc{Sum}\}$} \Paragraph{Overview}. First, we show the translation lemma (Lemma~\ref{l:bsum-to-inf}), which states that deterministic $(\textsc{Inf}; \fBsum{B})$-automata can be translated to deterministic $\textsc{Inf}$-automata with an exponential blow-up. Moreover, this blow-up can be avoided by considering NWA of bounded width with $B$ given in unary. Since the probabilistic questions can be solved for $\textsc{Inf}$-automata in polynomial time, Lemma~\ref{l:bsum-to-inf} implies that all probabilistic questions can be solved in exponential time for deterministic $(\textsc{Inf}; \fBsum{B})$-automata.
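\smallskip\noindent{\em An illustrative sketch of the construction.} Before stating the translation lemma, the following Python sketch illustrates its key idea (the interface names are ours, and we gloss over the exact convention for the first letter read by a freshly invoked slave automaton): a state of the resulting $\textsc{Inf}$-automaton records the master state together with at most one accumulated value per slave state, which suffices because of the pruning argument given in the proof below.
\begin{verbatim}
def successor(state, letter, master_delta, slave_delta, invoked, B):
    """One transition of the Inf-automaton simulating an (Inf; Sum_B)-
    automaton.  state = (master state, frozenset of pairs of a slave
    state and an accumulated value).  slave_delta(s, a) yields (s', w),
    or None when the slave terminates; invoked(q, a) yields the initial
    state of the invoked slave automaton, or None."""
    q, active = state
    clip = lambda v: max(-B, min(B, v))       # bounded-sum semantics
    succ, finished = {}, []
    for s, val in active:
        step = slave_delta(s, letter)
        if step is None:
            finished.append(val)              # this slave terminates now
        else:
            s2, v2 = step[0], clip(val + step[1])
            # pruning: keep only the smaller value per slave state; both
            # runs continue identically, so the larger never matters
            if s2 not in succ or v2 < succ[s2]:
                succ[s2] = v2
    s0 = invoked(q, letter)
    if s0 is not None:
        succ[s0] = min(succ.get(s0, 0), 0)    # a new slave starts at 0
    weight = min(finished) if finished else None   # None: no weight
    return (master_delta(q, letter), frozenset(succ.items())), weight
\end{verbatim}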
\begin{restatable}{lemma}{BoundedSumReducesToInf} (1)~Given $B >0$ in the binary notation and a deterministic $(\textsc{Inf}; \fBsum{B})$-automaton $\mathbb{A}$, one can construct in exponential time an exponential-size deterministic $\textsc{Inf}$-automaton ${\cal A}$ such that for every word $w$ we have $\mathbb{A}(w) = {\cal A}(w)$. (2)~Let $k > 0$. Given $B >0$ in the unary notation and a deterministic $(\textsc{Inf}; \fBsum{B})$-automaton $\mathbb{A}$ of width bounded by $k$, one can construct in polynomial time a polynomial-size deterministic $\textsc{Inf}$-automaton ${\cal A}$ such that for every word $w$ we have $\mathbb{A}(w) = {\cal A}(w)$. \label{l:bsum-to-inf} \end{restatable} \begin{proof} \Paragraph{(1)}: Let $Q_m$ be the set of states of the master automaton and let $Q_s$ be the union of the sets of states of the slave automata of $\mathbb{A}$. We define an $\textsc{Inf}$-automaton ${\cal A}$ over the set of states $Q_m \times (Q_s \times [-B, B] \cup \{ \bot\})^{|Q_s|}$. Intuitively, ${\cal A}$ simulates runs of $\mathbb{A}$ by simulating (a)~the run of the master automaton, using the component $Q_m$, and (b)~selected runs of up to $|Q_s|$ slave automata, using the component $(Q_s \times [-B, B])^{|Q_s|}$. Slave automata are simulated along with their values, which are stored in the state, i.e., the state $(q, l)$ encodes that a given slave automaton is in the state $q$ and its current value is $l$. The weight of a given transition of ${\cal A}$ is the minimum over the values of the simulated slave automata that terminate at the current step. Finally, the symbol $\bot$ denotes a ``free'' component in the product $(Q_s \times [-B, B] \cup \{ \bot\})^{|Q_s|}$, which can be used to simulate a newly invoked slave automaton. We now argue that it suffices to simulate at most $|Q_s|$ slave automata, so that every time a new slave automaton is invoked, there is a free component to simulate it. Observe that if at some position two slave automata ${\mathfrak{B}}_1, {\mathfrak{B}}_2$ are in the same state $q$ and have collected the partial values $l_1 \leq l_2$, then we can discard the simulation of the automaton ${\mathfrak{B}}_2$, which collected the value $l_2$. Indeed, since slave automata are deterministic and recognize prefix-free languages, the remaining runs of both slave automata ${\mathfrak{B}}_1, {\mathfrak{B}}_2$ are the same, i.e., they either both reject or both return the values, respectively, $l_1 + v$ and $l_2 +v$ for some common $v$. Thus, the run of ${\mathfrak{B}}_2$ does not change the value of the infimum and we can stop simulating it, i.e., we can substitute $\bot$ for $(q, l_2)$. Therefore, at every position at most $|Q_s|$ components are necessary. It follows from the construction that the values of $\mathbb{A}$ and ${\cal A}$ coincide on every word. \Paragraph{(2)}: If $B$ is given in the unary notation and the width is bounded by $k$, we can basically repeat the construction above for the automaton with the set of states $Q_m \times (Q_s \times [-B, B] \cup \{ \bot\})^{k}$, which is polynomial in $\mathbb{A}$. Thus, the result of such a construction is of polynomial size and can be constructed in polynomial time. \end{proof} \InfExpectedSolution* \begin{remark} For $g = \fBsum{B}$, the value $B$, given in binary, can be a part of the input. \end{remark} We first prove Lemma~\ref{l:infSolutions} for $g \in \mathsf{FinVal} \setminus \{ \textsc{Sum},\textsc{Sum}^+\}$.
Next, in Lemma~\ref{l:bound-from-below}, we show Lemma~\ref{l:infSolutions} for deterministic $(\textsc{Inf};\textsc{Sum}^+)$-automata; the statement of Lemma~\ref{l:bound-from-below} is more general, though. \begin{proof} Observe that deterministic weighted automata with the $\textsc{Min}$ and $\textsc{Max}$ value functions can be transformed in polynomial time into deterministic weighted automata with the $\fBsum{B}$ value function. Basically, a deterministic $\fBsum{B}$-automaton simulating a $\textsc{Min}$-automaton (resp., $\textsc{Max}$-automaton) takes transitions of weight $0$ and stores in its states the current minimal (resp., maximal) weight. Its final transition has the weight equal to the minimal (resp., maximal) weight encountered along the run. Such a $\fBsum{B}$-automaton computes on every word the same value as the given $\textsc{Min}$-automaton (resp., $\textsc{Max}$-automaton). Therefore, we can focus on $g = \fBsum{B}$. Consider a deterministic $(\textsc{Inf};\fBsum{B})$-automaton $\mathbb{A}$. By Lemma~\ref{l:bsum-to-inf}, $\mathbb{A}$ can be transformed in exponential time into an exponential-size deterministic $\textsc{Inf}$-automaton ${\cal A}$ such that for every word $w$ we have $\mathbb{A}(w) = {\cal A}(w)$. It follows that for every Markov chain $\mathcal{M}$ over the alphabet of $\mathbb{A}$ and every value $\lambda$ we have $\mathbb{E}_{\mathcal{M}}(\mathbb{A}) = \mathbb{E}_{\mathcal{M}}({\cal A})$ and $\mathbb{D}_{\mathcal{M},\mathbb{A}}(\lambda) = \mathbb{D}_{\mathcal{M},{\cal A}}(\lambda)$. The values $\mathbb{E}_{\mathcal{M}}({\cal A}), \mathbb{D}_{\mathcal{M},{\cal A}}(\lambda)$ can be computed in polynomial time in ${\cal A}$ (Fact~\ref{t:weighted-inf-expected}), which amounts to exponential time in $\mathbb{A}$. Observe, however, that for $\mathbb{A}$ of bounded width the automaton ${\cal A}$ has polynomial size (assuming that the bound on the width is constant), and the values $\mathbb{E}_{\mathcal{M}}({\cal A}), \mathbb{D}_{\mathcal{M},{\cal A}}(\lambda)$ can be computed in polynomial time in $\mathbb{A}$. \end{proof} Now, we turn to deterministic $(\textsc{Inf};\textsc{Sum})$-automata. First, we show that under additional assumptions on the slave automata, the probabilistic questions can be computed. \begin{lemma} Given a Markov chain $\mathcal{M}$, a value $\lambda$ and a deterministic $(\textsc{Inf};\textsc{Sum})$-automaton $\mathbb{A}$ such that the value of every slave automaton is bounded from below, the values $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$ and $\mathbb{D}_{\mathcal{M},\mathbb{A}}(\lambda)$ can be computed in exponential time. \label{l:bound-from-below} \end{lemma} \begin{proof} Consider a deterministic $(\textsc{Inf};\textsc{Sum})$-automaton $\mathbb{A}$ such that the value of every slave automaton is bounded from below. Let $B = |\mathbb{A}|$ and let $\mathbb{A}'$ be $\mathbb{A}$ considered as a deterministic $(\textsc{Inf}; \fBsum{B})$-automaton. We show that on almost all words $w$ we have $\mathbb{A}(w) = \mathbb{A}'(w)$. Then, $\mathbb{E}_{\mathcal{M}}(\mathbb{A}) = \mathbb{E}_{\mathcal{M}}(\mathbb{A}')$ and $\mathbb{D}_{\mathcal{M},\mathbb{A}}(\lambda) = \mathbb{D}_{\mathcal{M},\mathbb{A}'}(\lambda)$, and the values $\mathbb{E}_{\mathcal{M}}(\mathbb{A}')$ and $\mathbb{D}_{\mathcal{M},\mathbb{A}'}(\lambda)$ can be computed in exponential time by Lemma~\ref{l:infSolutions}, taking into account the remark about $B$ being part of the input.
Since the value of every slave automaton ${\mathfrak{B}}_i$ is bounded from below, the (reachable part of the) automaton ${\mathfrak{B}}_i$, considered as a weighted graph, has no negative cycles. Therefore, the minimal value ${\mathfrak{B}}_i$ can achieve is at least $-|{\mathfrak{B}}_i| > -|\mathbb{A}|$. Moreover, every run of $\mathbb{A}$ ends up in some end SCC of ${\cal A}_{mas} \times \mathcal{M}$, where almost all words have the same value~(Lemma~\ref{l:in-scc-all-equal}), which is bounded from above by $|\mathbb{A}|$ and can be computed in polynomial time. Therefore, the values of almost all words belong to the interval $[- |\mathbb{A}|, |\mathbb{A}|]$. \end{proof} The above lemma implies that the probabilistic questions for deterministic $(\textsc{Inf};\textsc{Sum}^+)$-automata can be answered in exponential time, i.e., we have shown (1) of the following lemma. \PositiveSum* We show that the distribution question for deterministic $(\textsc{Sup};\textsc{Sum}^+)$-automata is decidable in $\textsc{ExpTime}{}$, i.e., (2) of Lemma~\ref{positivesum}. The decidability of the expected question for deterministic $(\textsc{Sup};\textsc{Sum}^+)$-automata is left as an open question. \begin{lemma} The distribution question for deterministic $(\textsc{Sup};\textsc{Sum}^+)$-automata can be answered in exponential time. \end{lemma} \begin{proof} Let $\mathbb{A}$ be a deterministic $(\textsc{Sup};\textsc{Sum}^+)$-automaton, let $\mathcal{M}$ be a Markov chain, and let $\lambda$ be the threshold in the distribution question. Let $\mathbb{A}'$ be $\mathbb{A}$ considered as a $(\textsc{Sup};\fBsum{B})$-automaton with $B = \lambda+1$. Observe that for every word $w$ we have $\mathbb{A}(w) \leq \lambda$ iff $\mathbb{A}'(w) \leq \lambda$. Therefore, $\mathbb{D}_{\mathcal{M},\mathbb{A}}(\lambda) = \mathbb{D}_{\mathcal{M},\mathbb{A}'}(\lambda)$, and the latter value can be computed in exponential time (Lemma~\ref{l:infSolutions}). \end{proof} \subsection{The upper bound for the approximation problems with $g = \textsc{Sum}$} \InfSumSolution* \noindent\emph{Key ideas}. In end SCCs, deterministic $(\textsc{Inf};\textsc{Sum})$-automata collect the same values as their variants with $\textsc{LimInf}$ as the master value function. Thus, the only difference between the values of an $(\textsc{Inf};\textsc{Sum})$-automaton and the corresponding $(\textsc{LimInf};\textsc{Sum})$-automaton comes from the finite prefix of the word read before an end SCC of $\mathcal{M} \times {\cal A}_{mas}$ is reached, which happens with high probability within exponentially many steps $N$. We show that for an exponential $D$, with high probability, all slave automata invoked in the first $N$ steps of the master automaton terminate after at most $D$ steps. Therefore, the values of all these slave automata are bounded from below by $C \cdot D$, where $C$ is the minimal weight in the slave automata. Thus, with high probability, a deterministic $(\textsc{Inf};\textsc{Sum})$-automaton returns a value from a bounded interval; hence the sum value function can be replaced with the bounded sum, and we can invoke Lemma~\ref{l:infSolutions}. \begin{proof} Consider a deterministic $(\textsc{Inf};\textsc{Sum})$-automaton $\mathbb{A}$. Let $\mathbb{A}^{lim}$ be $\mathbb{A}$ considered as a $(\textsc{LimInf};\textsc{Sum})$-automaton. First, we assume that $\mathbb{A}^{lim}$ has a finite expected value. We can check whether this assumption holds in polynomial time by computing $\mathbb{E}_{\mathcal{M}}(\mathbb{A}^{lim})$ (Lemma~\ref{th:liminfIsPoly}).
Then, we show the following {\bf claim}: for every $\epsilon >0$ there exists $B>0$, exponential in $|\mathbb{A}| + |\log(\epsilon)|$, such that for $\mathbb{A}_B$ defined as $\mathbb{A}$ considered as an $(\textsc{Inf}; \fBsum{B})$-automaton we have $|\mathbb{E}_{\mathcal{M}}(\mathbb{A}) - \mathbb{E}_{\mathcal{M}}(\mathbb{A}_B)| \leq \epsilon$. \medskip \Paragraph{The claim implies the lemma}. Observe that due to Lemma~\ref{l:infSolutions}, with the accompanying remark on $B$, the expected value $\mathbb{E}_{\mathcal{M}}(\mathbb{A}_B)$ can be computed in polynomial time in $\mathbb{A}_B$, hence in exponential time in $\mathbb{A}$. Therefore, we can approximate $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$ up to $\epsilon$ in exponential time. Due to the Markov inequality, for every $\lambda$ we have $\mathbb{D}_{\mathcal{M},\mathbb{A}}(\lambda+\epsilon) - \mathbb{D}_{\mathcal{M},\mathbb{A}_B}(\lambda-\epsilon) < \epsilon$. However, the values of $\mathbb{A}$ are integers; therefore, for $\epsilon < 0.5$ we get $|\mathbb{D}_{\mathcal{M},\mathbb{A}}(\lambda) - \mathbb{D}_{\mathcal{M},\mathbb{A}_B}(\lambda)| < \epsilon$. Therefore, again by Lemma~\ref{l:infSolutions}, we can approximate $\mathbb{D}_{\mathcal{M},\mathbb{A}}(\lambda)$ in exponential time in $\mathbb{A}$ and polynomial time in $\epsilon$. \medskip \Paragraph{The proof of the claim}. First, we observe that every run ends up in some end SCC of ${\cal A}_{mas} \times \mathcal{M}$, and, hence, Lemma~\ref{l:in-scc-all-equal} implies that the values of all words are bounded from above by $|\mathbb{A}|$. Next, the values of all slave automata invoked in the end strongly connected components (SCCs) of ${\cal A}_{mas} \times \mathcal{M}$ must be bounded from below, as otherwise the expected value of $\mathbb{A}$ as a $(\textsc{LimInf};\textsc{Sum})$-automaton is $-\infty$. Assume that the values of all slave automata invoked in the end SCCs of ${\cal A}_{mas} \times \mathcal{M}$ are bounded from below, which implies that they are bounded from below by $-|\mathbb{A}|$. Then, we need to estimate the influence on the expected value of the slave automata invoked before the master automaton reaches an end SCC of ${\cal A}_{mas} \times \mathcal{M}$. Let $E_1$ be the expected part of the value of a slave automaton of $\mathbb{A}$ below $-B$, i.e., the expected value of the random variable $X_B(w) = \min(0, {\mathfrak{B}}(u)+B)$ for any slave automaton ${\mathfrak{B}}$ over any subword $u$ of $w$, and let $E_2$ be the expected number of steps of the master automaton before it reaches an end SCC. It follows that for $B > |\mathbb{A}|$ we have $|\mathbb{E}_{\mathcal{M}}(\mathbb{A}) - \mathbb{E}_{\mathcal{M}}(\mathbb{A}_B)| < |E_1| \cdot E_2$. To estimate $|E_1|$, we estimate the expected number of steps of a slave automaton exceeding $B$, i.e., the expected value of the random variable $Y_B(u)$ defined as the maximum of $0$ and the number of steps of a slave automaton minus $B$. Let $p$ be the minimal probability that occurs in $\mathcal{M}$ and let $n = |\mathbb{A}|$. We show that for $B > \frac{n}{p^n} |\log \frac{n^2}{p^n} \epsilon|$ we have $\mathbb{E}(Y_B) \cdot E_2 < \epsilon$, which implies $|E_1| \cdot E_2 < \epsilon$. We show the estimate on $E_2$ first. Observe that starting from every state, there exists at least one word of length at most $|{\cal A}_{mas}|$ upon which the master automaton reaches an end SCC of ${\cal A}_{mas} \times \mathcal{M}$.
Therefore, the master automaton reaches an end SCC within $|{\cal A}_{mas}|$ steps with probability at least $p^{|{\cal A}_{mas}|}$, and, hence, the number of steps before ${\cal A}_{mas}$ reaches an end SCC is bounded from above by $|{\cal A}_{mas}|$ multiplied by a geometric distribution with the parameter $p^{|{\cal A}_{mas}|}$. Hence, $E_2$ is bounded by $\frac{n}{p^{n}}$. Now, we estimate $\mathbb{E}(Y_B)$. Observe that for every reachable state $q$ of any slave automaton ${\mathfrak{B}}$, there exists a word of length at most $|{\mathfrak{B}}|$ such that ${\mathfrak{B}}$, starting in $q$, terminates upon reading that word. Therefore, the probability $q_l({\mathfrak{B}})$ that ${\mathfrak{B}}$ runs for at least $l$ steps is bounded by $(1-p^{|{\mathfrak{B}}|})^{\lfloor \frac{l}{|{\mathfrak{B}}|} \rfloor }$. Now, $\mathbb{E}(Y_B)$ is bounded by the maximum over the slave automata ${\mathfrak{B}}$ of $\sum_{l \geq B} q_l({\mathfrak{B}})$. We have $\sum_{l \geq B} q_l({\mathfrak{B}}) \leq \frac{n}{p^n} \cdot (1-p^n)^{\frac{B}{n}}$. Hence, $\mathbb{E}(Y_B) \leq \frac{n}{p^n} \cdot (1-p^n)^{\frac{B}{n}}$ and $\mathbb{E}(Y_B) \cdot E_2 \leq \frac{n^2}{p^n} \cdot (1-p^n)^{\frac{B}{n}}$. Observe that for $B > \frac{n}{p^n} s$, where $s = |\log \frac{n^2}{p^n} \epsilon|$, we have $\frac{n^2}{p^n} \cdot (1-p^n)^{\frac{B}{n}} \leq \frac{n^2}{p^n} \cdot (\frac{1}{2})^s$ and $\mathbb{E}(Y_B) \cdot E_2 \leq \epsilon$. Observe that $\frac{n}{p^n} \cdot |\log \frac{n^2}{p^n} \epsilon|$ is exponential in $|\mathbb{A}|$ and linear in $|\log \epsilon|$, hence exponential in $|\mathbb{A}| + |\log \epsilon|$. \Paragraph{Lifting the assumption}. Now, we discuss how to remove the assumption that $\mathbb{E}_{\mathcal{M}}(\mathbb{A}^{lim})$ is finite. For the expected question, observe that $\mathbb{E}_{\mathcal{M}}(\mathbb{A}) \leq \mathbb{E}_{\mathcal{M}}(\mathbb{A}^{lim})$, hence if the latter is $-\infty$, we can return the answer $\mathbb{E}_{\mathcal{M}}(\mathbb{A}) = -\infty$. For the distribution question, consider a threshold $\lambda$. Observe that for every $w$ we have $\mathbb{A}(w) \leq \mathbb{A}_B(w)$. Moreover, $\mathbb{A}(w) < \lambda$ while $\mathbb{A}_B(w) \geq \lambda$ holds only if some slave automaton invoked before ${\cal A}_{mas}$ reaches an end SCC runs for more than $B$ steps. Therefore, the probability $\mathbb{P}_\mathcal{M}(\{w : \mathbb{A}(w) \leq \lambda \wedge \mathbb{A}_B(w) \geq \lambda \})$ is bounded from above by $\mathbb{E}({Y_B}) \cdot E_2$. Thus, by the previous estimate on $\mathbb{E}({Y_B}) \cdot E_2$, for $B > \max(\lambda+1, \frac{n}{p^n} |\log \frac{n^2}{p^n} \epsilon|)$ we have $\mathbb{P}_\mathcal{M}(\{w : \mathbb{A}(w) \leq \lambda \wedge \mathbb{A}_B(w) \geq \lambda \}) < \epsilon$ and $|\mathbb{D}_{\mathcal{M}, \mathbb{A}}(\lambda) - \mathbb{D}_{\mathcal{M},\mathbb{A}_B}(\lambda)| < \epsilon$. Again, $\mathbb{D}_{\mathcal{M},\mathbb{A}_B}(\lambda)$ can be computed in exponential time in $|\mathbb{A}|$. \end{proof} \section{The proofs from Section~\ref{s:limavg}} \label{s:proof-of-limavg-poly} We begin with the remaining part of the proof of Lemma~\ref{l:limavg-poly}. Recall that the weighted Markov chain $\mathcal{M}^{\nestedA}$ is defined as the product ${\cal A}_{mas} \times \mathcal{M}$, where ${\cal A}_{mas}$ is the master automaton of $\mathbb{A}$, and the weights of $\mathcal{M}^{\nestedA}$ are the expected values of the invoked slave automata.
\begin{lemma}
\label{l:limavgReducesToMC}
Let $\mathbb{A}$ be a deterministic $(\textsc{LimAvg};\textsc{Sum})$-automaton. The values $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$ and $\mathbb{E}(\mathcal{M}^{\nestedA})$ coincide.
\end{lemma}

In the following we prove Lemma~\ref{l:limavgReducesToMC}. First, we prove the lemma for $(\textsc{LimAvg};\textsc{Sum})$-automata in which the duration of runs of slave automata is bounded by some $N$. Next, we show how to solve the general case of all $(\textsc{LimAvg};\textsc{Sum})$-automata by reduction to this special case. Before we continue, we discuss computing the expected values of Markov chains with silent moves.

\medskip
\Paragraph{Expected limit averages of Markov chains with silent moves}.
\newcommand{\markov_{\textrm{sil}}}{\mathcal{M}_{\textrm{sil}}}
Let $\markov_{\textrm{sil}}$ be a Markov chain with weights from $\mathbb{N} \cup \{ \lambda \}$, where $\lambda$ corresponds to a silent transition. We consider the limit average value function with silent moves $\silent{\textsc{LimAvg}}$, which, applied to a sequence $a_1 a_2 \ldots$ of elements of $\mathbb{N} \cup \{\lambda\}$, removes all $\lambda$ symbols and applies the standard $\textsc{LimAvg}$ function to the sequence consisting of the remaining elements. The expected value of the limit average of a path in $\markov_{\textrm{sil}}$ can be computed by a slight modification of a standard method~\cite{filar} for Markov chains without silent transitions. Namely, we associate with each transition $(s,a,s')$ of $\markov_{\textrm{sil}}$ a real-valued variable $x[(s,a,s')]$. Next, we state the following equations:
\begin{compactenum}[(1)]
\item for every transition $(s,a,s')$ we put $x[(s,a,s')] = \mathbb{P}(s,a,s') \cdot \sum_{s'' \in S_{\markov_{\textrm{sil}}}, a' \in \Sigma} x[(s,a',s'')]$, where $\mathbb{P}(s,a,s')$ is the probability of the transition, i.e., the frequency of each transition equals its probability times the frequency of visiting its source state, and
\item $x[e_1] + \ldots + x[e_k] = 1$, where $e_1, \ldots, e_k$ are all non-silent transitions, and the following inequalities
\item $0 \leq x[e] \leq 1$ for every transition $e$.
\end{compactenum}
Then, following the argument for Markov chains without silent moves~\cite{filar}, we can show that the expected limit average of $\markov_{\textrm{sil}}$ is given as $c(e_1) \cdot x[e_1] + \ldots + c(e_k) \cdot x[e_k]$ (once again $e_1, \ldots, e_k$ are all non-silent transitions).
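For an ergodic chain, the solution of this system coincides with the stationary transition frequencies, which gives a direct way to evaluate the expected limit average with silent moves numerically. The following Python sketch is illustrative only; it folds the alphabet into a single transition per state pair and assumes a single recurrent class.

\begin{verbatim}
import numpy as np

def silent_limavg(P, C, silent):
    # P: (m, m) stochastic matrix; C: (m, m) weights;
    # silent: (m, m) boolean mask marking silent transitions.
    # x[(s, s')] = pi[s] * P[s, s'] are transition frequencies;
    # the value is the weight rate divided by the non-silent rate
    # (the non-silent rate is normalized to 1 by equation (2)).
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()                 # stationary distribution
    x = pi[:, None] * P
    return (C[~silent] * x[~silent]).sum() / x[~silent].sum()
\end{verbatim}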
\subsection{The expected value in the bounded-duration case}

First, we show that Lemma~\ref{l:limavgReducesToMC} holds if we assume that, for some $N>0$, all slave automata take at most $N$ transitions.

\begin{lemma}
\label{l:limavgReducesToMC-bounded-width}
Let $\mathbb{A}$ be a $(\textsc{LimAvg};\textsc{Sum})$-automaton in which the duration of runs of slave automata is bounded by $N$ and let $\mathcal{M}^{\nestedA}$ be the Markov chain corresponding to $\mathbb{A}$. The values $\mathbb{E}_{\mathcal{M}}(\mathbb{A})$ and $\mathbb{E}(\mathcal{M}^{\nestedA})$ coincide.
\end{lemma}

\medskip
\Paragraph{The plan of the proof}. We define a $\silent{\textsc{LimAvg}}$-automaton ${\cal A}$ that simulates runs of $\mathbb{A}$; the value of ${\cal A}$ on every word coincides with that of $\mathbb{A}$. Then, we transform the Markov chain ${\cal A} \times \mathcal{M}$ into a Markov chain $\mathcal{M}_E$ by adjusting its weights only. We change all weights to the silent weight $\lambda$ except for the transitions corresponding to the invocation of slave automata, where the weight is the expected value of the invoked slave automaton w.r.t. the distribution given by $\mathcal{M}$ in the current state. In the proof we argue that the expected limit averages of ${\cal A} \times \mathcal{M}$ and $\mathcal{M}_E$ coincide. We show this by examining the linear equations corresponding to computing the expected limit average of each of the two Markov chains. Basically, the frequency of each transition is the same in both Markov chains, and changing the value of a slave automaton from its actual value to its expected value does not affect the solution of the set of equations. Next, we observe that the runs of slave automata past their first transition do not matter; indeed, all runs of slave automata are accepting and all weights past the first transition are $0$. Thus, we can reduce $\mathcal{M}_E$ to $\mathcal{M}_R$ by projecting out the information about the runs of slave automata past the first transition. Finally, we observe that such a Markov chain $\mathcal{M}_R$ is in fact $\mathcal{M}^{\nestedA}$. Hence, we have shown that
\[
\mathbb{E}_{\mathcal{M}}(\mathbb{A}) = \mathbb{E}_{\mathcal{M}}({\cal A}) = \mathbb{E} (\mathcal{M}_E) = \mathbb{E}(\mathcal{M}_R) = \mathbb{E}(\mathcal{M}^{\nestedA}).
\]

\begin{proof}
Every slave automaton of $\mathbb{A}$ takes at most $N$ steps; therefore, $\mathbb{A}$ has width bounded by $N$. Moreover, without loss of generality, we assume that each slave automaton takes transitions of weight $0$ except for the last transition, which may have a non-zero weight, and that all slave automata are either trivial, i.e., they start in an accepting state and take no transitions, or take precisely $N$ transitions. Indeed, slave automata may keep track of the accumulated values and of the number of steps in their states.

\Paragraph{The automaton ${\cal A}$}. Let $Q_{mas}$ be the set of states of the master automaton of $\mathbb{A}$ and let $Q_s$ be the union of the sets of states of the slave automata of $\mathbb{A}$. We define ${\cal A}$ as a $\silent{\textsc{LimAvg}}$-automaton over the set of states $Q_{mas} \times (Q_s \cup \{\bot\})^N$. The component $Q_{mas}$ is used to keep track of the run of the master automaton, while the component $(Q_s \cup \{\bot\})^N$ is used to keep track of up to $N$ slave automata running concurrently. The symbol $\bot$ corresponds to an empty slot that can be used to simulate another slave automaton. Since $\mathbb{A}$ has width bounded by $N$, the automaton ${\cal A}$ can simulate the Boolean part of the run of $\mathbb{A}$. The weight of a transition of ${\cal A}$ is $\lambda$ if no slave automaton terminates at that transition; otherwise it is the value of the terminating slave automaton (non-trivial slave automata take precisely $N$ steps, so at most one can terminate at each position). Transitions at which no slave automaton terminates are silent. The automata $\mathbb{A}$ and ${\cal A}$ encounter the same weights but aggregate them differently: in $\mathbb{A}$, the value of a slave automaton is associated with the position at which it is invoked, while in ${\cal A}$ it is associated with the position at which the slave automaton terminates. However, these positions differ by exactly $N$, and therefore the limit averages of the two weight sequences coincide. Hence, for every word $w$, the values $\mathbb{A}(w)$ and ${\cal A}(w)$ coincide. It follows that $\mathbb{E}_{\mathcal{M}}(\mathbb{A}) = \mathbb{E}_{\mathcal{M}}({\cal A})$.
\Paragraph{The Markov chain $\mathcal{M}_E$}. We define $\mathcal{M}_E$ as ${\cal A} \times \mathcal{M}$ with altered weights, defined as follows. Every transition which corresponds to the invocation of a slave automaton ${\mathfrak{B}}_i$ with the state of the Markov chain $\mathcal{M}$ being $s$ has weight equal to the expected value of ${\mathfrak{B}}_i$ w.r.t. the distribution given by $\mathcal{M}$ starting in the state $s$. All other transitions are silent.

\Paragraph{The expected values of ${\cal A} \times \mathcal{M}$ and $\mathcal{M}_E$ coincide}. Recall that the expected limit average of a Markov chain with silent moves is given by $c(e_1) \cdot x[e_1] + \ldots + c(e_k) \cdot x[e_k]$, where the variables $x[e]$, over all transitions $e$, form a solution to the system of equations and inequalities (1), (2) and (3), and $e_1, \ldots, e_k$ are all non-silent transitions. Now, observe that the equations (1) and the inequalities (3) are the same for both Markov chains ${\cal A} \times \mathcal{M}$ and $\mathcal{M}_E$. The equation (2) is, in general, different for ${\cal A} \times \mathcal{M}$ and for $\mathcal{M}_E$: the non-silent transitions of ${\cal A} \times \mathcal{M}$, denoted $e_1, \ldots, e_k$, are the transitions at which at least one slave automaton terminates, while the non-silent transitions of $\mathcal{M}_E$, denoted $e_1', \ldots, e_l'$, are the transitions at which some (non-trivial) slave automaton is invoked. Observe that every terminating slave automaton has been invoked, and, in ${\cal A}$, every invoked slave automaton terminates. Therefore, the cumulative frequencies of invocations and of terminations of slave automata coincide, i.e., the equations (1) imply $x[e_1] + \ldots + x[e_k] = x[e_1']+\ldots +x[e_l']$. It follows that the systems (1), (2) and (3) corresponding to ${\cal A} \times \mathcal{M}$ and to $\mathcal{M}_E$ have the same solutions. It remains to show that $c(e_1) \cdot x[e_1] + \ldots + c(e_k) \cdot x[e_k] = c'(e_1') \cdot x[e_1'] + \ldots + c'(e_l') \cdot x[e_l']$, where $c$ (resp., $c'$) are the weights in ${\cal A} \times \mathcal{M}$ (resp., $\mathcal{M}_E$). Since $c'(e')$ is the expected value of the slave automaton started at $e'$, it is given by $c'(e') = \sum_{e'' \in T} p(e',e'') c(e'')$, where $T$ is the set of transitions that correspond to the final transitions of the slave automaton started at the transition $e'$, and $p(e',e'')$ is the probability of reaching the transition $e''$ from $e'$ without passing through $T$ in between. Indeed, each (non-trivial) slave automaton takes precisely $N$ transitions, hence at each position at most one non-trivial slave automaton terminates and $c(e'')$ is the value of the slave automaton terminating at $e''$. Now, we take $c'(e_1') \cdot x[e_1'] + \ldots + c'(e_l') \cdot x[e_l']$, substitute for each $c'(e_i')$ the corresponding sum $\sum_{e'' \in T_i} p(e_i',e'') c(e'')$, and group the resulting terms by $e''$, i.e., we write the sum as $c(e_1) (x[e_1'] p(e_1',e_1) + \ldots + x[e_l'] p(e_l',e_1)) + \ldots$. Observe that the frequency of taking a transition $e_1$ at which some slave automaton ${\mathfrak{B}}$ terminates is equal to the sum of the frequencies of the transitions at which this slave automaton ${\mathfrak{B}}$ has been invoked, weighted by the probability of reaching the terminating transition $e_1$ from the given invoking transition. Therefore, we have $x[e_1'] p(e_1',e_1) + \ldots + x[e_l'] p(e_l',e_1) = x[e_1]$.
It follows that $c(e_1) \cdot x[e_1] + \ldots + c(e_k) \cdot x[e_k] = c'(e_1') \cdot x[e_1'] + \ldots + c'(e_l') \cdot x[e_l']$ and $\mathbb{E}_{\mathcal{M}}({\cal A}) = \mathbb{E} (\mathcal{M}_E)$.

\Paragraph{The Markov chain $\mathcal{M}_R$}. We construct $\mathcal{M}_R$ from $\mathcal{M}_E$ by projecting out the component $(Q_s \cup \{\bot\})^N$. We claim that this step preserves the expected value. First, observe that the distribution is given by the unaffected component $\mathcal{M}$, and that the weights depend only on the state of the Markov chain $\mathcal{M}$ and the state of the master automaton ${\cal A}_{mas}$. Thus, projecting out the component $(Q_s \cup \{\bot\})^N$ does not affect the expected value, i.e., $\mathbb{E}(\mathcal{M}_E) = \mathbb{E} (\mathcal{M}_R)$. Now, observe that the set of states of $\mathcal{M}_R$ is $Q_{mas} \times Q_{\mathcal{M}}$, and that the probabilities and the weights of the transitions of $\mathcal{M}_R$ match the conditions of the definition of $\mathcal{M}^{\nestedA}$. Therefore, $\mathcal{M}_R = \mathcal{M}^{\nestedA}$.
\end{proof}

\subsection{Reduction to the bounded-duration case}
Let $\mathbb{A}$ be a $(\textsc{LimAvg};\textsc{Sum})$-automaton. For every $N$, we define $\mathbb{A}_N$ as $\mathbb{A}$ with the bound $N$ imposed on the slave automata, i.e., each slave automaton terminates either by reaching an accepting state or upon taking its $N$-th step. Let $\mathcal{M}^{\nestedA}_N$ be the Markov chain that corresponds to $\mathbb{A}_N$. Observe that as $N$ tends to infinity, the weights in $\mathcal{M}^{\nestedA}_N$ converge to the weights in $\mathcal{M}^{\nestedA}$. It remains to show that, as $N$ tends to infinity, the expected values of $\mathbb{A}_N$ converge to the expected value of $\mathbb{A}$. We show in Lemma~\ref{l:convergence} below that the random variables generated by $\mathbb{A}_N$ converge in probability to the random variable generated by $\mathbb{A}$, i.e., for every $\epsilon > 0$ we have
\[
\lim_{N \rightarrow \infty} \mathbb{P}_{\mathcal{M}}(\{ w : |\mathbb{A}(w) - \mathbb{A}_N(w)| \geq \epsilon \}) = 0.
\]
Convergence in probability implies convergence of the expected values. It follows that the expected values of $\mathbb{A}$ and $\mathcal{M}^{\nestedA}$ coincide.

\begin{lemma}
The random variables defined by $\{ \mathbb{A}_N \}_{N\geq 1}$ converge in probability to the random variable defined by $\mathbb{A}$.
\label{l:convergence}
\end{lemma}

\begin{proof}
\newcommand{\excessA}[1]{\mathbb{A}^{\geq #1}}
\newcommand{\partialExcessA}[1]{\mathbb{A}[#1]}
\newcommand{\textsc{LimAvgSup}}{\textsc{LimAvgSup}}
We define a $(\textsc{LimAvgSup}; \textsc{Sum})$-automaton $\excessA{N}$ obtained from $\mathbb{A}$ in the following way. First, each slave automaton takes transitions of weight $0$ for the first (up to) $N$ steps, past which it takes transitions of weight $1$ until it terminates. Second, the value function of the master automaton is $\textsc{LimAvgSup}$, defined on $a_1 a_2 \ldots$ as $\textsc{LimAvgSup}(a_1 \ldots ) = \limsup_n \frac{1}{n} \sum_{i=1}^{n} a_i$. Intuitively, the automaton $\excessA{N}$ computes the limit average (supremum) of the numbers of steps the slave automata take above the threshold $N$. Let $C$ be the maximal absolute weight in the slave automata of $\mathbb{A}$. Then, for every word $w$ we have
\[
\mathbb{A}_N(w) - C \cdot \excessA{N}(w) \leq \mathbb{A}(w) \leq \mathbb{A}_N(w) + C \cdot \excessA{N}(w).
\]
It follows that
\[
\mathbb{P}_{\mathcal{M}}(\{ w : |\mathbb{A}(w) - \mathbb{A}_N(w)| \geq \epsilon \}) \leq \mathbb{P}_{\mathcal{M}}(\{ w : |\excessA{N}(w)| \geq \frac{\epsilon}{C} \}).
\]
We show that, as $N$ increases to infinity, $\mathbb{P}_{\mathcal{M}}(\{ w : |\excessA{N}(w)| \geq \frac{\epsilon}{C}\} )$ converges to $0$; from that we conclude that $\mathbb{A}_N$ converges in probability to $\mathbb{A}$ as $N$ tends to infinity. Observe that for every word $w$ and every $N$ we have $0 \leq \excessA{N}(w)$ and $\excessA{N}(w) \geq \excessA{N+1}(w)$. Therefore, we only need to show that for every $\epsilon > 0$ and $N$ large enough we have $\mathbb{E}_{\mathcal{M}}(\excessA{N}) \leq \epsilon$. Then, by Markov's inequality, $\mathbb{P}_{\mathcal{M}}(\{ w : |\excessA{N}(w)| \geq \sqrt{\epsilon}\}) \leq \sqrt{\epsilon}$.

To estimate the value of $\mathbb{E}_{\mathcal{M}}(\excessA{N})$ we consider $\silent{\textsc{LimAvgSup}}$-automata $\partialExcessA{K,i}$ defined as follows. The automaton $\partialExcessA{K,i}$ simulates the master automaton of $\mathbb{A}$ and the slave automata that are invoked at positions $\{ K\cdot l + i : l \in \mathbb{N} \}$. For every $l>0$, the transition at the position $K \cdot (l+1) + i$ has weight $1$ if the slave automaton invoked at the position $K \cdot l + i$ works for at least $K$ steps; otherwise, this transition has weight $0$. At the remaining positions, transitions have weight $0$. Observe that, due to subadditivity of the limit supremum, the limit average supremum of the number of slave automata that take at least $K$ steps on a given word $w$ is bounded by $\sum_{i=0}^{K-1} \partialExcessA{K,i}$. It follows that for every word $w$ we have $\excessA{N}(w) \leq \sum_{K \geq N} \sum_{i=0}^{K-1} \partialExcessA{K,i}(w)$. Therefore,
\[
(*) \quad \mathbb{E}_{\mathcal{M}}(\excessA{N}) \leq \sum_{K \geq N} \sum_{i=0}^{K-1} \mathbb{E}_{\mathcal{M}}(\partialExcessA{K,i}).
\]
Now, we estimate $\mathbb{E}_{\mathcal{M}}(\partialExcessA{K,i})$. Let $n$ be the maximal size of a slave automaton in $\mathbb{A}$ and let $k$ be the number of slave automata. We assume, without loss of generality, that every state of the slave automata is reached along some run on words generated by $\mathcal{M}$. Now, observe that from every state of the slave automata some accepting state is reachable; otherwise, there would be a set of words of strictly positive probability on which $\mathbb{A}$ does not accept. Moreover, if an accepting state is reachable, it is reachable within $n$ steps. Therefore, there exists $p<1$ such that any slave automaton, in any state, fails to terminate within the next $n$ steps with probability at most $p$. It follows that $\mathbb{E}_{\mathcal{M}}(\partialExcessA{K,i}) \leq \frac{1}{K} p^{\lfloor \frac{K}{n} \rfloor}$. With that estimate, we obtain from (*) that $\mathbb{E}_{\mathcal{M}}(\excessA{N}) \leq \sum_{K \geq N} p^{\lfloor \frac{K}{n} \rfloor} \leq n \cdot \frac{p^{\lfloor \frac{N}{n} \rfloor }}{1-p}$. Therefore, $\mathbb{E}_{\mathcal{M}}(\excessA{N})$ converges to $0$ as $N$ increases to infinity. Finally, this implies that $\mathbb{A}_N$ converges in probability to $\mathbb{A}$ as $N$ tends to infinity.
\end{proof}

\subsection{The distribution question}
\LimAvgDistribution*
\begin{proof}
Let $\mathbb{A}$ be a deterministic $(\textsc{LimAvg};\textsc{Sum})$-automaton with the master automaton ${\cal A}_{mas}$ and let $\mathcal{M}$ be a Markov chain. Moreover, let $\mathcal{M}^{\nestedA}$ be the Markov chain obtained from $\mathcal{M}$ and $\mathbb{A}$.
We show that the distribution $\mathbb{D}_{\mathcal{M},\mathbb{A}}$ and the distribution defined by ${\mathcal{M}^{\nestedA}}$ coincide.

\Paragraph{The single-SCC case}. Assume that $\mathcal{M} \times {\cal A}_{mas}$ is a single SCC. Observe that the event ``the value of $\mathbb{A}$ equals $\lambda$'' is a tail event w.r.t. the Markov chain $\mathcal{M}$, i.e., it does not depend on finite prefixes. Therefore, its probability is either $0$ or $1$~\cite{feller}. It follows that the value of almost every word is equal to the expected value of $\mathbb{A}$. Now, $\mathcal{M}^{\nestedA}$ is structurally the same as $\mathcal{M} \times {\cal A}_{mas}$, hence it is also a single SCC. Therefore, in $\mathcal{M}^{\nestedA}$ as well, almost all words have the same value, which is equal to $\mathbb{E}(\mathcal{M}^{\nestedA})$. Since $\mathbb{E}_{\mathcal{M}}(\mathbb{A}) = \mathbb{E}(\mathcal{M}^{\nestedA})$ (Lemma~\ref{l:limavgReducesToMC}), the distribution $\mathbb{D}_{\mathcal{M},\mathbb{A}}$ and the distribution defined by ${\mathcal{M}^{\nestedA}}$ coincide.

\Paragraph{The general case}. Consider the case where $\mathcal{M} \times {\cal A}_{mas}$ consists of multiple end SCCs $S_1, \ldots, S_k$. Using conditional probabilities, we can repeat the single-SCC argument to show that within each end SCC $S_1, \ldots, S_k$ the values of $\mathbb{A}$ are the same and equal to the expected value in that SCC. Similarly, in each end SCC of ${\mathcal{M}^{\nestedA}}$, all words have the same value, which is equal to the expected value in that SCC. Since $\mathcal{M} \times {\cal A}_{mas}$ is structurally the same as $\mathcal{M}^{\nestedA}$, each SCC among $S_1, \ldots, S_k$ corresponds to an SCC in ${\mathcal{M}^{\nestedA}}$. Lemma~\ref{l:limavgReducesToMC} states that $\mathbb{E}_{\mathcal{M}}(\mathbb{A}) = \mathbb{E}(\mathcal{M}^{\nestedA})$. By applying Lemma~\ref{l:limavgReducesToMC} to $\mathcal{M}$ and $\mathbb{A}$ with different initial states of $\mathcal{M}$ and ${\cal A}_{mas}$ (in each of $S_1, \ldots, S_k$), we infer that in every SCC $S_1, \ldots, S_k$ the expected values of $\mathbb{A}$ and $\mathcal{M}^{\nestedA}$ coincide. Therefore, the distribution $\mathbb{D}_{\mathcal{M},\mathbb{A}}$ and the distribution defined by ${\mathcal{M}^{\nestedA}}$ coincide.
\end{proof}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{\label{sec:level1}Introduction}
Broad beam low-energy ion bombardment of surfaces can lead to the self-organized formation of patterns including nanodots \cite{ozaydin2008effects}, nanoscale ripples \cite{chan2007making} and nanoscale pits/holes \cite{wei2009self}, as well as to ultrasmoothening \cite{moseler2005ultrasmoothness}. The differences in morphology can be achieved by varying irradiation conditions such as ion energy, fluence, bombardment angle, ion species and substrate \cite{cuerno2020perspective}. In the case of elemental semiconductors patterned at room temperature, the surface is amorphized by the ions and off-axis bombardment can produce ripple patterns. Competing theories of self-organized ion beam nanopatterning advocate for, or combine models of, different physical processes believed to play important roles, including curvature-dependent sputtering \cite{sigmund1969theory,sigmund1973mechanism,bradley1988theory}, lateral mass redistribution \cite{carter1996roughening}, surface diffusion \cite{bradley1988theory}, ion-enhanced viscous flow \cite{umbach2001spontaneous} and stress-induced flow \cite{castro2012hydrodynamic,castro2012stress,norris2012stress,moreno2015nonuniversality,munoz2019stress}. Despite much experimental study, agreement on which effects dominate in a given situation has not been reached. Theory and experiment are often compared using the average \textit{kinetics}, i.e. the evolution of the nanoscale surface structure spatially averaged over the area sampled by the experiment. Parameters entering theories are often poorly known \textit{a priori} and can therefore be varied to fit the average kinetics observed in a given study, allowing competing theories to claim agreement with experiment. Going beyond the average \textit{kinetics} to examine the detailed fluctuation \textit{dynamics} of the nanopatterning process offers a new route to test theory and gain better understanding. By \textit{dynamics} we mean the temporal evolution of fluctuations about the average nanoscale structure. The dynamics of such surface evolution is becoming accessible through developments in the coherent x-ray scattering technique of X-ray Photon Correlation Spectroscopy (XPCS) \cite{sutton2008review}. Continued increases in coherent x-ray flux at synchrotron and free-electron laser sources are enabling the application of XPCS to surface growth and patterning, but such studies remain few in number and the technique's full potential is not yet clear. Thus, an important part of the current work is also the continued exploration and development of the technique's capabilities for such investigations. XPCS studies of ion beam nanopatterning have previously been performed for Ar$^+$ patterning of GaSb \cite{bikondoa2013ageing} and of SiO$_2$ \cite{mokhtarzadeh2019nanoscale}. In both cases, however, the dominant length scale in the system - that of the ripples formed on the surface - coarsens beyond the length scales that could be observed in the experiment. In the present study (Part I) and its companion paper (Part II \cite{myint2020nanoscale}) we examine the classic cases of Si surface nanopatterning by Ar$^+$ and Kr$^+$, respectively. We are able to examine dynamics on the length scales of the self-organized ripples, and to compare those results with the predictions of theory. In the nonlinear regime, theories of surface evolution cannot be solved analytically and simulation must be used to determine their predictions.
Moreover, obtaining sufficient statistics for analysis of the dynamics requires more lattice realizations than for analyzing average kinetics. Therefore, within the scope of the present paper we could compare results with the predictions of a single theory - a recent minimal model by Harrison, Pearson and Bradley (HPB), which incorporates a cubic nonlinearity \cite{pearson2014theory,harrison2017emergence}. It's known that the full inclusion of lower-order terms in the widely-used anisotropic Kuramoto-Sivashinsky (aKS) equation \cite{makeev2002morphology} can also give several kinetic behaviors similar to those observed below, including interrupted coarsening, broken parity, and development of ripples with preferential asymmetric slopes \cite{loew2019effect}. Therefore, our focus on simulations of the minimal HPB model does not imply the inability of other models to produce similar results. The plan of the paper is as follows: In Sect. II below we describe the methods used in the experiments and simulations. Section III provides a broad-brush overview of the basic behavior observed in the speckle-averaged x-ray intensity evolution during nano-ripple formation. The early stages of the patterning process are analyzed in Sect. IV to determine the linear theory coefficients, which can themselves be compared with theoretical predictions and which inform the parameters used in the subsequent simulations. Section V examines the late-stage coarsening kinetics in both experiment and simulation, while the evolution of fluctuation dynamics is examined in Sect. VI. The results and their implications are discussed in Sect. VII.

\section{Methods}
\begin{figure}
\includegraphics[width=3.2 in]{GISAXS_geometry_publication_2.pdf}%
\caption{\label{fig:GISAXS} Schematic diagram of the GISAXS experiment. The ion source is placed at the polar angle $\theta$, which causes self-organized rippling on the silicon surface. The sample is positioned so that the X-ray incident angle $\alpha_i$ is slightly above the critical angle of total external reflection. The scattering is recorded as a function of the exit angles $\alpha_f$ and $\psi$ using a 2D detector.}
\end{figure}
\begin{figure}
\includegraphics[width=3.2 in]{AFM_Ar.pdf}%
\hspace{0.3in}
\includegraphics[width=2.8 in]{slopes_Ar.pdf}%
\hspace{0.4in}
\caption{\label{fig:AFM}Top: \textit{Post facto} AFM images of the silicon surface. The direction of the projection of the ion beam's path onto the images is from right to left, while that of the X-rays' path is from bottom to top. Bottom: The slope distribution calculated from the AFM image above; a denser distribution of positive slopes indicates that the slopes on the left side of the terraces, shown in the cross-section images, are more defined than those on the other side.}
\end{figure}
\begin{figure*}
\includegraphics[width=3.2 in]{Detector_image_4000_to_4099_20200204162442.pdf}%
\includegraphics[width=3.2 in]{S_vs_q_Ar_timeslice.pdf}%
\caption{\label{fig:sq_Ar} Left: A detector image during nanopatterning. The Yoneda wing, spread across $q_{z}\,=\,0.36 \; \mathrm{nm}^{-1}$ (corresponding to $q_{z}^\prime\,=\,0.156 \; \mathrm{nm}^{-1}$), is the surface-sensitive scattering exiting the sample at the critical angle $\alpha_c$. Correlation peaks at $q_{||}\,\simeq\pm\,0.18 \; \mathrm{nm}^{-1}$ are due to the correlated nanoripples on the surface. Right: Evolution of the GISAXS pattern.
The 1-d patterns are obtained by averaging along $q_{z}$ across the Yoneda wing, as indicated by the dashed box in the left diagram.}
\end{figure*}

\subsection{\label{sec:Exp-setup}Samples and Ion Bombardment}
The experiments utilized 640 $\mu$m thick p-doped (B) Si(100) wafers cut into 1 $\times$ 1 cm$^2$ pieces and cleaned with acetone, isopropyl alcohol, and methanol. Samples were firmly affixed to a stage by indium bonding. To prevent sputtering of impurities onto the surface, the sample stage geometry was designed to ensure that nothing was above the sample surface. The temperature of the water-cooled sample stage was monitored using a thermocouple, and the stage was electrically isolated except for a wire leading out to an ammeter in order to measure the ion flux. The sample holder was mounted in a custom UHV chamber with mica X-ray windows and a base pressure of 5 $\times$ $10^{-7}$ Torr. Samples were kept at room temperature and bombarded with a broad beam of 1 keV Ar$^+$ ions, which were generated by a 3-cm graphite-grid ion source from Veeco Instruments Inc. placed at a 65$^{\circ}$ ion incidence angle ($\theta$), as indicated in Fig. \ref{fig:GISAXS}. This ion incidence angle was chosen because it is known to cause self-organized rippling on the silicon surface \cite{madi2011mass,norris2017distinguishing}. The ion beam flux was measured to be 1 $\times$ 10$^{15}$ ions cm$^{-2}$s$^{-1}$ at the operating chamber pressure of 1 $\times$ $10^{-4}$ Torr. The final fluence was 2.2 $\times$ 10$^{18}$ ions cm$^{-2}$. The ion beam was sufficiently broad that it uniformly covered the entire sample. It is important to note that the coordinate system convention of Fig. \ref{fig:GISAXS} follows that often used for GISAXS experiments \textit{and is therefore rotated 90$^{\circ}$ with respect to the coordinate system typically used in the ion bombardment literature}. Thus, in these experiments ``parallel-mode'' ripples form with their wavevector pointing in the y-direction rather than in the x-direction, as would conventionally be the situation in studies of ion beam nanopatterning.

\subsection{\label{sec:coGISAXS} Coherent grazing-incidence small-angle X-ray scattering (Co-GISAXS)}
Real-time X-ray scattering experiments were performed at the Coherent Hard X-ray (CHX) beamline at the National Synchrotron Light Source-II (NSLS-II) of Brookhaven National Laboratory. A photon energy of 9.65 keV (wavelength $\lambda = 0.1258 \; \mathrm{nm}$) was selected, with a flux of approximately 5 $\times$ $10^{11}$ photons s$^{-1}$ and beam dimensions of 10 $\times$ 10 $\mu$m$^2$. Experiments used an Eiger-X 4M detector (Dectris) with 75 $\mu$m pixel size, located 10.3 m from the sample. The incident X-ray angle $\alpha_i$ was 0.26$^{\circ}$, slightly above the critical angle of total external reflection for silicon of 0.186$^{\circ}$. The projected incident X-ray beam direction on the sample was perpendicular to the projected ion beam direction. This allowed scattering in the GISAXS geometry to probe the dominant direction of ripple formation for the chosen ion bombardment angle. The diffuse scattering was recorded as a function of the exit angles $\alpha_f$ and $\psi$ using the 2D detector.
The change in X-ray wavevector $\mathbf{q}$ can be calculated from those angles:
\begin{equation}
\mathbf{q} = \mathbf{k_f}-\mathbf{k_i} = \begin{pmatrix} q_x \\ q_y \\ q_z \end{pmatrix} = \frac{2\pi}{\lambda} \begin{pmatrix} \cos (\alpha_i)- \cos (\alpha_f) \cos (\psi) \\ \cos (\alpha_f) \sin (\psi) \\ \sin (\alpha_i) + \sin (\alpha_f) \end{pmatrix}
\label{equ:wavenumber_conversion}
\end{equation}
Since $q_x$ is small, the horizontal component $q_{||}$ (parallel to the surface) can be approximated as simply $q_y$, and the vertical component as $q_z$ (perpendicular to the surface). In the analysis of this paper, we will primarily be interested in the scattering along the Yoneda wing (Fig. \ref{fig:sq_Ar}), which is particularly sensitive to surface structure \cite{renaud2009probing}. For simplicity, we will use the term ``GISAXS pattern'' for the one-dimensional intensity curve $I(q_{||},t)$ obtained by averaging speckles in the detector vertical direction (approximately $q_z$) across the Yoneda wing, as shown in Fig. \ref{fig:sq_Ar}.

\subsection{\label{sec:simulations}Simulations}
Simulations were performed using the HPB model \cite{pearson2014theory,harrison2017emergence}:
\begin{eqnarray}
\frac{\partial h(\textbf{r},t)}{\partial t} = &&A \, h_y + S_x \, h_{xx} + S_y \, h_{yy} + \lambda _x \, h_x^2 +\lambda _y \, h_y^2 + \nonumber\\&&\gamma _y \,h_y^3 -B \nabla^{4}h +\eta(\textbf{r},t),
\label{equ:HPB}
\end{eqnarray}
where $\eta(\textbf{r},t)$ is Gaussian white noise. For $S_x > 0$ and $S_y < 0$, this produces ripples in the $y$-direction. Numerical integrations were performed on a 2048 $\times$ 2048 lattice using the one-step Euler scheme for the temporal discretization with an integration step $\Delta t$ = 0.001. The spatial derivatives were calculated by the standard central finite-difference discretization method on a square lattice with periodic boundary conditions. To check the accuracy of our calculations, we also used the Lam-Shin discretization \cite{lam1998improved} to compute the nonlinear terms and found that the results were similar. In the simulations, the surface is taken to be initially flat. For comparison with experiment, the lattice spacing and time unit in the simulation are set to 1 nm and 1 s, respectively. The linear coefficients $S_y$ and $B$ in the simulation were determined from a preliminary linear theory analysis of the measured early-stage kinetics, as discussed in Sect. \ref{sec:early_kinetics}. The $S_x$ coefficient was assigned the same magnitude as $S_y$, but with opposite sign, based on the measurements of Norris \textit{et al.} \cite{norris2017distinguishing}. The amplitude of the noise term $\langle\eta^2\rangle$ was also suggested by the linear theory analysis of the early-time kinetics. The $A$ and nonlinear coefficients were calculated from the sputter yield $Y(\theta)$ curve, as discussed in Pearson \textit{et al.} \cite{pearson2014theory}. In sum, the parameters used in the simulations were: $A$ = -0.26 nm/s, $S_x$ = 0.45 nm$^2$/s, $S_y$ = -0.45 nm$^2$/s, $B$ = 6.96 nm$^4$/s, $\lambda_x$ = 1.94 nm/s, $\lambda_y$ = 1.94 nm/s, $\gamma_y$ = 11.89 nm/s, $\langle\eta^2\rangle$ = 0.1 nm$^2$/s$^2$, $\Delta t$ = 0.001 s.
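The following Python sketch illustrates the integration scheme just described, on a reduced $256 \times 256$ lattice rather than the $2048 \times 2048$ lattice of the production runs; the axis convention ($y$ along array rows) and the white-noise discretization $\eta_{ij} \sim \mathcal{N}(0, \langle\eta^2\rangle/\Delta t)$ for unit lattice spacing are our assumptions, not a transcription of the actual code.

\begin{verbatim}
import numpy as np

A, Sx, Sy, B = -0.26, 0.45, -0.45, 6.96
lx, ly, gy = 1.94, 1.94, 11.89
eta2, dt, L = 0.1, 0.001, 256

rng = np.random.default_rng(0)
h = np.zeros((L, L))          # initially flat surface, dx = 1 nm

def d1(u, ax):                # central first derivative, periodic
    return 0.5 * (np.roll(u, -1, ax) - np.roll(u, 1, ax))

def d2(u, ax):                # central second derivative, periodic
    return np.roll(u, -1, ax) - 2.0 * u + np.roll(u, 1, ax)

def lap(u):
    return d2(u, 0) + d2(u, 1)

for step in range(10_000):    # 10 s of evolution at dt = 1 ms
    hx, hy = d1(h, 1), d1(h, 0)
    rhs = (A * hy + Sx * d2(h, 1) + Sy * d2(h, 0)
           + lx * hx**2 + ly * hy**2 + gy * hy**3
           - B * lap(lap(h)))                 # -B grad^4 h
    noise = rng.standard_normal((L, L)) * np.sqrt(eta2 / dt)
    h += dt * (rhs + noise)                   # one-step Euler
\end{verbatim}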
\begin{figure}
\includegraphics[width=3.41 in]{Normalized_intensity_AFM_Xray.pdf}%
\caption{\label{fig:Normalized_intensity} Comparison of the final X-ray scattering pattern and the GISAXS intensity predicted from the \textit{post facto} AFM topograph. AFM results are averaged over four points on the sample.}
\end{figure}

For comparison with experiment, lattices were saved after every 1000 steps (i.e. equivalent to every second); the total number of images generated was 1300, and the video is available on YouTube \footnote{HPB equation simulation video:\url{https://youtu.be/JY3n37PR4WI}}.

\subsection{\label{sec:compare_real_Xray}Method of Comparing Real-Space Structure with X-ray Scattering}
In order to connect simulated surfaces and \textit{post facto} Atomic Force Microscopy (AFM) topographs with X-ray scattering, we calculate their predicted GISAXS scattering patterns using the equation:
\begin{equation}
I(q_x,q_y,q_z) \propto \frac{1}{A} \left | \frac{1}{q_z^\prime} \iint dx \ dy \ e^{-iq_z^\prime h(x,y)} e^{-i(q_x x+q_y y)} \ \right |^2
\label{equ:Born-approx}
\end{equation}
where $A$ is the illuminated area, and $q^\prime_z$, which is calculated using the refracted incident angle $\alpha_i' = \sqrt{\alpha_i^2 - \alpha_c^2}$ and exit angle $\alpha_f' = \sqrt{\alpha_f^2 - \alpha_c^2}$, is the z-component of the wavevector change inside the material \cite{sinha1988x}. The geometrical value $q_z$ is used for display purposes in detector images since it is zero at the direct beam position on the detector, but in the data analysis we use $q^\prime_z = 0.156 \; \mathrm{nm}^{-1}$, which is the average $q_z^\prime$ of the detector pixels along the Yoneda wing used in the analysis of the X-ray data. In the case of small $q_z^\prime$, the intensity $I(q_{||},t)$ becomes proportional to the height-height structure factor, but for accuracy the exponential term is kept in the calculations. The \textit{post facto} AFM topographs show the development of ripple structures (Fig. \ref{fig:AFM}). The GISAXS pattern calculated from the AFM images agrees well with the final GISAXS patterns actually observed, as shown in Fig. \ref{fig:Normalized_intensity}. This allowed the measured GISAXS pattern to be normalized to an absolute scale relative to the surface structure height. Equation \ref{equ:Born-approx} gives the units of intensity as (length)$^4$, and so the resulting units of normalized intensity here are nm$^4$. This is a natural unit for surface scattering and, when $q_z^\prime$ is small, reflects the fact that the intensity is proportional to the height-height structure factor, whose two-dimensional integral in reciprocal space is equal to the square of the RMS roughness.
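A minimal Python sketch of this calculation, assuming a periodic height map on a unit (1 nm) grid and omitting the windowing and normalization details of the actual analysis, is:

\begin{verbatim}
import numpy as np

def gisaxs_intensity(h, qz_prime=0.156, dx=1.0):
    # Discretized Eq. (3): I(qx, qy) in nm^4 from a height map
    # h(x, y) in nm; np.fft.fft2 uses the exp(-i q.r) convention,
    # matching the integral above.
    A = h.size * dx**2                       # illuminated area
    amp = np.fft.fft2(np.exp(-1j * qz_prime * h)) * dx**2
    I = np.abs(amp / qz_prime)**2 / A
    q = 2 * np.pi * np.fft.fftfreq(h.shape[0], d=dx)
    return np.fft.fftshift(q), np.fft.fftshift(I)
\end{verbatim}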
\section{\label{sec:overview}Overview}
In the experiments, ion bombardment started at $t$ = 0 s, and a clear correlation peak can be seen growing around $t = 100 \; \mathrm{s}$ due to the formation of correlated ripples on the surface. The initial peak wavenumber is $q_0 \approx 0.22 \; \mathrm{nm}^{-1}$, so that the initial ripple wavelength is approximately $2\pi/(0.22 \; \mathrm{nm}^{-1}) \approx$ 28.6 nm. In Sect. \ref{sec:early_kinetics} below, we quantitatively analyze this behavior using the linear theory of nanopatterning. A typical detector image and GISAXS intensity patterns at particular times in the evolution are shown in Fig. \ref{fig:sq_Ar}. After the time regime of linear theory, coarsening occurs, with the correlation peak positions $\pm q_0$ shifting to smaller wavenumber. The coarsening proceeds at an ever decreasing rate and, by the end of the experiment, the average GISAXS pattern changes only slowly; the final ripple wavelength suggested by the correlation peak position was approximately $2\pi/(0.12 \; \mathrm{nm}^{-1}) \approx$ 50 nm. In addition to the primary correlation peaks at $\pm q_0$, a harmonic is seen to form at $\pm 2q_0$. These behaviors are analyzed below.

\section{\label{sec:early_kinetics}Speckle-Averaged Early-Time Kinetics}
\begin{figure}
\includegraphics[clip,trim={0.0in 0 0 0},width=3.41in]{R_vs_q_Ar.pdf}%
\caption{\label{fig:linear-th-fits} Amplification factors obtained from the linear theory analysis of the speckle-averaged intensity evolution during the early stages of nanopatterning.}
\end{figure}
At early stages of nanopatterning, when surface slopes are small, nonlinear equations such as the HPB model reduce to a more tractable linear stability theory which, in reciprocal space, takes the form \cite{bradley1988theory}:
\begin{equation}
\frac{\partial\tilde{h}\left(\mathbf{q},t\right)}{\partial t}=R\left(\mathbf{q}\right)\tilde{h}\left(\mathbf{q},t\right)+\tilde{\eta}\left(\mathbf{q},t\right)
\label{eq: dispersion-general-form}
\end{equation}
where $\tilde{h}\left(\mathbf{q},t\right)$ is the Fourier transform of the surface height $h\left(\mathbf{r},t\right)$, $R\left(\mathbf{q}\right)$ is the \emph{amplification factor} or \emph{dispersion relation}, and $\tilde{\eta}\left(\mathbf{q},t\right)$ is the Fourier transform of a stochastic noise. The amplification factor differentiates surface stability from instability: a positive $R(\mathbf{q})$ at a given bombardment angle drives exponential amplification of modes of wavevector $\mathbf{q}$, resulting in surface instability, while a negative $R(\mathbf{q})$ damps fluctuations and stabilizes modes of wavevector $\mathbf{q}$. In the x-ray measurement direction, $R(q)$ is related to the parameters of the nonlinear HPB theory, Eq. \ref{equ:HPB}, by:
\begin{equation}
R(q_x \approx 0, q_{||}) \equiv R(q_{||}) =-S_{y}\,q_{||}^2-B\,q_{||}^4
\label{equ:long-wave}
\end{equation}
A linear theory analysis of the observed early-time speckle-averaged kinetics thus allows extraction of experimental values for the coefficients $S_y$ and $B$, both for comparison with theoretical predictions and for use in the HPB model simulations. At early times, when the surface roughness is small, the x-ray scattering intensity $I(q,t)$ is proportional to the height-height structure factor, which can be calculated from Eq. \ref{eq: dispersion-general-form} to yield \cite{madi2011mass,norris2017distinguishing}:
\begin{eqnarray}
\label{equ:hhstructure-factor}
I(\mathbf{q},t)&&= \left\langle h(\mathbf{q},t) \, h^*(\mathbf{q},t)\right\rangle\nonumber\\&& =\left(I_0(\mathbf{q})+\frac{n}{2R(\mathbf{q})}\right)e^{2R(\mathbf{q})t}-\frac{n}{2R(\mathbf{q})}
\end{eqnarray}
where $n$ is the magnitude of the stochastic noise: $\left\langle \eta\left(\mathbf{r},t\right) \eta\left(\mathbf{r^\prime},t^\prime\right) \right\rangle = n \, \delta(\mathbf{r}-\mathbf{r^\prime})\delta(t-t^\prime)$. To determine $R(q_{||})$, the intensity values $I(q_{||},t)$ were first averaged over 5 detector pixels in the $q_{||}$ direction and 100 pixels in the $q_z$ direction to remove speckle from the scattering pattern. The temporal evolution of the scattering in each wavenumber bin was then fit with a function of the form $I(q_{||},t) = a(q_{||}) e^{2R(q_{||})t} + b(q_{||})$, with $a$, $b$ and $R$ being independent fit parameters for each $q_{||}$ bin. The resulting $R(q_{||})$ values are shown in Fig. \ref{fig:linear-th-fits} together with subsequent fits to Eq. \ref{equ:long-wave}.
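A Python sketch of this two-stage analysis follows; \texttt{I\_qt} and \texttt{t} are hypothetical stand-ins for the binned intensity array and time grid, and the initial guesses are illustrative rather than those of the actual analysis.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def growth(t, a, b, R):
    return a * np.exp(2.0 * R * t) + b

def fit_R_of_q(I_qt, t):
    # I_qt: (n_q, n_t) speckle-averaged intensities per q bin
    R = np.empty(I_qt.shape[0])
    for i, Iq in enumerate(I_qt):
        p0 = (max(Iq[-1] - Iq[0], 1e-6), Iq[0], 0.01)
        popt, _ = curve_fit(growth, t, Iq, p0=p0, maxfev=10000)
        R[i] = popt[2]
    return R

def dispersion(q, Sy, B):
    return -Sy * q**2 - B * q**4        # Eq. (5)

# (Sy, B), _ = curve_fit(dispersion, q, fit_R_of_q(I_qt, t))
\end{verbatim}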
The bumps in $R(q_{||})$ at low $q_{||}$ on each side of the GISAXS pattern are assumed to be due to overlap with the tails of the specularly reflected X-ray beam and are not included in the $R(q_{||})$ fitting. Fit values are $S_y = -0.47 \; \mathrm{nm}^2\,\mathrm{s}^{-1}$ and $B = 7.27 \; \mathrm{nm}^4\,\mathrm{s}^{-1}$. Nonlinear least-squares fitting was used but, since $R(q_{||})$ has large error bars at high $q_{||}$, Least Absolute Deviation (LAD) and Ordinary Least Squares (OLS) fits were also examined; they gave similar results. The initial fastest-growing wavenumber according to the linear theory is $q^{max}_{||} = \sqrt{|S_y|/(2B)} = 0.18 \; \mathrm{nm}^{-1}$. The fit values of the curvature coefficient $S_y$ and the ion-induced viscous relaxation coefficient $B$ can be compared with those obtained from fits in previous non-coherent real-time X-ray experiments by our group and collaborators using an ion source with lower flux. Scaled to the higher ion flux used here, Madi \textit{et al.} \cite{madi2011mass} obtained $S_y$ = -1 nm$^2$s$^{-1}$ and $B$ = 5.5 nm$^4$s$^{-1}$. Thus the values of $B$ found in the two experiments differ by about 25\%, while there is approximately a factor of two difference in the measurements of $S_y$. This level of agreement/disagreement must be attributed to some combination of different ion sources, with the ion flux varied by a factor of 500, and different experimental set-ups. For theoretical comparison with the measured $S_y$, we examine the erosive formalism of Bradley and Harper \cite{bradley1988theory} and the redistributive formalism of Carter and Vishnyakov \cite{carter1996roughening}, in accordance with our use of the HPB model, while acknowledging that stress-driven theories offer competing views \cite{castro2012hydrodynamic,castro2012stress,norris2012stress,moreno2015nonuniversality,munoz2019stress}. To evaluate the parameters in the erosive and redistributive models we follow the general approaches of Bobes \textit{et al.} \cite{bobes2012ion} and Hofs{\"a}ss \cite{hofsass2014surface} using SDTrimSP \cite{mutzke2019sdtrimsp} binary collision approximation simulations. These give an erosive contribution $S_y^{eros} \approx 0.51$ nm$^2$/s and a redistributive contribution $S_y^{redist} \approx -1.39$ nm$^2$/s, for a total $S_y^{eros+redist} \approx -0.88$ nm$^2$/s. This splits the difference between the measurement of Madi \textit{et al.} \cite{madi2011mass} and the present one. On the other hand, a different approach \cite{norris2014pycraters} using the PyCraters Python framework \cite{PyCraters2017} for crater function analysis of the SDTrimSP results gives $S_y^{total} \approx -0.58$ nm$^2$/s, closer to our measured value.

\section{Speckle-Averaged Late-Time Kinetics and \textit{Post Facto} AFM}
\begin{figure}
\includegraphics[width=3.2 in]{S_vs_q_Ar_latetimeslice.pdf}%
\caption{\label{fig:late_kinetics} Coarsening slows down in the late stage of Ar$^+$ patterning of silicon. The inset shows the evolution of the correlation peak position on a log-log scale.}
\end{figure}
The ripple correlation peaks coarsen with time, but at an ever decreasing rate. Beyond $t$ = 1000 s, the GISAXS pattern changes very little; the peak moves only a few pixels, as shown in Fig. \ref{fig:late_kinetics}. While the range of time scales available is too limited to make a definitive statement about the nature of the relaxation, the peak motion can be fit as a weak power-law evolution. At late times, it's well known that the ripples begin to form asymmetric sawtooth structures.
As a result, the scattering pattern becomes asymmetric \cite{ludwig2002si,perkinson2018sawtooth}. Here it's observed in Fig. \ref{fig:sq_Ar} that the correlation peak at $-q_0$ grows slightly higher than the one at $+q_0$. More insight comes from the \textit{post facto} AFM topograph, which shows the asymmetric structure, as evidenced by the cut through the topograph and the slope analysis shown in Fig. \ref{fig:AFM}. Simple calculations of the scattering expected from a sawtooth structure show that, if the negative terrace slope is larger in magnitude than the positive terrace slope, the negative $q_{||}$ peak should be higher, as observed. In this case, the negative terrace slope faces the incoming ion beam. Such calculations also show that, in this case, the harmonic peak at $+2q_0$ should be higher than the one at $-2q_0$, as is also observed.
\begin{figure}
\includegraphics[width=3.2 in]{Simulation_sq.pdf}
\caption{\label{fig:Simulation_sq} Simulated GISAXS pattern evolution calculated by averaging the results of 100 simulations. As in the experiment, coarsening is observed and kinetic processes slow down over time.}
\end{figure}
\begin{figure}
\includegraphics[width=3.2 in]{Simulation_lattice.pdf}
\includegraphics[width=2.8 in]{slopes_Ar_simulation.pdf}
\caption{\label{fig:Simulation_lattice} Top: A simulated lattice at $t$ = 600 s. Bottom: Slope distribution calculated from the simulated lattice image. Both results can be compared with the measurements in Fig. \ref{fig:AFM}.}
\end{figure}
The simulations produce speckle-averaged GISAXS scattering patterns (Fig. \ref{fig:Simulation_sq}) showing an initial peak wavenumber $q_0 \approx 0.22$ nm$^{-1}$, in agreement with experiment (Fig. \ref{fig:sq_Ar}), as well as coarsening. A selected simulation lattice image at $t$ = 600 s and its slope analysis (Fig. \ref{fig:Simulation_lattice}) can be compared to the \textit{post facto} AFM topograph and slope analysis of Fig. \ref{fig:AFM}. The maximum time simulated was limited by a subsequent transition to a longer-wavelength sawtooth structure, a phenomenon which has been noted in the literature \cite{gago2002nanopatterning,perkinson2018sawtooth}. The current experiments had not yet reached that regime.

\section{Speckle Correlation Study of Fluctuation Dynamics}
\begin{figure}
\includegraphics[width=3.2 in]{Ar_TTCF_plot.pdf}
\caption{\label{fig:TT} Evolution of the two-time correlation function (TTCF). The surface was originally smooth with little scattering, so the function is initially very noisy. Ion bombardment began at $t$ = 0 s, after 100 s of static scanning. The gray areas with dashed boundaries represent the static data taken before and after the ion bombardment.}
\end{figure}
Although the speckle-averaged GISAXS pattern shows the average kinetics, the strength of coherent experiments lies in their ability to measure temporal correlations of the detailed speckle pattern through XPCS, illuminating the underlying fluctuation dynamics. The two-time correlation function (TTCF) measures how the structure on a given length scale changes between time $t_1$ and time $t_2$ as the sample evolves:
\begin{equation}
C(q_{||},t_1,t_2)= \frac{\left\langle I(q_{||},t_1)I(q_{||},t_2)\right\rangle }{\left\langle I(q_{||},t_1)\right\rangle \left\langle I(q_{||},t_2)\right\rangle}
\label{equ:twotime}
\end{equation}
where the angular brackets denote an average over equivalent $q_{||}$ values, and the denominator values can be considered the speckle-averaged intensities that one would have obtained using non-coherent scattering.
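Operationally, Eq. \ref{equ:twotime} can be computed directly from the stack of detector frames. A minimal Python sketch, assuming a hypothetical array \texttt{I} of intensities with shape (times, equivalent $q_{||}$ pixels), is:

\begin{verbatim}
import numpy as np

def two_time_corr(I):
    # <I(t1) I(t2)> averaged over equivalent pixels, divided by
    # <I(t1)> <I(t2)>  (Eq. 7); returns an (n_t, n_t) array.
    num = np.mean(I[:, None, :] * I[None, :, :], axis=-1)
    mean_t = I.mean(axis=-1)
    return num / np.outer(mean_t, mean_t)
\end{verbatim}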
TTCFs are shown in Fig. \ref{fig:TT} for a wavenumber $q_{||}$ near the scattering peak. The central diagonal ridge of correlation going from bottom left to top right indicates the high correlation expected for $t_1 \approx t_2$. One way to understand how a surface changes on a given length scale is to observe the width of the central correlation ridge, which is a measure of the correlation time on the surface. As seen in Fig. \ref{fig:TT}, the width continuously increases, but at a steadily decreasing rate. At other wavenumbers, the ridge width appears to reach a constant value.
\begin{figure*}
\includegraphics[clip,trim={0.0in 0 0 0},width=3.41in]{Ar_evolutionsofTau_diffQs.pdf}%
\includegraphics[clip,trim={0.0in 0 0 0},width=3.41in]{TT_Ar_slice_n.pdf}%
\caption{\label{fig:TT_slice_selectq} Evolution of correlation times $\tau(q_{||})$ and relaxation exponents $n(q_{||})$ from KWW fits through diagonal cuts of the TTCFs. Different adjacent time averaging was performed for $n(q_{||})$ to highlight the quick transition from values near 1 to values of 1.6-1.8.}
\end{figure*}
\begin{figure*}[!ht]
\includegraphics[clip,trim={0.0in 0 0 0},width=7.0in]{Ar_TTCF_slice_fits_tau_v2.pdf}
\includegraphics[clip,trim={0.0in 0 0 0},width=7.0in]{Simulation_Ar_TTCF_slice_fits_tau.pdf}
\caption{\label{fig:TT_slice_allq_Ar} The first two rows are plots of $\tau(q_{||})$ during relatively early stages of patterning obtained from KWW fits of TTCF diagonal cuts. The last two rows are plots of $\tau(q_{||})$ calculated from the HPB simulation. Note that the experimental results and the simulation results are each plotted to the highest magnitude of $q_{||}$ for which results could reliably be obtained; therefore, the horizontal axes differ between the top two and the bottom two rows.}
\end{figure*}
Quantitative measurement of the evolving dynamics is made by taking diagonal cuts through the central ridge at a constant average bombardment time $T = (t_1+t_2)/2$ as a function of $\Delta t = |t_2 - t_1|$ at each wavenumber $q_{||}$. The decay of correlation with time is fit with the Kohlrausch-Williams-Watts (KWW) form \cite{williams1970non}:
\begin{equation}
g_2^T(q_{||},\Delta t)= b+\beta(q_{||})\, e^{-2({\frac{\Delta t}{\tau(q_{||})}})^{n(q_{||})} },
\label{equ:KWW}
\end{equation}
where $\tau(q_{||})$ is the correlation time and $n(q_{||})$ is an exponent which determines whether the function is a simple ($n$ = 1), stretched ($0 < n < 1$), or compressed ($n > 1$) exponential. $b$ is the baseline, which was set to 1 or allowed to vary between 0.9 and 1.1. $\beta(q_{||})$ describes the contrast, which depends on experimental factors including the effective resolution of the experiment. The magnitude of the central diagonal ridge of correlations in Fig. \ref{fig:TT} increases with time, indicating an increasing contrast.
This is probably because background incoherent scattering (e.g. from slits or windows) causes the apparent contrast to decrease at early times, when the scattering from the sample is relatively small. As ripples form on the sample, the scattering from the sample increases and the apparent contrast approaches its limiting value. Finally, to improve statistics for the fits to Eq. \ref{equ:KWW}, results from $\pm$ 10 s around the central mean growth time $T$ were averaged. Figure \ref{fig:TT_slice_selectq} shows the evolution of $\tau$ and $n$ for selected wavenumbers. Near the peak wavenumber $q_0$, $\tau$ increases continuously, first rapidly and then more slowly. Away from the peak, the $\tau$ values initially increase but then appear to relax to a steady state. Near the peak, the relaxation exponent $n$ rapidly increases from approximately one, indicative of simple exponential decay, to 1.6-1.8, showing compressed exponential behavior. Figure \ref{fig:TT_slice_allq_Ar} shows the behavior of $\tau$ as a function of wavenumber $q_{||}$ for selected times. It's seen that the $\tau(q_{||})$ values near the scattering peaks $\pm q_0$ grow strongly to become much larger than the relaxation times at smaller and larger wavenumbers. This distinctive behavior is reproduced in the simulations, as seen in Fig. \ref{fig:TT_slice_allq_Ar}. Near the end of the experiment, when the correlations are changing more slowly, more detail can be obtained by averaging over a larger time period of $T = 500-1000$ s, i.e. mean $T = 750$ s, using the auto-correlation function:
\begin{equation}
g_2(q_{||},\Delta t)= \frac{\left\langle I(q_{||},t^{\prime})I(q_{||},t^{\prime}+\Delta t)\right\rangle }{\left\langle I(q_{||}) \right\rangle ^2}.
\label{equ:g2}
\end{equation}
The angular brackets indicate time averaging over $t^\prime$ and averaging over equivalent $q$ values. Again the calculated $g_2(q_{||},\Delta t)$ function is fit with the KWW form, Eq. \ref{equ:KWW}.
\begin{figure*}
\includegraphics[width=3.2 in]{Ar_g2_tau_Si_freeBL.pdf}%
\includegraphics[width=3.2 in]{Simulation_g2_tau_100sil.pdf}%
\includegraphics[width=3.2 in]{Ar_g2_n_Si_freeBL.pdf}%
\includegraphics[width=3.2 in]{Simulation_g2_n_100sil.pdf}%
\caption{\label{fig:Ar_g2_and_sil} $\tau(q_{||})$ and $n(q_{||})$ during the late stage of Ar$^{+}$ patterning of Si. Left panels: measurements. Right panels: simulation results.}
\end{figure*}
Plots of the experimental $\tau(q_{||})$ and $n(q_{||})$ are shown in Fig. \ref{fig:Ar_g2_and_sil}. The trends seen in Fig. \ref{fig:TT_slice_allq_Ar} are confirmed and extended. Now it can be observed that the correlation time $\tau(q_{||})$ is asymmetric, being higher at $+q_0$ than at $-q_0$; this is opposite to the asymmetry of the relative peak intensities. In addition, it's seen that there is also a peak in $\tau(q_{||})$ at the harmonic wavenumbers $\pm 2q_0$. The peak in $\tau(q_{||})$ appears to be relatively more pronounced at the harmonic wavenumbers than is the corresponding peak in the scattering $I(q_{||})$ itself. Near the primary correlation peaks $\pm q_0$, $n(q_{||}) > 1$, so that the relaxation is a compressed exponential, as noted before. At higher values of $q_{||}$, $n$ decreases to below one, indicative of stretched exponential behavior. The behavior of the simulations, also shown for comparison in Fig. \ref{fig:Ar_g2_and_sil}, exhibits generally similar trends to the experiment, as discussed below. In addition, it appears that there may be a shoulder on $n(q_{||})$ near the position of the harmonic peaks $\pm 2q_0$.
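A sketch of the KWW fitting used above, in Python, follows; \texttt{dts} and \texttt{g2} are a hypothetical delay grid and correlation cut at fixed $q_{||}$, and the bounds keep the baseline $b$ near 1 as described in the text.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def kww(dt, b, beta, tau, n):
    return b + beta * np.exp(-2.0 * (dt / tau)**n)   # Eq. (8)

def fit_kww(dts, g2):
    p0 = (1.0, max(g2[0] - 1.0, 1e-3), dts[len(dts) // 2], 1.0)
    bounds = ([0.9, 0.0, 1e-6, 0.1], [1.1, np.inf, np.inf, 3.0])
    popt, _ = curve_fit(kww, dts, g2, p0=p0, bounds=bounds)
    return dict(zip(("b", "beta", "tau", "n"), popt))
\end{verbatim}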
\section{Discussion}
The final surface slope distribution clearly exhibits asymmetry, with a preferential tendency toward a particular slope value, especially on the positive-slope side (Fig. \ref{fig:AFM}). The simulation slope distribution (Fig. \ref{fig:Simulation_lattice}) is also asymmetric, though more compact. While the structure does not reach a highly defined sawtooth stage during the period of these experiments, it appears to be moving toward such a structure. Consistent with the developing ripple asymmetry, both the experiment and the simulation show that asymmetries develop in the intensities of the ripple correlation peaks at $\pm q_0$ (Figs. \ref{fig:sq_Ar} and \ref{fig:Simulation_sq}). The asymmetry is more prominent in the experiments than in the simulations. However, the peaks at the ripple wavenumbers $\pm q_0$ in the simulated speckle-averaged intensity are sharper, with the harmonic peaks being significantly more pronounced. This reflects the fact that the simulated lattice (Fig. \ref{fig:Simulation_lattice}) looks more ordered than the experimental surface (Fig. \ref{fig:AFM}). The experiments and simulations show rich structure in the development of the correlation dynamics, as seen in the parameters $\tau(q_{||})$ and $n(q_{||})$. For $\tau(q_{||})$, on the length scales of the ripple structure, local structure becomes ever more long-lived as coarsening progresses (Figs. \ref{fig:TT}, \ref{fig:TT_slice_selectq} and \ref{fig:TT_slice_allq_Ar}). For a total patterning time of $T$ = 700 s, the correlation time for local ripple structure is about 240 s. This qualitative behavior does not come as a surprise. However, actual experimental confirmation of the evolving longevity is rare and, to our knowledge, the ability of coherent x-ray scattering to quantify the behavior is currently unique outside of the specialized environment of FIB/SEM instruments. Figures \ref{fig:TT_slice_allq_Ar} and \ref{fig:Ar_g2_and_sil} show that, near the peak wavenumbers $\pm q_0$, the scattering intensity initially grows much more rapidly than does the correlation time $\tau$, but that at later times the intensity grows only slowly while $\tau$ continues to grow significantly. Eventually $\tau(q_{||})$ develops a peak on length scales corresponding to the ripple wavelength. There is a secondary peak in the relaxation times at the harmonic wavenumbers $\pm 2q_0$ of the ripples. It's noteworthy that, just as the peaks in the simulated intensity at $\pm q_0$ are sharper than in experiment, so too are the peaks in $\tau(q_{||})$. There are also differences between the behavior of the experiment and the simulations for $\tau(q_{||})$ near the origin $q_{||} = 0$. However, there are reasons for additional care in trusting results in this range because, in the experiment, the scattered x-rays may be mixing with the tails of the specular beam and, in the simulations, finite-size effects presumably become important. The $\tau(q_{||})$ values are noticeably higher on the positive side of the $q_{||}$ axis, particularly in the experimental results. Presumably this reflects asymmetry in the dynamic processes on the two sides of the ripples. The HPB theory predicts rich dynamics on the terraces of the late-stage sawtooth structures \cite{harrison2017emergence}, so this may be related. However, we have not been able to construct a simple model explaining the asymmetry.
Summarizing the comparison of ripple structure, intensities and correlation times: while there is more order to the ripple pattern in the simulations than in the experimental results, there is more asymmetry in the experiment, as observed in both the speckle-averaged intensity $I(q_{||})$ and $\tau(q_{||})$. It should be noted that there is uncertainty in the coefficients of the terms entering the HPB simulations because of uncertainty in $Y(\theta)$, and no attempt was made to vary the coefficients in an \textit{ad hoc} manner to seek better agreement. Moreover, no attempt was made to include any effects associated with initial surface structure \cite{munoz2012independence, kim2013role}, though we expect those to be small since experiments started with a polished Si wafer. Turning to the relaxation exponent $n(q_{||})$, at early times the fluctuation relaxation processes are consistent with being simple exponential in nature (i.e. exponent $n = 1$), as expected for linear theory behavior. As patterning continues, however, Figs. \ref{fig:TT_slice_selectq} and \ref{fig:TT_slice_allq_Ar} show that the relaxation exponents $n(q)$ evolve in both the experiments and the simulations, with the system exhibiting compressed exponential relaxation on length scales comparable to or longer than that of the ripples and stretched exponential relaxation on much shorter length scales. The simulations show clear peaks in $n(q_{||})$ at slightly higher $|q_{||}|$ than the intensity peak positions $\pm q_0$. These are less clear in the experimental results, but those results are also suggestive, especially on the positive side of the $q_{||}$ axis. Why the relaxation exponents should peak there is unknown. As we noted in Ref. \cite{myint2021gennes}, a common feature of nonlinear models of ion beam nanopatterning is the inclusion of the Kardar-Parisi-Zhang (KPZ) quadratic nonlinearities $h_x^2,h_y^2$. For small $\Delta t$, simulations of the KPZ model are well fit with compressed exponential behavior \cite{mokhtarzadeh2017simulations}, and the leading terms in the KPZ model dynamics \cite{katzav2004numerical} suggest an effective exponent $n \approx (2+2\alpha)/z \approx 1.74$, where $\alpha$ and $z$ are the roughness and dynamic exponents for (2+1)-dimensional growth, related by the KPZ identity $\alpha + z = 2$. Thus, the KPZ effective compressed exponent $n$ for small $\Delta t$ is approximately equal to that observed at the nanoripple wavenumber peaks $\pm q_0$ in the present experiments. It is unknown whether the HPB equation is in the KPZ universality class at long length scales, though the relaxation exponents found here appear to be similar. As we noted previously, the inclusion of lower-order terms \cite{makeev2002morphology} in the minimal Kuramoto-Sivashinsky equation (the aKS equation) reproduces some of the kinetic features observed here, and the aKS equation is known to exhibit KPZ dynamics at large length scales. This could potentially provide a clear connection with the KPZ behavior. As we also discuss in Ref. \cite{myint2021gennes}, the strong peak in intensity and correlation times at the ripple wavenumbers $\pm q_0$ is reminiscent of de Gennes narrowing in liquids. Moreover, as discussed there, the compressed exponential decay observed here on length scales comparable to the ripple wavelength suggests the absence of short decay times and may be related to the concept of structural persistence.
In contrast, the lower exponents observed at longer and shorter length scales suggest exponential or even stretched exponential behavior at early times, possibly indicating the lack of such persistence. Moreover, compressed exponential behavior at short times $\Delta t$ in both soft materials \cite{cipelletti2005slow} and metallic glasses \cite{ruta2012atomic} has been attributed to collective ballistic flow of local structures due to internal stress relaxation. It is notable that some theoretical approaches to understanding ion beam nanopatterning use fluid dynamic models with stress relaxation as a driving force \cite{castro2012hydrodynamic,castro2012stress,norris2012stress,moreno2015nonuniversality,munoz2019stress}. These might provide a direct connection between the compressed exponential behavior of ion beam nanopatterning observed here and that observed in glasses. This would be an attractive direction for future study. \begin{acknowledgments} We thank Andreas Mutzke for providing the SDTrimSP simulation program, R.M. Bradley for discussions and S. Norris for help with the PyCraters library. We also thank Josh Bevan (Boston University Research Computing Services) for help with optimizing the numerical simulations and our reviewers for their constructive comments. This material is based on work partly supported at BU by the National Science Foundation (NSF) under Grant No. DMR-1709380. X.Z. and R.H. were partly supported at UVM by the U.S. Department of Energy (DOE) Office of Science under Grant No. DE-SC0017802. Experiments were done at the Coherent Hard X-ray (CHX) beamline at the National Synchrotron Light Source II (NSLS-II), a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Brookhaven National Laboratory under Contract No. DE-SC0012704. The custom UHV sample holder, designed by P.M. and K.F.L., was built at the Scientific Instrumentation Facility (SIF) at Boston University. The AFM images were obtained with a Bruker Dimension 3000 Atomic Force Microscope at the Precision Measurement Laboratory at the Boston University Photonics Center. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Correlation energy} Electron correlation in a many-electron system is of two kinds, one due to the Coulombic repulsion between the electrons and the other due to the Fermi-Dirac statistics of electrons -- referred to as Coulomb and Pauli correlations, respectively. Coulomb correlations cannot be treated exactly, as the precise form of the wavefunction for a many-electron system cannot be determined, since the Schr\"{o}dinger equation for a many-electron system is not exactly solvable. On the other hand, the effects of Pauli correlation can be explicitly taken care of by ensuring that the wavefunction is antisymmetric with respect to the interchange of electron coordinates. For example, in the Hartree-Fock treatment of the many-electron problem, the wavefunction is made antisymmetric by writing it as a Slater determinant in terms of single-particle orbitals. The difference between the exact non-relativistic energy $E_{exact}^{NR}$ (which may be calculated to high accuracy by various techniques) and the Hartree-Fock energy $E_{HF}$ is traditionally referred to as the correlation energy $E^{QC}_c$, and is given as \begin{align} E^{QC}_c = E_{exact}^{NR} - E_{HF}. \label{eq:ecorr-def} \end{align} $E^{QC}_c$ will always be negative because the Hartree-Fock energy is an upper bound to the exact energy by the variational principle. Although the correlation energy is small compared to the total energy, its inclusion is important, for example in calculations of ionization potentials, electron affinities and excitation energies. Obtaining $E_c$ is one of the challenges in the many-electron problem. In the following sections, we present some of our attempts to estimate the correlation energies of atoms in ground- and excited-states. \subsection{Lee-Yang-Parr (LYP) correlation energy functional} A correlation energy formula due to Colle and Salvetti (CS)~\cite{colle-salvetti:1975}, in which the correlation energy density is obtained from an approximate correlated wavefunction, was adapted to density functional form by Lee, Yang and Parr (LYP)~\cite{lee-yang-parr:1988}, and is given for ground-states by the formula \begin{align} E_c^{\text{LYP}} = -a \myint{ \frac{ \rho(\hemvec{r}) + 2b\rho(\hemvec{r})^{-5/3} \left[ \rho_{\alpha}(\hemvec{r}) t^{\alpha}_{HF} + \rho_{\beta}(\hemvec{r}) t^{\beta}_{HF} - \rho(\hemvec{r}) t_w(\hemvec{r}) \right] e^{-c\rho(\hemvec{r})^{-1/3}} }{ 1 + d\rho(\hemvec{r})^{-1/3} } \gamma(\hemvec{r}) }{ \hemvec{r} } \label{eq:ec-lyp} \end{align} where the parameters $a,b,c,$ and $d$ are chosen to reproduce the correlation energy of the ground-state of the He atom, and \begin{align} \gamma(\hemvec{r}) = 2 \left[ 1 - \frac{ \rho_{\alpha}^2(\hemvec{r}) +\rho_{\beta}^2(\hemvec{r}) }{ \rho^2(\hemvec{r}) } \right] \end{align} is a dimensionless quantity.
The Hartree-Fock kinetic energy density corresponding to the up-spin electrons ($t^{\alpha}$) is given by \begin{align} t^{\alpha}(\hemvec{r}) = \frac{1}{2} t_{HF}(2\rho_{\alpha}(\hemvec{r}),\hemvec{r}) \end{align} Similarly, the corresponding kinetic energy density ($t^{\beta}$) for the down-spin electrons is \begin{align} t^{\beta}(\hemvec{r}) = \frac{1}{2} t_{HF}(2\rho_{\beta}(\hemvec{r}),\hemvec{r}) \end{align} The total Hartree-Fock kinetic energy density ($t_{HF}$) is given by \begin{align} t_{HF} = t_{TF} + \frac{1}{9} t_W(\hemvec{r}) + \frac{1}{18}\nabla^2 \rho \end{align} where $t_{TF}$ and $t_{W}$ are the Thomas-Fermi and Weizs\"{a}cker kinetic energy densities, respectively, given by \begin{align} t_{TF} = \frac{3}{10} \left(3\pi^2\right)^{2/3} \rho^{5/3} \\ t_{W} = \frac{1}{8} \frac{\left|\nabla \rho\right|^2}{\rho} -\frac{1}{8} \nabla^2 \rho \end{align} It has been shown that $E_c^{\text{LYP}}$ gives atomic correlation energies for ground-states within a few percent of their accurate values. The LYP functional has been employed to calculate energies of excited-states of atoms using Harbola-Sahni orbitals~\cite{harbola-sahni:1989a,roy-jalbout:2007}. Attempts to estimate correlation energies for excited-states by extending the LYP functional using the method of splitting $k$-space were pursued recently~\cite{thesis:shamim}. This is based on the observation that the derivations of the Colle-Salvetti and LYP formulae are quite general, and the ideas are equally applicable to excited states also. The modified LYP functional for an excited state corresponding to a one-gap system is obtained by replacing $t_{TF}$ and $t_{W}$ in~\eref{eq:ec-lyp} with the modified Thomas-Fermi kinetic energy density ($t_{mTF}$) \begin{align} t_{mTF} = \frac{3}{10} \left(3\pi^2\right)^{2/3} \left[ \rho_3^{5/3} - \rho_2^{5/3} + \rho_1^{5/3} \right] \end{align} and the modified Weizs\"{a}cker term ($t_{mW}$) \begin{align} t_{mW} = \frac{1}{8} \left[ \frac{\left|\nabla \rho_1\right|^2}{\rho_1} + \frac{\left|\nabla \rho_3\right|^2}{\rho_3} - \frac{\left|\nabla \rho_2\right|^2}{\rho_2} \right] -\frac{1}{8} \left[ \nabla^2 \rho_1 + \nabla^2 \rho_3 - \nabla^2 \rho_2 \right] \end{align} The parameters ($a, b, c$ and $d$) in the modified LYP functional for the excited-state calculations are chosen to be the same as in the ground-state calculations. It is observed that the modified LYP functional leads to insignificant improvement over the correlation energy obtained with the ground-state functional. In addition to choosing the ground-state parameters for the modified LYP functional, a new set of parameters was also obtained by fitting to a particular excited state of He. The correlation energies so obtained for the excited states of other atoms do not improve the results. This study indicates that some other approach should be adopted to estimate the correlation energies for excited states. In the next section, we try to estimate the correlation energies following the previous work by Chakravorty and Clementi~\cite{chakravorty-clementi:1989}. \subsection{Correlation energy by modelling pair-correlation function} Chakravorty and Clementi~\cite{chakravorty-clementi:1989} proposed a method to include the Coulomb hole in the Hartree-Fock method.
In this method, a soft-Coulomb hole of Gaussian nature is introduced in the expressions for the Hartree energy \begin{equation} E_{H}^{HF} = \frac{1}{2} \sum_{i,j} \iint \frac{ \psi^*_i(\mathbf{r})\psi_i(\mathbf{r}) \psi_j(\mathbf{r}')\psi^*_j(\mathbf{r}') } {\left|\mathbf{r}-\mathbf{r}'\right|} d\mathbf{r} d\mathbf{r}' \label{eq:ecoul} \end{equation} and the exchange energy \begin{equation} E_{x}^{HF} = -\frac{1}{2} \sum_{i,j}{}^{'} \iint \frac{ \psi^*_{i,\sigma}(\mathbf{r})\psi^*_{j,\sigma}(\mathbf{r}') \psi_{i,\sigma}(\mathbf{r}')\psi_{j,\sigma}(\mathbf{r}) } {\left|\mathbf{r}-\mathbf{r}'\right|} d\mathbf{r} d\mathbf{r}' . \label{eq:eexch} \end{equation} The modified expressions for the corresponding energies are given by \begin{align} E^{HF}_{H,\gamma} &= \frac{1}{2} \sum_{i,j} \iint \frac{ \psi^*_i(\mathbf{r})\psi_i(\mathbf{r}) \psi_j(\mathbf{r}')\psi^*_j(\mathbf{r}') \left[1-\exp({-\gamma \left|\mathbf{r}-\mathbf{r}'\right|^2}) \right] } {\left|\mathbf{r}-\mathbf{r}'\right|} d\mathbf{r} d\mathbf{r}' \label{eq:ecoul-clem} \\ E^{HF}_{x,\gamma} &= -\frac{1}{2} \sum_{i,j}{}^{'} \iint \frac{ \psi^*_{i,\sigma}(\mathbf{r})\psi^*_{j,\sigma}(\mathbf{r}') \psi_{i,\sigma}(\mathbf{r}')\psi_{j,\sigma}(\mathbf{r}) \left[1-\exp({-\gamma \left|\mathbf{r}-\mathbf{r}'\right|^2}) \right] } {\left|\mathbf{r}-\mathbf{r}'\right|} d\mathbf{r} d\mathbf{r}' \label{eq:eexch-clem} \end{align} The parameter $\gamma$ determines the size of the Coulomb hole and is parameterized in their work~\cite{chakravorty-clementi:1989}. The above expressions reduce to the Hartree energy $E_H^{HF}$ and exchange energy $E_x^{HF}$ of the Hartree-Fock model in the limit $\gamma=\infty$. The correlation energy is then obtained as \begin{align} E_{c} = (E^{HF}_{\textrm{H}} + E^{HF}_{x}) - (E^{HF}_{\textrm{H},\gamma} + E^{HF}_{x,\gamma}) \label{eq:ecorrhf} \end{align} As in traditional quantum theory, in density-functional theory too, the exact exchange-correlation energy functional can be mathematically expressed as \begin{align} E_{xc}[\rho] = \frac{1}{2} \iint \frac{\rho(\mathbf{r}_1) \rho_{xc}(\mathbf{r}_1,\mathbf{r}_2)}{\left|\mathbf{r}_1-\mathbf{r}_2 \right|} d\mathbf{r}_1 d\mathbf{r}_2 \end{align} where $\rho_{xc}(\mathbf{r}_1,\mathbf{r}_2)$ is the exchange-correlation hole. The difference between the traditional correlation energies and the DFT correlation energies is numerically very small. The exchange and correlation holes are usually decoupled as $\rho_{xc}(\mathbf{r}_1,\mathbf{r}_2) = \rho_{x}(\mathbf{r}_1,\mathbf{r}_2) + \rho_{c}(\mathbf{r}_1,\mathbf{r}_2)$. In terms of the exchange hole, the exchange-energy functional is given by \begin{align} E^{DFT}_x[\rho] &= \frac{1}{2} \iint \frac{\rho(\mathbf{r}_1) \rho_{x}(\mathbf{r}_1,\mathbf{r}_2)}{\left|\mathbf{r}_1-\mathbf{r}_2 \right|} d\mathbf{r}_1 d\mathbf{r}_2 \end{align} and the corresponding correlation-energy functional in terms of the correlation hole is \begin{align} E^{DFT}_c[\rho] &= \frac{1}{2} \iint \frac{\rho(\mathbf{r}_1) \rho_{c}(\mathbf{r}_1,\mathbf{r}_2)}{\left|\mathbf{r}_1-\mathbf{r}_2 \right|} d\mathbf{r}_1 d\mathbf{r}_2 \label{eq:ecdft} \end{align} The explicit dependence of the Coulomb correlation hole $\rho_{c}(\mathbf{r}_1,\mathbf{r}_2)$ on the density $\rho$ is unknown and has to be approximated.
However, the constraints to be satisfied by $\rho_{c}(\mathbf{r}_1,\mathbf{r}_2)$ are known and are obtained from the exact constraints on $\rho_{xc}(\mathbf{r}_1,\mathbf{r}_2)$ and $\rho_{x}(\mathbf{r}_1,\mathbf{r}_2)$: \begin{subequations} \begin{align} \lim_{r_{12} \rightarrow \infty} \frac{\rho_{xc}(\mathbf{r}_1,\mathbf{r}_2)}{\rho(\mathbf{r}_2)} &= 0 & \lim_{r_{12} \rightarrow \infty} \frac{\rho_{x}(\mathbf{r}_1,\mathbf{r}_2)}{\rho(\mathbf{r}_2)} &= 0 \\ \lim_{r_{12} \rightarrow 0} \frac{\rho_{xc}(\mathbf{r}_1,\mathbf{r}_2)}{\rho(\mathbf{r}_2)} &= -1 & \lim_{r_{12} \rightarrow 0} \frac{\rho_{x}(\mathbf{r}_1,\mathbf{r}_2)}{\rho(\mathbf{r}_2)} &= -\frac{1}{2} \\ \int \rho_{xc}(\mathbf{r}_1,\mathbf{r}_2) d\mathbf{r}_2 &=-1 & \int \rho_x(\mathbf{r}_1,\mathbf{r}_2) d\mathbf{r}_2 &=-1 \end{align} \end{subequations} These give the constraints on the Coulomb hole $\rho_{c}(\mathbf{r}_1,\mathbf{r}_2)$, via $\rho_{c}(\mathbf{r}_1,\mathbf{r}_2)=\rho_{xc}(\mathbf{r}_1,\mathbf{r}_2)-\rho_{x}(\mathbf{r}_1,\mathbf{r}_2)$, as \begin{subequations} \begin{align} \lim_{r_{12} \rightarrow \infty} \frac{\rho_{c}(\mathbf{r}_1,\mathbf{r}_2)}{\rho(\mathbf{r}_2)} &= 0 \label{eq:rhocorr-1}\\ \lim_{r_{12} \rightarrow 0} \frac{\rho_{c}(\mathbf{r}_1,\mathbf{r}_2)}{\rho(\mathbf{r}_2)} &= -\frac{1}{2} \label{eq:rhocorr-2}\\ \int \rho_c(\mathbf{r}_1,\mathbf{r}_2) d\mathbf{r}_2 &=0 \label{eq:rhocorr-3} \end{align} \end{subequations} From~\erefsto{eq:ecoul}{eq:ecorrhf}, it is easily seen that the Coulomb hole $\rho_{c}(\mathbf{r}_1,\mathbf{r}_2)$ in the Chakravorty and Clementi method is \begin{align} \rho_c(\mathbf{r}_1,\mathbf{r}_2) = \rho_c(\gamma,r_{12}) = \left[ - \rho(\mathbf{r}_2) + \rho_x(\mathbf{r}_1,\mathbf{r}_2) \right] \exp({-\gamma \left|\mathbf{r}_1-\mathbf{r}_2\right|^2}) \label{eq:coulombhole-chakra} \end{align} where $r_{12}=\left|\mathbf{r}_1-\mathbf{r}_2\right|$. It is observed that the Coulomb hole in the Chakravorty and Clementi method does not satisfy the charge neutrality condition (\eref{eq:rhocorr-3}). In the next section, we model the correlation hole using a Yukawa form for the Coulomb hole, along the same lines as the work of Chakravorty and Clementi. However, we also include an additional factor so that the charge neutrality condition is satisfied. \section{Yukawa model for the Coulomb correlation hole} The Hartree energy ($E_{\textrm{H}}^{\text{Yuk},\gamma}$) and the exchange energy ($E_{x}^{\text{Yuk},\gamma}$) obtained using the Yukawa form instead of the Gaussian form in~\erefs{eq:ecoul-clem}{eq:eexch-clem} are given as \begin{equation} E_{\textrm{H}}^{\text{Yuk},\gamma} = \frac{1}{2} \iint \frac{ \rho(\mathbf{r}_1) \rho(\mathbf{r}_2) \left[1- C \exp({-\gamma \left|\mathbf{r}_1-\mathbf{r}_2\right|}) \right] } {\left|\mathbf{r}_1-\mathbf{r}_2\right|} d\mathbf{r}_1 d\mathbf{r}_2 \label{eq:ecoul-yuk} \end{equation} and \begin{equation} E^{\text{Yuk},\gamma}_x = -\frac{1}{2} \iint \frac{ \rho(\mathbf{r}_1) \rho_x(\mathbf{r}_1,\mathbf{r}_2) \left[1- C \exp({-\gamma \left|\mathbf{r}_1-\mathbf{r}_2\right|}) \right] } {\left|\mathbf{r}_1-\mathbf{r}_2\right|} d\mathbf{r}_1 d\mathbf{r}_2 \label{eq:eexch-yuk} \end{equation} where $C$ is a constant.
Using these, the correlation energy $E_c$ is then given by \begin{align} E_{c} & = (E^{\text{Yuk},\gamma}_H + E^{\text{Yuk},\gamma}_x) - (E^{\text{Yuk},\gamma=\infty}_H + E^{\text{Yuk},\gamma=\infty}_x) \\ & = - \frac{C}{2} \iint \frac{ \rho(\mathbf{r}_1) \left[ \rho(\mathbf{r}_2) - \rho_x(\mathbf{r}_1,\mathbf{r}_2) \right] \exp({-\gamma \left|\mathbf{r}_1-\mathbf{r}_2\right|}) } {\left|\mathbf{r}_1-\mathbf{r}_2\right|} d\mathbf{r}_1 d\mathbf{r}_2 \nonumber \\ &= C \bar{E}_{\text{corr}} \label{eq:constC} \end{align} where the limit $\gamma=\infty$ again recovers the unmodified energies. Comparing the above equation with~\eref{eq:ecdft}, we have for the Coulomb correlation hole \begin{align} \rho_{c}(\mathbf{r}_1,\mathbf{r}_2) = \rho_{c}(\gamma,r_{12}) = C \left[ -\rho(\mathbf{r}_2) + \rho_x(\mathbf{r}_1,\mathbf{r}_2) \right] \exp({-\gamma \left|\mathbf{r}_1-\mathbf{r}_2\right|}) \end{align} Similar to the Chakravorty and Clementi Coulomb hole, the above correlation hole also does not satisfy the charge neutrality condition~(\eref{eq:rhocorr-3}). In addition, the above Coulomb hole does not go to zero in the limit $\gamma \rightarrow 0$. In the following, we propose a model form for the Coulomb correlation hole which goes to zero as required. Furthermore, it also contains a factor that allows it to satisfy the charge neutrality condition. The proposed model Coulomb correlation hole is \begin{align} \rho_c(\gamma,r_{12}) = \rho_c(\mathbf{r}_1,\mathbf{r}_2) = C \left[ - \rho(\mathbf{r}_2) + \rho_x(\mathbf{r}_1,\mathbf{r}_2) \right] \exp({-\gamma \left|\mathbf{r}_1-\mathbf{r}_2\right|}) \sin(2\gamma \left|\mathbf{r}_1-\mathbf{r}_2\right|) \label{eq:coulombhole-mod} \end{align} which goes to zero in the limit $\gamma \rightarrow 0$. The factor $\sin(2\gamma \left|\mathbf{r}_1-\mathbf{r}_2\right|)$ is reminiscent of Friedel oscillations near a defect in a solid~\cite{book:ziman:1972}. In our calculations, the parameter $\gamma$ in the model is tuned to satisfy charge neutrality: \begin{equation} \int \rho_c(\mathbf{r}_1,\mathbf{r}_2) d\mathbf{r}_2 = 0 \, \, \text{for all} \, \, \mathbf{r}_1 \label{eq:rhoc-cond1} \end{equation} In an inhomogeneous system, we replace condition~(\eref{eq:rhoc-cond1}) by \begin{equation} \iint \rho_c(\mathbf{r}_1,\mathbf{r}_2) d\mathbf{r}_1 d\mathbf{r}_2 = 0 \end{equation} which makes the condition independent of $\hemvec{r}_1$. The parameter $\gamma$ in the Coulomb correlation hole is now chosen to satisfy this condition. In the following, we first apply our method to ground-states to check its validity. We then extend it to excited-states to explore its applicability there. \section{Ground-state results} We now use the correlation hole of~\eref{eq:coulombhole-mod} to calculate the correlation energies. For this, the orbitals obtained from the Harbola-Sahni exchange-only calculations are used. Shown in~\tref{tab:corr-gr} are the results obtained by tuning the parameter $\gamma$ in the modelled correlation hole of~\eref{eq:coulombhole-mod} to satisfy the charge neutrality constraint. The values of $\bar{E}_{\textrm{corr}}$ obtained from~\eref{eq:constC} for the optimized $\gamma$ are also shown in the table. The unknown normalization factor $C$ in the modelled Coulomb hole is obtained by taking the ratio of the experimental correlation energies to $\bar{E}_{\textrm{corr}}$. It is worth noting that the factor $\text{Expt.}/\bar{E}_{\textrm{corr}}$ is nearly independent of $Z$, with an average value close to $2.3$; the largest deviation from this average occurs for Li.
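The near-constancy of this ratio can be checked directly from the entries of~\tref{tab:corr-gr}. The following minimal numpy sketch simply transcribes the $\bar{E}_{\textrm{corr}}$ and Expt. columns of the table and reproduces both the ratios and the linear fit discussed below:

\begin{verbatim}
import numpy as np

# -E_corr_bar and -Expt. columns of the ground-state table, He..Ar (a.u.)
e_bar = np.array([0.0156, 0.0271, 0.0398, 0.0521, 0.0656, 0.0802, 0.0986,
                  0.1168, 0.1383, 0.1591, 0.1809, 0.2058, 0.2272, 0.2533,
                  0.2785, 0.3056, 0.3348])
expt = np.array([0.042, 0.045, 0.094, 0.124, 0.155, 0.186, 0.254,
                 0.316, 0.381, 0.386, 0.428, 0.459, 0.494, 0.521,
                 0.595, 0.667, 0.732])

print(expt / e_bar)                       # ratios cluster near 2.3 (1.66 for Li)
slope, intercept = np.polyfit(e_bar, expt, 1)
print(slope)                              # ~2.115, the slope of the dotted line
\end{verbatim}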
This is also evident from~\fref{fig:corr-fit}, where the experimental correlation energies and $\bar{E}_{\textrm{corr}}$ are plotted. The dotted line is a linear fit to the data, with slope equal to $2.115$. \begin{table} \centering \caption { Correlation energies of atoms in their ground-states. Numbers given are in atomic units. } \begin{tabular}{|p{3cm}|p{2cm}|p{3cm}|p{3cm}|p{3cm}|} \hline Atom &$\gamma$ &-$\bar{E}_{\textrm{corr}}$ &-Expt. &Expt./$\bar{E}_{\textrm{corr}}$ \\ \hline He &5.2 &0.0156 &0.042 &2.69 \\ Li &8.0 &0.0271 &0.045 &1.66 \\ Be &10.8 &0.0398 &0.094 &2.36 \\ B &13.6 &0.0521 &0.124 &2.38 \\ C &16.3 &0.0656 &0.155 &2.36 \\ N &18.9 &0.0802 &0.186 &2.32 \\ O &21.2 &0.0986 &0.254 &2.58 \\ F &23.6 &0.1168 &0.316 &2.71 \\ Ne &25.8 &0.1383 &0.381 &2.76 \\ Na &28.2 &0.1591 &0.386 &2.43 \\ Mg &30.6 &0.1809 &0.428 &2.37 \\ Al &32.8 &0.2058 &0.459 &2.23 \\ Si &35.3 &0.2272 &0.494 &2.17 \\ P &37.5 &0.2533 &0.521 &2.06 \\ S &39.8 &0.2785 &0.595 &2.14 \\ Cl &42.0 &0.3056 &0.667 &2.18 \\ Ar &44.1 &0.3348 &0.732 &2.19 \\ \hline \end{tabular} \label{tab:corr-gr} \end{table} \begin{figure} \centering \input{pl-corr-gr.tex} \caption { Plot of the calculated $\bar{E}_{\text{corr}}$ and the experimental correlation energies. The dotted line is a linear fit to the data. } \label{fig:corr-fit} \end{figure} In the following section, we use this scaling factor to estimate the correlation energies of atoms in their excited-states. \clearpage \subsection{Results for excited-state correlation energies} As in the ground-state calculations, the orbitals obtained from the Harbola-Sahni potential are used to calculate the correlation energies for excited-states. Shown in~\tref{tab:corr-ex-en} are the results obtained for excited-states of atoms by tuning the parameter $\gamma$ in the modelled Coulomb hole to satisfy the charge neutrality constraint. The correlation energies obtained using the ground-state LYP functional are also shown in the table. Also shown in the last two columns are the correlation energies obtained from~\eref{eq:ecorr-def} using the Harbola-Sahni and the Hartree-Fock exchange energies, respectively. The exact non-relativistic energies are taken from the Monte-Carlo calculations~\cite{galvez-buendia-sarsa:2002,galvez-buendia-sarsa:2005}. \begin{table}[h] \centering \caption { Correlation energies of atoms in their excited-states. Numbers given are in atomic units. } \begin{tabular}{|p{3cm}|p{2cm}|p{2cm}|p{2cm}|p{2cm}|p{2cm}|p{2cm}|} \hline Atom &$\gamma$ &-$\bar{E}_{\textrm{corr}}$ &-$2.115\bar{E}_{\textrm{corr}}$ &-$E_c^{\text{LYP}}$ &\multicolumn{2}{c|}{-$E_c$} \\ &&&&&HS &HF \\ \input{1.dat} \hline \end{tabular} \label{tab:corr-ex-en} \end{table} The optimized $\gamma$ is observed to be almost the same for all states of a given atom, i.e., it is nearly state-independent. For example, $\gamma$ is equal to $8.0$ for all the excited-states of Li; for boron, out of the four excited-states considered, $\gamma$ is $13.5$ in one case and $13.7$ in the other three. However, the correlation energies estimated in this way for excited-state atoms are not very accurate, and a further study is required. One reason for this is that the ground- and excited-state correlation energies are very similar. \section{Concluding remarks} In this chapter, we have tried to estimate the correlation energies of various atoms in their excited-states. For this, the Coulomb hole was modelled in terms of the orbitals, following the previous work by Chakravorty and Clementi.
The parameter in the model is fixed by making the corresponding Coulomb hole satisfy the exact constraint of charge neutrality. The ratio of the experimental ground-state correlation energies to those obtained with this modelled Coulomb hole is shown to be nearly independent of $Z$. Extending the ground-state scaling factor to the excited-states, we have calculated the excited-state correlation energies. The correlation energies so obtained for excited-states match the exact values in the majority of cases; only for ions of high ionicity do they deviate from the exact values, and a further study is required there. Another systematic approach to calculating the correlation energies is through response function calculations. We plan to take this approach in the near future for estimating the correlation energies of excited-states. \bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} When planning kinematic paths from a start to a goal, robot motion planners often minimize the distance the robot travels in configuration space. When deciding on a goal configuration within a goal set \cite{dragan_ratliff_srinivasa_2011}, integrating over squared velocities as in trajectory optimization \cite{Ratliff–2009–10204, Schulman:2014:MPS:2675301.2675308}, or summing up the distances between every two consecutive waypoints as in randomized planning \cite{DBLP:journals/corr/abs-1105-1186, DBLP:journals/corr/GammellSB14a, DBLP:journals/corr/JansonP13}, these planners implicitly make an assumption: that the right distance metric to use in configuration space is the \emph{Euclidean} metric. When computing distances between two configurations, they take the difference between the two vectors and compute the squared norm of the difference. In this paper, we revisit this implicit assumption. We take inspiration from distance metrics in \emph{trajectory space}. For those spaces, it is established that Euclidean metrics do not work well \cite{Ratliff–2009–10204}: they treat each waypoint along the trajectory as independent. When we want to change one waypoint along a trajectory $\xi$ from $\xi_t$ to a new configuration $q'$, the Euclidean metric would move that single waypoint to $q'$ and keep the rest of the trajectory the same. Call this new trajectory $\xi_I$. This seems intuitive at first, but what is special about trajectories is that consecutive waypoints are intrinsically not independent from each other -- they are coupled through \emph{time}. Prior work has shown that if we correlate consecutive waypoints in a non-Euclidean metric $M$, rather than treating them as independent, we end up with something different from $\xi_I$: we get a $\xi_M$ that not only has the $t^{th}$ waypoint shifted to $q'$, but also smoothly propagates that change to the rest of the trajectory \cite{DBLP:conf/icra/DraganMBS15}. $\xi_M$ is further from $\xi$ according to the Euclidean metric, because more waypoints change. However, according to $M$, $\xi_M$ is actually closer. A significant amount of trajectory optimization work utilizes metrics with these temporal correlations \cite{Ratliff–2009–10204,Kalakrishnan_RAIIC_2011,Schulman:2014:MPS:2675301.2675308}, which is one way of formalizing the pioneering concept of elastic bands \cite{DBLP:conf/icra/QuinlanK93}. Prior work has also explored learning the metric from demonstrations of what trajectories \emph{people} find more similar \cite{DBLP:conf/icra/DraganMBS15}. In this paper, we study whether some of the same ideas of correlation applied to metrics in the trajectory space also hold for metrics in the configuration space. In fact, prior work in generating natural motion \cite{gielniak_thomaz_2011} argues for cost functions with potentials between consecutive joints, which can be seen as correlating or anticorrelating consecutive joints in a non-Euclidean configuration space metric. Intuitively, this is akin to the trajectory metric, because joints are coupled too -- not through time, but through the kinematic chain. Similarly, penalizing movement of different joints differently might be helpful. \figref{fig:cover} illustrates this with an example. The robot starts in an initial configuration $q_s$ and needs to bend the elbow to $90^\circ$. The Euclidean metric outputs $q_I^*$ as the closest configuration to $q_s$ that satisfies this constraint.
But to us, humans, the configuration on the right, $q_M^*$, actually looks more similar to $q_s$. The Euclidean metric disagrees, but a non-Euclidean metric $M$ warps the space to push $q_I^*$ further away from $q_s$ and bring $q_M^*$ closer (\figref{fig:cover}, right). It does this by coupling the shoulder and the elbow: if the elbow moves to the right, it is cheaper for the shoulder to move to the left to \emph{compensate} than to stay still. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{cover_hong.pdf} \caption{A comparison between the solutions for the Euclidean metric $I$ and a non-Euclidean metric $M$ to the problem of finding the closest configuration to $q_s$ for which the elbow is at $90^\circ$. $M$ produces a configuration that looks visually more similar to $q_s$.} \label{fig:cover} \end{figure} \begin{figure*} \centering \includegraphics[height=2.5in, width=6.0in]{DiffMetrics.png} \caption{Different metrics lead to visually different solutions for the same task (here, reaching an end effector height).} \label{fig:7dof_difference} \vspace{-0.2cm} \end{figure*} In this paper we make two main contributions towards understanding configuration space metrics: \noindent\textbf{Understanding the effects of changing the metric.} We start with a 3DOF arm, which enables us to visualize the configuration space, and explore the effects of introducing joint correlations or making joints cheap or expensive. We describe these effects visually, and in doing so characterize ill-conditioned optimization problems that some metrics unfortunately induce. \noindent\textbf{Testing whether the Euclidean metric is right.} We then test with both a 3DOF and a 7DOF robot whether the Euclidean metric is the right one with respect to user preferences, under different criteria: producing natural-looking configurations, producing configurations that are visually similar to where the robot starts, and producing configurations that match what people would expect the robot to do. We collect data of user choices, and test how well the Euclidean metric explains these choices, comparing it to a learned metric designed to best fit the data. Our analysis looks at several tasks that involve varying constraints on the robot's end effector position. We find that tasks fall into two groups: 1) tasks where the Euclidean metric does well, and learning a metric only marginally helps fit real user data, and 2) tasks where the Euclidean metric does poorly. For the latter, the metrics that tend to explain user preferences better are similar across criteria, and are rather surprising. They penalize the elbow the most, and not necessarily the shoulder as we might intuit. For 3DOF arms, they correlate the shoulder and the wrist, and not consecutive joints. Overall, we see evidence that to produce more natural and predictable behavior from robots, we need to change the default understanding of distance in configuration space for certain types of tasks. \section{Problem Formulation} We start with an exploration of \emph{whether}, and if so, \emph{how} the choice of the configuration space (C-Space) metric affects what the robot does as a result of an optimization. The metric defines the robot's understanding of \emph{similarity} or \emph{distance} in its C-Space. Typically, planners optimize for efficiency, reasoning about the \emph{shortest} way to achieve the goal. When we change the definition of distance, we might change what this most efficient solution is.
Goal configurations that were further away may now be closer, and vice-versa, so the robot might choose to approach a given task differently when the notion of efficiency changes. When faced with a task, like reaching for an object, the robot needs to find a goal configuration that satisfies some constraint $c(q)=0$. For instance, its end effector might need to be at a particular position, or lie on some manifold in task space. In exploring the effects of different C-Space metrics, we solve constrained optimization problems of the form: \begin{equation} \begin{aligned} q^* =\ & \underset{q \in Q}{\argmin} & & \|q_s - q\|^2_M \\ & \text{subject to} & & c(q) = 0,\\ \end{aligned} \end{equation} where $Q$ is the space of robot configurations and $q_s$ is the starting configuration of the robot. Distance, the notion of closest, is defined via some metric $M$: \begin{equation} \|q_s - q\|^2_M = (q_s - q)^\top M (q_s - q). \end{equation} The choice of this metric influences the robot's decision, as we can already see in \figref{fig:7dof_difference} -- we explore this in more detail in the next section. The constraint here is very important in being able to analyze a metric -- without it, the solution would be $q_s$. Further, problems of this sort appear in motion planning, where we might be interested, for instance, in moving to the closest configuration that satisfies our task (e.g. a grasping configuration for an object). Which robot configuration is most suitable to plan towards, given our current configuration and our constraint? Formally, a metric is a $d$ by $d$ symmetric and positive definite matrix, where $d$ is the number of degrees of freedom the robot has (also the dimension of $Q$, the C-Space). The metric is a direct result of the inner product we use in the configuration space: \begin{equation} \langle q_1,q_2 \rangle=q_1^\top M q_2 \end{equation} $M$'s entries can be divided into two groups: diagonal and off-diagonal. These groups lead to two important concepts in understanding the effects of the metric. Diagonal entries lead to \textit{joint cost}, while off-diagonal entries lead to \textit{joint correlation}. Next, we will dissect the effects of altering \textit{joint cost} and \textit{joint correlation}. \begin{figure} \centering \includegraphics[width=.6\columnwidth]{Symmetry.pdf} \caption{The constraint manifold in C-Space exhibits point symmetry about its centroid. We see that this point symmetry in C-Space translates to reflective symmetry in workspace.} \label{fig:Symmetry} \end{figure} \section{The Effect of the Metric} We begin with an exploration of robot arms in a simplified, lower dimensional space. Specifically, we consider arms with 3 DOFs operating in a 2D world. The convenience of a 3 DOF arm is that we can visualize its configuration space and gain a better understanding of how the metric affects the robot's choice for how to solve the task. We can plot the feasible set of our constrained optimization problems and \textit{see} how different metrics project onto this set. \subsection{Preliminaries} \begin{figure} \centering \includegraphics[width=.8\columnwidth]{Prelim.pdf} \caption{Two sets of start configurations (black border), with their closest points on the manifold w.r.t. the Euclidean metric (gray border) and an arbitrary non-Euclidean metric (orange). We notice that when we project onto the ``inside'' of the manifold (left), we move the end effector closer to the robot base. On the other hand, when projecting onto the ``outside'' of the manifold (right), we move the end effector further away.} \label{fig:Preliminaries} \vspace{-0.2cm} \end{figure}
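Projections like the ones visualized in \figref{fig:Preliminaries} can be computed by solving the constrained problem above numerically. The sketch below does this for a 3-link planar arm with an end effector position constraint; the link lengths, start configuration, target, example metrics, and the use of an off-the-shelf SLSQP solver are illustrative assumptions, not our exact implementation.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

LINKS = np.array([1.0, 1.0, 0.5])    # hypothetical link lengths

def fk(q):
    # End effector position of a 3-link planar arm with joint angles q
    angles = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

def project(q_s, target, M):
    # Eq. (1): closest configuration to q_s under metric M, subject to
    # the end effector constraint c(q) = fk(q) - target = 0
    cost = lambda q: (q - q_s) @ M @ (q - q_s)
    cons = {'type': 'eq', 'fun': lambda q: fk(q) - target}
    return minimize(cost, q_s, constraints=[cons], method='SLSQP').x

q_s = np.array([0.3, 0.8, 0.2])
target = np.array([1.2, 1.0])
M_euclid = np.eye(3)
M_coupled = np.array([[1.0, 0.5, 0.0],   # off-diagonal shoulder-elbow term
                      [0.5, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
print(project(q_s, target, M_euclid))    # Euclidean projection
print(project(q_s, target, M_coupled))   # different solution, same constraint
\end{verbatim}

Running both calls with the same start and target makes the role of the metric concrete: the constraint set is identical, but the point selected on it changes with $M$.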
\noindent\textbf{End Effector Position Constraints.} In this work, we will look specifically at constraints on the robot's end effector position. In \figref{fig:Symmetry} we see the feasible set of such a constraint. It is a 1-manifold with point symmetry with respect to the manifold's centroid. One of the sides corresponds to configurations with the elbow less than $\pi$ (``left arm'' configurations), while the other corresponds to configurations with the elbow greater than $\pi$ (``right arm'' configurations). \figref{fig:Preliminaries} also shows that as we \emph{increase} the distance between the robot's base and the desired end effector position, the points on the manifold get \emph{closer} to the manifold's centroid. \noindent\textbf{Contraction vs. Expansion Tasks.} \figref{fig:Preliminaries} (left and right) demonstrates two instances of end effector position constraints. The first instance is a \emph{\textbf{contraction}} task. The starting configuration has the end effector further away from the base than the constraint requires. As a result, all the points on the manifold correspond to configurations in which the end effector would ``contract'' closer to the base. For such tasks, the starting configuration is on the ``inside'' of the manifold. The second instance is an \emph{\textbf{expansion}} task. The starting configuration has the end effector closer than it should be, and is on the ``outside'' of the manifold. We notice that in the contraction task, the choice of a metric (Euclidean in gray vs. non-Euclidean in orange) changes the projection point onto the manifold significantly. On the other hand, in the expansion task, the two metrics return nearly identical solutions. Next, we will explore how different metrics influence the optimal solutions, while bearing in mind that this might be different in expansions vs. contractions. \subsection{Joint Cost} Each diagonal term $M_{ii}$ specifies the cost incurred by moving joint $i$: when computing the squared norm of a difference in configurations $q_s-q$, i.e. $\|q_s-q\|^2_M$, $M_{ii}$ weighs the term $(q_s^{(i)}- q^{(i)})^2$, the squared displacement of joint $i$. The Euclidean metric weighs all joints equally. However, by breaking this symmetry, we can effectively encourage or discourage the movement of certain joints. The following sections will illustrate that our choice of diagonal weights has intuitive and significant effects on the optimal goal configuration. While it is possible that the Euclidean metric balances the joint costs in just the right way, it is also possible that to achieve our desired goal configurations for the robot, we may want to penalize different joints differently. \vspace{1em} \noindent\textbf{Cheap Joints.} A cheap joint $j$ is one for which $M_{jj} \ll M_{ii},\ i \ne j$. When $M_{jj} \ll M_{ii}$, motion along joint $j$ incurs negligible cost relative to motion along other joints. This has a simple effect: the robot moves joint $j$ more in order to spare motion in the other joints. As a result, when minimizing $\|q_s-q\|_M^2$ for a cheap joint metric $M$, we effectively reduce the $3$-dimensional norm minimization to a $2$-dimensional one, because cost in the cheap joint is negligible. \vspace{1em} \noindent\textit{Cheap Shoulder:} In \figref{fig:Intuitive}, the first column shows the effect of a cheap shoulder metric.
This metric (solution in orange) moves the shoulder significantly more than the Euclidean metric (gray) relative to the starting configuration (black). \figref{fig:CheapShoulder}(top) shows that this holds in general across many start configurations in contraction tasks. On the other hand, all expansion tasks tended to produce very similar results with the two metrics, as we saw before in the preliminaries. \begin{figure*} \centering \includegraphics[height=3.0in, width=7.0in]{Intuitive.pdf} \caption{Intuitive effects of different metrics on the solutions to end effector location constraints (red dots). Black border is the start configuration. Orange border is the solution of the metric shown on the bottom row. Gray border is the Euclidean metric's solution. Green tiles denote positive correlation and red ones negative.} \label{fig:Intuitive} \vspace{-0.2cm} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{CheapShoulder.pdf} \caption{The effects of a cheap shoulder metric on contraction tasks (top). While this metric induces more shoulder movement, it as a result also involves less wrist movement. Meanwhile, for expansion tasks, we see no distinction between the Euclidean metric's solutions and the cheap shoulder's (bottom).} \label{fig:CheapShoulder} \vspace{-0.2cm} \end{figure} \vspace{1em} \noindent\textit{Cheap Elbow:} The second column in \figref{fig:Intuitive} shows an example comparing a cheap elbow metric and the Euclidean one. Again, the elbow moves more to reduce motion in the wrist and shoulder. For this metric too, our analysis revealed that expansion tasks led to smaller differences. For contraction tasks, differences were only large when the wrist was close to $\pi$, as in the example from \figref{fig:Intuitive}. \figref{fig:7dof_difference} (second column) shows a cheap elbow metric on a 7DOF arm (where the constraint is reaching an end effector height). Compared to the Euclidean metric (first column), we again see that the robot moves its elbow more in order to reduce movement in the shoulder and the wrist. \vspace{1em} \noindent\textit{Cheap Wrist:} A cheap wrist solution is shown in \figref{fig:Intuitive} (third column). With this metric, the shape of the manifold is such that many starting points end up being projected to the same two configurations (\figref{fig:Problems-singluarity}, left). Looking at only the left side of the manifold, all configurations in the red shaded region project to the configuration corresponding to the maximum elbow joint value and the minimum shoulder joint value. Such a point exists with cheap wrist metrics because they create manifolds in which the point with the maximum elbow joint value coincides with the point with the minimum shoulder joint value. \figref{fig:Problems-singluarity}(right) depicts why: the solid arm is the configuration with the maximum elbow value, and reducing the elbow value only increases (and cannot decrease) the shoulder value, thus corresponding to the minimum shoulder value as well. \vspace{1em} \noindent\textbf{Expensive Joints.} A joint $j$ is expensive when $M_{jj} \gg M_{ii},\ i\ne j$. In this case, the robot moves joint $j$ as little as possible. While cheap joints reduced a $3D$ distance minimization to a $2D$ minimization, expensive joints reduce $3D$ to $1D$. \vspace{1em} \noindent\textit{Expensive Shoulder:} In \figref{fig:Intuitive} (4th column) we see an intuitive instance of the expensive shoulder metric.
The Euclidean metric moves the shoulder a considerable amount to reach the end effector location, but the expensive shoulder metric barely moves it at all. For contraction tasks, when we minimize just over this one dimension, we unfortunately experience ill-conditioned behavior. We define \textit{ill-conditioning} as a scenario in which the lowest-cost sublevel set defined by the metric for a given starting configuration is disconnected. \figref{fig:Problems-instability} illustrates this phenomenon by encoding distance to the manifold in a heat-map (cyan meaning close, lavender meaning far). The cyan regions are at completely opposite ends of the manifold. This behavior occurs because we are minimizing distance in the shoulder dimension while our manifold has reflective symmetry across the shoulder joint, always giving us two equidistant solutions. \begin{figure} \centering \includegraphics[width=.75\columnwidth]{Singularity.pdf} \caption{(Left) We see that all starting configurations in the red volumes map to their unique solution (red dot). (Note volume, not area, because of the wrist dimension.) In the left red volume, this is because the point on the manifold with the maximum elbow value coincides with the point of minimum shoulder value, and vice versa for the right red volume.} \label{fig:Problems-singluarity} \end{figure} \begin{figure} \centering \includegraphics[width=1.4in]{Instability.pdf} \caption{The above metric induces ill-conditioning. The optimal cyan regions are disjoint, causing small changes in the start configuration (black dots) to drastically change the solutions (orange dots) on the manifold.} \label{fig:Problems-instability} \vspace{-0.2cm} \end{figure} As a result, two nearly identical starting configurations map to two very distinct solutions. Note that ill-conditioning only appears in contraction tasks, and not in expansion tasks. \vspace{1em} \noindent\textit{Expensive Elbow:} Expensive elbow metrics minimize movement in the elbow. \figref{fig:Intuitive} (5th column) gives a simple example. \figref{fig:7dof_difference} (3rd column) shows the same for the 7DOF robot -- compared to Euclidean, the elbow barely moves, whereas the shoulder and especially the wrist move a lot more. However, in general, expensive elbow metrics lead to a singularity in the optimization in both contraction and expansion tasks for the 3DOF robot. We notice that an expensive elbow metric behaves identically to a cheap wrist metric for these tasks. For contraction tasks this is intuitive: to contract an arm, the robot must reel in its elbow joint. There must exist a unique configuration that minimizes the amount we adjust the elbow while still allowing the robot's end effector to reach the desired location. This one configuration is shown in red in \figref{fig:Problems-singluarity}. The same follows for expansion tasks. \vspace{1em} \noindent\textit{Expensive Wrist:} Expensive wrist metrics are averse to moving the wrist. \figref{fig:Intuitive} (6th column) demonstrates such a scenario. In general, such a metric is better behaved, in the sense that different starting configurations project to different solutions if they have a different wrist value. \subsection{Joint Correlation} The off-diagonal term $M_{ij}$ specifies the correlation between joints $i$ and $j$, because it weighs the term $\Delta q_i\Delta q_j$ (and $\Delta q_j\Delta q_i$), where $\Delta q_i = q_s^{(i)}-q^{(i)}$. The Euclidean metric has no correlation between joints.
However, by applying correlations, we can encourage certain joints to move together. If $M_{ij}$ is negative, the robot is incentivized to move joints $i$ and $j$ together in the same direction. On the other hand, if $M_{ij}$ is positive, the robot prefers moving joints $i$ and $j$ together in opposite directions. From a biological standpoint, human arms exhibit a degree of coupling in muscles/joints, so we might expect natural metrics to be non-Euclidean \cite{10.1007/978-3-642-02809-0_9, 10.1371/journal.pone.0164050}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Correlate.pdf} \caption{Joint correlation creates directions of low cost in C-Space. The green arrows denote this low cost direction, and as a result, the projections run approximately parallel to the green arrows.} \label{fig:Correlation} \vspace{-0.2cm} \end{figure} \figref{fig:Intuitive} (columns 7 through 9) shows examples of correlations: positive shoulder-elbow, positive elbow-wrist, and negative elbow-wrist. In each situation, the joints move together in opposite or in the same directions, while with the Euclidean metric they move independently. \figref{fig:7dof_difference} also shows a positive (4th column) and a negative (last column) correlation between shoulder yaw and elbow. Compared to Euclidean, the positive correlation ends up moving the shoulder clockwise, because the elbow has moved counterclockwise. This is a prime example where there is clearly more total motion than with the Euclidean metric (by definition), and yet the resulting configuration would believably look much more natural to some users. \figref{fig:Correlation} depicts the projections onto the manifold for correlations. Correlations induce an axis along which movement is cheap, and projections (orange) tend to run parallel to this axis compared to Euclidean (gray). \section{Is Euclidean the Right Metric?} In the previous section, we saw that different metrics can lead to different outcomes, especially for contraction tasks. This raises the question of which metric a robot should optimize over. The answer depends on what our objective is. If we want spatial efficiency, the Euclidean metric is an obvious choice. But what if we want the robot to move more naturally? Or predictably? We investigate the fit of the Euclidean metric as the answer to such questions. We do so by leveraging learning as a tool: we ask people which configurations they find more natural, visually similar to the starting configuration, predictable, etc. We then learn a metric that agrees with their answers, analyze its characteristics, and compare it to the Euclidean metric. If the Euclidean metric agrees with user answers almost as well as the learned metric, that suggests that Euclidean is actually a good choice. Otherwise, we may need to reconsider our notion of efficiency when operating around people. \subsection{Learning a Metric} Now that we have a general intuition of C-Space metrics, we can consider learning metrics for different criteria (for example: naturalness, predictability or visual similarity). To learn a metric, we use preference-based learning: users provide their preferred solutions among several alternatives, and we use their answers as evidence about the metric underlying their choices. We chose preference-based learning for our analysis because it is actually feasible to collect preferences from end users.
In contrast, asking users to demonstrate the solution is not only more burdensome, but configuration spaces are counterintuitive enough to make figuring out the desired configuration difficult, if not impossible. With preference-based learning, our framework consists of $n$ multiple choice questions $Q_1, Q_2, ..., Q_n$. Each question consists of a robot in its starting configuration $q^{(s)}_{i}$ followed by $m$ feasible goal configurations that make up the answer choices: $Q_i = \{q^{(s)}_{i}, q^{(1)}_{i}, q^{(2)}_{i}, ..., q^{(m)}_{i}\}$. We ask users to select the answer choice that best fits a given objective, like naturalness. Of course, not every person will select the same answer. When we aggregate responses across multiple users, we produce a distribution $f(Q_i)$ over answer choices for each question. This distribution reflects both which solutions users preferred more and also by how much they preferred them over others. The metric should produce shorter distances to more popular solutions. Furthermore, the difference in distance should be larger if the subjects greatly favor one solution over another. With this in mind, we fit a metric that minimizes the Kullback--Leibler divergence between the distribution induced by user answers and the one induced by the metric: \begin{equation} \begin{aligned} M^* =\ & \underset{M \in S^{++}}{\argmin} & \sum_{i=1}^{n} D_{KL}[f(Q_i) \,\|\, \sigma(M, q^{(s)}_i)] \\ \end{aligned} \end{equation} where \begin{equation} \sigma(M, q^{(s)}_i)_j = \frac{e^{-\|q^{(s)}_{i} - q^{(j)}_i\|_M^2}}{\sum_{j=1}^{m} e^{-\|q^{(s)}_{i} - q^{(j)}_{i}\|_M^2}} \end{equation} is the softmax over negative squared distances defined by metric $M$ \cite{conf/aaai/ZiebartMBD08, bradley_terry_1952, luce_2005}. With this optimization, we learn a metric that attributes low cost to more frequently selected answer choices while keeping answer choices of similar frequency at the same distance. To solve this optimization, we perform gradient descent on a matrix variable $M$. Since the softmax is sensitive to the scaling of $M$, we apply the constraint $\|M\|_F = 1.0$ on top of positive definiteness. The matrix variable $M$ can be thought of as $6$ scalar variables $M_{11}, M_{22}, M_{33}, M_{12}, M_{13}, M_{23}$ which represent the entries of matrix $M$. The quadratic form $\|q^{(s)}-q^{(j)}\|^2_{M}$ is therefore linear in these variables $M_{ab}$.
Expanding the KL divergence of the softmax distribution, with $d^{(k)}_{i} = \|q^{(s)}_{i} - q^{(k)}_{i}\|_M^2$, results in: \begin{equation} \begin{aligned} &\sum_{i=1}^{n} \sum_{j=1}^{m} f(Q_i)_j \bigg[\log \Big(\frac{f(Q_i)_j\sum_{k=1}^{m} e^{-d^{(k)}_{i}}}{e^{-d^{(j)}_i}}\Big)\bigg]\\ = &\sum_{i=1}^{n} \sum_{j=1}^{m} f(Q_i)_j\big[\log f(Q_i)_j + LSE(-d^{(1)}_{i}, ..., -d^{(m)}_{i}) + d^{(j)}_{i}\big]\\ \end{aligned} \end{equation} where $LSE$ denotes the log-sum-exp function. \begin{table} \vspace{0.4cm} \caption{KL Divergence for Euclidean and Learned Metric} \label{tab:kl} \centering \begin{tabular}{ccc} \toprule & Euclidean & Learned\\ \midrule Naturalness Contraction & 8.531 & 2.090 \\ Naturalness Expansion & 3.840 & 1.901 \\ \midrule Similarity Contraction & 12.026 & 2.520 \\ Similarity Expansion & 2.667 & 1.664 \\ \midrule Closeness Contraction & 13.224 & 2.141 \\ Closeness Expansion & 2.699 & 1.969 \\ \midrule Predictability Contraction & 6.768 & 1.595 \\ Predictability Expansion & 2.320 & 1.668 \\ \bottomrule \end{tabular} \vspace{-0.4cm} \end{table} The $\log f(Q_i)_j$ terms are constant in $M$; the remainder is a non-negative linear combination of convex functions, so the overall objective is convex. \subsection{Data} \noindent\textbf{Queries.} We systematically generated the multiple choice queries. Firstly, we disallowed the elbow joint to ``flip'' (cross $\pi$) while traveling from start to goal. Afterwards, we applied wrist joint limits and eliminated self collisions. Finally, we sorted the configurations by increasing wrist value and picked $m$ points uniformly across this sorted set. The uniform selection gives us high chances of providing at least one good solution while keeping the solutions different enough from each other. Each multiple choice question had $1$ image of a robot starting configuration along with $4$ images as answer choices generated from the strategy described above. Along with the robot, each image had a red dot specifying the location that the robot end effector must reach. There were 36 questions (18 contractions, 18 expansions). \noindent\textbf{Subjects.} Each participant answered all 36 multiple choice questions. We recruited $23$ participants (mean age of 34, 48\% female) via Amazon Mechanical Turk. All participants were from the United States and had a minimum approval rating of $95\%$. \noindent\textbf{Criteria for Metrics.} We were interested in learning metrics for various human preferences, so we asked for 4 separate answers for each query. \begin{enumerate} \item \emph{\textbf{Naturalness: }}In which answer choice does the robot look most \textbf{natural}? \item \emph{\textbf{Visual similarity: }}In which answer choice does the robot look most \textbf{visually similar} to the start position? \item \emph{\textbf{Closeness: }}In which answer choice is the robot \textbf{closest} to the start position? \item \emph{\textbf{Predictability: }}In which answer choice does the robot move how you would \textbf{expect} given the start configuration and red dot? \end{enumerate} Each participant answered all $4$ subparts of each question, and we used their responses to optimize $4$ separate metrics. \subsection{Analysis} With the user data, we learned an expansion and a contraction metric for each of the 4 criteria. \vspace{1em} \noindent\textbf{Euclidean better for expansion.} Table \ref{tab:kl} compares the KL divergence of the learned metrics with that of the Euclidean metric.
We first notice that across all 4 criteria, the Euclidean metric's KL divergence is significantly lower for expansion tasks than contraction ones. This suggests that the Euclidean metric is a better fit for expansion tasks, and that perhaps other metrics might be more suitable for contraction tasks. \begin{figure} \centering \includegraphics[width=.8\columnwidth]{LearnResults.pdf} \caption{The effects of our learned contraction metric (orange). Compared to the Euclidean metric (gray), the learned metric produces a much more natural-looking solution. Participants also found this configuration more visually similar, close, and predictable. } \label{fig:LearnResults} \vspace{-0.2cm} \end{figure} This is in line with our analysis from the previous section. Firstly, expansion tasks have a smaller feasible solution set than contraction tasks. There are fewer ways to reach for a faraway object than to reach for a close one. In order to reach faraway objects, our wrist and elbow joint angles must be relatively straight. We can observe this by comparing the solution sets of expansion and contraction tasks in \figref{fig:CheapShoulder}. The lower subfigure displays an expansion task's solution set, which 1) is more compact than its contraction counterpart and 2) avoids joint limits in all $3$ joints. Solutions near joint limits are almost always unnatural-looking. As a result, they are likely not the solutions that users would predict the robot to move to, or consider ``visually similar'' to more natural-looking start configurations. \vspace{1em} \noindent\textbf{Learned metrics for contraction.} Across all 4 criteria, the metrics learned for contraction tasks were nearly indistinguishable, and all reached much lower KL divergence than the Euclidean metric. Key features of these metrics were: \begin{itemize} \item expensive elbow joint, \item very strong ($\sim$0.99) positive correlation of shoulder and wrist, \item moderate ($\sim$0.50) positive correlation of shoulder and elbow, and moderate ($\sim$0.50) positive correlation of the elbow and wrist. \end{itemize} \figref{fig:LearnResults} illustrates the effects of this metric. We notice the significant effect of the shoulder/wrist positive correlation. The orange-bordered configuration looks especially more natural. The Euclidean metric in comparison maps to an uncomfortable contracted position that users probably disliked. \vspace{1em} \noindent\textbf{Learned metrics for expansion.} Across all 4 criteria, the learned metrics were different from Euclidean, but only fit the user data slightly better. Interestingly, different criteria led to different metrics. \vspace{1em} \noindent\textit{Expansion Naturalness Learned Metric:} The learned naturalness metric had: \begin{itemize} \item a small amount ($\sim$0.3) of positive correlation along shoulder and elbow, \item a small amount ($\sim$0.3) of positive correlation along elbow and wrist, \item a small amount ($\sim-0.25$) of negative correlation along shoulder and wrist, \item an expensive elbow, neutral shoulder, and cheap wrist. \end{itemize} The magnitude of workspace change is noticeably smaller for expansion tasks than contraction ones. This agrees with the observation that the Euclidean metric's KL divergence was significantly lower for expansion tasks. However, among the notable differences was a concerted effort to keep the wrist angle near $\pi$ while not in a singularity. This is understandable for a naturalness metric because the wrist is often perfectly straight when reaching for distant locations.
The absence of a singularity suggests that while users want the wrist to be straight, they prefer that the goal configuration, to a degree, resembles the start configuration. \vspace{1em} \noindent\textit{Expansion Visual Similarity and Closeness.} Visual Similarity and Closeness learning converged to nearly identical metrics. Some notable characteristics were: \begin{itemize} \item strong ($\sim$.97) positive correlation between the shoulder and elbow, \item moderate ($\sim$.4) positive correlation between the shoulder and wrist, \item negligible correlation between the elbow and wrist, \item and a cheap shoulder (by an order of magnitude). \end{itemize} \vspace{1em} \noindent\textit{Expansion Predictability:} The predictability metric had: \begin{itemize} \item moderate ($\sim$.70) positive correlation between the shoulder and elbow, \item moderate ($\sim$.65) negative correlation between the shoulder and wrist, \item an expensive elbow, neutral shoulder, and cheap wrist. \end{itemize} The Euclidean metric produced solutions with a larger spread, whereas the learned metric resisted spread, generating solutions with wrist value $\sim\pi$. \vspace{1em} \noindent\textbf{Summary.} Overall, the Euclidean metric did not seem like the best fit for contraction tasks, where the learned metric consistently ended up with an expensive elbow and a strong correlation of the shoulder and wrist. We had expected the shoulder to be most expensive, since it is higher up the kinematic chain (instead it was the elbow), and we had expected consecutive joints to correlate (instead they only had moderate correlations, the strongest being shoulder--wrist). We find this important because it teaches us not only that we cannot necessarily rely on the Euclidean metric as a default, but also that users might contradict our intuition. \section{Metrics in 7DOFs for the Jaco 7} Scaling up to 7DOF arms in 3D task space, we can learn metrics that encode even more structure. \figref{fig:Learned7DOF} illustrates a fascinating behavior that a non-Euclidean metric learned. Instead of constraining its motion to a plane and actuating shoulder pitch, elbow, and wrist pitch to reach the book in the shelf, the metric (orange) learned to use the wrist roll to rotate the wrist and reach the book. As humans, we use this motion all the time when picking up or reaching for objects. (Imagine picking fruit from a tree: while your hand may start down by your side, you will flip the orientation of your wrist/forearm to eventually grab the fruit with your palm facing you.) This human-like motion is rarely replicated by a Euclidean metric (gray). \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{Learned7DOF.pdf} \caption{The robot is situated at a starting configuration (black). It must reach the book in the shelf. Notice that with the learned metric (orange), the robot learned to rotate the wrist roll joint to reach the book. This is a motion we as humans take freely. The Euclidean metric (gray) makes no use of these auxiliary joints and, as a result, is resigned to stiff, robotic solutions.} \label{fig:Learned7DOF} \end{figure} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[height=1.in, width=1.in]{Fig4_1.pdf} & \includegraphics[height=1.in, width=1.in]{Fig4_2.pdf} \\ \includegraphics[height=1.in, width=1.in]{Fig4_3.pdf} & \includegraphics[height=1.in, width=1.in]{Fig5_4.png} \\ \end{tabular} \caption{Results of satisfying an elbow $90^{\circ}$ constraint on 3 different starting configurations, with a metric that positively correlates the shoulder and elbow joints.
While 1) and 2) are mirror images of each other, their solutions are not. In 3), the shoulder actuates as the elbow moves to $90^{\circ}$, moving the arm out of the page. With this fixed positive correlation, the robot's behavior does not generalize well across different starting configurations.}\label{fig:constantproblems} \vspace{-0.2cm} \end{figure} \subsection{Metrics and Correlations} While Gielniak and Thomaz \cite{gielniak_thomaz_2011} and Todorov and Jordan \cite{todorov_jordan_2002} posit that spatially correlating the joints of a robot will lead to more human-like robot motion, we find that for robot arms (as opposed to humanoids), this spatial correlation isn't robust across different starting configurations. When we examine correlation in 7-dimensional C-Space, we find that joint correlations cannot be agnostic to the start configuration. \figref{fig:constantproblems} demonstrates problems that arise from \emph{\textbf{positive}} correlation between joints $1$ (shoulder pitch) and $3$ (elbow). From the starting configuration (transparent) in \figref{fig:constantproblems} (1), with shoulder yaw $=0$, actuating the elbow moves the shoulder in a direction that looks natural. However, if we apply this same metric to the starting configuration in \figref{fig:constantproblems} (2), with shoulder yaw $=\pi$, the orientations of the joints become reversed. Decreasing joint $3$'s value now moves the forearm counterclockwise instead of clockwise, while joint $1$ retains its orientation. Visually, \figref{fig:constantproblems} (2) is the mirror image of \figref{fig:constantproblems} (1), so we would expect it to produce a mirrored end configuration, but this is not the case. The orientation flip now requires a \emph{\textbf{negative}} correlation to produce the natural mirrored solution. Worse yet, \textit{adversarial} starting configurations like \figref{fig:constantproblems} (3), with shoulder yaw $=\frac{\pi}{2}$, can produce even more undesired joint coupling. Here, the rotational axis of joint $1$ is orthogonal to that of joint $3$. Moving joint $3$ will also move joint $1$ through the correlation, but the end result is nothing like the natural result of \figref{fig:constantproblems} (1). We wanted positive correlation in \figref{fig:constantproblems} (1) because it produced a natural robot configuration, but it came at the cost of corrupting the behavior in other starting configurations. We cannot rely on fixed \textit{joint correlation} terms in 7D space to be robust across all starting configurations. \subsection{Learned Metrics} We learn a diagonal metric in 7 dimensions using the same learning algorithm employed in the 3-dimensional case. We ignore correlations in this analysis because of our finding from above -- even so, the analysis should tell us whether Euclidean joint costs are appropriate. We again learn separate metrics for contraction and expansion tasks, but now we dress up the end effector location constraint in a real-world scenario. The contraction task is disguised as the robot bringing a book from a bookshelf closer to the robot base. The expansion task is shown in \figref{fig:Learned7DOF}: the robot reaching for a book in the bookshelf. To collect data, we queried 20 participants (mean age 33, 30$\%$ female) via Amazon Mechanical Turk. All participants were from the United States and had a minimum approval rating of $95\%$. Each subject answered 36 questions (18 contractions, 18 expansions). We used the same 4 criteria from the 3DOF experiments.
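As in the 3DOF case, fitting the diagonal metric reduces to minimizing the convex objective sketched earlier. A minimal illustration (again ours; it assumes the \texttt{objective} function and the query data \texttt{starts}, \texttt{choices}, \texttt{freqs} from the earlier sketch, now with $7$ joints) uses an off-the-shelf bounded quasi-Newton solver, with nonnegativity bounds keeping $\mathrm{diag}(w)$ positive semidefinite:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

dof = 7                                      # one weight per Jaco joint
res = minimize(objective, x0=np.ones(dof),
               args=(starts, choices, freqs),
               method="L-BFGS-B",
               bounds=[(0.0, None)] * dof)   # w >= 0 keeps diag(w) PSD
w_learned = res.x                            # learned diagonal weights
\end{verbatim}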
\noindent\textit{Euclidean Better for Contraction.} From Table \ref{tab:kl_7dof}, we notice that in 7DOFs, \emph{\textbf{contractions}} perform significantly better with the Euclidean metric than \emph{\textbf{expansions}}. This is contrary to what occurred with 3DOFs. \begin{table} \vspace{0.45cm} \caption{KL Divergence for the Euclidean and Learned Metrics (7DOF)} \label{tab:kl_7dof} \centering \begin{tabular}{ccc} \toprule & Euclidean & Learned\\ \midrule Naturalness Contraction & 5.08 & 3.55 \\ Naturalness Expansion & 11.26 & 2.22 \\ \midrule Similarity Contraction & 4.73 & 2.65 \\ Similarity Expansion & 12.06 & 2.76 \\ \midrule Closeness Contraction & 5.61 & 2.79 \\ Closeness Expansion & 11.56 & 2.43 \\ \midrule Predictability Contraction & 4.89 & 2.93 \\ Predictability Expansion & 11.50 & 3.22 \\ \bottomrule \end{tabular} \vspace{-0.4cm} \end{table} \noindent\textit{Learned Metrics for Expansion.} The learned expansion metric for all 4 criteria included an expensive joint $3$ (elbow). It is interesting that across both the 3DOF and 7DOF cases, expansion metrics consistently prefer an expensive elbow joint. Additionally, shoulder roll, shoulder yaw, and wrist roll were all very cheap. This allowed the learned metric to perform the wrist flip in \figref{fig:Learned7DOF}. \noindent\textit{Learned Metrics for Contraction.} For contractions, the learned metrics consistently had expensive shoulder pitch and yaw. This is the intuitive result, because motion in the shoulder moves the entire arm more than motion in other joints (e.g., the elbow or wrist) does. Moving the shoulder would lead to robot configurations that users would find less visually similar and farther away from the start configuration. \section{Summary of Findings} Overall, contraction and expansion tasks tend to determine how good a fit the Euclidean metric is. For 3DOF arms, expansion tasks are well fit by the Euclidean metric. Contraction tasks in 3DOFs require tuning to match human preferences, specifically an expensive elbow with strong positive shoulder--wrist correlation. In 7DOFs, we neglected correlations for better robustness and found that the Euclidean metric performed well on contraction tasks. After learning a metric from 7DOF contraction tasks, we recovered the intuitive expensive-shoulder metric. 7DOF expansion tasks needed learning and resulted in an expensive elbow joint, as in the 3DOF case. Lastly, across these robots and tasks, we consistently saw an expensive elbow cost. From all this, we have reason to believe that for robots to act naturally and predictably, their notion of distance should, on some tasks, be more sophisticated than the Euclidean metric in C-Space. In the future, we hope to conduct experiments demonstrating the merits of different metrics when integrated into various motion planning algorithms, e.g., TrajOpt and RRT~\cite{Lavalle98rapidly-exploringrandom}. \section*{Acknowledgments} This research was supported by funding from the AFOSR and NSF CAREER Award. \bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Introduction} Measured geodesic laminations were introduced by Thurston in the late 1970s and since then they have played a fundamental role in low-dimensional topology and geometry.\par Given a hyperbolic surface $F$, whose topological support $S$ is a closed orientable surface of genus $g\geq 2$, we will denote by ${\mathcal M}{\mathcal L}(F)$ the space of measured geodesic laminations on $F$. We just mention some important facts about that space, referring to Section~\ref{sec1} for some details.\par - A natural \emph{action} of $\mathbb{R}_{>0}$ on ${\mathcal M}{\mathcal L}(F)$ is defined by setting $t\lambda$ to be the lamination with the same support as $\lambda$ and such that the $t\lambda$-total mass of any transverse arc is equal to the $\lambda$-mass multiplied by $t$.\par - Every measured geodesic lamination $\lambda$ induces a non-negative function $\iota_\lambda$ on the space ${\mathcal C}_F$ of closed geodesics on $F$ by setting $\iota_\lambda(C)$ equal to the $\lambda$-mass of $C$. In this way we obtain a map ${\mathcal M}{\mathcal L}(F)\rightarrow\mathbb{R}_{\geq 0}^{{\mathcal C}_F}$. Such a map is injective and we will consider on ${\mathcal M}{\mathcal L}(F)$ the topology induced by $\mathbb{R}_{\geq 0}^{{\mathcal C}_F}$.\par - An important fact is that there exists a topological description of this space involving only the topology of $S$. It is possible to define a canonical identification between ${\mathcal M}{\mathcal L}(F)$ and the space ${\mathcal M}{\mathcal F}(S)$ of measured foliations on $S$, which in turn is homeomorphic to $\mathbb{R}^{6g-6}$.\par If ${\mathcal T}_g$ denotes the Teichm\"uller space of $S$ we can consider the set \[ {\mathcal T}_g\times{\mathcal M}{\mathcal L}_g=\bigcup_{[F]\in{\mathcal T}_g}{\mathcal M}{\mathcal L}(F) \] By the previous facts it follows that ${\mathcal T}_g\times{\mathcal M}{\mathcal L}_g$ is \emph{a trivial fiber bundle} over ${\mathcal T}_g$ with fiber equal to $\mathbb{R}^{6g-6}$.\\ Measured geodesic laminations are deeply involved in many contexts in low-dimensional topology and geometry. In this paper we will focus on some applications of measured geodesic laminations. We will see that in each context a natural homeomorphism arises between ${\mathcal M}{\mathcal L}(F)$ and a real vector space of dimension $6g-6$. Moreover this homeomorphism preserves the multiplication by positive numbers. We will be interested in studying the linear structures on ${\mathcal M}{\mathcal L}(F)$ obtained by such homeomorphisms. We will see that even if they arise in different frameworks, the linear structures induced on ${\mathcal M}{\mathcal L}(F)$ coincide (so ${\mathcal M}{\mathcal L}(F)$ is equipped with a well-defined linear structure). First let us introduce the applications of measured geodesic laminations we will deal with. \medskip\par 1) The first one is \emph{earthquake theory}. Given a measured geodesic lamination $\lambda$ on a hyperbolic surface $F$ the (left or right) earthquake on $F$ along $\lambda$ is a way to produce a new hyperbolic structure $E_\lambda(F)$ on $S$. This construction was pointed out by Thurston \cite{Thurston:earth} and in a sense is a generalization of the Dehn twist action on ${\mathcal T}_g$. An important result due to Thurston is that given any pair of hyperbolic structures $(F,F')$ there exists a unique (left) earthquake on $F$ relating them.
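For instance, the local model of the earthquake along a single weighted geodesic is completely explicit (a standard picture; the left/right distinction depends on an orientation convention). Lifting to the upper half-plane model and taking the fault geodesic to be the imaginary axis, the earthquake with weight $a$ fixes one complementary half-plane pointwise and moves the other by \[ z\longmapsto e^{a}z\,, \] the hyperbolic translation of length $a$ whose axis is the fault geodesic.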
\medskip\par 2) The second application occurs in Thurston's parameterization of the space of \emph{projective structures} on $S$. A projective structure is a maximal $(\mathbb{C}\mathbb P^1, PSL(2,\mathbb{C}))$-atlas. Thurston pointed out a geometric construction associating to every hyperbolic structure $F$ equipped with a measured geodesic lamination $\lambda$ a projective structure $Gr_\lambda(F)$, called \emph{the grafting} of $F$ along $\lambda$ (see \cite{Thurston, EpMa, KulPin, McMullen} for details). Moreover the map \[ {\mathcal T}_g\times{\mathcal M}{\mathcal L}_g\ni (F,\lambda)\mapsto Gr_\lambda(F)\in {\mathcal P}(S) \] turns out to be a homeomorphism between ${\mathcal T}_g\times{\mathcal M}{\mathcal L}_g$ and the space of equivalence classes of (marked) projective structures. \medskip\par 3) An important application of measured geodesic laminations occurs in $(2+1)$-Lorentzian geometry. Given any $\kappa\in\{0,\pm 1\}$ Mess \cite{Mess} pointed out an explicit construction associating to every hyperbolic surface $F$ equipped with a measured geodesic lamination $\lambda$ \emph{a maximal spacetime} $Y_\kappa(F,\lambda)$ with \emph{constant curvature} equal to $\kappa$ and a Cauchy surface diffeomorphic to $S$. Moreover he proved that for $\kappa\in\{0,-1\}$ his construction furnishes a parameterization of maximal spacetimes with constant curvature equal to $\kappa$ and Cauchy surface diffeomorphic to $S$. An analogous statement was proved by Scannell \cite{Scannell} for the case $\kappa=1$. Hence ${\mathcal T}_g \times {\mathcal M}{\mathcal L}_g$ arises as the fundamental structure encoding a priori rather different geometric objects. A clean geometric explanation of this pervasive role of ${\mathcal T}_g \times {\mathcal M}{\mathcal L}_g$ was recently furnished in~\cite{BenBon} by means of a general Wick rotation-rescaling theory.\\ In the present paper we focus on the fact that each of the above applications leads to a natural linear structure on ${\mathcal M}{\mathcal L}(F)$. Our aim is to investigate these linear structures. In particular we would like to give a {\it geometric description of the sum} of two measured geodesic laminations. We will show that this is actually a quite difficult task. For, although we can give a description of ${\mathcal M}{\mathcal L}(F)$ in purely topological terms (for instance by considering the atlas of ${\mathcal M}{\mathcal L}(F)$ given by train-tracks), the sum {\it heavily depends on the given hyperbolic structure on $S$}. This fact already arises in the simplest non-trivial case of two weighted simple closed geodesics that meet each other at one point. Even in this simplest case the determination of the sum lamination is not trivial at all. \begin{remark}\emph{ We can consider the set of} quadratic differentials \emph{${\mathcal Q}(F)$ with respect to the conformal structure induced by the hyperbolic metric on $F$. Every quadratic differential induces a horizontal foliation on $S$, which in turn corresponds to a measured geodesic lamination on $F$. It is well-known that such a correspondence yields an identification of ${\mathcal M}{\mathcal L}(F)$ with ${\mathcal Q}(F)$ \cite{Masur}. Notice that such a correspondence does not preserve the multiplication by positive numbers (if $\lambda$ is the lamination corresponding to $\omega$ then the lamination corresponding to $t\omega$ is $t^{1/2}\lambda$). So we will not deal with the linear structure on ${\mathcal M}{\mathcal L}(F)$ arising from this identification.
}\end{remark} Let us briefly describe the contents of this paper. In the first section we give a brief sketch of the constructions described above and then we explain how a linear structure can be associated with ${\mathcal M}{\mathcal L}(F)$.\\ In the second section we prove that the linear structures corresponding to the different constructions actually coincide, so that ${\mathcal M}{\mathcal L}(F)$ is equipped with a canonical linear structure. Let us remark, however, that the topological identification between ${\mathcal M}{\mathcal L}(F)$ and ${\mathcal M}{\mathcal L}(F')$ obtained through the canonical identification of ${\mathcal M}{\mathcal L}(F)$ (and ${\mathcal M}{\mathcal L}(F')$) with ${\mathcal M}{\mathcal F}(S)$ \emph{is not linear} with respect to those structures. Hence the linear structure on ${\mathcal M}{\mathcal L}(F)$ does depend on the geometry of $F$.\\ In the third section we will deal with the problem of the sum of two measured geodesic laminations. We will provide two partial results: \medskip\par 1) We will show that the set of laminations not intersecting a surface with geodesic boundary $F'$ embedded in $F$ is a linear subspace of ${\mathcal M}{\mathcal L}(F)$; \medskip\par 2) Given two weighted simple geodesics $(C,c),(D,d)$ intersecting each other only in one point we will construct a sequence of weighted simple curves $$(A_n,a_n), (C_n,c_n), (D_n,d_n)$$ such that \[ (C,c)+(D,d)=(A_n,a_n)+(C_n, c_n)+(D_n,d_n). \] Moreover $A_n$ is disjoint from $C_n$ and $D_n$, whereas $C_n$ and $D_n$ meet each other in one point. The sequence is constructed recursively. If for some $n$ we have $d_n=0$ then the process ends and the sum lamination is the simplicial lamination given by the union of $(A_n,a_n)$ and $(C_n, c_n)$; otherwise every term of the sum converges to a measured geodesic lamination. In particular $(A_n, a_n)$ tends to a weighted curve $(A_\infty, a_\infty)$ whereas the other terms tend to non-simplicial measured geodesic laminations $\lambda_\infty, \lambda'_\infty$ that are disjoint. Thus the sum lamination is the union of $(A_\infty, a_\infty)$, $\lambda_\infty$ and $\lambda'_\infty$. \section{Measured geodesic laminations}\label{sec1} \begin{figure}[h!] \begin{center} \input{MGL_fig_cap1_nonmeas.pstex_t} \caption{{\small In the picture examples of non-simplicial geodesic laminations are shown. The first one does not support any measure.}}\label{cap1:nonmeas:fig} \end{center} \end{figure} In this paper $S$ will denote a closed orientable surface of genus $g$ and $F$ will denote $S$ equipped with a hyperbolic metric (that is, a metric of constant curvature equal to $-1$). Moreover $\pi_1(S)$ will denote the fundamental group of $S$ (with respect to some base point $x_0$) whereas $\pi_1(F)$ will denote the automorphism group of a \emph{fixed} metric covering \[ \mathbb{H}^2\rightarrow F \] (so $\pi_1(F)$ is a discrete subgroup of $PSL(2,\mathbb{R})$).\\ A {\it geodesic lamination} $L$ on $F$ is a closed subset that is the disjoint union of simple complete geodesics. The following list summarizes the principal properties of geodesic laminations on a closed surface. A complete introduction to this topic can be found in \cite{Bona, Casson}. \medskip\par\noindent 1. The Lebesgue measure of a geodesic lamination is zero. \smallskip\par\noindent 2. If $L$ is a geodesic lamination then every point of $L$ lies in a unique geodesic contained in $L$. In particular a unique partition of $L$ into complete geodesics exists. \smallskip\par\noindent 3.
The number of connected components of the complement of $L$ is finite. \smallskip\par\noindent 4. The number of connected components of $L$ is finite (but the arc-connected components are in general uncountable). \smallskip\par\noindent 5. The partition of $L$ into geodesics induces a Lipschitz foliation on $L$. For this reason the geodesics contained in $L$ are called the leaves of the lamination. \medskip\par\noindent A typical example of a geodesic lamination is a simple closed geodesic or, more generally, a multicurve, that is, a finite disjoint union of simple closed geodesics. Clearly there are more complicated geodesic laminations (see Fig.~\ref{cap1:nonmeas:fig}). Notice that the definition of geodesic lamination is well-founded because of property 2. Actually, in order to generalize this notion to arbitrary surfaces it is necessary to refine the definition (see \cite{KulPin} for possible generalizations).\\ Given a geodesic lamination $L$, a differentiable arc $c$ is {\it transverse} to $L$ if for every point $x\in L\cap c$ the leaf through $x$ is transverse to $c$. A {\it transverse measure} on $L$ is the assignment of a Borel measure $\mu_c$ on every transverse path $c$ such that \medskip\par\noindent 1. The support of $\mu_c$ is $L\cap c$. \smallskip\par\noindent 2. If $c'$ is a sub-arc of $c$ then $\mu_{c'}=\mu_c|_{c'}$. \smallskip\par\noindent 3. If $c$ and $c'$ are transverse paths related by an $L$-preserving homotopy then such a homotopy sends $\mu_c$ to $\mu_{c'}$. \medskip\par\noindent A {\it measured geodesic lamination} $\lambda=(L,\mu)$ is a geodesic lamination $L$ (called the support) provided with a transverse measure $\mu$. A simple example of a measured geodesic lamination is a {\it weighted multicurve}, that is, a multicurve provided with a positive number $a(C)$ for each component $C$. If $k$ is a transverse arc then it meets the multicurve in a finite number of points (see Fig.~\ref{cap1:simp:fig}). The associated measure is concentrated on such points (a sum of Dirac deltas) and the measure of any intersection point is equal to the weight of the curve containing that point. \begin{figure} \begin{center} \input{MGL_fig_cap1_simp.pstex_t} \caption{{\small A simplicial lamination.}}\label{cap1:simp:fig} \end{center} \end{figure} Carrying a transverse measure is not a property shared by all geodesic laminations: in order to carry a transverse measure a geodesic lamination has to satisfy certain geometric properties. For instance, if $L$ is the support of a measured geodesic lamination then it decomposes into two sub-laminations \[ L=L'\cup L_s \] such that $L_s$ is a multicurve and $L'$ does not contain any closed geodesic. In Figure~\ref{cap1:nonmeas:fig} a geodesic lamination which does not satisfy such a property is shown. By multiplying a transverse measure $\mu$ by a positive number $a$ (meaning that $\mu_c$ is multiplied by $a$ for every transverse path $c$) we obtain a new transverse measure that will be denoted by $a\mu$. Briefly, given a measured geodesic lamination $\lambda$ and a positive number $a$ we set $a\lambda=(L,a\mu)$. If ${\mathcal M}{\mathcal L}(F)$ denotes the set of measured geodesic laminations then the above rule defines a left action of the multiplicative group $\mathbb{R}_{>0}$ on ${\mathcal M}{\mathcal L}(F)$. A particular lamination is the empty set. Such a lamination carries a unique transverse measure, namely the zero measure (such that the measure of any path is zero). We will denote this degenerate measured lamination by $0$.
Notice that $0$ is the unique point fixed by $\mathbb{R}_{>0}$ and that multiplication by $0$ sends every measured lamination to $0$. \medskip\par\noindent {\bf Topology on the space of measured geodesic laminations} \medskip\par\noindent We are going to describe a suitable topology on the space ${\mathcal M}{\mathcal L}(F)$ of measured geodesic laminations on $F$. As we are going to see, this space can be described only in terms of topological features of $F$. Let ${\mathcal C}$ denote the set of loops in $S$ up to free homotopy. The family of closed geodesics, denoted by ${\mathcal C}_F$, furnishes a complete set of representatives of the quotient ${\mathcal C}$. This fact will play a fundamental role in relating the geometry and the topology of $F$. In particular it will be useful to describe ${\mathcal M}{\mathcal L}(F)$ just in terms of topological features of $F$. In what follows, whenever no ambiguity arises, we will use ${\mathcal C}$ to indicate the set of closed geodesics as well as the set of loops up to free homotopy. Finally notice that the metric covering map $\mathbb{H}^2\rightarrow F$ establishes a bijection between ${\mathcal C}$ and the set of conjugacy classes of $\pi_1(F)$. Given a geodesic lamination $L$ and a closed geodesic $C$, notice that either $C$ is a leaf of $L$ or it is transverse to $L$. For a given measured geodesic lamination $\lambda=(L,\mu)$ let us define the intersection function \[ \iota_\lambda:{\mathcal C}\rightarrow\mathbb{R}_{\geq 0} \] by setting \[ \iota_\lambda(C)=\left\{\begin{array}{ll} \mu_C(C) & \textrm{ if } C\textrm{ is transverse to } L\\ 0 & \textrm{otherwise.} \end{array}\right. \] Clearly $\iota$ is homogeneous with respect to the action of $\mathbb{R}_{>0}$, that is \[ \iota_{a\lambda}(C)=a\iota_\lambda(C)\qquad\textrm{ for every closed geodesic }C\,. \] The set of simple closed geodesics of $F$ (corresponding to the loops without self-intersections), denoted by ${\mathcal S}$, is naturally identified with the subset of ${\mathcal M}{\mathcal L}(F)$ of curves carrying the weight $1$. With respect to such an identification the map $\iota_C$ associated with a simple curve $C$ is the classical intersection form. A classical result (\cite{Poin}) states that the intersection form provides an embedding \[ {\mathcal S}\rightarrow\mathbb{R}_{\geq 0}^{{\mathcal C}} \] (actually it is possible to choose a finite number of elements of ${\mathcal C}$ in such a way as to obtain an inclusion of ${\mathcal S}$ into $\mathbb{R}^N$ for $N$ sufficiently large). The following result extends this to general measured geodesic laminations. In a sense it states that measured geodesic laminations form the completion of the set of weighted simple curves on $F$. \begin{prop}\label{cap1:top_mgl:prop} The map \[ \iota:{\mathcal M}{\mathcal L}(F)\ni\lambda\mapsto\iota_\lambda\in\mathbb{R}_{\geq 0}^{{\mathcal C}} \] is injective. Its image is the closure of the image of $\mathbb{R}_+\times{\mathcal S}$ and is homeomorphic to $\mathbb{R}^{6g-6}$. \end{prop} A proof of this proposition can be found in~\cite{Penner}. \medskip\par\noindent {\bf Varying the surface} \smallskip\par\noindent Let $F,F'$ be two hyperbolic structures on $S$ and let $\iota_F,\iota_{F'}$ be the corresponding intersection maps.
Proposition~\ref{cap1:top_mgl:prop} implies that $\iota_F$ and $\iota_{F'}$ have the same image, so a natural identification between ${\mathcal M}{\mathcal L}(F)$ and ${\mathcal M}{\mathcal L}(F')$ arises by considering the map \[ \varphi_{FF'}=\iota_F^{-1}\circ\iota_{F'}:{\mathcal M}{\mathcal L}(F')\rightarrow{\mathcal M}{\mathcal L}(F)\,. \] It is possible to describe the map $\varphi_{FF'}$ geometrically. Indeed, given any diffeomorphism \[ f: F'\rightarrow F \] we can consider its lifting to the universal covering spaces \[ \tilde f: \mathbb{H}^2\rightarrow \mathbb{H}^2 \] which in turn can be extended to a homeomorphism of the whole $\overline\mathbb{H}^2$ \cite{Casson}. The extension to the boundary, considered up to post-composition by elements of $PSL(2,\mathbb{R})$, does not depend on $f$ but only on the Teichm\"uller classes of $F$ and $F'$. Given a lamination $L'$ of $F'$ let $\tilde L'$ denote its lifting to $\mathbb{H}^2$. For every leaf $l$ of $\tilde L'$ let $\hat f(l)$ be the geodesic with end-points equal to the images through $\tilde f$ of the end-points of $l$. Now the union of all the $\hat f(l)$ is a geodesic lamination of $\mathbb{H}^2$ invariant under the action of $\pi_1(F)$. Thus it induces a lamination on $F$ that we denote by $\hat f(L')$. By the above remark about $f$ it turns out that $\hat f(L')$ does not depend on $f$ but only on $L', F, F'$. Given a measured geodesic lamination $\lambda'=(L',\mu')$ on $F'$, the support of the measured geodesic lamination $\lambda=\varphi_{FF'}(\lambda')$ is simply the lamination $L=\hat f(L')$. In order to describe the transverse measure $\mu$ of $\lambda$ notice that it is sufficient to describe the total mass of a geodesic segment. Now given a geodesic segment $c$ on $F$ let $l_-,l_+$ be the extremal leaves of $L$ cutting $c$. Let $l'_-$ and $l'_+$ be the corresponding leaves of $L'$ and $c'$ any geodesic segment joining them. Then we have $\mu_c(c)=\mu'_{c'}(c')$. If $F$ and $F'$ represent the same point of the Teichm\"uller space ${\mathcal T}_g$ then the map $\varphi_{FF'}$ is simply induced by the corresponding isometry.\\ Denote by ${\mathcal M}{\mathcal L}_g$ the image of the map $\iota$ in $\mathbb{R}_{\geq 0}^{{\mathcal C}}$. As we have seen, this set depends only on $g$. Thus, considering the trivial fiber bundle \[ {\mathcal T}_g\times{\mathcal M}{\mathcal L}_g\rightarrow {\mathcal T}_g \] it turns out that the fiber over a point represented by $F$ can be naturally identified with ${\mathcal M}{\mathcal L}(F)$. Therefore ${\mathcal T}_g\times{\mathcal M}{\mathcal L}_g$ is called the fiber bundle of measured geodesic laminations of hyperbolic surfaces of genus $g$. \medskip\par\noindent {\bf Intersection of measured geodesic laminations} \smallskip\par\noindent We have seen how it is possible to define an intersection form between a measured geodesic lamination and a simple geodesic. Actually, by using the density result of Proposition~\ref{cap1:top_mgl:prop} it is possible (see~\cite{Rees}) to define (in a unique way) a pairing \[ \iota:{\mathcal M}{\mathcal L}(F)\times{\mathcal M}{\mathcal L}(F)\rightarrow\mathbb{R}_{\geq 0} \] such that \medskip\par 1. $\iota(a\lambda, a'\lambda')=a a'\iota(\lambda,\lambda')$ for every $\lambda,\lambda'\in{\mathcal M}{\mathcal L}(F)$ and $a,a'\in\mathbb{R}_{\geq 0}$; \medskip\par 2.
if $C$ and $C'$ are simple geodesics then $\iota(C,C')$ is the number of intersection points of $C$ and $C'$.\\ The pairing $\iota$ gives an important criterion to decide whether two measured geodesic laminations intersect transversally. \begin{teo} Given two measured geodesic laminations $\lambda, \lambda'$ we have that $\iota(\lambda, \lambda')=0$ if and only if $\lambda$ and $\lambda'$ do not intersect transversally. \end{teo} \nopagebreak\par\rightline{$_\blacksquare$} Notice that if two measured geodesic laminations $\lambda$ and $\lambda'$ do not intersect transversally then either they are disjoint or they share some component. In any case the union of their supports is a geodesic lamination. \medskip\par\noindent {\bf Length of a measured geodesic lamination} \smallskip\par\noindent Given a hyperbolic surface $F$ of genus $g$ and a closed geodesic $C$ we denote by $\ell_F(C)$ its length. By identifying ${\mathcal C}$ with the set of closed geodesics of $F$ we get a map \[ \ell_F:{\mathcal C}\rightarrow\mathbb{R}_+ \] called the length spectrum of $F$. It is well-known that the length spectra of two hyperbolic surfaces are equal if and only if the surfaces represent the same point in the Teichm\"uller space ${\mathcal T}_g$. Actually we can choose curves $C_1,\ldots,C_N$ such that the map \[ {\mathcal T}_g \ni [F]\mapsto (\ell_F(C_1),\ldots,\ell_F(C_N))\in\mathbb{R}^N \] furnishes a real-analytic embedding of ${\mathcal T}_g$.\\ The length of a weighted multicurve $\lambda$ given by disjoint geodesics $D_1,\ldots, D_n$ equipped with weights $a_1,\ldots, a_n$ is simply \[ \ell_F(\lambda)=\sum_{i=1}^{n}a_i\ell_F(D_i)\,. \] \begin{prop}\label{cap1:length:prop} There exists a unique continuous function \[ \ell_F:{\mathcal M}{\mathcal L}(F)\rightarrow\mathbb{R}_{\geq 0} \] such that if $\lambda$ is a weighted multicurve then $\ell_F(\lambda)$ is its length. \end{prop} See~\cite{McMullen}. We call $\ell_F(\lambda)$ the length of the lamination $\lambda$. \section{Linear structure on ${\mathcal M}{\mathcal L}(F)$} As we are going to see, there are several canonical identifications of ${\mathcal M}{\mathcal L}(F)$ with $\mathbb{R}^{6g-6}$ arising from a priori very different frameworks. We will see that the linear structures on ${\mathcal M}{\mathcal L}(F)$ induced by such identifications agree, so that ${\mathcal M}{\mathcal L}(F)$ carries a natural linear structure. On the other hand, we will see that the natural identification between the spaces of measured geodesic laminations on two different hyperbolic surfaces $F,F'$ is only a homogeneous map (not a linear one) unless they represent the same point of Teichm\"uller space. Thus the linear structure depends on the geometry of $F$. \medskip\par\noindent {\bf Identification by flat Lorentzian geometry} \medskip\par\noindent Consider the isometric embedding of $\mathbb{H}^2$ into the Minkowski space $\mathbb{M}^3$ (that is, $\mathbb{R}^3$ provided with the standard Minkowski scalar product $\E{\cdot}{\cdot}$) obtained by identifying $\mathbb{H}^2$ with the set \[ \{x\,|\, \E{x}{x}=-1\textrm{ and }x_0>0\}\,.
\] With respect to this embedding, the isometry group of $\mathbb{H}^2$ is identified with the group of orthochronous linear transformations of $\mathbb{R}^3$ preserving the Minkowski product.\\ \begin{figure} \begin{center} \input{MGL_fig_cap2_regdom.pstex_t} \caption{{\small The construction of the regular domain associated with a measured geodesic lamination}} \end{center} \end{figure} Given a measured geodesic lamination $\lambda=(L,\mu)$ on a closed hyperbolic surface $F$ consider its lifting $\tilde\lambda=(\tilde L,\tilde\mu)$ to the universal covering $\mathbb{H}^2$. Now let us fix an \emph{oriented} arc $c$ in $\mathbb{H}^2$ transverse to $\tilde L$. Given a point $x\in c\cap\tilde L$ the leaf $l$ through $x$ is the intersection of a timelike plane $P_l$ with $\mathbb{H}^2$. Thus it makes sense to consider the direction orthogonal to $P_l$, which is a spacelike line. Denote by $v(x)$ the unit vector on such a line pointing in the same direction as $c$ (note that $v(x)$ depends only on $l$ and on the orientation of $c$). For $x$ not lying on $\tilde L$ let us put $v(x)=0$. In this way we have defined a function \[ v:c\rightarrow\mathbb{R}^3 \] that is continuous on $c\cap\tilde L$ because the foliation on $\tilde L$ is Lipschitzian. Thus we can set \[ I(c)=\int_c v(x)\mathrm{d}\tilde\mu(x)\,. \] By a simple analysis of the geometry of laminations on $\mathbb{H}^2$ it is not hard to prove that $I(c)$ depends only on the end-points of $c$ and on the orientation of $c$. Thus given two points $x,y$ in $\mathbb{H}^2$ we choose any arc $c$ joining $x$ to $y$, orient it from $x$ towards $y$, and set \[ \rho(x,y)=I(c)\,. \] Let us point out some important properties of this function. \medskip\par\noindent 1. For every $x,y\in\mathbb{H}^2$ we have \begin{equation}\label{cap2:ineq:eq} \E{\rho(x,y)}{y-x}\geq 0 \end{equation} and equality holds if and only if $x$ and $y$ lie in the same stratum of $\tilde L$ (a stratum is either a leaf or a connected component of $\mathbb{H}^2\setminus\tilde L$). Indeed if $t$ is a point on the geodesic segment $[x,y]$ then by the choices made we have that $\E{v(t)}{y}\geq 0$ and $\E{v(t)}{x}\leq 0$ (actually the strict inequalities hold except when $v(t)=0$). \medskip\par\noindent 2. If $\tilde L_s$ denotes the lifting of the simplicial part of $L$ then we have \begin{equation}\label{cap2:coc:eq} \rho(x,z)=\rho(x,y)+\rho(y,z) \end{equation} for every $x,z\in\mathbb{H}^2$ and $y\in\mathbb{H}^2-\tilde L_s$. \medskip\par\noindent 3. Since $\tilde\lambda$ is invariant under the action of $\pi_1(F)$ we easily get \begin{equation}\label{cap2:coc:eq2} \rho(\gamma x,\gamma y)=\gamma\rho(x,y) \end{equation} for every $x,y\in\mathbb{H}^2$ and $\gamma\in\pi_1(F)$.\\ Fix a base point $x_0\in\mathbb{H}^2-\tilde L_s$ and consider the function \[ \tau:\pi_1(F)\ni\gamma\mapsto\rho(x_0,\gamma x_0)\in\mathbb{R}^3\,. \] By equations~(\ref{cap2:coc:eq}) and (\ref{cap2:coc:eq2}) we have \[ \tau(\alpha\beta)=\tau(\alpha)+\alpha\tau(\beta)\,, \] thus $\tau$ is a cocycle of $\pi_1(F)$ taking values in $\mathbb{R}^3$ (notice that $\mathbb{R}^3$ is naturally a $\pi_1(F)$-module, since the holonomy action of $\pi_1(F)$ extends to a linear action on $\mathbb{R}^3$). Moreover, by choosing another base point $x_0'\in\mathbb{H}^2-\tilde L_s$ we obtain a new cocycle $\tau'$ that differs from $\tau$ by a coboundary, namely \[ \tau'(\gamma)=\tau(\gamma)\,+\,\gamma\rho(x_0,x_0')-\rho(x_0, x_0')\,. \]
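Indeed, this relation follows directly from equations~(\ref{cap2:coc:eq}) and (\ref{cap2:coc:eq2}); we spell out the one-line computation (note that $\rho(x_0',x_0)=-\rho(x_0,x_0')$, since reversing the orientation of an arc changes the sign of $v$): \[ \tau'(\gamma)=\rho(x_0',\gamma x_0')=\rho(x_0',x_0)+\rho(x_0,\gamma x_0)+\rho(\gamma x_0,\gamma x_0')=\tau(\gamma)+\gamma\rho(x_0,x_0')-\rho(x_0,x_0')\,. \]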
Therefore we have defined a map \[ I_L:{\mathcal M}{\mathcal L}(F)\rightarrow\coom1(\pi_1(F),\mathbb{R}^3) \] which we are going to show is bijective.\\ Given a cocycle $\tau\in Z^1(\pi_1(F), \mathbb{R}^3)$ we can associate with every $\gamma\in\pi_1(F)$ an affine map $\gamma_\tau$ with linear part equal to $\gamma$ and translation part equal to $\tau(\gamma)$. Clearly $\gamma_\tau$ is an isometry of the Minkowski space. Moreover the cocycle rule implies that the map \[ h_\tau:\pi_1(F)\ni\gamma\mapsto\gamma_\tau\in\mathrm{Iso}(\mathbb{M}^3) \] is a representation. Mess showed~\cite{Mess} that $h_\tau$ is the holonomy of a flat spacetime homeomorphic to $F\times\mathbb{R}$. Recall that a (future complete) \emph{regular domain} is an open convex subset of $\mathbb{R}^3$ that is the intersection of the futures of a non-empty family of null planes (a null plane is a plane on which the Lorentzian form is degenerate). \begin{teo}\cite{Mess, Bon}\label{cap2:uniqueness:teo} Given $\tau\in Z^1(\pi_1(F), \mathbb{R}^3)$ there exists exactly one regular domain ${\mathcal D}_\tau$ that is invariant under the action of $\pi_1(F)$ induced by $h_\tau$. Moreover the action of $\pi_1(F)$ on ${\mathcal D}_\tau$ is free and properly discontinuous and the quotient $Y_\tau={\mathcal D}_\tau/\pi_1(F)$ is a maximal globally hyperbolic spacetime homeomorphic to $F\times\mathbb{R}$. \end{teo} \nopagebreak\par\rightline{$_\blacksquare$} We are going to sketch how it is possible to establish that the map $I_L$ is bijective. Indeed we will show: \medskip\par\noindent 1) how to construct, purely in terms of $\lambda$, the regular domain ${\mathcal D}_\tau$ invariant for the cocycle $\tau$ associated with $\lambda$; \medskip\par\noindent 2) given a cocycle $\tau$, how to construct a measured geodesic lamination $\lambda$ on $F$ by looking at the geometry of ${\mathcal D}_\tau$.\\ 1) Let $x_0$ denote the base point of $\mathbb{H}^2$ used to compute $\tau$. For $x\in\mathbb{H}^2$ let us set $u(x)=\rho(x_0,x)$. This function turns out to be constant on the strata of $\tilde L$. Given $x\in\mathbb{H}^2$ let $F(x)$ be the stratum through $x$ and $\partial_\infty F(x)$ the set of ideal points in the closure of $F(x)$ in $\overline\mathbb{H}^2$. Thus we can define the set \[ \Omega=\bigcap_{x\in\mathbb{H}^2-\tilde L}\ \bigcap_{[v]\in\partial_\infty F(x)}\fut(u(x)+\ort{v})\,. \] By inequality~(\ref{cap2:ineq:eq}) we have that $\fut(u(x))\subset\Omega$ and so $\Omega$ is a regular domain. On the other hand Equation~(\ref{cap2:coc:eq2}) implies that $\Omega$ is invariant under the $\pi_1(F)$-action induced by $h_\tau$. It follows that $\Omega={\mathcal D}_\tau$.\\ 2) Since ${\mathcal D}_\tau$ is a future-complete convex set, it is not hard to see that for every point $x\in{\mathcal D}_\tau$ there exists a unique point $r(x)\in\partial{\mathcal D}_\tau\cap\pass(x)$ that maximizes the Lorentzian distance from $x$ (recall that in Minkowski space the Lorentzian distance between two points $x,y$ related by a timelike geodesic is simply $|x-y|=(-\E{x-y}{x-y})^{1/2}$).
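Before describing the properties of this projection, let us record the simplest example, corresponding to the trivial cocycle (a standard example, stated in the notation above): for $\tau=0$ the invariant regular domain is the future cone of the origin, \[ {\mathcal D}_0=\fut(0)\,,\qquad r(x)=0\,,\qquad |x-r(x)|=(-\E{x}{x})^{1/2}\,, \] so the level sets of the Lorentzian distance from the singularity are the rescaled hyperboloids $t\,\mathbb{H}^2$ and the dual lamination constructed below is the zero lamination.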
The function $T(x)=|x-r(x)|$ has nice properties:\\ (i) it is the cosmological time of ${\mathcal D}_\tau$, that is, $T(x)$ is the supremum of the proper times of causal curves of ${\mathcal D}_\tau$ with future end-point equal to $x$;\\ (ii) it is a $\mathrm C^1$-submersion and its Lorentzian gradient at $x$ is simply the unit timelike vector \[ -\frac{1}{T(x)} (x-r(x))\,; \] (iii) $r(x)$ is the unique point of $\partial{\mathcal D}_\tau$ such that the plane through $r(x)$ orthogonal to $x-r(x)$ is a spacelike support plane at $r(x)$. \smallskip\par\noindent The function $N:{\mathcal D}_\tau\rightarrow\mathbb{H}^2$ given by $N(x)=\frac{1}{T(x)}(x-r(x))$ is called \emph{the Gauss map}. Indeed it coincides with the (Lorentzian) Gauss map of the level surfaces of $T$. The following formula follows immediately from the definitions: \[ x=r(x)+T(x)N(x)\,. \] The image of $r$ is called \emph{the singularity} in the past of ${\mathcal D}_\tau$. By property (iii) it coincides with the set of points of $\partial{\mathcal D}_\tau$ admitting a spacelike support plane. Moreover, for every point $p$ in the singularity the set ${\mathcal F}_p=N(r^{-1}(p))\subset\mathbb{H}^2$ represents the set of timelike directions orthogonal to some spacelike support plane at $p$. In particular it is not hard to see that ${\mathcal F}_p$ is a convex subset. Since ${\mathcal D}_\tau$ is a regular domain, ${\mathcal F}_p$ turns out to be the convex hull of its accumulation points on $\partial\mathbb{H}^2$ (notice that the accumulation points of ${\mathcal F}_p$ on $\partial\mathbb{H}^2$ are the null directions orthogonal to null support planes through $p$). Finally, inequality~(\ref{cap2:ineq:eq}) implies that for any $p,q$ in the singularity, the geodesic of $\mathbb{H}^2$ orthogonal to the spacelike vector $p-q$ separates ${\mathcal F}_p$ from ${\mathcal F}_q$. Thus the set \[ \tilde L=\bigcup_{p:\,{\mathcal F}_p\textrm{ is a geodesic}}{\mathcal F}_p\ \cup\ \bigcup_{p:\,\dim{\mathcal F}_p=2}\partial{\mathcal F}_p \] is a geodesic lamination of $\mathbb{H}^2$. By the invariance of ${\mathcal D}_\tau$ under $\Gamma_\tau=h_\tau(\pi_1(F))$ it easily follows that $\tilde L$ is invariant under $\Gamma=\pi_1(F)$. Thus it induces a geodesic lamination $L$ on $F$.\\ In order to define a transverse measure on $\tilde L$ (invariant under $\Gamma$) take an arc $c$ transverse to $\tilde L$. By technical arguments \cite{BonTh} it is possible to prove that $u=N^{-1}(c)\cap T^{-1}(1)$ is a rectifiable arc. Since $r$ is Lipschitzian (with respect to the Euclidean distance) we can consider its derivative $\dot r$ along $u$, which turns out to be a spacelike vector with Lorentzian length less than $1$. Thus we can define $\tilde\mu_c$ as the direct image through $N$ of the measure $|\dot r|\mathrm{d} s$, where $\mathrm{d} s$ is the natural Lebesgue measure on $u$. Clearly $\tilde\lambda=(\tilde L,\tilde\mu)$ is a $\Gamma$-invariant measured lamination, so it induces a measured geodesic lamination $\lambda$ on $F$. By construction it is not hard to see that the lamination $\lambda$ induces the cocycle $\tau$. \bigskip\par\noindent {\bf Identification by earthquake theory} \medskip\par\noindent Given a measured geodesic lamination $\lambda$ on $F$, Thurston introduced the notion of earthquake on $F$ with shearing locus $\lambda$. As we are going to explain, it is the natural extension of the Dehn-twist action. For the sake of simplicity we state just the results we need, referring to the literature on this topic for a complete introduction \cite{Thurston:earth}.
Given a weighted simple curve $(C,a)$ on a hyperbolic surface $F$ we can consider the surface $F'$ obtained by cutting $F$ along $C$ and gluing back the geodesic boundaries of the cut surface after a (left) twist of length $a$. In what follows we simply say that $F'$ is obtained by a left earthquake on $F$ with shearing lamination $(C,a)$ and denote it by ${\mathcal E}_{(C,a)}(F)$. Thurston showed that this procedure can be extended to general laminations. \begin{teo}\cite{Thurston:earth} There exists a continuous map \[ {\mathcal E}:{\mathcal T}_g\times{\mathcal M}{\mathcal L}_g\rightarrow{\mathcal T}_g\,,\qquad (F,\lambda)\mapsto{\mathcal E}_\lambda(F) \] such that if $\lambda$ is a weighted simple curve then ${\mathcal E}_\lambda(F)$ is the surface described above. Given two elements $F,F'\in{\mathcal T}_g$ there exists a unique $\lambda\in{\mathcal M}{\mathcal L}(F)$ such that $F'={\mathcal E}_\lambda(F)$. \end{teo} \nopagebreak\par\rightline{$_\blacksquare$} Given a measured geodesic lamination $\lambda$ on a hyperbolic surface $F$, the path in Teichm\"uller space \[ [0,1]\ni t\mapsto{\mathcal E}_{t\lambda}(F) \] turns out to be differentiable. So we can associate to $\lambda$ the tangent vector at $0$: \[ u_F(\lambda)=\frac{\mathrm{d}\,}{\mathrm{d} t}|_{t=0}\,{\mathcal E}_{t\lambda}(F)\,\in\mathrm T_F{\mathcal T}_g. \] On the other hand, since the holonomy map is a diffeomorphism of ${\mathcal T}_g$ onto an open set of the variety of representations of $\pi_1(F)$ into $PSL(2,\mathbb{R})$ up to conjugacy, by general facts~\cite{CaEp, Goldman} it turns out that $\mathrm T_F{\mathcal T}_g$ is canonically identified with $\coom1_{\mathrm{Ad}}(\pi_1(F),\mathfrak s\mathfrak l(2,\mathbb{R}))$, where $\pi_1(F)$ acts on $\mathfrak s\mathfrak l(2,\mathbb{R})$ via the adjoint representation. In particular if $\rho_t:\Gamma=\pi_1(F)\rightarrow PSL(2,\mathbb{R})$ is the holonomy corresponding to ${\mathcal E}_{t\lambda}(F)$ then the cocycle corresponding to the vector $u_F(\lambda)$ is simply \[ X_F(\lambda)[\gamma]=\frac{\mathrm{d}\rho_t(\gamma)}{\mathrm{d} t}|_{t=0}\gamma^{-1}\,. \] We can explicitly compute $X_F(\lambda)$. For every oriented geodesic $l$ of $\mathbb{H}^2$ the standard infinitesimal generator of the group of hyperbolic transformations with axis equal to $l$ is the element $Y_l\in\mathfrak s\mathfrak l(2,\mathbb{R})$ such that $\exp(Y_l)$ is the transformation with repulsive point equal to the starting point of $l$ and translation length equal to $1$. Now denote by $\tilde\lambda$ the lifting of $\lambda$ to $\mathbb{H}^2$ and fix a base point $x_0$. Then for $\gamma\in\Gamma$ consider the function \[ Y:[x_0,\gamma(x_0)]\rightarrow\mathfrak s\mathfrak l(2,\mathbb{R}) \] such that if $t\in\tilde\lambda$ then $Y(t)$ is the standard generator of the group of transformations with axis equal to the leaf through $t$ (oriented as the boundary of the half-plane containing $x_0$) and $Y(t)=0$ otherwise. Thus up to coboundaries we have~\cite{BenBon, EpMa} \[ X_F(\lambda)[\gamma]=\int_{[x_0,\gamma(x_0)]} Y(t)\mathrm{d}\tilde\mu(t)\,. \] Eventually we have produced a map \[ I_E:{\mathcal M}{\mathcal L}(F)\rightarrow\coom1_{Ad}(\pi_1(F),\mathfrak s\mathfrak l(2,\mathbb{R})) \] which turns out to be bijective. Thus a linear structure is induced on ${\mathcal M}{\mathcal L}(F)$. In what follows we are going to see that this structure matches the one induced by the map $I_L$ described above.
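To fix ideas, let us make the standard generator explicit in the model case (an elementary computation which we record for the reader's convenience): if $l$ is the geodesic with end-points $0,\infty$, oriented from $0$ to $\infty$, then \[ Y_l=\begin{pmatrix} 1/2 & 0\\ 0 & -1/2 \end{pmatrix},\qquad \exp(tY_l)=\begin{pmatrix} e^{t/2} & 0\\ 0 & e^{-t/2} \end{pmatrix}, \] and $\exp(tY_l)$ acts on the upper half-plane as $z\mapsto e^{t}z$, a hyperbolic transformation with repulsive fixed point $0$, attractive fixed point $\infty$ and translation length $t$ along $l$.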
The Killing form on $\mathfrak s\mathfrak l(2,\mathbb{R})$ is a Minkowskian form, so $\mathfrak s\mathfrak l(2,\mathbb{R})$ turns out to be isometric to the Minkowski space $\mathbb{R}^3$. Actually there exists a unique isometry \[ H: \mathfrak s\mathfrak l(2,\mathbb{R})\rightarrow\mathbb{R}^3 \] equivariant with respect to the actions of $PSL(2,\mathbb{R})$. The map $H$ yields an isomorphism \[ H_*:\coom1_{Ad}(\pi_1(F),\mathfrak s\mathfrak l(2,\mathbb{R}))\rightarrow\coom1(\pi_1(F),\mathbb{R}^3) \] and we are going to see that the following diagram commutes: \[ \begin{CD} {\mathcal M}{\mathcal L}(F) @>I_E>> \coom1_{Ad}(\pi_1(F),\mathfrak s\mathfrak l(2,\mathbb{R}))\\ @VV Id V @VV H_*/2 V\\ {\mathcal M}{\mathcal L}(F) @>I_L>> \coom1(\pi_1(F),\mathbb{R}^3)\,. \end{CD} \] Indeed if $l$ is an oriented geodesic then the standard generator $Y_l$ is spacelike with norm equal to $1/2$ (it is sufficient to prove it when the axis has end-points $0,\infty$). Moreover since $H$ is equivariant we have \[ \exp(Y_l) H(Y_l)=H(Y_l) \] thus $H(Y_l)$ is orthogonal to $l$. Finally, by an explicit computation we can see that $H(Y_l)$ points outwards from the half-plane of $\mathbb{H}^2$ bounded by $l$ inducing the right orientation on it. Since $H$ is linear we have \[ \begin{array}{l} H(X_F(\lambda)[\gamma])=H\left(\int_{[x_0,\gamma x_0]} Y(t)\mathrm{d}\tilde\mu(t)\right)=\\ \int_{[x_0,\gamma x_0]} H(Y(t))\mathrm{d}\tilde\mu(t)= \frac{1}{2}\,\tau_F(\lambda)[\gamma]\,. \end{array} \] {\bf Identification by using the length function} \medskip\par\noindent Given a hyperbolic surface $F$ and a measured geodesic lamination $\lambda$, we have introduced the length of $\lambda$ with respect to $F$. Thus we can consider the positive-real function $\ell$ defined on the fiber bundle of measured laminations by setting \[ \ell(F,\lambda)=\ell_F(\lambda). \] We have: \begin{prop}\cite{McMullen} \label{cap2:length:prop} The map \[ \ell:{\mathcal T}_g\times{\mathcal M}{\mathcal L}_g\rightarrow\mathbb{R}_{\geq 0} \] is continuous. Moreover if we fix $\lambda\in{\mathcal M}{\mathcal L}_g$ then the map \[ u_\lambda:{\mathcal T}_g\ni F\mapsto\ell(F,\lambda)\in\mathbb{R}_{\geq 0} \] is real-analytic. \end{prop} \nopagebreak\par\rightline{$_\blacksquare$} By Proposition~\ref{cap2:length:prop} we can consider the gradient $\nabla u_\lambda$ of $u_\lambda$ with respect to the Weil-Petersson metric of ${\mathcal T}_g$. In this way we obtain a map \[ I_T: {\mathcal M}{\mathcal L}(F)\ni\lambda\mapsto\nabla u_\lambda(F)\in T_F{\mathcal T}_g\,. \] \begin{prop}\cite{McMullen} Let $J$ be the endomorphism of the tangent bundle of ${\mathcal T}_g$ corresponding to the multiplication by $i$ with respect to the complex structure of ${\mathcal T}_g$. Then the following diagram \[ \begin{CD} {\mathcal M}{\mathcal L}(F) @>I_E>> T_F{\mathcal T}_g\\ @| @VV J V\\ {\mathcal M}{\mathcal L}(F) @>I_T>> T_F{\mathcal T}_g \end{CD} \] is commutative. \end{prop} \nopagebreak\par\rightline{$_\blacksquare$} Thurston pointed out a construction associating to every hyperbolic surface $F$ equipped with a measured geodesic lamination $\lambda$ a projective structure $Gr_\lambda(F)$ on $S$. This construction yields a parameterization of the space of projective structures on $S$ up to projective equivalence \[ Gr:{\mathcal T}_g\times{\mathcal M}{\mathcal L}_g\rightarrow{\mathcal P}(S).
\] Given a projective structure on $S$, the maximal atlas determines a well-defined complex structure on $S$, so we have a natural map \[ {\mathcal P}(S)\rightarrow{\mathcal T}_g \] which turns out to be a holomorphic bundle. In particular by projecting $Gr_\lambda(F)$ to ${\mathcal T}_g$ we obtain a map \[ {\mathcal T}_g\times{\mathcal M}{\mathcal L}_g\ni (F,\lambda)\mapsto gr_\lambda(F)\in{\mathcal T}_g\,. \] If we fix a pair $(F,\lambda)$ then the path $t\mapsto gr_{t\lambda}(F)$ is a real-analytic path starting from $F$, so we can consider the vector \[ v_F(\lambda)=\frac{\mathrm{d}\,}{\mathrm{d} t}|_{t=0}\,gr_{t\lambda}(F)\,\in\mathrm T_F{\mathcal T}_g. \] In~\cite{McMullen} it is shown that \[ v_F(\lambda)=\nabla u_\lambda(F) \] so in particular we see that the grafting map induces an identification between ${\mathcal M}{\mathcal L}(F)$ and $T_F{\mathcal T}_g$ which differs from $I_E$ by the multiplication by $i$ on $T_F{\mathcal T}_g$. \section{Sum of two laminations}\label{sum-section} We have defined a linear structure on the space ${\mathcal M}{\mathcal L}(F)$ of measured geodesic laminations on $F$ and we have given several different interpretations of it. In this section we will take two laminations $\lambda_1,\lambda_2\in{\mathcal M}{\mathcal L}(F)$ and we will investigate what the sum lamination $\lambda=\lambda_1+\lambda_2$ is.\\ In the first part we will show that the set of measured geodesic laminations that do not intersect an embedded surface $F'\subset F$ with geodesic boundary is a linear subspace of ${\mathcal M}{\mathcal L}(F)$.\\ In the second part we give a procedure to approximate the sum lamination in the case when the terms of the sum are weighted simple curves meeting each other in one point. \medskip\par\noindent {\bf The support of the sum lamination} \medskip\par\noindent Let us take $\lambda_1,\lambda_2\in\mathcal{ML}(F)$ and denote by $\lambda$ the sum lamination $\lambda_1+\lambda_2$. Thus the cohomology class associated with $\lambda$ is represented by the sum of the cocycles $\tau_1$ and $\tau_2$ associated with $\lambda_1$ and $\lambda_2$.\par Let $X\subset F$ be a subsurface with totally geodesic boundary such that the supports of $\lambda_1$ and $\lambda_2$ are contained in $X$. We will show that the support of $\lambda$ is contained in $X$ too.\par Let us set $F'=\overline{F-X}$ and denote by $\tilde F'$ the inverse image of $F'$ in $\mathbb{H}^{2}$. \begin{figure} \begin{center} \input{MGL_fig_cap3_supportosomma.pstex_t} \caption{{\small $\tilde F'$ has infinitely many connected components, but each component is open and closed in $\tilde F'$.}}\label{sec.5.3-suppsomma-fig} \end{center} \end{figure} Now let us fix $x_0\in\tilde F'$ and consider the functions \[ \rho_i:\tilde F'\ni x\mapsto\int_{x_0}^x v_i(t)\mathrm d\lambda_i \in \mathbb{R}^{2+1}\qquad\textrm{ for }i=1,2 \] where $v_i(x)$ is the vector orthogonal to the leaf of $\lambda_i$ through $x$. Up to adding coboundaries we can suppose that $\tau_i(\gamma)=\rho_i(\gamma x_0)$. Moreover, since $\tilde F'$ does not intersect $\lambda_1$ and $\lambda_2$, the functions $\rho_1$ and $\rho_2$ are locally constant. Since every connected component of $\tilde F'$ is open in $\tilde F'$, it follows that these maps are continuous.\par Now we have to show that the function $\rho(x)=\rho_1(x)+\rho_2(x)$ has good properties. \begin{lem}\label{sec.5.3-support of sum-lemma} For every $x,y\in\tilde F'$ we have that $\rho(x)-\rho(y)$ is a spacelike vector whose dual geodesic separates $x$ from $y$. Moreover $\rho(x)-\rho(y)$ points towards $x$.
\end{lem} \emph{Proof : } We know that $\rho_1(x)-\rho_1(y)$ and $\rho_2(x)-\rho_2(y)$ are spacelike vectors and that the corresponding dual geodesics separate $x$ from $y$. Moreover these vectors point towards $x$. Now we have two possibilities: either the dual geodesics intersect each other or they are disjoint. In the first case the plane generated by $\rho_1(x)-\rho_1(y)$ and $\rho_2(x)-\rho_2(y)$ is spacelike and so their sum is spacelike. In the second case, since they point towards $x$, we get that their scalar product is positive. Thus it easily follows that $\rho(x)-\rho(y)$ is spacelike.\par Since $\E{\rho_i(x)-\rho_i(y)}{x}\geq 0$ and $\E{\rho_i(x)-\rho_i(y)}{y}\leq 0$, the same holds for $\rho(x)-\rho(y)$. Thus if $\rho(x)-\rho(y)\neq 0$ then its dual geodesic separates $x$ from $y$ and $\rho(x)-\rho(y)$ points towards $x$. \nopagebreak\par\rightline{$_\blacksquare$} \begin{prop}\label{sec.5.3-support of sum-prop} Let us set $\tau=\tau_1+\tau_2$. Then we have \[ \mathcal D_\tau=\bigcap_{x\in\tilde F'}\fut(\rho(x)+\ort{x})\,. \] Moreover $\rho(x)$ lies on the singularity of $\mathcal D_\tau$. \end{prop} \emph{Proof : } Let $\Omega=\bigcap_{x\in\tilde F'}\fut(\rho(x)+\ort{x})$. First let us prove that it is a regular domain.\par By using inequality~(\ref{cap2:ineq:eq}) we see that $\rho(x)\in\partial\Omega$ and that $\rho(x)+\ort{x}$ is a support plane through $\rho(x)$.\par Now let $\tilde F_1,\tilde F_2,\ldots,\tilde F_k,\ldots$ be the connected components of $\tilde F'$ and let $\partial_\infty \tilde F_k$ be the set of ideal points of $\tilde F_k$, that is, the set of accumulation points of $\tilde F_k$ in the boundary of $\mathbb{H}^2$. Finally let us put $\rho_k=\rho(x_k)$ where $x_k\in\tilde F_k$ (notice that $\rho_k$ does not depend on the choice of $x_k$). It is not hard to see that \[ \bigcap_{x\in \tilde F_k}\fut(\rho_k+\ort{x})=\bigcap_{[v]\in\partial_\infty \tilde F_k}\fut(\rho_k+\ort{v}) \] (this equality is an easy consequence of the fact that $\tilde F_k$ is the convex hull of $\partial_\infty \tilde F_k$). In particular we get \[ \Omega=\bigcap_{k\in\mathbb N}\bigcap_{[v]\in\partial_\infty \tilde F_k} \fut(\rho_k+\ort{v})\,. \] It follows that $\Omega$ is a regular domain and that $\rho(x)$ is in the singularity of $\Omega$. Moreover by construction we have that $\Omega$ is $h_\tau(\pi_1(F))$-invariant, so by Theorem \ref{cap2:uniqueness:teo} we obtain that $\Omega=\mathcal D_\tau$. \nopagebreak\par\rightline{$_\blacksquare$} Let $\tilde S_1$ be the cosmological time level surface $T^{-1}(1)$ of the domain $\mathcal D_\tau$. Moreover let $r:\mathcal D_\tau\rightarrow\partial\mathcal D_\tau$ and $N:\mathcal D_\tau\rightarrow\mathbb{H}^{2}$ be respectively the projection on the singularity and the Gauss map. By Proposition \ref{sec.5.3-support of sum-prop} we obtain the following result. \begin{cor}\label{sec.5.3-support of sum-cor} For every $x\in\tilde F'$ we have that $x+\rho(x)\in\tilde S_1$ and \begin{eqnarray*} r(x+\rho(x))=\rho(x)\\ N(x+\rho(x))=x\,. \end{eqnarray*} \end{cor} \nopagebreak\par\rightline{$_\blacksquare$} The sum lamination $\lambda$ is the dual lamination of the singularity of $\mathcal D_\tau$.\par We have seen in the proof of Proposition \ref{sec.5.3-support of sum-prop} that for every $[v]\in\partial_\infty\tilde F_k$ the ray $\rho(x_k)+\mathbb{R}_{\geq 0} v$ is contained in $\partial\mathcal D_\tau$. Thus we easily see that the plane through $\rho(x_k)$ orthogonal to $v$ is a support plane of $\mathcal D_\tau$. By definition of ${\mathcal F}_{\rho_k}$ we obtain $\tilde F_k\subset{\mathcal F}_{\rho_k}$.
Thus $\lambda$ does not intersect the interior of $\tilde F'$. In particular the following corollary holds.
\begin{cor}
Let $\lambda_1,\lambda_2\in\mathcal{ML}(F)$ be such that they do not intersect an embedded surface $F'$ with totally geodesic boundary. Then the sum lamination $\lambda=\lambda_1+\lambda_2$ does not intersect (the interior of) $F'$. Moreover let us fix a base point $x_0$ belonging to the interior of $\tilde F'$, the pre-image of $F'$, and consider the cocycles $\tau,\tau_1,\tau_2$ computed with base point $x_0$ and laminations $\lambda,\lambda_1,\lambda_2$. Then we have
\[
\tau(\gamma)=\tau_1(\gamma)+\tau_2(\gamma)\qquad\textrm{ for all }\gamma\in\pi_1(F)\,.
\]
\end{cor}
\begin{remark}\emph{ The last part of this corollary is not tautological. In fact by definition $\tau-\tau_1-\tau_2$ is a coboundary, whereas the corollary states that $\tau-\tau_1-\tau_2$ is zero.}
\end{remark}
\emph{Proof : } The first part of the corollary is obvious. For the second one we use the notation introduced above. We have that $\tau_i(\gamma)=\rho_i(\gamma(x_0))$. On the other hand $\tau(\gamma)$ is defined by the equation
\[
N(\gamma(x_0)+\tau(\gamma))=\gamma(x_0)\,.
\]
By Corollary \ref{sec.5.3-support of sum-cor} we have
\[
\tau(\gamma)=\rho_1(\gamma(x_0))+\rho_2(\gamma(x_0))= \tau_1(\gamma)+\tau_2(\gamma)\,.
\]
\nopagebreak\par\rightline{$_\blacksquare$}
\medskip\par\noindent
{\bf The sum of weighted simple curves intersecting each other only at one point}
\medskip\par\noindent
This is the simplest non-trivial example of the sum of two laminations. However we will see that even in this case the description of the sum lamination is rather involved. We start from simple geodesics $C$ and $D$ with weights $c$ and $d$. Then we recursively construct a sequence of simple geodesics $A_k$, $C_k$ and $D_k$ with weights $a_k$, $c_k$ and $d_k$ such that
\[
(C,c)+ (D,d)=(A_k,a_k)+(C_k,c_k)+(D_k,d_k)
\]
and such that $A_k$ is disjoint from $C_k$ and $D_k$ whereas $C_k$ and $D_k$ intersect each other at one single point. The construction ends if $c_k$ or $d_k$ is zero for some $k$. Otherwise the weighted curves $(A_k, a_k)$, $(C_k,c_k)$ and $(D_k,d_k)$ converge to measured laminations $\mathcal A_\infty$, $\mathcal C_\infty$ and $\mathcal D_\infty$. Moreover the transverse intersection between $\mathcal C_\infty$ and $\mathcal D_\infty$ is zero. Thus the union $\mathcal L_\infty=\mathcal A_\infty\cup\mathcal C_\infty\cup\mathcal D_\infty$ is a measured lamination and we obtain that it is the sum lamination.\par
We use the following notation: given an element $\gamma\in\pi_1(F)$ we denote by $A_\gamma$ the axis of $\gamma$ (which is an oriented geodesic in $\mathbb{H}^{2}$) and by $C_\gamma$ the image of $A_\gamma$ in $F$. We know that $C_\gamma$ is the unique oriented closed geodesic freely homotopic to $\gamma$. Finally given $\gamma,\gamma'\in\pi_1(F)$ such that $A_\gamma\cap A_{\gamma'}\neq\varnothing$ we denote by $\theta(\gamma,\gamma')\in[0,\pi)$ the angle between $A_\gamma$ and $A_{\gamma'}$. \par
Now let $(C,c)$ and $(D,d)$ be two weighted simple curves intersecting each other at one point. We have to compute
\[
(C,c)+(D,d)\,.
\]
Let us orient $C$ and $D$ in such a way that the angle between them is less than or equal to $\pi/2$ (if the angle is less than $\pi/2$ there are two distinct ways to make this choice, whereas if the angle is $\pi/2$ any choice works, i.e. there are $4$ choices).\par
\begin{figure}[h!]
\begin{center}
\input{MGL_fig_cap3_intorno.pstex_t}
\caption{{\small The curve $C_\alpha$ is freely homotopic to the boundary of a regular neighbourhood of $C_\gamma\cup C_\delta$, thus it is the boundary of a regular neighbourhood.}}
\end{center}
\end{figure}
Choose $\gamma,\delta\in\pi_1(F)$ such that $C_\gamma=C$ and $C_\delta=D$ as oriented curves and $A_\gamma$ intersects $A_\delta$ at a point $p_0$ (we can choose $\gamma$ arbitrarily among elements of $\pi_1(F)$ such that $C_\gamma=C$, but the choice of $\gamma$ gives some constraints for the choice of $\delta$). Now let us set $\alpha=\delta^{-1}\gamma^{-1}\delta\gamma$; we have that $C_\alpha$ is a simple curve which does not intersect $C_\gamma$ and $C_\delta$. Moreover it disconnects $F$ into two regions. The region which contains $C\cup D$ is a regular neighbourhood of this set and we denote it by $X$. Notice that it is homeomorphic to a genus one surface with one boundary component (i.e. a torus minus a disk). The other one, say $F'$, is a hyperbolic surface with geodesic boundary.\par
Let $\tilde X$ be the component of the lifting of $X$ in $\mathbb{H}^{2}$ which contains $A_\gamma\cup A_\delta$ and $\tilde F'$ be the component of the lifting of $F'=F-X$ which contains $A_\alpha$. By the previous paragraph we get that the support of the sum lamination $\lambda=(C,c)+(D,d)$ is contained in $X$. The main proposition of this section is the following one.
\begin{prop}\label{sec.5.3-sum-prop}
Suppose that $\frac{c}{d}=r(\gamma,\delta)$ where
\[
r(\gamma,\delta)=\frac{\cos\theta(\gamma\delta,\delta)}{\cos\theta(\gamma\delta,\gamma)}\,.
\]
Then we have
\[
(C,c)+(D,d)=(C_\alpha, a) + (C_{\gamma\delta}, b)
\]
where $a$ and $b$ are explicit ($\mathrm C^\infty$) functions of $c,d$, the lengths of $C$ and $D$ and the angle between $C$ and $D$.
\end{prop}
The proposition is proved by a long computation. We postpone the proof to the end of this section.
\begin{remark}\emph{ Notice that $\gamma,\delta\in\pi_1(F)$ depend (up to conjugation) on the choice of the orientation of $C$ and $D$; in particular $\gamma\delta$ depends on this choice. On the other hand the support of the sum lamination does not depend on any orientation.}\par
\emph{ When the angle between $C$ and $D$ is less than $\pi/2$ we have two choices for the orientation. In particular if $\gamma$ and $\delta$ represent $C$ and $D$ for a given orientation then $\gamma^{-1}$ and $\delta^{-1}$ represent $C$ and $D$ for the other one. Since $\gamma\delta$ and $\gamma^{-1}\delta^{-1}$ are conjugate in $\pi_1(F)$ the result of Proposition \ref{sec.5.3-sum-prop} does not depend on our choices. On the other hand when the angle between $C$ and $D$ is $\pi/2$ we can orient the geodesics so that $\gamma$ and $\delta^{-1}$ represent $C$ and $D$. But $\gamma\delta^{-1}$ is conjugate neither to $\gamma\delta$ nor to $(\gamma\delta)^{-1}$. However we will see that in this case the condition is $\frac{c}{d}=1$ and the weight $b$ is equal to $0$. }\end{remark}
\begin{figure}
\begin{center}
\input{MGL_fig_cap3_tree.pstex_t}
\caption{We have $\theta(\delta\gamma,\gamma)+\theta(\delta\gamma,\delta)<\theta(\delta,\gamma)$.}\label{sec.5.3-tree-fig}
\end{center}
\end{figure}
\begin{remark}\label{sec.5.3-angle shortens-oss}
\emph{ The stabilizer of $\tilde X$ in $\pi_1(F)$ is the free group generated by $\gamma$ and $\delta$ (actually it is the fundamental group of $C_\gamma\cup C_\delta$). Let us denote this group by $\pi_1(X)$.
Let $T$ be the component of the inverse image of $C_\gamma\cup C_\delta$ containing $A_\gamma$. It is an infinite tree such that every vertex has valence equal to $4$, see Fig.\ref{sec.5.3-tree-fig}. Vertices of $T$ are the translates of $p_0$ by elements of $\pi_1(X)$.}\par
\emph{ Consider the Cayley graph $T'$ associated to $\pi_1(X)$: the vertices of $T'$ are the elements of $\pi_1(X)$ and two vertices are joined by an edge if they differ by right multiplication by $\gamma,\delta,\gamma^{-1}$ or $\delta^{-1}$. We have that $T'$ is an infinite tree such that every vertex has valence $4$. Moreover there exists an isomorphism of trees between $T$ and $T'$ which takes the vertex $\eta\in\pi_1(X)$ to $\eta(p_0)$}.\par
\emph{Notice that left translations give rise to a representation of $\pi_1(X)$ into the group of automorphisms of $T'$. Moreover we can choose the isomorphism between $T$ and $T'$ in such a way that the left multiplication corresponds to the natural action of $\pi_1(X)$ on $T$.}
\emph{By using this construction we can study the limit points of $(\delta\gamma)^n(p_0)$ for $n\in\mathbb Z$. From this analysis it follows that $A_{\delta\gamma}$ is as in Figure \ref{sec.5.3-tree-fig}. By looking at the triangle with edges on $A_\gamma\cup A_\delta\cup A_{\delta\gamma}$ we get that $\theta(\delta\gamma,\gamma)+\theta(\delta\gamma,\delta)<\theta(\gamma,\delta)$ (see Fig.~\ref{sec.5.3-tree-fig}). Thus $\theta(\delta\gamma,\gamma)\in (0,\pi/2)$ so $r(\gamma,\delta)$ is well-defined. }
\end{remark}
Now we will recursively construct sequences $\gamma_k,\delta_k\in\pi_1(F)$ and $a_k,c_k,d_k\in\mathbb R_+$ such that
\begin{enumerate}
\item $(C,c)+(D,d)=(C_\alpha,a_k)+(C_{\gamma_k},c_k)+(C_{\delta_k},d_k)$.
\item $C_\alpha$ is disjoint from $C_{\gamma_k}$ and $C_{\delta_k}$ whereas $C_{\gamma_k}$ and $C_{\delta_k}$ intersect each other only at one point.
\item $\alpha$ is conjugate to the commutator of $\gamma_k$ and $\delta_k$.
\item The angle between $C_{\gamma_k}$ and $C_{\delta_k}$ is less than or equal to $\pi/2$ and $\frac{c_k}{d_k}\geq r(\gamma_k,\delta_k)$.
\item Either there exists $k_0$ such that $d_{k_0}=0$ or the lengths of $C_{\delta_k}$ are not bounded in $\mathbb R$.
\end{enumerate}
The recursive process ends if for some $N$ we have $d_N=0$, and in this case we obtain that the sum $(C,c)+(D,d)$ is equal to the weighted multicurve $(C_\alpha,a_N)+(C_{\gamma_N},c_N)$. If the process does not end then we will see that the sequence converges to the sum lamination.\par
The first step is the following. Up to exchanging $\gamma$ with $\delta$ we can suppose that $\frac{c}{d}>r(\gamma,\delta)$. Then let us put
\begin{eqnarray*}
\gamma_0=\gamma & \delta_0=\delta & a_0=0,\, c_0=c,\, d_0=d\,.
\end{eqnarray*}
Suppose that $\gamma_k$, $\delta_k$, $a_k$, $c_k$ and $d_k$ are defined; we now describe the inductive step.\par
If $d_k=0$ then we stop. Otherwise let us consider $r_k=r(\gamma_k,\delta_k)$. We can write
\[
(C,c)+(D,d)= (C_\alpha,a_k)+(C_{\gamma_k},c_k-r_kd_k)+ (C_{\gamma_k},r_kd_k)+(C_{\delta_k},d_k)\,.
\]
Now by applying Proposition \ref{sec.5.3-sum-prop} we get that the sum of the two last terms is equal to
\[
(C_\alpha,a)+(C_{\gamma_k\delta_k},b)
\]
for some $a,b\in\mathbb R_+$. Let us put $a_{k+1}=a_k+a$.
For the other curves consider the following cases.\\
If $b=0$ then put $\gamma_{k+1}=\gamma_k$, $c_{k+1}=c_k-r_kd_k$ and $d_{k+1}=0$.\\
If $c_k=r_kd_k$ then put $\gamma_{k+1}=\gamma_k\delta_k$, $c_{k+1}=b$ and $d_{k+1}=0$.\\
If $\frac{b}{c_k-r_kd_k}\geq r(\gamma_k,\gamma_k\delta_k)$ put
\[
\left\{\begin{array}{ll}
\gamma_{k+1}=\gamma_k\delta_k & \delta_{k+1}=\gamma_k\\
c_{k+1}=b & d_{k+1}=c_k-r_kd_k.
\end{array}\right.\,.
\]
Finally if $\frac{b}{c_k-r_kd_k}<r(\gamma_k,\gamma_k\delta_k)$ put
\[
\left\{\begin{array}{ll}
\gamma_{k+1}=\gamma_k & \delta_{k+1}=\gamma_k\delta_k\\
c_{k+1}=c_k-r_kd_k & d_{k+1}=b
\end{array}\right.\,.
\]
Since $C_{\gamma_k}\cap C_{\delta_k}$ is a single point, the same happens for $C_{\gamma_{k+1}}\cap C_{\delta_{k+1}}$. Moreover by Remark \ref{sec.5.3-angle shortens-oss} the angle between $A_{\gamma_{k+1}}$ and $A_{\delta_{k+1}}$ is smaller than the angle between $A_{\gamma_k}$ and $A_{\delta_k}$. Finally notice that the commutator of $\gamma_{k+1}$ and $\delta_{k+1}$ is conjugate to $\alpha$. Thus $C_\alpha$ is disjoint from $C_{\gamma_{k+1}}$ and $C_{\delta_{k+1}}$.\par
By using these facts we can see that this sequence verifies properties 1-4 (a schematic implementation of the recursion is sketched at the end of this section). Suppose that the sequence is infinite. Since the $\delta_k$ are all different, they form a divergent sequence in $\pi_1(F)$. On the other hand since they are words in $\gamma$ and $\delta$ with all positive exponents, we get that the $A_{\delta_k}$ have endpoints in opposite segments of $\partial\mathbb{H}^{2}-(A_\gamma\cup A_\delta)$. Thus the translation length of $\delta_k$ goes to infinity.
\begin{lem}\label{sec.5.3-compactness-lemma}
Suppose that the sequence $\{\gamma_k,\delta_k,a_k,c_k,d_k\}$ is infinite. Let us take $p_0\in\tilde F'$ and $\beta\in\pi_1(F)$. Suppose that the geodesic segment $[p_0,\beta p_0]\subset\mathbb{H}^{2}$ is not contained in the axis of any element of $\pi_1(F)$ in the conjugacy class of $\gamma_k$ and $\delta_k$. Let $N_k$ (resp. $M_k$) be the cardinality of the intersection of $[p_0,\beta p_0]$ with $\tilde C_k$ (resp. $\tilde D_k$), where $\tilde C_k$ (resp. $\tilde D_k$) is the lifting in $\mathbb{H}^{2}$ of the curve $C_{\gamma_k}$ (resp. $C_{\delta_k}$). Then there exists $C\in\mathbb R_+$ such that $N_kc_k\leq C$ and $M_k d_k\leq C$.\par
Moreover the $a_k$ are bounded.
\end{lem}
\begin{figure}
\begin{center}
\input{MGL_fig_cap3_triangle.pstex_t}
\caption{{\small The angle at $q$ of the triangle $pqp'$ is equal to $\cos^{-1}(\E{u_i}{w_j})$.}}\label{sec.5.3-triangle-fig}
\end{center}
\end{figure}
\emph{Proof : } The cocycle associated to the sum lamination $(C,c)+(D,d)$ computed with starting point $p_0$ is equal to the cocycle associated to $(C_\alpha,a_k)+(C_{\gamma_k},c_k)+(C_{\delta_k},d_k)$ computed with starting point $p_0$. Let $\tau$ be such a cocycle; we know that
\[
\tau(\beta)=a_k \sum_{i=1}^K v_i +c_k\sum_{i=1}^{N_k} w_i + d_k\sum_{i=1}^{M_k} u_i
\]
where $K,N_k,M_k$ are respectively the cardinalities of the intersection of $[p_0,\beta p_0]$ with $\tilde C_\alpha$, $\tilde C_k$ and $\tilde D_k$, whereas $v_i$, $w_i$ and $u_i$ are respectively the unit vectors orthogonal to $\tilde C_\alpha$, $\tilde C_k$ and $\tilde D_k$ pointing towards $\beta p_0$. The geodesic corresponding to $v_i$ is disjoint from all the geodesics corresponding to $v_j$, $w_j$, $u_j$. By a standard argument we get $\E{v_i}{v_j}\geq 1$, $\E{v_i}{w_j}\geq 1$ and $\E{v_i}{u_j}>1$. In the same way we have that $\E{w_i}{w_j}\geq 1$ and $\E{u_i}{u_j}\geq 1$.
Now we claim that there exists a number $L$ (independent of $k$) such that the number of couples $(u_i,w_j)$ such that $\E{u_i}{w_j}<0$ is less than $L$. By the claim we get that
\[
\E{\tau(\beta)}{\tau(\beta)}\geq (K a_k)^2 + (N_k c_k)^2 + (M_k d_k)^2 - 2L c_kd_k
\]
(indeed if $\E{u_i}{w_j}<0$ then by construction the dual geodesics intersect each other and so $\E{u_i}{w_j}\geq-1$). Thus the lemma follows from the claim.\par
Let us prove the claim. Suppose that $\E{u_i}{w_j}<0$; then the corresponding geodesics intersect each other at a point $q$. On the other hand let $p\in\mathbb{H}^{2}$ (resp. $p'\in\mathbb{H}^{2}$) be the intersection of the segment $[p_0,\beta p_0]$ with the geodesic corresponding to $u_i$ (resp. $w_j$). Since $\E{u_i}{w_j}<0$ the angle at $q$ of the hyperbolic triangle $qpp'$ is greater than $\pi/2$ (see Fig.~\ref{sec.5.3-triangle-fig}). So the distance between $q$ and the segment $[p_0,\beta p_0]$ is less than the length of the segment. Let $H$ be the set of points whose distance from $[p_0,\beta p_0]$ is less than the length of this segment. We have that $H$ is a compact set, so that it intersects just a finite number $L$ of the translates of a fixed fundamental domain for the action of $\pi_1(F)$.\par
We will see that $L$ works. In fact we have that the point $q$ projects on the intersection of $C_{\gamma_k}$ and $C_{\delta_k}$. Thus $q$ runs over a set of at most $L$ points of $\mathbb{H}^{2}$. On the other hand if we choose $q$ in this set the lifting of $C_{\gamma_k}$ (and $C_{\delta_k}$) passing through $q$ is unique, so the couple $(u_i,w_j)$ is determined by $q$.
\nopagebreak\par\rightline{$_\blacksquare$}
By Lemma \ref{sec.5.3-compactness-lemma} it follows that the families of weighted multi-curves $\{(C_{\gamma_k}, c_k)\}$ and $\{(C_{\delta_k},d_k)\}$ are relatively compact in $\mathcal{ML}(F)$. Thus up to passing to a subsequence we can suppose that they respectively converge to two measured laminations $\lambda'_\infty$ and $\lambda''_\infty$ and moreover that $a_k\rightarrow a_\infty$.
\begin{prop}
We have
\[
(C,c)+(D,d)=(C_\alpha,a_\infty) + \lambda'_\infty +\lambda''_\infty\,.
\]
Moreover we have
\begin{eqnarray*}
\iota(C_\alpha,\lambda'_\infty)=0 & \iota(C_\alpha,\lambda''_\infty)=0 & \iota(\lambda'_\infty,\lambda''_\infty)=0\,.
\end{eqnarray*}
where $\iota:{\mathcal M}{\mathcal L}(F)\times{\mathcal M}{\mathcal L}(F)\rightarrow\mathbb{R}_+$ is the intersection pairing.
\end{prop}
\emph{Proof : } The first statement follows from the construction of the sequence. The intersection of $C_\alpha$ and $\lambda'_\infty$ (resp. $\lambda''_\infty$) is zero because $\iota(C_\alpha,(C_{\gamma_k},c_k))=0$ (resp. $\iota(C_\alpha,(C_{\delta_k},d_k))=0$) and the intersection pairing is a continuous function on $\mathcal{ML}(F)\times\mathcal{ML}(F)$. Finally notice that
\[
\iota(\lambda'_\infty,\lambda''_\infty)=\lim_{k\rightarrow\infty}\iota((C_{\gamma_k},c_k),(C_{\delta_k},d_k))= \lim_{k\rightarrow\infty}c_kd_k\,.
\]
We have noticed that the length of $C_{\delta_k}$ goes to infinity, so $d_k$ goes to zero. On the other hand we know that $c_k$ is bounded in $\mathbb R$, so the proof is complete.
\nopagebreak\par\rightline{$_\blacksquare$}
Since the geometric intersection between $\lambda'_\infty$ and $\lambda''_\infty$ is zero we see that their supports have empty transverse intersection. So the union of their supports is a geodesic lamination too.
Thus this lamination can be endowed with a transverse measure so that the corresponding measured geodesic lamination $\lambda_\infty$ is equal to $\lambda'_\infty+\lambda''_\infty$. Since $\lambda_\infty$ is disjoint from $C_\alpha$ it follows that the union of these measured laminations gives the sum lamination.
\begin{remark}\emph{ Notice that the sum lamination always has a simplicial component. Since the sequence $a_k$ is increasing we have $a_\infty\neq 0$, so that $(C_\alpha, a_\infty)$ is a simplicial sub-lamination of the sum lamination. }\end{remark}
In the last part of this section we prove Proposition \ref{sec.5.3-sum-prop}. Given a hyperbolic transformation $\alpha\in\mathrm{SO}(2,1)$ we denote by $x^0(\alpha)\in\mathbb{R}^{2+1}$ the unit spacelike vector of $\mathbb{R}^{2+1}$ corresponding to $A_\alpha$, such that it induces on $A_\alpha$ the orientation from the repulsive fixed point towards the attractive fixed point. The following lemma is a technical result which we need for the proof of Proposition \ref{sec.5.3-sum-prop}.
\begin{lem}\label{sec.5.3-comp.-lemma}
Let $\gamma,\delta\in\pi_1(F)$ be such that $C_\gamma$ and $C_\delta$ are two simple curves which intersect each other at one single point. Let $\alpha=\delta^{-1}\gamma^{-1}\delta\gamma$ and let $W$ be the subspace of $\mathbb{R}^{2+1}$ generated by $x^0(\delta \gamma), x^0(\gamma)$ and $x^0(\alpha)-\delta x^0(\alpha)$. Then the dimension of $W$ is $2$.
\end{lem}
\emph{Proof : } Consider the matrices
\begin{eqnarray*}
M(l)=\left(\begin{array}{ccc} \mathrm{ch\,} l & \mathrm{sh\,} l & 0\\ \mathrm{sh\,} l & \mathrm{ch\,} l & 0\\ 0 & 0 & 1 \end{array}\right) \\
R_\theta=\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta\\ 0 & \sin\theta & \cos\theta \end{array}\right)\,.
\end{eqnarray*}
We can choose coordinates in such a way that $\gamma=M(l)$ and $\delta=R_\theta M(m) R_{-\theta}$ where $l$ (resp. $m$) is the length of $C_\gamma$ (resp. $C_\delta$) and $\theta$ is the angle between $C_\gamma$ and $C_\delta$. Thus we have that
\begin{eqnarray*}
x^0(\gamma)=\left(\begin{array}{l} 0\\0\\1\end{array}\right) & x^0(\delta)=\left(\begin{array}{l} 0\\-\sin\theta\\ \cos\theta\end{array}\right)\,.
\end{eqnarray*}
By an explicit computation we have that
\[
w=\left(\begin{array}{l} \sin\theta(\mathrm{ch\,} m-1)\mathrm{sh\,} l\\ -\sin\theta(\mathrm{ch\,} m -1)(\mathrm{ch\,} l+1)\\ \mathrm{sh\,} l\mathrm{sh\,} m + \cos\theta(\mathrm{ch\,} l+1)(\mathrm{ch\,} m-1) \end{array}\right)
\]
is a generator of $\ker(\delta\gamma-1)$. In order to compute a generator of $\ker(\alpha-1)$ notice that $\ker(\alpha-1)=\ker (\delta\gamma-\gamma\delta)$. The latter is a skew-symmetric matrix, so it is straightforward to compute its kernel. By performing such a computation it turns out that $\ker(\alpha-1)$ is generated by
\[
v=\left(\begin{array}{l} \mathrm{sh\,} l\mathrm{sh\,} m + (\mathrm{ch\,} l-1)(\mathrm{ch\,} m-1)\cos\theta\\ -\mathrm{sh\,} m(\mathrm{ch\,} l-1) - \mathrm{sh\,} l(\mathrm{ch\,} m-1)\cos\theta\\ -\sin\theta\mathrm{sh\,} l(\mathrm{ch\,} m-1) \end{array}\right)\,.
\]
By an explicit computation we have
\[
\delta v= \left(\begin{array}{l} \mathrm{sh\,} l\mathrm{sh\,} m - (\mathrm{ch\,} l-1)(\mathrm{ch\,} m-1)\cos\theta\\ -\mathrm{sh\,} m(\mathrm{ch\,} l-1) + \mathrm{sh\,} l(\mathrm{ch\,} m-1)\cos\theta\\ +\sin\theta\mathrm{sh\,} l(\mathrm{ch\,} m-1) \end{array}\right)\,.
\]
So we obtain
\[
v-\delta v =2(\mathrm{ch\,} m-1)\left(\begin{array}{l} (\mathrm{ch\,} l-1)\cos\theta\\ -\mathrm{sh\,} l\cos\theta\\ -\mathrm{sh\,} l\sin\theta\end{array}\right)\,.
\]
Notice that $W$ is generated by $x^0(\gamma),w,v-\delta v$. On the other hand an easy computation shows
\[
\mathrm{det}\left[\begin{array}{lll}
0 & \sin\theta(\mathrm{ch\,} m-1)\mathrm{sh\,} l & (\mathrm{ch\,} l-1)\cos\theta\\
0 & -\sin\theta(\mathrm{ch\,} m-1)(\mathrm{ch\,} l+1) & -\mathrm{sh\,} l\cos\theta\\
1 & \mathrm{sh\,} l\mathrm{sh\,} m +\cos\theta(\mathrm{ch\,} l+1)(\mathrm{ch\,} m-1) & -\mathrm{sh\,} l\sin\theta
\end{array}\right]=0\,.
\]
Indeed, expanding along the first column the determinant equals $\sin\theta\cos\theta(\mathrm{ch\,} m-1)\big[(\mathrm{ch\,} l+1)(\mathrm{ch\,} l-1)-\mathrm{sh\,}^2 l\big]=0$. Thus $\dim W\leq 2$; since $x^0(\gamma)$ and $w$ are linearly independent, the dimension of $W$ is exactly $2$.
\nopagebreak\par\rightline{$_\blacksquare$}
\emph{Proof of Proposition \ref{sec.5.3-sum-prop}:} We use the notation introduced above. In particular let $p_0\in\tilde F'$ be a base point. For a given weighted curve $(A,a)$ we will denote by $(A,a)[\gamma]\in\mathbb{R}^{2+1}$ the value at $\gamma$ of the cocycle corresponding to $(A,a)$ computed with base point $p_0$.
\begin{figure}
\begin{center}
\input{MGL_fig_cap3_cocycle.pstex_t}
\caption{}\label{sec.5.3-cocycle-fig}
\end{center}
\end{figure}
Now we want to show that under the assumptions of the proposition there exist \emph{positive} constants $a,b$ such that
\begin{equation}\label{sec.5.3-target-eq}
(C,c)[\beta] + (D,d)[\beta] = (C_\alpha,a)[\beta] + (C_{\delta\gamma},b)[\beta] \qquad\textrm{ for all }\beta\in\pi_1(F)\,.
\end{equation}
By an application of the Van Kampen theorem we know that $\pi_1(F)$ is the amalgamation of the stabilizer $\pi_1(F')$ of $\tilde F'$ with the stabilizer of $\tilde X$ along the stabilizer of the geodesic $\tilde F'\cap \tilde X$. We have that the stabilizer of $\tilde X$ is the free group on $\gamma$ and $\delta$ whereas the stabilizer of $\tilde F'\cap \tilde X$ is the group generated by $\alpha$.\par
Notice that for all $\beta\in\pi_1(F')$ all terms involved in expression (\ref{sec.5.3-target-eq}) are zero. Thus it is sufficient to find $a,b\in\mathbb R_+$ such that
\begin{equation}\label{sec.5.3-target2-eq}
\left\{\begin{array}{l}
(C,c)[\gamma]+(D,d)[\gamma]=(C_\alpha,a)[\gamma]+(C_{\delta\gamma},b)[\gamma]\\
(C,c)[\delta]+(D,d)[\delta]=(C_\alpha,a)[\delta]+(C_{\delta\gamma},b)[\delta]\,.
\end{array}\right.
\end{equation}
Thus let us compute the terms in this expression. By an analysis of Fig.~\ref{sec.5.3-cocycle-fig} we obtain
\[
\left\{\begin{array}{ll}
(C,c)[\gamma]=0 & (D,d)[\gamma]=-dx^0(\delta) \\
(C_\alpha,a)[\gamma]=a(1-\gamma)x^0(\alpha) & (C_{\delta\gamma},b)[\gamma]=-bx^0(\gamma\delta)=-b\gamma(x^0(\delta\gamma))\\
(C,c)[\delta]=c\,x^0(\gamma) & (D,d)[\delta]=0 \\
(C_\alpha,a)[\delta]=a(1-\delta)x^0(\alpha) & (C_{\delta\gamma},b)[\delta]=b(x^0(\delta\gamma))\,.
\end{array}\right.
\]
Thus equation (\ref{sec.5.3-target2-eq}) is equivalent to the system
\begin{equation}\label{sec.5.3-target-eq2}
\left\{\begin{array}{l}
a(1-\delta)x^0(\alpha)+b x^0(\delta\gamma) = c x^0(\gamma)\\
a(1-\gamma)x^0(\alpha)-b\gamma x^0(\delta\gamma)=-d x^0(\delta)\,.
\end{array}\right.
\end{equation}
By Lemma \ref{sec.5.3-comp.-lemma} each equation of this system has a unique solution depending linearly on the weights $c$ and $d$. Thus there exists a real number $k$ such that the solution of the first equation coincides with the solution of the second one (i.e. the system (\ref{sec.5.3-target-eq2}) has a solution) if and only if $c/d=k$. \par
In order to compute the coefficient $k$ notice that it is sufficient to compute $b$ in both equations.
Now take the first equation and consider the scalar product of each term with $x^0(\delta)$. We have
\[
b\E{x^0(\delta\gamma)}{x^0(\delta)}=c\E{x^0(\gamma)}{x^0(\delta)}
\]
so
\[
b=c\frac{\E{x^0(\gamma)}{x^0(\delta)}}{\E{x^0(\delta\gamma)}{x^0(\delta)}}= c\frac{\cos\theta(\gamma,\delta)}{\cos\theta(\delta,\gamma\delta)}\,.
\]
On the other hand by taking the scalar product of the second equation with $x^0(\gamma)$ we get
\[
-b\E{\gamma x^0(\delta\gamma)}{x^0(\gamma)}=-d\E{x^0(\delta)}{x^0(\gamma)}
\]
so that we have
\[
b=d\frac{\E{x^0(\gamma)}{x^0(\delta)}}{\E{x^0(\gamma)}{x^0(\gamma\delta)}} =d\frac{\cos\theta(\gamma,\delta)}{\cos\theta(\gamma,\gamma\delta)}\,.
\]
Thus the system (\ref{sec.5.3-target-eq2}) has a solution if and only if
\[
\frac{c}{d}=\frac{\cos\theta(\delta,\delta\gamma)}{\cos\theta(\gamma,\delta\gamma)}\,.
\]
Notice that this argument is valid only in the case $\theta(\gamma,\delta)\neq\frac{\pi}{2}$. On the other hand, since $k$ depends continuously on $\theta(\gamma,\delta)$, the formula holds also in the case $\theta(\gamma,\delta)=\frac{\pi}{2}$.\par
Now we have to show that in the case $\frac{c}{d}=k$ the solutions $a,b$ of equations (\ref{sec.5.3-target-eq2}) are non-negative. From the above calculation it follows that $b\geq 0$ and $b=0$ if and only if $\theta(\gamma,\delta)=\pi/2$. In order to compute $a$ notice that the second equation in (\ref{sec.5.3-target-eq2}) is equivalent to the following
\[
a(\delta-\delta\gamma)x^0(\alpha)-bx^0(\delta\gamma)=-dx^0(\delta)\,.
\]
By adding this equation to the first one of (\ref{sec.5.3-target-eq2}) we get
\[
a(1-\delta\gamma)x^0(\alpha)=c x^0(\gamma)-d x^0(\delta)\,.
\]
(Notice that by taking the scalar product of this equation with $x^0(\delta\gamma)$ we recover the condition on $k$.) Thus by taking the scalar product with $x^0(\alpha)$ we obtain
\[
a(1-\E{\delta\gamma x^0(\alpha)}{x^0(\alpha)})= c \E{x^0(\gamma)}{x^0(\alpha)}-d\E{x^0(\delta)}{x^0(\alpha)}\,.
\]
Now a careful analysis of Figure \ref{sec.5.3-cocycle-fig} shows that
\[
\begin{array}{l}
\E{\delta\gamma x^0(\alpha)}{x^0(\alpha)}=\E{\gamma x^0(\alpha)}{\delta^{-1} x^0(\alpha)}<0\\
\E{x^0(\alpha)}{x^0(\gamma)}>0\\
\E{x^0(\alpha)}{x^0(\delta)}<0\,.
\end{array}
\]
Thus it follows that $a >0$.
\nopagebreak\par\rightline{$_\blacksquare$}
\begin{remark} \emph{ If $\theta(\gamma,\delta)=\frac{\pi}{2}$ the process ends at the first step. Thus it turns out that if the angle between the geodesics $C_\gamma$ and $C_\delta$ is $\pi/2$ then the sum is always a weighted multicurve (actually it has either one component $(C_\alpha,a)$ or two components $(C_\alpha,a)+(C_\gamma,c-kd)$).}
\end{remark}
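\medskip\par\noindent
{\bf A schematic implementation of the recursion}
\medskip\par\noindent
For concreteness, the recursive construction of this section can be organized as in the following Python-style sketch. The helper routines \texttt{mul} (the product in $\pi_1(F)$), \texttt{r\_ratio} (computing $r(\gamma,\delta)$) and \texttt{prop\_split} (returning the weights $a,b$ of Proposition~\ref{sec.5.3-sum-prop}) are assumed to be supplied; their names are ours, and the sketch only mirrors the case analysis of the inductive step.
\begin{verbatim}
def sum_weighted_curves(gamma, delta, c, d, mul, r_ratio, prop_split,
                        tol=1e-12, max_steps=1000):
    """Returns (a, rest): the weight a of C_alpha and the remaining
    weighted curves [(element, weight)] when the process stops; if it
    does not stop, the truncated output approximates the sum
    lamination (C,c)+(D,d)."""
    # Up to exchanging gamma with delta, assume c/d >= r(gamma, delta).
    if c / d < r_ratio(gamma, delta):
        gamma, delta, c, d = delta, gamma, d, c
    a = 0.0
    for _ in range(max_steps):
        if d <= tol:              # process ends: (C_gamma, c) is left
            return a, [(gamma, c)]
        rk = r_ratio(gamma, delta)
        # (C_gamma, rk*d) + (C_delta, d) = (C_alpha, da) + (C_gd, b)
        da, b = prop_split(gamma, delta, rk * d, d)
        a += da
        gd = mul(gamma, delta)    # the element gamma_k * delta_k
        c_rest = c - rk * d
        if b <= tol:              # first case of the inductive step
            c, d = c_rest, 0.0
        elif c_rest <= tol:       # second case: c_k = r_k d_k
            gamma, c, d = gd, b, 0.0
        elif b / c_rest >= r_ratio(gamma, gd):
            gamma, delta, c, d = gd, gamma, b, c_rest
        else:
            delta, c, d = gd, c_rest, b
    return a, [(gamma, c), (delta, d)]   # truncated approximation
\end{verbatim}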
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}
Energetic ions accelerated in corotating interaction regions (CIRs) have elemental abundances very close to the fast solar wind composition, except for the overabundance of $^{4}$He, Ne and C \cite{lab1}. The overabundance of $^{3}$He has recently been reported by Mason \etal \cite{lab2}, suggesting that remnant impulsive flare ions are accelerated in CIRs. The singly ionized interstellar He, Ne and inner source C pick-up ions provide another population which is accelerated in CIRs \cite{lab3,lab4}. Although many compositional features of the suprathermal heavy ions in CIRs are known, the relative contribution from different sources is not well understood. In this paper we report the elemental abundances of the suprathermal H, He, O, NeS and Fe in CIRs and discuss event-to-event variations of the abundance ratios over the long solar minimum period between January 2007 and December 2010.
\section{Observations}
The measurements presented here were made with the Suprathermal Ion Telescope (SIT) instruments \cite{lab5} onboard the STEREO-A and -B spacecraft. The SIT instrument is a time-of-flight mass spectrometer which measures ions from H to Fe in the energy range from 20 keV/n to several MeV/n.
\begin{figure*}[!t]
\vspace{-6 mm}
\centering
\includegraphics[width=13. cm]{icrc0159_fig01.eps}
\caption{Panel (A): SIT/STEREO-A 1-hr averaged He intensity \#/(cm$^{2}$\,s\,sr\,MeV/n) for 189, 384, and 787 keV/n. Panels (B - E): Event-averaged 386 keV/n He/H, 193 keV/n He/O, 137 keV/n NeS/O and 137 keV/n Fe/O elemental ratios. CIR events are marked by {\it filled circles}, SEP events by {\it crosses}. {\it Dashed lines} show abundances measured in various particle populations present in the heliosphere: CIR events ({\it blue}), LSEP events ({\it green}), ISEP events ({\it yellow}) and IP shock events ({\it red}).}
\label{fig01}
\end{figure*}
\begin{table*}[!t]
\begin{center}
\begin{tabular}{ccccccc}
\hline
& CIR$^{a}$ & CIR$^{b}$ & LSEP$^{c}$ & LSEP$^{d}$ & ISEP$^{e}$ & IP shock$^{f}$\\
& (150 keV/n) & (385 keV/n) & (358 keV/n) & ($>$300 keV/n) & (358 keV/n) & (750 keV/n) \\
\hline
He/H & [0.125$\pm$0.065] & -- & -- & [0.032$\pm$0.003] & -- & -- \\
He/O & 113$\pm$20 & [273$\pm$72] & [75.0$\pm$23.6] & 52$\pm$4 & [54$\pm$14] & [44.4$\pm$14.4] \\
NeS/O & 0.48$\pm$0.13 & [0.477$\pm$0.017]& [0.675$\pm$0.020] & -- & [1.158$\pm$0.022]& 0.678$\pm$0.014 \\
Fe/O & [0.08$\pm$0.03] & 0.088$\pm$0.007 & [0.404$\pm$0.047] & 0.24$\pm$0.03 & [0.95$\pm$0.005] & [0.236$\pm$0.01]\\
\hline
\end{tabular}
\begin{tabular}{l}
\hspace{-18mm} Note: The ratios in square brackets correspond to the dashed lines in Figure \ref{fig01}.\\
\hspace{-18mm} $^{a}$Average of 17 CIR events during solar minimum between December 1992 and July 1995 \cite{lab6}.\\
\hspace{-18mm} $^{b}$Average of 41 CIR events during solar cycle 23 between November 1, 1997 and June 1, 2007 \cite{lab2}.\\
\hspace{-18mm} $^{c}$Average of 64 LSEP events between November 1997 and January 2005 \cite{lab7}.\\
\hspace{-18mm} $^{d}$Average of 10 LSEP events between late 1977 and early 1981 \cite{lab8}.\\
\hspace{-18mm} $^{e}$Average of 20 ISEP events between September 1997 and April 2003 \cite{lab9}.\\
\hspace{-18mm} $^{f}$Average of 72 IP shocks between October 1997 and September 2002 \cite{lab10}.
\end{tabular}
\caption{Heavy ion abundances.}\label{table1}
\end{center}
\end{table*}
Figure \ref{fig01} provides an overview of the data over the period from January 2007 to December 2010.
During the investigated period the monthly mean sunspot number taken from the NOAA never exceeded 15, indicating a low solar activity level. Panel (A) shows suprathermal 1-hr averaged He ion intensity \#/(cm$^{2}$\,s\,sr\,MeV/n) for 189, 384, and 787 keV/n measured by SIT-A. Panels (B - E) show event-integrated abundance ratios He/H, He/O, NeS/O and Fe/O. Shown are abundances where 1-hr averaged 189 keV/n He ion intensities exceed 5 particles/(cm$^{2}$\,s\,sr\,MeV/n). We use SIT pulse height analysis data to determine the abundance ratios. The horizontal dashed lines present average abundances for CIR \cite{lab2,lab6}, large solar energetic particle (LSEP) \cite{lab7,lab8}, impulsive SEP \cite{lab9} and interplanetary shock (IP) \cite{lab10} events listed in Table \ref{table1}. The filled circles in Figure \ref{fig01} indicate CIR events. We identify CIR events using the list of CIRs compiled by the STEREO magnetometer team at the University of California Los Angeles. The CIR events in the period January 2007-September 2009 were examined in \cite{lab11}. The events marked by crosses show a sharp rise in the intensity of relativistic electrons. The list of STEREO electron events has been compiled by the SEPT instrument team at Universit\"{a}t Kiel. Taken together with the elemental abundances, the intensity increases marked by crosses are likely related to SEP events. Figure \ref{fig01} shows that the CIR event He/H ratios during the period 2007-2009 are consistent with the average He/H ratio observed in the previous solar minimum \cite{lab6}. Large event-to-event variations of the He/H in the CIRs occurred in 2010 when the SEP event activity considerably increased. In 2007-2009 the CIR event He/O ratios were close to the average He/O ratio obtained in the earlier surveys \cite{lab2,lab6}. The CIR event He/O ratio also showed a large spread in 2010. In contrast to the He/H and He/O, there is no observed increase in the scatter of the CIR Fe/O and NeS/O ratios in 2010. Notice in Figure \ref{fig01} that the CIR NeS/O ratio stays relatively constant in 2010. An interesting feature seen in Figure \ref{fig01} is the shape of the temporal variation of the CIR event Fe/O ratios between January 2007 and the beginning of 2009. The Fe/O ratios show local minima near the beginning and end of 2007 and near the end of 2008. Although there is some scatter in the data points, the local maxima in the Fe/O have a tendency to occur in the middle of 2007 and 2008. Another interesting feature is that during the Fe/O minimum at the end of 2007 the He/O and He/H ratios show enhancements. The behavior seen in the ratios He/H, He/O and Fe/O is not apparent in the temporal profile of the NeS/O ratio.
\begin{figure}[!b]
\vspace{-5mm}
\centering
\includegraphics[width=7.5 cm]{icrc0159_fig02.eps}
\caption{CIR event abundance ratios. Panels (A-B): Fe/O for 97 and 137 keV/n. Panels (C-D): He/O for 193 and 273 keV/n. Panels (E-F): He/H for 386 and 546 keV/n. {\it Shaded areas} are described in the text.}
\label{fig02}
\end{figure}
In Figure \ref{fig02} we explore in more detail the variations of the CIR event elemental ratios in the period January 2007-December 2009. Panels (A) and (B) show Fe/O ratios for energies 97 and 137 keV/n; panels (C) and (D) He/O ratios for 193 and 273 keV/n; panels (E) and (F) He/H ratios for 386 and 546 keV/n.
The wider shaded bar denotes the approximate period (October 2007-February 2008) when both the He/O and He/H elemental ratios show a local increase (by a factor of $\sim$ 2.5) and the Fe/O ratio shows a decrease (by a factor of $\sim$ 5). The narrow bar marks another such period from the middle of October 2008 to the middle of December 2008. In addition to these two intervals the He/H and Fe/O ratios have a local maximum and a minimum, respectively, at the beginning of 2007. Thus, the He/H and Fe/O ratios show variations on a nearly annual basis in 2007-2008. Similar, but less pronounced, variations in the He/O and He/H ratios are also seen on STEREO-B. The Fe/O ratio on STEREO-B shows a random spread about the nominal value. This can be due to a poorer mass resolution in SIT-B caused by noise in the detector. Figure \ref{fig03} shows a scatter plot of the CIR Fe/O ratios versus the corresponding standard deviations. This figure shows no relation between the ratio and its statistical error. This indicates that the trend in the variations of the Fe/O ratios seen in Figures \ref{fig01}-\ref{fig02} is not accidental (a minimal sketch of such a ratio-plus-error estimate from raw counts is given at the end of this paper).
\begin{figure}[!t]
\vspace{-5mm}
\centering
\includegraphics[width=6.cm]{icrc0159_fig03.eps}
\caption{CIR Fe/O ratios for 137 keV/n vs. corresponding statistical errors.}
\label{fig03}
\end{figure}
Figure \ref{fig04} compares elemental abundances for selected CIR events in 2008 and 2010. Panel (A) shows SIT-A 1-hr He ion intensity in five energy channels. Panel (B) shows solar wind speed from the PLASTIC instrument \cite{lab12}. Panels (C - F) show 1-hr averages of the He/H, He/O, NeS/O and Fe/O abundance ratios. The shaded bars indicate the compression regions from the previously cited list of CIRs. The CIR events in February-March 2008 (left side in Figure \ref{fig04}) have characteristic corotating elemental abundances, while the CIR events in March-April 2010 (right side) have He/H and He/O ratios decreased to the SEP abundances. The NeS/O and Fe/O ratios in March-April 2010 remained at corotating values and essentially do not differ from the abundances observed in February-March 2008.
\section{Discussion}
Using data from the PLASTIC instrument aboard STEREO-A, Drews \etal \cite{lab13} reported on enhancements of He$^{+}$ and Ne$^{+}$ pick-up ions during the helium cone traversal around November 6, 2007 and October 1, 2008 with an approximate half width of 54 days. The authors observed that the He focusing cone was more pronounced on November 6, 2007 than on the second passage around October 1, 2008.
\begin{figure*}[t!]
\vspace{-5mm}
\centering
\includegraphics[width=15. cm]{icrc0159_fig04.eps}
\caption{Panels (A): SIT/STEREO-A 1-hr averaged He intensity \#/(cm$^{2}$\,s\,sr\,MeV/n) for 189, 269, 384, 550 and 787 keV/n. Panels (B): Solar wind speed. Panels (C - F): 1-hr averaged 386 keV/n He/H, 193 keV/n He/O, 137 keV/n NeS/O and 137 keV/n Fe/O elemental ratios. {\it Grey shaded} regions mark the time intervals of the CIRs. {\it Dashed lines} present abundances in various particle populations (see Figure \ref{fig01}).}
\label{fig04}
\end{figure*}
The observations by the SIT-A instrument show that the approximate start of the period of He/H and He/O enhancement and Fe/O depletion in October 2007 matches well with the start time of the pick-up ion enhancements reported in \cite{lab13}. This suggests that the enhanced He/H and He/O ratios at high energies might result from an enhanced production rate of the pick-up He$^{+}$ seed population entering the CIR acceleration.
The pattern observed by SIT-A remained until January-February 2008, exceeding the period of the pick-up ion enhancement observed in \cite{lab13}. In addition to the He/H maximum at the end of 2007 we found two other maxima, one near the beginning of 2007 and the other around the end of 2008. The timing of all three increases agrees well with the yearly passage of the He focusing cone. Kallenbach \etal \cite{lab14} discussed that the suprathermal He$^{+}$/He$^{2+}$ abundance ratio reflects the annual variations of the He$^{+}$ pick-up ions. In contrast, M\"{o}bius \etal \cite{lab3} and Kucharek \etal \cite{lab15} did not find signatures of the gravitational focusing cone in the observations of the He$^{+}$/He$^{2+}$ ratio in the energetic population. The authors discussed that injection and acceleration conditions masked the He$^{+}$ pick-up ion variations. Note that the observations reported in \cite{lab3,lab15} were acquired over a period relatively close to the sunspot maximum of solar cycle 23. The observations reported in this survey were performed during a prolonged solar minimum period under very simple solar wind conditions dominated by stably recurring CIRs \cite{lab11}. This probably led to much more uniform injection and acceleration conditions in the CIRs, making it possible to see the signature of the focusing cone. Drews \etal \cite{lab13} noted that O$^{+}$ pick-up ions were distributed evenly in time and did not show any enhancement during the focusing cone traversal. Therefore the variations of the Fe/O observed by the SIT are likely due to other causes and need further investigation. We note that the negative correlation between the He/O and Fe/O ratios, apparent from our observations, has been previously reported in \cite{lab2} with no definitive conclusion. The authors suggested temporal or solar cycle effects. We observed a number of CIR events in 2010 with He/H and He/O abundance ratios decreased to the SEP composition while the NeS/O and Fe/O ratios remained close to the corotating abundances. It is interesting that the changes in the CIR composition appeared in the period of enhanced SEP activity. These observations are consistent with previous suggestions that CIRs reaccelerate particles from earlier SEP events \cite{lab16}. The intensity of the heavier ions in the SEP seed population was probably too low to change the NeS/O and Fe/O abundances in the reported CIR events.
\vspace{3 mm}
This work was supported by the Bundesministerium f\"ur Wirtschaft under grant 50 OC 0904. The work at the Johns Hopkins University/Applied Physics Laboratory was supported by NASA under contract SA4889-26309 from the University of California Berkeley.
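\vspace{3 mm}
As a side note on the statistical errors shown in Fig.~\ref{fig03}, an event-integrated abundance ratio and its error can be estimated from raw particle counts by standard Poisson error propagation. The following minimal sketch is our own illustration (it is not the SIT analysis pipeline, and the counts used are placeholders):
\begin{verbatim}
import numpy as np

def abundance_ratio(n_fe, n_o):
    """Fe/O ratio and its statistical error from event-integrated
    counts n_fe and n_o in matched energy bins, assuming Poisson
    statistics (sigma_N = sqrt(N))."""
    r = n_fe / n_o
    sigma = r * np.sqrt(1.0 / n_fe + 1.0 / n_o)
    return r, sigma

# e.g. 40 Fe counts and 450 O counts give Fe/O = 0.089 +/- 0.015
print(abundance_ratio(40.0, 450.0))
\end{verbatim}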
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}
Traditionally, grasping has constituted one of the most fundamental manipulation primitives, being ubiquitous in pick-and-place tasks. Unfortunately, grasping is usually treated as a final condition on a stationary object \cite{prattichizzo2016grasping} rather than a process, ignoring the fact that the manipulator must initially reach and interact with the object. Because of this, grasp synthesis, through criteria such as force and form closure, relies heavily on three assumptions: 1) Perfect modeling of the object geometry, 2) Exact knowledge of the object pose, and 3) Perfect tracking of the grasp plan. Unfortunately, none of these assumptions is valid in the real world when considering uncertainty, allowing for robustness only in the wrench dimension. Hence, traditional criteria for assessing grasp quality are unable to account for unexpected scenarios (e.g. a finger making contact with a facet before the others).\vspace{6pt} This ``static'' approach to grasping makes precise manipulation a very volatile process, as small modelling errors can lead to large undesired effects. Hence, applications that require precise knowledge of the object pose with respect to the manipulator, such as assembly, become very challenging under the presence of slight uncertainty \cite{yu2016iros}. In this context, the ability to formally certify that the object will be driven to a desired final pose, under the presence of some pose uncertainty, would significantly mitigate the dependence on perfect sensing, tracking and modeling.\vspace{6pt} In this work, we study grasping as a process that accounts for bounded pose uncertainty and develop a mathematical framework that provides formal certificates of success. To do this, we rely on the non-prehensile tool of planar caging, which bounds the mobility of the object over different conditions. Then, we can generate a trajectory that drives an uncertainty set towards the desired object configuration, under which the object pose can always be retrieved from proprioceptive sensing (observability). This allows us to provide a formal certificate of success over the grasping process, ensuring that the object will be grasped despite its initial pose uncertainty. This framework can then be transcribed as an optimization problem. Therefore, this work presents three main contributions:
\begin{itemize}
\item \textbf{Mathematical Framework:} We develop a set of convex-combinatorial conditions to certifiably drive a set of initial configurations towards a goal object configuration. This formulation is able to handle arbitrary piece-wise polygonal objects and point-contact manipulators.
\item \textbf{Optimization Model:} Using our mathematical framework, we transcribe its conditions as a union of convex constraints. This is posed as a Mixed-Integer optimization problem, which can always be solved to global optimality.
\item \textbf{Experimental Validation:} We validate our model by grasping a set of random polygonal objects under uncertainty in simulation. Then, we perform a set of sensorless robotic grasping experiments over planar objects with different initial configurations certified by our model.
\end{itemize}
The remainder of this paper is organized as follows. Section II presents an overview of previous work relevant to this paper. Section III presents the main concepts and the notation used in this work. Section IV provides an overview of the framework proposed. Section V describes the caging model adopted.
Section VI develops the conditions required to funnel the object pose uncertainty in configuration space. Section VII develops a theory of observability under a planar grasp. Section VIII presents an optimization formulation for certified grasping and the results obtained from its implementation in a simulated environment and a real robotic system. Finally, Section IX discusses the contributions of this work.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{Figures/f2_overview.pdf}
\caption{\small Approach outline: At the first step (c), all initial configurations of the object (a) are caged by the manipulator (b). At the second step (d), the manipulator reaches towards a final grasp, funneling all initial conditions. Finally, the grasp is certified as observable (e) by recovering the object pose from proprioceptive sensing with mapping $F^{-1}$. As a result, we obtain a grasp plan with a certificate of success (f). }
\label{fig:f2}
\vspace{-12pt}
\end{figure*}
\section{Related Work}
This section reviews some of the previous research relevant to the work presented in this paper.
\subsection{Grasping}
Most grasp-synthesis algorithms optimize over a static pose of the object, performing a search over different contact configurations while maximizing some metric of quality \cite{bicchi2000robotic,shimoga1996robot,ferrari1992planning,dai2018synthesis,hang2017framework}. More recently, the paradigm has shifted towards data-driven methods \cite{bohg2014data,pinto2016supersizing,zeng2018icra} thanks to the availability of large data-sets, although these are unable to provide any type of certificate of success. A few other approaches have integrated uncertainty factors in grasping, either in the wrench component by force analysis \cite{zheng2005coping}, in the pose component by exploration or sampling \cite{dang2014stable,zhou2017probabilistic,johnson2016convergent}, or in the shape of the object through a topological analysis \cite{li2016dexterous}. As mentioned above, these methods either restrict the plan to the terminal contact locations in the grasp or are unable to provide certificates of success.
\subsection{Caging}
On the other hand, to cage an object is to bound its mobility such that it cannot escape from the manipulator. Initially proposed by Kuperberg in \cite{kuperberg1990problems}, and introduced to the robotics community by Rimon and Blake in \cite{rimon1996caging}, caging has always been a promising tool for manipulation under uncertainty \cite{rodriguez2012caging,wan2012grasping,mahler2016energy,pereira2004decentralized,varavafree2018} as it does not assume perfect knowledge of the object pose. Therefore, we will rely on the convex-combinatorial model derived by Aceituno-Cabezas et al. in \cite{aceituno-cabezas2019icra}, which allows planar caging to be posed as a global optimization problem. Through this approach, we can integrate caging constraints as part of the grasping process, allowing us to deal with initial uncertainty in the object pose. It is important to remark that, as proven in \cite{rodriguez2012caging}, not every cage can work as a way-point towards a grasp.
\subsection{Sensorless manipulation}
Also relevant to this work is the research on sensorless manipulation or compliant motion planning \cite{lozano1984automatic,erdmann1988exploration,goldberg1993orienting}.
Initially studied by Lozano-Perez in \cite{lozano1984automatic}, most of these approaches traditionally perform a backward analysis in order to generate motion plans that can drive a rigid body, in an initially unknown configuration, towards a single goal, or exploit the structure of the environment and the object to mitigate uncertainty \cite{nilles2018wafr}. Although many of these algorithms could cope with large initial pose uncertainty, they are restricted to specific geometric conditions of the object and the environment.
\section{Preliminaries} \label{sec:preliminaries}
\begin{figure}[b]
\centering
\vspace{-12pt}
\includegraphics[width=0.9\linewidth]{Figures/fl_caginges.pdf}
\caption{\small Example of a cage with four point-fingers in the workspace (left) and in a $\mathcal{C}-$slice (right). The loop created by the $\mathcal{C}-$obstacles encloses a compact connected component (blue).}
\label{fig:cage_Ex}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\linewidth]{Figures/f3_cage_slice1_jose.pdf}
\caption{\small Illustration of the caging model at each slice, with $N = 4$, $M = 2$, $R = 6$, and $L = 8$ (a). The model forms a polygonal loop in the slice of the $\mathcal{C}-$space (b), building a graph of polygonal interconnections that encloses the object (c). Then, the object is verified to be enclosed by the loop by checking that an infinite ray intersects the loop only once (d). Finally, each finger is assigned to a collision-free region (e).}
\label{fig:f3}
\vspace{-12pt}
\end{figure*}
This section describes the basic concepts that will be used and referred to throughout the paper.
\subsection{Relevant Notation}
Given an object $\mathcal{O}$ in a workspace $\mathcal{W}$, we will denote its \textit{Configuration Space} or $\mathcal{C}-$space \cite{lozano1983spatial} as $\mathcal{C}(\mathcal{O})$, which in the planar case is equivalent to SE(2). Then, we refer to the Free Space \cite{Rodriguez2012} of $\mathcal{O}$ as $\mathcal{C}_{free}(\mathcal{O},t)$, at a specific time $t$. Note that we can obtain $\mathcal{C}_{free}(\mathcal{O},t)$ by inflating obstacles in $\mathcal{W}$ through a Minkowski sum with $\mathcal{O}$. These inflated obstacles will be referred to as $\mathcal{C}$-obstacles. Finally, we will refer to a plane of $\mathcal{C}(\mathcal{O})$ with fixed orientation component as a $\mathcal{C}-$slice.
\subsection{Caging}
To cage an object is to restrict its mobility, such that there exists no path that can drive it arbitrarily far from its initial configuration. Formally, we treat caging, at a time $t$, under the following definition \cite{rodriguez2012caging}:\vspace{6pt} \textbf{Definition 1 (Caging)}: {An object $\mathcal{O}$ in a configuration $q(t)$ is caged if $q(t)$ lies in a compact-connected component of $\mathcal{C}_{free}(\mathcal{O},t)$, denoted as $\mathcal{C}^{compact}_{free}(\mathcal{O},t)$.}\vspace{6pt} An illustration of this definition is shown in Fig. \ref{fig:cage_Ex}. Also of relevance to this work is the model derived in \cite{aceituno-cabezas2019icra}, which allows caging to be posed as an optimization problem.
For this, we define the concept of limit orientations:\vspace{6pt} \textbf{Definition 2 (Limit Orientations)}: {Given a compact-connected component $\mathcal{C}^{compact}$ in SE(2), its limit orientations are those which bound $\mathcal{C}^{compact}$ over the $\theta$ component.}\vspace{6pt} Then, we rely on the following conditions, necessary for a planar cage:
\begin{enumerate}
\item Either the object can rotate a full 360$^\circ$ without penetrating a $\mathcal{C}-$obstacle or $\mathcal{C}^{compact}_{free}(\mathcal{O},t)$ is bounded by two limit orientations.
\item In every $\mathcal{C}-$slice between the two limit orientations, if these exist, there is a loop of $\mathcal{C}-$obstacles that encloses the projection of $q(t)$ onto that plane.
\item At the $\mathcal{C}-$slices of the limit orientations, if these exist, the transverse section of $\mathcal{C}^{compact}_{free}(\mathcal{O},t)$ has zero area, thus being reduced to either a line segment or a singleton.
\end{enumerate}
Using these conditions, \cite{aceituno-cabezas2019icra} derives a set of convex-combinatorial constraints to cage any piece-wise convex object.
\section{Approach Overview} \label{sec:overview}
This section provides an overview of our framework for grasping with formal certificates of success.
\subsection{Problem Description}
In order to develop our framework we treat the problem as follows, given:
\begin{itemize}
\item An object $\mathcal{O}$ segmented in $M$ convex polygons and with a boundary that consists of $L$ line segments.
\item A manipulator $\mathcal{M}$ comprised of $N$ point-contacts.
\item A configuration space $\mathcal{C}(\mathcal{O})$ sampled in $S$ $\mathcal{C}-$slices of constant orientation (see Fig. \ref{fig:f4}).
\item A set of $R$ convex regions that cover the complement of the object planar workspace.
\item A goal configuration of the object $q_G$,
\end{itemize}
find a manipulator path $\rho_\mathcal{M} = \lbrace \mathcal{M}(t) \ | \ t = 1,\dots,{N_T} \rbrace$ over $N_T$ time-steps and a set $Q_0 \subset \mathcal{C}(\mathcal{O})$, such that $\rho_\mathcal{M}$ will drive all configurations $q \in Q_0$ towards $q_G$.
\subsection{Mathematical Framework}
In order to solve this problem, we decouple the grasping process into three main stages, as shown in Fig. \ref{fig:f2}, distributed among $N_T$ time-steps:
\begin{enumerate}
\item \textbf{Stage 1} (Cage): At the start of the path ($t = 1$), the manipulator must capture all configurations in $Q_0$. To include this condition, we rely on the convex-combinatorial model derived by Aceituno et al. in \cite{aceituno-cabezas2019icra}, which we will describe in Sect. V.
\item \textbf{Stage 2} (Reach): After caging the object ($t = 2,\dots,N_T$), the manipulator must follow a non-penetration path over which the object remains caged, while the limit orientations gradually approach each other. Then, at the final time-step of the path, the $\mathcal{C}$-obstacles must reduce $\mathcal{C}^{compact}_{free}(\mathcal{O},N_T)$ to a singleton $\lbrace q_G \rbrace$.
\item \textbf{Stage 3} (Certify): Once the object is grasped ($t = N_T$), the object must be immobilized under an observable configuration, such that the manipulator can certify that the object lies in $q_G$ through proprioceptive sensing.
\end{enumerate}
Through this process, we can formally certify that any configuration of the object in the initial caging set will be driven towards the goal state by the manipulator.
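All three stages are expressed as convex constraints that are activated by binary decision variables, so the whole plan can be computed with a mixed-integer solver. As a preview of this building block, the following \texttt{cvxpy} sketch (our own illustration, with placeholder data) shows how a single implication $\mathbf{B} \Rightarrow A x \leq b$ enters such a program through the big-M encoding detailed in Sect. V:
\begin{verbatim}
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0]])  # placeholder half-plane data
b = np.array([1.0, 2.0])
M = 1.0e3        # M must bound A x - b over the workspace
ones = np.ones(2)

x = cp.Variable(2)                # e.g. one finger position
B = cp.Variable(boolean=True)     # e.g. one region assignment

# A x <= b is enforced when B = 1 and vacuous when B = 0.
constraints = [A @ x <= b + M * (1 - B) * ones, B == 1]
problem = cp.Problem(cp.Minimize(cp.sum_squares(x)), constraints)
problem.solve()  # requires a mixed-integer-capable solver
\end{verbatim}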
Then, using the constraints derived for each of these stages, we will formulate an optimization problem (\textbf{MIQP1}) for grasp-synthesis with formal certificates of success under bounded uncertainty.
\section{Stage 1: Caging Model} \label{sec:caging}
This section reviews the convex-combinatorial model for planar caging, introduced in \cite{aceituno-cabezas2019icra}, that we will apply at each time-step of our framework. By using this approach, we can incorporate cage constraints in the grasping process, which will also serve as the basis for bounding the initial uncertainty on the object pose. In this model, $\mathcal{C}(\mathcal{O})$ is sampled into $S$ $\mathcal{C}-$slices. At each slice, the $\mathcal{C}-$obstacles form a loop, composed of polygon intersections, that encloses the object configuration projection in that slice. Then, the model checks if there exists a pair of $\mathcal{C}-$slices (\textit{limit orientations}) that bound the component $\mathcal{C}_{free}^{compact}(\mathcal{O})$ by having zero area. Finally, a set of additional constraints is added to ensure that the component remains compact and connected in between $\mathcal{C}-$slices. Without any loss of generality, we assume that the object configuration lies at the origin of $\mathcal{C}(\mathcal{O})$. Below, we briefly describe the constraints of the model. The reader is referred to \cite{aceituno-cabezas2019icra} for details on implementation and proofs.
\subsection{Caging at each $\mathcal{C}$-slice}
At each slice there must exist a loop of $\mathcal{C}-$obstacles (fingers) that encloses the origin. For this, the model constructs a directed graph, where each node is a convex polygon from the $\mathcal{C}-$obstacles and each edge represents an intersection between polygons. To include this condition algebraically, \cite{aceituno-cabezas2019icra} introduces the following matrices: $$H_n \in \{0,1\}^{M \times M} \ \text{and} \ G_n \in \{0,1\}^{M \times M}$$ where $H_n$ encodes edges connecting $\mathcal{C}-$obstacles $n$ and $n+1$, while $G_n$ encodes the edges within the $n_{th}$ $\mathcal{C}-$obstacle. Then, the following constraints enforce the creation of the loop: \begin{eqnarray} \label{eq:transcage1} H_n(i,j) \Rightarrow \exists r_t \in \mathbb{R}^2 \ \text{s.t.} \ r_t \in \mathbf{P}_{i,n} \cap \mathbf{P}_{j,n+1} \\ H_n(i,j) \Rightarrow \exists k,l \ \text{s.t.} \ G_{n+1}(j,k) + H_{n+1}(j,l) = 1\\ G_n(p,q) \Rightarrow \exists s,r\neq p \ \text{s.t.} \ G_{n}(q,r) + H_{n}(q,s) = 1 \\ \sum_{i,j} H_n(i,j) = 1 \label{eq:transcagef} \end{eqnarray} where $ \mathbf{P}_{i,n}$ represents the $i_{th}$ polygon from $\mathcal{C}-$obstacle $n$. Here, the $\Rightarrow$ operator is integrated in the model through big-M formulation \cite{richards2005mixed} \footnote{For a binary $\mathbf{B}$, we have $\mathbf{B} \Rightarrow A x \leq b$ is equivalent to $A x + M \mathbf{B} \leq b + M$ with $M$ being a large positive number.}. Given the previous conditions, the model verifies if the origin is enclosed by the loop by checking if an infinite ray, starting at the origin, has an odd number of intersections with the loop. To include this condition, \cite{aceituno-cabezas2019icra} segments the space covering each edge from the graph into four regions parallel to the ray (above, below, left and right). Then, they introduce a matrix $ F(n,m,k) \in \{0,1\}^{N \times M \times 5} $, which encodes whether the ray intersects the edge starting in the $m_{th}$ polygon from the $n_{th}$ finger.
Here, the matrix verifies intersection by assigning the origin to one of these regions, taking $F(n,m,1) = 1$ as the assignment with intersection between the edge and the ray. Using this decomposition method, the constraints required to satisfy this condition are: \begin{eqnarray} \label{eq:enclose1} \begin{cases} \sum_{n,m} F(n,m,1) \ \text{is an odd number}\\ \label{eq:enclosef} \sum_{i = 1}^{5} F(n,m,i) = 1 \,\ \forall n,m \end{cases} \end{eqnarray} Finally, each finger is assigned to one of $R$ convex regions covering the complement of $\mathcal{O}$. Each of these regions is described as: $$ Re_i = \{ r \in \mathbb{R}^2 | A_i r \leq b_i \}, \ \bigcup_i Re_i = \mathcal{W} / \mathcal{O} $$ Then, a binary matrix $\mathcal{R} \in \{0,1\}^{N \times R}$ is introduced along with the constraints: \begin{eqnarray} \label{eq:regions} \begin{cases} \mathcal{R}_{i,j} \Rightarrow p_{i} \in Re_j\\ \sum_{j = 1}^R \mathcal{R}_{i,j} = 1, & \forall i \end{cases} \end{eqnarray} where the $\Rightarrow$ operator is included through big-M formulation. An illustration of all these constraints is shown in Fig. \ref{fig:f3}.
\begin{figure}[t]
\centering
\includegraphics[width=0.55\linewidth]{Figures/f4_cage_se3.pdf}
\caption{\small (a) Visual example of slicing and limit orientations (gray). (b) Projection of $\mathcal{C}_{free}^{compact}(\mathcal{O},t)$ (blue) on each slice.}
\vspace{-12pt}
\label{fig:f4}
\end{figure}
\subsection{Full Caging in SE(2)}
In order to determine which $\mathcal{C}-$slices must have an enclosing loop, the model includes a binary matrix $\Theta \in \{0,1\}^S$, which selects a pair of \textit{limit orientations}, where the component closes. For this, they introduce a matrix $T \in \{0,1\}^{S \times N \times L}$, which assigns each finger to a facet of $\mathcal{O}$, along with a subset of the facet assignments that lead to a limit orientation $\mathcal{L}_\mathcal{O}$. Then, the following constraints are included in the model: \begin{eqnarray} T_s \in \mathcal{L}_\mathcal{O} \Rightarrow \Theta_s = 1 \\ \theta_l = \text{min}_s \ \theta_s \ \text{s.t.} \ \Theta_s = 1 \\ \theta_u = \text{max}_s \ \theta_s \ \text{s.t.} \ \Theta_s = 1 \\ \theta_l \leq \theta_s \leq \theta_u \Rightarrow \text{\eqref{eq:transcage1}-\eqref{eq:regions}} \end{eqnarray} where $\theta_s$ is the orientation at the $s_{th}$ slice, and $(\theta_u,\theta_l)$ are the upper and lower limit orientations. An example is shown in Fig. \ref{fig:f4}. Finally, the model includes constraints on the continuity of the \textit{boundary variation}, in order to ensure that $\mathcal{C}_{free}^{compact}({\mathcal{O}})$ remains compact and connected between the slices. Details on these constraints and proofs of their necessity are given in \cite{aceituno-cabezas2019icra}.
\section{Stage 2: Reaching and $\mathcal{C}-$space contraction} \label{sec:convergence}
Given the initial cage on $\mathcal{O}$, the second stage of the process must drive all the bounded configurations towards the goal $q_G$. In this context, we can exploit the structure of the caging model described above, which allows us to define the bounds on $\mathcal{C}_{free}^{compact}(\mathcal{O},t)$. Therefore, in order to integrate this stage in the framework, we rely on the following remark:\vspace{6pt} \textbf{Remark 1} ($\mathcal{C}-$space contraction): Given an object $\mathcal{O}$, if its free-space at a given time $t$ has a compact connected component $ \mathcal{C}_{free}^{compact}(\mathcal{O},t)$ bounded between limit orientations $\theta_{l}(t)$ and $\theta_{u}(t)$,
\section{Stage 2: Reaching and $\mathcal{C}-$space contraction} \label{sec:convergence} Given the initial cage on $\mathcal{O}$, the second stage of the process must drive all the bounded configurations towards the goal $q_G$. In this context, we can exploit the structure of the caging model described above, which allows us to define the bounds on $\mathcal{C}_{free}^{compact}(\mathcal{O},t)$. Therefore, in order to integrate this stage in the framework, we rely on the following remark:\vspace{6pt} \textbf{Remark 1} ($\mathcal{C}-$space contraction): Given an object $\mathcal{O}$, if its free-space at a given time $t$ has a compact connected component $ \mathcal{C}_{free}^{compact}(\mathcal{O},t)$ bounded between limit orientations $\theta_{l}(t)$ and $\theta_{u}(t)$, then any collision-free manipulator path $\rho_\mathcal{M}$, where $\mathcal{M}(N_T)$ immobilizes $\mathcal{O}$ in $q_G$ by having $\frac{d }{dt} |\theta_{u}(t) - \theta_{l}(t)| < 0$ while keeping $\mathcal{C}_{free}^{compact}(\mathcal{O},t)$ compact and connected, will drive any configuration $q \in \mathcal{C}_{free}^{compact}(\mathcal{O},t=1)$ towards $q_G$. \vspace{6pt} A proof of this remark is described in Appendix A. Through the conditions specified in the remark, shown in Fig. \ref{fig:f5}, we can optimize a manipulator path that will drive $\mathcal{O}$ towards $q_G$. This also allows us to characterize the set of initial conditions for which the shape will certifiably converge to such configuration. Hence, by relying on the model described in Sect. V, we can transcribe a set of linear conditions for $\mathcal{C}-$space contraction as detailed below. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{Figures/f5_contract.pdf} \caption{\small Visual representation of Remark 1. The range of limit orientations (gray) decreases, converging at $t = N_T$.} \vspace{-12pt} \label{fig:f5} \end{figure} \subsection{Constraints} In order for the conditions detailed in Remark 1 to hold, we must require that: \begin{enumerate} \item There exists a cage for all time-steps $t = 1,\dots,N_T$. \item The object must remain caged in between time-steps. \item The separation between \textit{limit orientations} must decrease between time-steps, until they converge at $t = N_T$. \item The cage at $t = N_T$ must immobilize the object in $q_G$. \item The manipulator cannot penetrate the object at any point of the path. \end{enumerate} Algebraically, the conditions to have a cage at each time-step can be transcribed as: \begin{eqnarray} \label{eq:cage} \begin{cases} \sum_{s} \Theta_s(t) = 2\\ \theta_s(t) \in [\theta_l(t),\theta_u(t)] \Rightarrow \text{\eqref{eq:transcage1}-\eqref{eq:transcagef}} |_{(s,t)} \\ \end{cases} \end{eqnarray} for all $t = 1,\dots,N_T$. Then, in order to ensure that the cage does not break between time-steps, we introduce the following constraint at each slice: \begin{equation} \label{eq:non_break} H_n(i,j)|_{t=k} \Rightarrow \exists r_t \in \mathbb{R}^2 \ \text{s.t.} \ r_t \in \mathbf{P}_{i,n,k+1} \cap \mathbf{P}_{j,n+1,k+1} \end{equation} Note that this condition is sufficient as the intersection occurs between convex polygons and the path is linearly interpolated. Because of this, we introduce Remark 2:\vspace{6pt} \textbf{Remark 2:} Since all initial configurations of the object are caged at $t = 1$ and the enclosing loop does not break between adjacent time-steps, the conditions presented in Eq. \eqref{eq:enclose1} are trivially satisfied for all $t>1$.\vspace{6pt} Therefore, similar to Remark 2 from \cite{aceituno-cabezas2019icra}, we only constrain: \begin{eqnarray} \label{eq:enclose} \begin{cases} \sum_i F_{i,1}(t = 1) \ \text{is odd} & \\ \sum_{k} F_{i,k}(t = 1) = 1, & \forall i \in \{1,\dots,N M\} \end{cases} \end{eqnarray} Furthermore, for the final cage to fully immobilize the object, we require that there exist two similar \textit{limit orientations} at $t = N_T - 1$ which have the same facet assignment matrix, also enforced for $t = N_T$ (when limit orientations converge). Note that this reduces $\mathcal{C}_{free}^{compact}(\mathcal{O},t = N_T)$ to a singleton.
Algebraically, this constraint is added as: \begin{eqnarray} \begin{cases} \label{eq:limcont1} T_{u}(N_T) = T_{u}(N_T-1) = T_{l}(N_T-1) \in \mathcal{L}_{\mathcal{O}} \\ |\theta_{u}(N_T-1) - \theta_{l}(N_T-1)| \approx 0 \label{eq:limcont2} \end{cases} \end{eqnarray} Finally, we enforce that the limit orientations approach each other gradually through the constraints: \begin{eqnarray} \begin{cases} \label{eq:cont1}\theta_{u}(t+1) < \theta_{u}(t) \\ \label{eq:cont2}\theta_{l}(t) < \theta_{l}(t+1) \end{cases} \end{eqnarray} Then, integrating constraints \eqref{eq:cage}-\eqref{eq:cont1} in the convex-combinatorial model presented in \cite{aceituno-cabezas2019icra} satisfies the conditions presented in Remark 1.\vspace{6pt} \textbf{Constraining a set of certified initial conditions}: Constraining that $\mathcal{C}_{free}^{compact}(\mathcal{O})$ contains an arbitrary set of initial conditions $Q_0$ cannot, in general, be integrated as a convex constraint in the framework. However, we can introduce an inner bound of $Q_0$ in the form: $Q_0 = \lbrace q \in \mathcal{C}(\mathcal{O}) \ | \ q \in [x_1,x_2] \times [y_1,y_2] \times [\theta_1,\theta_2] \rbrace$ by adding the following constraints: \begin{eqnarray} \label{eq:incond} \theta_s \in [\theta_1,\theta_2] \Rightarrow \eqref{eq:enclose}, \ \forall (x,y) \in [x_1,x_2] \times [y_1,y_2] \end{eqnarray} In this case, robust optimization \cite{ben2009robust} would be used in each $\mathcal{C}-$slice, in order to ensure that all points in $[x_1,x_2] \times [y_1,y_2]$ are enclosed by the loop of $\mathcal{C}-$obstacles. \section{Stage 3: Certification and Grasp observability} \label{sec:observability} Once the object is driven towards $q_G$, the only certificate that the grasp was successful can come from the ability to retrieve the object pose from sensor readings. Because of this, it is necessary to define the conditions under which such an ``\textit{observability}'' condition is met. This section derives the constraints that allow a grasp to certify that an object has been immobilized at a given configuration. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{Figures/f8_observe.pdf} \caption{\small Grasp Observability examples. (a) An observable grasp $G_1$, where there exists a mapping $F^{-1}$ to recover the object pose. (b) A non-observable grasp $G_2$, where the object can slide between the fingers while being grasped; the mapping $F$ cannot be inverted.} \vspace{-12pt} \label{fig:fobs} \end{figure} \subsection{Definition} Given a final grasp $G$ and a set of $n_r$ sensor readings $s \in \mathbb{R}^{n_r}$, we define grasp observability as:\vspace{6pt} \textbf{Definition 1} (Grasp Observability): Given an object in a configuration $q$, we define the mapping between object configuration and sensor readings under grasp $G$ as $s = F_G(q)$. Then, we have that $G$ is observable if and only if $F_G^{-1}$ exists.\vspace{6pt} Examples of this are shown in Fig. \ref{fig:fobs}. However, $F_G$ can be hard to define in closed form, as it depends on the object and manipulator geometries. Hence, we restrict our analysis to first order effects \cite{rimon1996force}. \subsection{First-Order Observability} Since $s$ corresponds to a set of sensor readings, we will say that a grasp is first-order observable only if a small change in the configuration of the object causes a proportional small change in the configuration of the manipulator.
Performing a first-order expansion of the sensor mapping, we obtain: $$ F_G(q+\Delta q) \approx s + \frac{d s}{d q} \Delta q = s + J \Delta q $$ $$ \Rightarrow \Delta q = J^{-1} (F_G(q+\Delta q) - s) = J^{-1} \Delta s $$ where $J = \frac{d s}{d q}$ is a Jacobian. Therefore, we define first-order observability as:\vspace{6pt} \textbf{Definition 2} (First-Order Grasp Observability): A grasp $G$ is first-order observable if and only if $J$ has full column rank. Algebraically: $ \Delta s = 0 \Rightarrow \Delta q = 0 $.\vspace{6pt} Locally, this condition is only met when: $$ \frac{d s}{d q} \Delta q = 0 \Rightarrow \Delta q = 0 $$ which is a semi-algebraic condition similar to those used in form-closure grasp analysis.\vspace{6pt} \subsubsection{Virtual Sensor Model} In order to analyze this relation, we introduce a virtual sensor model for first-order point-contact sensing. Intuitively, for an object in contact with the sensor, this model should report local changes based on a gap function. More concretely, sensor readings should be sensitive only to small changes in the object pose that imply decrements of the gap, and should ignore changes when no force is applied. With this in mind, we represent a reading on the $i_{th}$ finger, around the configuration $\bar{q}$ and finger position $\bar{p}_i$, as: $$ \frac{ds_i}{d q} = \begin{cases} k\frac{d \psi(q,p_i)}{d q}&, \frac{d \psi}{d q} \Big|_{\bar{q},\bar{p}_i} \Delta q < 0 \\ 0&, \frac{d \psi}{d q} \Big|_{\bar{q},\bar{p}_i} \Delta q \geq 0 \end{cases} $$ where $\psi(q,p_i)$ is a signed gap function between the $i_{th}$ finger and the object, while $k$ is a real non-zero constant. \vspace{6pt} \subsubsection{Relation to First-Order Form Closure} Using the sensor model presented above, we can derive that for each $i_{th}$ finger: $$ \left(\frac{d s}{d q} \Delta q = 0 \Rightarrow \Delta q = 0 \right) \Longleftrightarrow \left(\frac{d \psi}{d q} \Big|_{\bar{q},\bar{p_i}} \Delta q \geq 0 \Rightarrow \Delta q = 0 \right) $$ We note that this condition is also necessary and sufficient for first-order form closure \cite{prattichizzo2016grasping}. Therefore, we can conclude that the conditions for first-order observability are equivalent to those required for first-order form closure\footnote{Note that this equivalence is only valid for first-order analysis, which ignores shape curvature and friction.}, which can be posed as a convex-combinatorial problem.
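To make this equivalence concrete, the following sketch (our own illustration, external to the convex-combinatorial model; names are hypothetical) tests first-order form closure, and hence first-order observability, for frictionless point contacts: it checks that the contact wrenches $w_i = (n_i,\, p_i \times n_i)$ span $\mathbb{R}^3$ and admit a strictly positive null combination.

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def first_order_form_closure(points, normals):
    """Frictionless planar form-closure test: the contact wrenches must
    span R^3 and the origin must lie strictly inside their convex cone."""
    W = np.array([[nx, ny, px * ny - py * nx]
                  for (px, py), (nx, ny) in zip(points, normals)]).T  # 3 x N
    if np.linalg.matrix_rank(W) < 3:
        return False
    # Strict-interior test: find lambda_i >= 1 with sum_i lambda_i w_i = 0.
    n = W.shape[1]
    res = linprog(np.zeros(n), A_eq=W, b_eq=np.zeros(3),
                  bounds=[(1.0, None)] * n)
    return res.status == 0  # feasible <=> form closure

square_normals = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # inward facet normals
# Midpoint contacts: all normals meet at the center -> no form closure.
print(first_order_form_closure([(1, 0), (-1, 0), (0, 1), (0, -1)],
                               square_normals))   # False
# Offset contacts: non-concurrent normals -> form closure.
print(first_order_form_closure([(1, 0.5), (-1, -0.5), (0.5, 1), (-0.5, -1)],
                               square_normals))   # True
\end{verbatim}

The first case fails because all normals are concurrent at the center of the square, which is exactly the degenerate case addressed by the non-coincidence constraints derived next.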
\vspace{-6pt} \subsection{Constraints} \begin{figure}[t] \centering \vspace{4pt} \includegraphics[width=0.8\linewidth]{Figures/f9_obs_examples.pdf} \vspace{4pt} \caption{\small Examples of observability conditions: (a) Non-observable, (b) Non First-Order Observable, and (c) First-Order Observable} \label{fig:f9} \vspace{-12pt} \end{figure} Given the duality derived above, a planar grasp is first-order observable if there are $4$ unilateral contact constraints on the object \cite{rimon1996force}. This is satisfied if the following conditions hold: \begin{enumerate} \item The object configuration must lie in a singleton of $\mathcal{C}_{free}(\mathcal{O},t)$. \item There must exist no point of coincidence between all the contact normals. This is required because, to first order, the object is free to have infinitesimal rotations around the point of concurrency of the contact normals \cite{rimon1996force}. \end{enumerate} which are convex-combinatorial constraints on the facet-assignment matrix $T$ and manipulator configuration $\mathcal{M}_{N_T}$. Examples are shown in Fig. \ref{fig:f9}. Therefore, we include them in the framework as follows: \vspace{6pt} \subsubsection{Free-Space Singleton} Satisfaction of this constraint is trivial, as the $\mathcal{C}-$space contraction constraints defined in Eq. \eqref{eq:limcont1} result in a singleton at the final time-step of the plan. \vspace{6pt} \subsubsection{Normal Non-Coincidence} Determining whether a set of vectors is coincident is an inherently non-convex problem. Having $p_{i}(N_T)$ as the contact point corresponding to finger $i$ and $\langle \lambda_{i} \rangle$ as the vector space defined by facet normal vector $\lambda_i$, this can be transcribed as the constraint: $$ \bigcap_{i}{p_{i}(N_T) + \langle \lambda_i \rangle} = \emptyset $$ \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{Figures/fx_coincidence.pdf} \caption{\small Convex combinatorial conditions for non-coincidence in the case of non-parallel (left) and parallel (right) facet assignments.} \label{fig:f10} \vspace{-12pt} \end{figure} However, we note that there are two scenarios for every pair of fingers: 1) Intersecting normals correspond to non-parallel facets and have a single intersection point, and 2) Normal vectors are parallel and thus have infinite intersection points or none. Therefore, if we define the following sets: \begin{itemize} \item $\mathcal{P} = \{(i,j) \in N^2 \ | \ i > j \}$ is the set of all different pairs of facet-assignments. \item $\mathcal{P_{\parallel}} = \{(i,j) \in \mathcal{P} \ | \ \lambda_i \times \lambda_j = 0 \}$ is the set of pairs of facet-assignments with parallel normals. \item $\mathcal{P_{\nparallel}} = \{(i,j) \in \mathcal{P} \ | \ \lambda_i \times \lambda_j \neq 0 \}$ is the set of pairs of facet-assignments with nonparallel normals. \end{itemize} where $\times$ is the planar cross-product. Then, we can introduce the binary matrix $M = (M_{i,j})_{(i,j) \in \mathcal{P}} \in \{0,1\}^{|\mathcal{P}|}$, reducing the problem to the following set of convex-combinatorial conditions: \begin{eqnarray} \label{obs_1} M_{(i,j) \in \mathcal{P}_{\nparallel}} \Rightarrow \sum_{k = 1}^N | (\alpha_{i,j} - p_{k}(N_T)) \times \lambda_k | > 0 \\ M_{(i,j) \in \mathcal{P}_{\parallel}} \Rightarrow | (p_{i}(N_T) - p_{j}(N_T)) \times \lambda_i | > 0 \\ \sum_{(i,j) \in \mathcal{P}} M_{i,j} \geq 1 \label{obs_f} \end{eqnarray} where $\alpha_{i,j}$ is the intersection point between the normal vectors starting at $p_{i}(N_T)$ and $p_{j}(N_T)$. These conditions guarantee that at least one pair of normals is non-coincident with the rest, providing observability to first-order effects, as shown in Fig. \ref{fig:f10}. Here, we include the absolute value function through slack variables and big-M formulation. \begin{figure*} \centering \vspace{-12pt} \includegraphics[width=0.9\linewidth]{Figures/fxII_sims.pdf} \caption{\small Simulation results: 12 random polygons are grasped with trajectories generated with our model. In each case, a set of random initial configurations certified by our model (shown in gray) are driven towards a goal grasp (purple) by using the same trajectory (blue).} \label{fig:f12} \vspace{-12pt} \end{figure*} \section{Application and Experimental Results} \label{sec:results} In this section, we describe an implementation of this framework for planar grasping with bounded uncertainty and its validation, both in simulation and on a real robot. For this, we formulate a Mixed-Integer program (MIP) using the constraints described above; a minimal sketch of the key big-M encoding is given below.
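For concreteness, the following sketch (our own illustration using the gurobipy interface to the Gurobi solver \cite{gurobi}; the variables and bounds are hypothetical) shows how a single implication of the form $\mathbf{B} \Rightarrow a^{\top} x \leq b$ enters such a MIP through the big-M formulation:

\begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("bigM_implication")
x = m.addVars(2, lb=-10.0, ub=10.0, name="x")  # continuous decision vars
B = m.addVar(vtype=GRB.BINARY, name="B")       # indicator binary
bigM = 100.0                                   # any valid upper bound

# B = 1  =>  x[0] + x[1] <= 1  (inactive when B = 0, since bigM dominates)
m.addConstr(x[0] + x[1] + bigM * B <= 1.0 + bigM)

m.setObjective(x[0] + x[1], GRB.MAXIMIZE)
m.addConstr(B == 1)  # activate the implication for this demo
m.optimize()
print(x[0].X + x[1].X)  # 1.0: the implied constraint is tight
\end{verbatim}

Modern solvers also accept indicator constraints directly, but the big-M form above matches the transcription used throughout the model.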
Our goal will be to generate manipulator trajectories which can certifiably drive a set of initial conditions of the object towards a final goal configuration. All the computations are done in MATLAB R2018b on a computer with an Intel Core i9 2.9 GHz processor. All optimization problems are solved with Gurobi 8.0 \cite{gurobi}. \subsection{Mixed-Integer Programming Formulation} In order to solve the problem posed in Section IV.A, we propose a MIP formulation which receives as input a description of the object $\mathcal{O}$ and the manipulator $\mathcal{M}$. We incorporate the conditions described in the previous sections as constraints and add a quadratic cost term on acceleration to smooth the trajectory, resulting in \textbf{MIQP1}. \begin{equation}\nonumber \mathbf{MIQP1:} \ \underset{\substack{\mathcal{M}(t),\Theta(t), H(t), \\ G(t),R(t),T(t), M}}{\text{min}} \ \ \int \sum_{i = 1}^{N} \left|\left| \frac{d^2 p_i(t)}{d t^2} \right|\right|^2 dt \end{equation} subject to:\vspace{6pt} \begin{enumerate} \item For $t = 1$ to $t = N_T$: \begin{itemize} \item Existence of a cage (CT11). \item Inter-step caging (CT12). \item $\mathcal{C}-$space contraction (CT15). \item Continuous Boundary Variation \cite{aceituno-cabezas2019icra}. \end{itemize} \item $(t = 1)$ Configuration enclosing \eqref{eq:enclose} and \eqref{eq:incond}. \item $(t = N_T - 1)$ Object immobilization \eqref{eq:limcont2}. \item $(t = N_T)$ First-Order Observability \eqref{obs_1}-\eqref{obs_f}. \end{enumerate} In our implementation, in order to reduce the complexity of the problem, we fix the limit orientations along the path, accommodating for (CT15), and drop the initial set constraints (CT16). This allows us to account for large uncertainty in the orientation component, without constraining a cage over an inscribed volume of $\mathcal{C}_{free}^{compact}(\mathcal{O},t=1)$. Since \textbf{MIQP1} is a Mixed-Integer Convex Program \cite{richards2005mixed}, it can always be solved to global optimality and, without any initialization, will converge to a solution whenever one exists. \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{Figures/fxi_experiments_v2.pdf} \caption{\small Experimental results. Each row shows snapshots of grasping trajectories for 4 objects under initial pose uncertainty (first frame) moving towards a single goal configuration (last frame). Note how the uncertainty is funneled by following a single manipulator path.} \label{fig:f11} \vspace{-12pt} \end{figure*} \subsection{Simulated Experiments} To validate our method on a diverse set of objects, we generate 30 random polygons and optimize a trajectory for each using \textbf{MIQP1}. Then, we perform simulations for a set of over 100 different initial conditions, using the planar manipulation simulator developed in \cite{zhou2018convex}. When setting up the optimization problem, we initialize the plan with limit orientations between $-15^\circ$ and $15^\circ$, centered around the goal configuration. Doing this, we can certify convergence for the range of configurations with $\theta \in [-15^\circ,15^\circ]$.\vspace{6pt} \subsubsection{Random polygons generation} In order to generate polygons with interesting properties, we rely on the heuristics presented in \cite{auer1996rpg}. This method allows us to specify parameters such as ``irregularity'' and ``spikeyness'', as well as a referential radius for the polygon. We implement this code in MATLAB, based on the method described in \cite{so_rp}, and generate 30 polygons with 5 to 7 facets; a sketch of this heuristic is shown below.
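A minimal Python transcription of this generator (our own sketch of the heuristic in \cite{so_rp}; parameter names are illustrative) samples irregular angular steps around a circle and perturbs the radii with a clipped Gaussian around the reference radius:

\begin{verbatim}
import numpy as np

def random_polygon(n_verts, radius=1.0, irregularity=0.4,
                   spikeyness=0.2, seed=None):
    """Sample a star-shaped polygon: irregular angular steps around the
    circle, radii perturbed by a clipped Gaussian around `radius`."""
    rng = np.random.default_rng(seed)
    # Angular steps: uniform in [1-irr, 1+irr] * (2*pi/n), renormalized
    steps = rng.uniform(1 - irregularity, 1 + irregularity, n_verts)
    steps *= 2 * np.pi / steps.sum()
    angles = np.cumsum(steps) + rng.uniform(0, 2 * np.pi)
    # Radii: Gaussian "spikeyness", clipped to stay positive and bounded
    radii = np.clip(rng.normal(radius, spikeyness * radius, n_verts),
                    0.1 * radius, 2 * radius)
    return np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)

verts = random_polygon(6)  # e.g., a 6-facet polygon
\end{verbatim}

Since the vertices are produced in angular order, the resulting polygon is simple and star-shaped around the origin.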
In order to segment each object into convex polygons, we rely on Delaunay triangulation \cite{fortune1995voronoi}. While this method can become intractable for more complex objects, there exist algorithms capable of finding decompositions with a small number of convex polygons \cite{lien2006approximate}.\vspace{6pt} \subsubsection{Results} Once each object is generated, we sample a set of different initial conditions around the goal configuration. Then, we execute the trajectory with 4 free disc-shaped fingers. Results for 12 of the random objects are reported in Fig. \ref{fig:f12}. Note that, using the same manipulator trajectory, a set of different initial poses (marked in gray) are effectively driven towards the goal configuration. For all of the objects, a trajectory was successfully found by the algorithm in a range from 25 to 45 seconds. However, the time required to find the optimal trajectory often ranged from several seconds to a couple of minutes, depending on the complexity of the problem (number of polygons and regions covering the space). Resulting trajectories for all of the 30 objects are shown in the supplementary material. \subsection{Robot Experiments} Finally, we demonstrate trajectories generated for four different planar objects in a real experimental set-up with a two-armed robot. In this case, we generate trajectories for each of the objects and use simulations to sample their regions of convergence. Each trajectory is designed with $N_T = 5$ and initial limit orientations between $-22.5^\circ$ and $22.5^\circ$. Then, we perform 10 experiments on each object, initializing them at random initial conditions certified to converge with our trajectories.\vspace{6pt} \subsubsection{Experimental Setup} Our experimental platform is an ABB YuMi$^{\tiny{\textregistered}}$ (IRB-14000) robot, which has two 7 DOF arms with a parallel gripper as end-effector. We initialize the pose of the object through AprilTag 2 fiducial scanning \cite{wang2016iros}, capturing the scene with an Intel RealSense D415 RGB-D camera. Communication with YuMi$^{\tiny{\textregistered}}$ is done via Robot Operating System (ROS) Kinetic and a position-based controller. Additional constraints are added to \textbf{MIQP1} in order to account for the kinematics of the manipulator. For all the experiments, the end-effectors of YuMi$^{\tiny{\textregistered}}$ are modified to have disc-shaped fingers. We account for disc-shaped fingers by inflating the shape of each object before segmenting it. Segmentation of the object and the collision-free regions is done manually. Finally, all experiments are run in open loop.\vspace{6pt} \subsubsection{Results} In this case, we validate the algorithm on four objects with increasing orders of complexity. As with the simulated experiments, we show the resulting trajectories for the four objects in Fig. \ref{fig:f11}. Depending on the shape, the resulting trajectories vary from stretching -- (a) and (b) -- to squeezing (d), and a combination of both (c). We note that this method can handle significant uncertainty in the orientation dimension, although it only allows for a few millimeters to a centimeter of translational uncertainty. This is potentially caused by the fixing of limit orientations and the use of sufficient conditions for caging, particularly \eqref{eq:enclose1}. Videos of the experiments for each of the objects are shown in the supplementary material.
\section{Discussion} \label{sec:discussion} In this paper we developed a mathematical framework for planar grasping under bounded uncertainty. For this, we decouple the problem as a process of \textit{caging} and \textit{reaching}, which allows the manipulator to bound the mobility of the object before proceeding to immobilize it. We rely on a convex-combinatorial model for caging and define the concept of grasp observability, deriving a set of sufficient conditions to observe the pose of an object through proprioceptive sensing. We show how this approach provides a formal certificate of convergence for a set of initial configurations of the object. We demonstrate the application of this framework by formulating an optimization problem with the constraints derived throughout the paper. The implemented planner generates trajectories that can always drive a set of initial configurations of the object towards a goal. We validate this method by performing experiments both in a simulated environment, grasping a set of random polygonal objects, and in a real environment with a YuMi$^{\tiny{\textregistered}}$ robot. Our results show how this approach can reliably immobilize different objects over different initial poses certified by our method, despite no sensing of the pose of the object or the interactions involved. \subsection{Limitations} While this method is able to provide a geometric certificate of convergence for a bounded set of initial conditions, it only plans motions by reasoning over the configuration space. Therefore, elements like the friction between the fingers and the object are not accounted for, which could potentially lead to undesired effects such as jamming or wedging. Similarly, the conditions for grasp observability might be too restrictive, as first-order analysis ignores factors such as curvature, and the proposed sensor model only accounts for static proprioceptive sensing. For this reason, future work should explore the role of second-order mobility of the object \cite{rimon1998mobility}, in order to account for curvature, as well as more complex sensor models. Furthermore, accounting for the role of friction, such as in \cite{erdmann1994representation}, would allow this framework to work in a wider variety of scenarios. \subsection{Future Work and Potential Applications} Given the versatility of convex-combinatorial optimization frameworks, we believe that this approach could be applied to the design of application-specific effector forms \cite{rodriguez2013effector}. This would allow robots to certifiably grasp specific objects with a large set of possible initial conditions. Additionally, we are interested in combining this approach with energy-based methods \cite{mahler2016energy}, which could potentially allow for applications such as sensorless in-hand manipulation \cite{erdmann1988exploration}. Finally, a natural extension of this work would be to develop conditions in order to certify grasps with parametric uncertainty in object shape. \bibliographystyle{plainnat} \section{Introduction} The key question we study in this paper is that of robustness in the process of grasping an object. Can we ever certify that a planned grasp will work? \vspace{6pt} The common approach to grasping is to plan an arrangement of contacts on the surface of an object.
Experimental evidence shows an intuitive but also paradoxical observation: On one hand, most grasps do not work as expected, since fingers do not deliver exactly the planned arrangement of contacts; on the other hand, many planned grasps still end up working and produce a stable hold of the object. These natural dynamics work within all grasping algorithms, often to their benefit, sometimes adversarially. \citet{mason2012autonomous} put it as: if we cannot put the fingers in the right place, can we trust the fingers to \emph{fall where they may}? In this paper we study the possibility of synthesizing grasps for which the fingers have no other option than to do so.\vspace{6pt} The notions of robustness and certification are central to the robotics community. However, formal approaches to synthesize robustness in grasping have been mostly limited to studying the set of forces that a grasp can resist~\cite{bicchi2000robotic}, neglecting the key importance of the reaching motion towards that grasp. Both the reaching motion and the end-grasp can encode robustness. In this paper we study the problem of synthesizing trajectories of a set of point fingers that converge onto an intended grasp of a polygonal planar object, naturally encoding robustness to uncertainty as part of the grasping process. We start by proposing three different types of certificates that one can formulate at different stages of the grasping process: \begin{itemize} \item \textbf{Invariance Certificate:} At the beginning of the grasping process, the object lies in an invariant set of its configuration space. In this paper we study the case when the object is geometrically trapped by fingers around it, i.e., the object is caged by the fingers~\cite{rodriguez2012caging}. % \item \textbf{Convergence Certificate:} All configurations in the invariant set are driven towards a given end-grasp configuration. Intuitively, this is analogous to driving down the value of a scalar/energy function with only one minimum. % \item \textbf{Observability Certificate:} The configuration of the object in the end-grasp is identifiable with the robot's contact or proprioceptive sensors after completing the grasp. In this work we characterize when the location of fingers is enough to recover the pose of the object, for which the condition is analogous to first-order form closure. \end{itemize} \begin{figure}[t] \centering \includegraphics{Figures/overview_v2.pdf} \caption{\textbf{Overview of grasping with certificates.} From a configuration space perspective, we say a grasp is certified to succeed when: 1) The robot bounds the object pose within an invariant set, and 2) the free-space converges to a single configuration. From this initial bound, we obtain an invariant set of configurations for which the grasp will always succeed. A third certificate, also valid for non-converging grasping processes, comes from requiring that the end-grasp configuration is observable.} \label{fig:my_label} \vspace{-12pt} \end{figure} Sections~\ref{sec:caging}, \ref{sec:convergence}, and \ref{sec:observability} derive a model for a particular formulation of each of these certificates. These models build on tools from convex-combinatorial optimization that decompose the configuration space of an object surrounded by fingers into free regions, and are based on recent work that formulates the caging synthesis problem as an optimization problem~\cite{aceituno-cabezas2019icra}. Section~\ref{sec:caging} summarizes the approach.
The combination of the models for each of the three certificates yields a complete geometric model to synthesize grasping motions that reach certifiable grasps. Section~\ref{sec:results} describes the application of this model to robust grasping of planar polygons, and provides experimental evidence of the value of the approach by a direct comparison between certified grasping and force-closure grasping. The formulation we provide in this paper for each of the proposed certificates presents limitations--and opportunities for future work--which we detail in Sec.~\ref{sec:discussion}. Most notably, the presented formulation is purely geometrical, and does not take into account friction uncertainty, which can yield undesired behaviors between fingers and object such as jamming and wedging. \section{Background} This work inherits ideas from three main sources related to grasping and robustness: \myparagraph{Sensorless Grasping.} Stemming from the foundational works by Mason and Erdmann on sensorless manipulation \cite{erdmann1988exploration}, and by Goldberg on sequences of squeezing grasps \cite{goldberg1993orienting}, this line of work aims to find grasping strategies that reliably bring an object to a known configuration, despite initial uncertainty in the object pose. In \cite{goldberg1993orienting}, Goldberg proposes an algorithm to find squeezing grasps that can reorient any convex polygon. This can be seen as a particular case of conformant path planning \cite{lozano1984automatic,erdmann1988exploration}, which synthesizes motions that drive a robot from an initially uncertain pose towards a goal, possibly under uncertain dynamics. This paper maintains the spirit of these works and studies the case of general point-based manipulators and general planar polygonal objects. \myparagraph{From caging to grasping.} One way to constrain the object configuration to an invariant set is to cage it \cite{rimon1996caging}. While not all cages lead to a grasp \cite{rodriguez2012caging}, they always provide a certificate that the object is bounded to some compact set. More importantly, some cages are guaranteed to have a motion of the fingers that drives the cage into a grasp of the object. We are interested in synthesizing cages that lead to a unique grasp. \myparagraph{Computational models for caging.} Many algorithms for cage synthesis have been studied since its introduction \cite{rimon1996caging}. The most relevant to this work is the optimization model in \cite{aceituno-cabezas2019icra}, which poses the caging condition in terms of convex-combinatorial constraints. We exploit the properties of this model to include requirements of convergence of the grasp process and observability of the final grasp. Caging has also been studied in the context of randomized planning \cite{varava2017herding,varavafree2018}, making no assumptions on shapes, and graph-search defined on contact-space \cite{allen2015robust,bunis2018equilateral}, with polynomial bounds in complexity.\vspace{6pt} Beyond these three main sources, other works have also studied the role of uncertainty in grasping from a more practical perspective. Zhou et al.~\cite{zhou2017probabilistic} handle uncertainty by exploiting models for contact and sliding. Here, as in \cite{goldberg1993orienting}, we limit our analysis to the configuration space, without accounting for frictional interaction or contact dynamics.
In exchange, we are able to synthesize a grasping trajectory that drives a large set of initial configurations to a goal grasp for any planar polygonal object. \subsection{Preliminaries and Notation} We define an object $\mathcal{O}$, on a workspace $\mathcal{W} \subseteq \mathbb{R}^2$, as a union of $M$ convex polygons $\mathcal{O} = \bigcup_{i=1}^M \boldsymbol{P}_{i}$. The boundary of the object is described by the union of $L$ line segments $\partial \mathcal{O} = \bigcup_{j=1}^L \boldsymbol{L}_{j}$. The complement of the object is the region $\mathcal{W} \setminus \mathcal{O} = \bigcup_{k=1}^R \mathcal{R}_k$, consisting of $R$ convex polygonal regions $\mathcal{R}_k$. \vspace{6pt} We denote the \textit{Configuration Space} of $\mathcal{O}$, a subset of $SE(2)$, at instant $t$ as $\mathcal{C}$. We refer to a plane of $\mathcal{C}$ with fixed orientation $\theta$ as a $\mathcal{C}-$slice, denoted $\mathcal{C}(\theta)$. We refer to an arrangement of point fingers as the manipulator $\mathcal{M}$. We assume $\mathcal{M}$ has $N$ point fingers with positions $\mathcal{M} = \lbrace \boldsymbol{p}_1, \hdots, \boldsymbol{p}_N \rbrace \in \mathcal{W}^N$. We refer to the set of configurations where the object penetrates a finger as $\mathcal{C}$-obstacles. Then, the free-space of the object $\mathcal{C}_{free}(\mathcal{O},t)$ corresponds to the subset of $\mathcal{C}$ not intersecting any of the $\mathcal{C}$-obstacles. \vspace{6pt} At time $t$, an object configuration $\boldsymbol{q} = [q_x,q_y,q_\theta]^T$ is caged if $\boldsymbol{q}$ lies in a compact-connected component of $\mathcal{C}_{free}(\mathcal{O},t)$ (or invariant set), denoted as $\mathcal{C}^{compact}_{free}(\mathcal{O},t)$. Given a compact-connected component $\mathcal{A} \subset \mathcal{C}$, we refer to its \textbf{limit orientations} $\theta_u,~\theta_l$ as the maximum and minimum of $\theta$ in $\mathcal{A}$. We will describe in Section \ref{sec:caging} how the caging condition can be transcribed as a set of convex-combinatorial constraints when the object is represented as a union of convex polygons and the manipulator is a set of point-fingers; a minimal sketch of this representation is given below.
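As an illustration of this notation (our own sketch, not part of the formulation): at a fixed orientation $\theta$, the $\mathcal{C}-$obstacle that a point finger $\boldsymbol{p}$ induces for a convex polygon $\boldsymbol{P}_i$ is itself a convex polygon, since the finger penetrates the polygon at translation $(q_x,q_y)$ exactly when $(q_x,q_y) \in \boldsymbol{p} - R(\theta)\boldsymbol{P}_i$:

\begin{verbatim}
import numpy as np

def c_obstacle_slice(finger, poly_verts, theta):
    """Vertices of the C-obstacle of a point finger within one C-slice:
    the translations q such that the finger lies inside R(theta) @ P + q,
    i.e. the reflected, rotated polygon anchored at the finger."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return np.asarray(finger) - poly_verts @ R.T

# One triangular object polygon; finger at (1.0, 0.5); slice theta = 0.3
tri = np.array([[0., 0.], [1., 0.], [0., 1.]])
print(c_obstacle_slice([1.0, 0.5], tri, 0.3))
\end{verbatim}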
\section{Problem description} \label{sec:overview} The problem of interest for this paper is that of finding a grasping motion that is certified to succeed. Formally, we define this problem as:\vspace{6pt} \textbf{Problem 1 (Certified Grasping)}: {Given an object $\mathcal{O}$, a manipulator $\mathcal{M}$, $S$ samples of $\mathcal{C}-$slices, and a goal object configuration $\boldsymbol{q}$, find a manipulator trajectory $\rho_\mathcal{M} = \lbrace \mathcal{M}(t) \ | \ t \in \lbrace 1,\dots,{N_T} \rbrace \rbrace$ and a set $Q_0 \subset \mathcal{C}(\mathcal{O})$, such that $\rho_\mathcal{M}$ will drive any configuration of the object $\boldsymbol{\hat{q}} \in Q_0$ towards an observable grasp on $\boldsymbol{q}$.}\vspace{6pt} This problem can be seen as a particular case of the general problem known as LMT \cite{lozano1984automatic}, and as a generalization of Goldberg's squeezing plans \cite{goldberg1993orienting} for non-convex objects and point-finger contacts. For an object on a plane without friction, a solution to this problem results from implementing the certificates described in the previous section as a three-step process (discretized as a manipulator trajectory of $N_T$ time-steps): \begin{itemize} \item {Invariance:} The configuration of the object $\boldsymbol{q}$ lies in a compact-connected component of its free-space. We will impose this condition at $t = 1$ with a convex-combinatorial model of caging \cite{aceituno-cabezas2019icra}. \item {Convergence:} The manipulator path drives all configurations in the initial invariant set (cage) towards the goal $\boldsymbol{q}$. To meet this condition, once the object is caged ($t \in \lbrace 2,\dots,N_T \rbrace$), the manipulator follows a penetration-free path over which the compact-connected component contracts. Then, at the final time-step of the path, the $\mathcal{C}$-obstacles reduce $\mathcal{C}^{compact}_{free}(\mathcal{O},N_T)$ to a singleton $\lbrace \boldsymbol{q} \rbrace$. \item {Observability:} As a consequence of the fingers' motion, the final contact configuration can recover the object pose at $\boldsymbol{q}$ through proprioceptive sensing. We call such a configuration an \emph{observable grasp}; this condition is required only at the end of the path ($t = N_T$). \end{itemize} The satisfaction of these constraints would give a geometric certificate that any configuration of the object in the set $Q_0 = \mathcal{C}^{compact}_{free}(\mathcal{O},t = 1)$ will be driven towards and immobilized in the goal grasp. The following three sections provide a model for each of these three steps, which will then be combined into an optimization problem (\textbf{MIQP1}) for certified grasping of polygonal objects. \begin{figure}[t] \centering \includegraphics[height=0.25\linewidth]{Figures/f4_cage_se3.pdf} \caption{\textbf{Invariance Certificate.} Example of a cage in $\mathcal{W}$ (left), the $\mathcal{C}-$slices (center) and the configuration space $\mathcal{C}(\mathcal{O})$ (right). Note how the configuration $\boldsymbol{q}$ lies in a compact connected-component of the free-space (pink), bounded by two limit orientations (gray). Image adapted from \cite{aceituno-cabezas2019icra}.} \label{fig:cage_Ex} \end{figure} \section{Invariance Certificate} \label{sec:caging} As explained above, one way to constrain an object to an invariant set is to cage it geometrically. Under the model presented in \cite{aceituno-cabezas2019icra}, the following are a set of sufficient conditions for invariance: \begin{enumerate} \item The component $\mathcal{C}^{compact}_{free}(\mathcal{O},t)$ is bounded in the orientation coordinate by two limit orientations; otherwise, it repeats infinitely along such axis with period $2 \pi$. \item At all $\mathcal{C}-$slices between the two limit orientations (when these exist) there is a loop of $\mathcal{C}-$obstacles enclosing a segment of free-space. All these loops must be connected, enclosing a component of free-space in between adjacent slices. At the slice with $q_\theta$, the loop must enclose $\boldsymbol{q}$ (as illustrated in the middle column of Fig. \ref{fig:cage_Ex}). \item At the $\mathcal{C}-$slice of a limit orientation (if these exist) the free-space component enclosed by the loop has zero area, and is thus reduced to a line segment or a point. \end{enumerate} The union of these conditions defines a net of constraints that encloses the configuration $\boldsymbol{q}$, as illustrated in Fig. \ref{fig:cage_Ex}. Such conditions can be transcribed as a convex-combinatorial model composed of two sets of constraints, briefly described below and explained in more detail in \cite{aceituno-cabezas2019icra}. \subsection{Creating loops at each $\mathcal{C}$-slice} To construct a loop of $\mathcal{C}-$obstacles at each slice, we transcribe the problem as that of finding a closed directed graph within the intersections between polygonal obstacles.
In such a graph, each node represents a convex polygon of the decomposition of a $\mathcal{C}-$obstacle, while each edge imposes an intersection between polygons. We denote the polygon $i$ of $\mathcal{C}-$obstacle $n$ as $\boldsymbol{P}_{n,i}$. Including this condition in the model, at each time $t$, is done through the following constraints: \myparagraph{Existence of a Loop.} This is encoded through two binary matrices: $\boldsymbol{H}_n \in \{0,1\}^{M \times M} \ \text{and} \ \boldsymbol{G}_n \in \{0,1\}^{M \times M}$. $\boldsymbol{H}_n$ encodes edges between $\mathcal{C}-$obstacle $n$ and $\mathcal{C}-$obstacle $n+1$, such that $\boldsymbol{H}_n(i,j) = 1 \Rightarrow \boldsymbol{P}_{n,i} \cap \boldsymbol{P}_{n+1,j} \neq \emptyset$. $\boldsymbol{G}_n$ encodes edges within $\mathcal{C}-$obstacle $n$, such that $\boldsymbol{G}_n(i,j) = 1 \Rightarrow \boldsymbol{P}_{n,i} \cap \boldsymbol{P}_{n,j} \neq \emptyset$. These matrices are constrained so that the resulting graph is closed and directed. We show an example of this loop and its graph in Fig. \ref{fig:f3} (b) and (c). \myparagraph{Configuration Enclosing.} We include this condition by introducing a binary tensor $\mathbf{F} \in \{0,1\}^{N \times M \times 4}$, where $\mathbf{F}(i,j,k=1) = 1$ imposes a ray intersection with polygon $j$ at $\mathcal{C}-$obstacle $i$, while other values of $k$ assign the ray to the complement of the segment. The constraint needed to enclose $\boldsymbol{q}$ is to impose that $\sum_{(i,j)} \mathbf{F}(i,j,k=1)$ be odd. An illustration of this condition is shown in Fig. \ref{fig:f3} (d). \myparagraph{Non-Penetration Constraints.} We impose this constraint by introducing a binary matrix $\boldsymbol{R} \in \{0,1\}^{N \times R}$. $\boldsymbol{R}(i,r) = 1$ assigns finger $i$ to region $r$ in $\mathcal{W} \setminus \mathcal{O}$, with $\sum_r \boldsymbol{R}(i,r) = 1, \forall i$. A visualization of this is shown in Fig. \ref{fig:f3} (e).\vspace{6pt} Combining all of these constraints ensures the existence of a loop at each $\mathcal{C}-$slice and that $\boldsymbol{q}$ is enclosed by one of these loops; a small sketch of the underlying intersection test is given below.
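The edges of this graph require certifying that two convex polygons intersect, i.e., that some witness point satisfies both sets of half-space inequalities. Outside the MIP, this is a plain linear feasibility problem; a minimal sketch (our own, with illustrative names):

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def convex_polygons_intersect(A1, b1, A2, b2):
    """Feasibility test: is {r : A1 r <= b1} and {r : A2 r <= b2}
    simultaneously satisfiable? Polygons are in half-space form (A, b)."""
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    res = linprog(np.zeros(2), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * 2)
    return res.status == 0  # a feasible witness point exists

# Two unit boxes, the second shifted by 1.5 in x: they overlap in a strip.
box = lambda cx: (np.array([[1, 0], [-1, 0], [0, 1], [0, -1]]),
                  np.array([cx + 1, -(cx - 1), 1, 1]))
A1, b1 = box(0.0); A2, b2 = box(1.5)
print(convex_polygons_intersect(A1, b1, A2, b2))  # True
\end{verbatim}

Within the model, the same existence statement is activated or deactivated by the binaries of $\boldsymbol{H}_n$ and $\boldsymbol{G}_n$ through the big-M formulation.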
\begin{figure}[t] \centering \includegraphics{Figures/cage_slice_v2.pdf} \caption{\textbf{Caging Model.} (a) Illustration of the cage of an object composed of two polygons ($M = 2$), caged with four fingers ($N = 4$) in a configuration space slice of constant orientation defined by six polygonal regions ($R = 6$), and with a boundary with eight edges ($L = 8$). (b) The model forms a polygonal loop at each slice of $\mathcal{C}(\mathcal{O},t)$, (c) defining a graph of polygonal intersections that enclose $\boldsymbol{q}$. (d) We test that the configuration $\boldsymbol{q}$ is enclosed by the loop by checking that the red ray has an odd number of intersections with the loop. (e) Slightly exploded view of the (intersecting) polygonal regions that define the non-penetration space where the fingers can move.} \label{fig:f3} \end{figure} \subsection{Constructing a cage from loops} The next step is to impose that these constraints are only active for slices between two limit orientations (when these exist), while also enclosing a component of free-space between slices. \myparagraph{Constraint Activation.} To determine which slices must contain a closed loop of $\mathcal{C}-$obstacles, we must first determine if the cage has limit orientations. To include this constraint, we introduce a binary vector $\Theta \in \{0,1\}^{S}$, where $\Theta(s) = 1$ imposes that a limit orientation must be reached before slice $s$, deactivating all loop constraints in such slice. In this context, \textit{before} means a greater or equal angle if the slice lies in the negative orientation half-space, or a smaller or equal angle if it lies in the positive one. \myparagraph{Limit Orientations.} A limit orientation occurs when the loop encloses a zero-area component, a condition defined by the contacts between the fingers and some translation of the object. To verify the existence of limit orientations, we define a binary matrix $\boldsymbol{T}_s \in \{0,1\}^{N \times L}$, such that $\boldsymbol{T}_s(i,l) = 1 \Rightarrow \boldsymbol{p}_i \in \boldsymbol{L}_l$ imposes that finger $i$ must be in contact with facet $l$ at slice $s$. Using this variable and labeling $\mathcal{L}_\mathcal{O}$ as the set of contact assignments that lead to a limit orientation, we impose $\boldsymbol{T}_s \in \mathcal{L}_\mathcal{O} \Rightarrow \Theta(s) = 1$. \myparagraph{Continuous Boundary Variation.} In order for $\mathcal{C}_{free}^{compact}(\mathcal{O},t)$ to be compact and connected, the loops created at the $\mathcal{C}-$slices must also enclose a segment of free-space between the slices. \cite{aceituno-cabezas2019icra} shows that a sufficient condition for this is to have the boundary of such loops vary continuously onto the boundary of the loop in the adjacent $\mathcal{C}-$slices. A set of constraints for this condition is integrated as part of the model. Satisfying these conditions ensures that the configuration $\boldsymbol{q}$ is enclosed by a compact-connected component of free-space. For more details on implementation and proofs of the correctness of these conditions, the reader is referred to \cite{aceituno-cabezas2019icra}. \section{Convergence Certificate} \label{sec:convergence} Given an initial cage, the convergence certificate is satisfied if the process drives a set of bounded configurations towards the goal $\boldsymbol{q}$. The main insight that allows us to integrate this stage in the framework comes from the following remark:\vspace{6pt} \textbf{Remark 1:} Given an object $\mathcal{O}$, at some time-step $t$, with a configuration $\boldsymbol{q}$ enclosed in a compact connected component of free-space $\boldsymbol{q} \in \mathcal{C}_{free}^{compact}(\mathcal{O},t)$ and bounded between limit orientations $\theta_{l}(t)$ and $\theta_{u}(t)$, any collision-free manipulator path $\rho_\mathcal{M}$ where $\mathcal{M}(N_T)$ immobilizes $\mathcal{O}$ at $\boldsymbol{q}$ and satisfies $\frac{d}{dt} \left(\theta_{u}(t) - \theta_{l}(t)\right ) < 0$ will drive any configuration $\boldsymbol{\hat{q}} \in \mathcal{C}_{free}^{compact}(\mathcal{O},t)$ towards $\boldsymbol{q}$. \vspace{6pt} The conditions specified in Remark 1, shown in Fig. \ref{fig:f5}, are sufficient but might not be necessary. However, they allow us to optimize a manipulator path that satisfies the convergence certificate. This also allows us to characterize the set of initial configurations that will certifiably converge to $\boldsymbol{q}$. Hence, by relying on the model described in the previous section, we derive a linear model to certify convergence as detailed below.
\begin{figure}[t] \centering \includegraphics[height=0.25\linewidth]{Figures/f5_contract.pdf} \caption{\textbf{Convergence Certificate.} This condition is trivially satisfied when the range of limit orientations (gray) decreases, converging at $t = N_T$.} \label{fig:f5} \end{figure} \subsection{Certificate Model} In order for the conditions detailed in Remark 1 to hold, we require that: \begin{enumerate} \item The object configuration must lie in a cage at all times. \item The separation between \textit{limit orientations} must decrease monotonically between time-steps, until they converge at $t = N_T$. \item The cage at $t = N_T$ must only enclose the goal configuration $\boldsymbol{q}$. \end{enumerate} Algebraically, the conditions to impose a cage at each time-step are posed as: \begin{eqnarray} \label{eq:cage} \begin{cases} \sum_{s} \Theta_s(t) = 2\\ \theta_s(t) \in [\theta_l(t),\theta_u(t)] \Rightarrow \text{(loop existence)} |_{(s,t)} \\ \end{cases} \end{eqnarray} for all $t \in \lbrace 1,\dots,N_T \rbrace$. Then, in order to ensure that the cage does not break between time-steps, we introduce the following constraint at each slice: \begin{equation} \label{eq:non_break} \boldsymbol{H}_n(i,j)|_{t=k} \Rightarrow \exists ~ r_t \in \mathbb{R}^2 \ \text{s.t.} \ r_t \in \boldsymbol{P}_{i,n,k+1} \cap \boldsymbol{P}_{j,n+1,k+1} \end{equation} Note that this condition is necessary and sufficient, as the intersection occurs between convex polygons and the path is linearly interpolated. Because of this, we introduce the following remark:\vspace{6pt} \textbf{Remark 2:} Since all initial configurations of the object are caged at $t = 1$ and the enclosing loop does not change between adjacent time-steps, the conditions for $\boldsymbol{q} \in \mathcal{C}_{free}^{compact}(\mathcal{O},t)$ are trivially satisfied for all $t>1$.\vspace{6pt} Furthermore, for the final cage to fully immobilize the object, we require that there exist two similar \textit{limit orientations} at $t = N_T - 1$ which have the same facet assignment matrix, also enforced for $t = N_T$ (when limit orientations converge). Note that this reduces $\mathcal{C}_{free}^{compact}(\mathcal{O},t = N_T)$ to a singleton. Algebraically, this constraint is added as: \begin{eqnarray} \begin{cases} \label{eq:limcont1} \boldsymbol{T}_{u}(N_T) = \boldsymbol{T}_{u}(N_T-1) = \boldsymbol{T}_{l}(N_T-1) \in \mathcal{L}_{\mathcal{O}} \\ |\theta_{u}(N_T-1) - \theta_{l}(N_T-1)| \approx 0 \label{eq:limcont2} \end{cases} \end{eqnarray} Finally, the limit orientations converge gradually under the constraint: \begin{eqnarray} \begin{cases} \label{eq:cont1}\theta_{u}(t+1) < \theta_{u}(t) \\ \label{eq:cont2}\theta_{l}(t) < \theta_{l}(t+1) \end{cases} \end{eqnarray} This, along with the caging model, certifies that the grasp will always succeed within a set of certified initial configurations $Q_0$. \myparagraph{Constraining $Q_0$.} Constraining that $\mathcal{C}_{free}^{compact}(\mathcal{O})$ contains an arbitrary set of initial conditions $Q_0$ cannot be integrated in general within this convex-combinatorial model.
However, we can use an inner box approximation of $Q_0$ in the form: $Q_0 = \lbrace q \in \mathcal{C}(\mathcal{O}) \ | \ q \in [x_1,x_2] \times [y_1,y_2] \times [\theta_1,\theta_2] \rbrace$ by adding the following constraints: \begin{eqnarray} \label{eq:incond} \theta_s \in [\theta_1,\theta_2] \Rightarrow (\text{configuration enclosing}), \ \forall (x,y) \in [x_1,x_2] \times [y_1,y_2] \end{eqnarray} In this case, robust optimization \cite{ben2009robust} would be used in each $\mathcal{C}-$slice to ensure that all points in $[x_1,x_2] \times [y_1,y_2]$ are enclosed by the cage. \section{Observability Certificate} \label{sec:observability} Once a planned grasp process has been executed, we can also certify the immobilization at the goal configuration if the grasp is \textit{observable}, i.e.~such that we can retrieve the object pose from sensor readings. In this section, we present a definition of grasp observability and derive sufficient constraints for a grasp to be locally observable under proprioceptive sensing (e.g. joint encoders). In practice, this adds an extra constraint to the type of end grasp that we are interested in. \subsection{Definitions} Given a vector of $n_r$ sensor readings $\boldsymbol{s} \in \mathbb{R}^{n_r}$, we define:\vspace{6pt} \textbf{Definition 1} (Sensor Model): Given a final grasp $G$ achieved by a manipulator configuration $\mathcal{M}(N_T)$, we define a sensor model $F_G$ as a mapping from object configurations to sensor readings: $$\begin{array}{llll} F_G: ~ & \mathcal{C}(\mathcal{O}) & \longrightarrow & \mathbb{R}^{n_r} \\ & \boldsymbol{\hat{q}} & \longmapsto & \boldsymbol{s} = (s_1, ..., s_{n_r}) = F_G(\boldsymbol{\hat{q}}). \end{array}$$ \textbf{Definition 2} (Grasp Observability): Given a grasp $G$, a sensor model $F_G$ and a final object configuration $\boldsymbol{q}$, we will say that $G$ is observable if and only if $F_G$ is locally invertible around $\boldsymbol{q}$.\vspace{6pt} \begin{wrapfigure}{R}{0.6\textwidth} \centering \vspace{-24pt} \includegraphics[height=0.38\linewidth]{Figures/f8_observe.pdf} \caption{\textbf{Observability Certificate.} (a) An observable grasp $G_1$. (b) A non-observable grasp $G_2$, the object can slide between the fingers.} \vspace{-24pt} \label{fig:fobs} \end{wrapfigure} \textbf{Remark 3}: If $n_r \geq 3$ and the sensor model $F_G$ satisfies that its Jacobian $JF_G(\boldsymbol{q}) \in \mathbb{R}^{n_r \times 3}$ is full rank, then the grasp $G$ is observable and only 3 sensor readings are necessary for observability.\vspace{6pt} Fig. \ref{fig:fobs} shows an example of grasp observability. In general, $F_G$ can be hard to define in closed form, as it depends on the object and manipulator geometries. Hence, we restrict our analysis to first order effects \cite{rimon1996force}. \myparagraph{Proprioceptive Sensor Model:} In order to give an intuitive notion of a sensor reading, we characterize a sensor model for point-contact sensing to first order effects. Intuitively, for an object in contact, this model reports local changes based on a gap function at each contact point, $\psi_i(\boldsymbol{\bar{q}}, \boldsymbol{p}_i)$, as it is commonly used to formalize the study of grasp stability \cite{prattichizzo2016grasping}. More concretely, sensor readings should only report changes in the object pose that imply decrements of the gap (causing penetration), ignoring changes that preserve or break contact (no applied force). 
Therefore, we characterize a sensor reading with the result of applying the sensor model Jacobian to an infinitesimal object configuration variation $d\boldsymbol{q}$ from $\boldsymbol{q}$: $$ JF_G(\boldsymbol{q}) = \left( \begin{matrix} \frac{ds_1}{d \boldsymbol{q}}(\boldsymbol{q}) \\ \vdots \\ \frac{ds_{n_r}}{d \boldsymbol{q}}(\boldsymbol{q}) \end{matrix} \right), \hspace{5pt} \frac{ds_i}{d \boldsymbol{q}}(\boldsymbol{q}) ~ d\boldsymbol{q} = \begin{cases} k_i~\dfrac{d \psi_i}{d \boldsymbol{q}}(\boldsymbol{q},\boldsymbol{p}_i)~d\boldsymbol{q}, & \dfrac{d \psi_i}{d \boldsymbol{q}}(\boldsymbol{q},\boldsymbol{p}_i)~d\boldsymbol{q} < 0, \\[9pt] 0, & \dfrac{d \psi_i}{d \boldsymbol{q}}(\boldsymbol{q},\boldsymbol{p}_i)~d\boldsymbol{q} \geq 0, \end{cases} $$ where $k_i$ is a real non-zero constant. \vspace{6pt} The first-order behavior of the proprioceptive sensor model above highlights a relation between observability and first-order form closure. As a result of Remark 3, we will consider only three sensor readings, $n_r = 3$. \vspace{6pt}\textbf{Remark 4}: Given a grasp $G$ of an object in its final configuration $\boldsymbol{q}$, first-order form closure is equivalent to having the matrix $JF_G(\boldsymbol{q})$ be full rank, where $F_G$ is the proprioceptive sensor model defined above. \begin{proof} Note that full rankness of $JF_G(\boldsymbol{q}) \in \mathbb{R}^{3 \times 3}$ is equivalent to: $$ \left[ JF_G(\boldsymbol{q}) ~ d\boldsymbol{q} = \boldsymbol{0} \Rightarrow d\boldsymbol{q} = \boldsymbol{0} \right] ~\Leftrightarrow~ \left[ \forall~i, ~~ \frac{ds_i}{d \boldsymbol{q}}(\boldsymbol{q}) ~ d\boldsymbol{q} = 0 \Rightarrow d\boldsymbol{q} = \boldsymbol{0} \right].$$ As a result of the first-order behavior of our virtual sensor model, we have $$ \frac{ds_i}{d \boldsymbol{q}}(\boldsymbol{q}) ~ d\boldsymbol{q} = 0 ~\Leftrightarrow~ \dfrac{d \psi_i}{d \boldsymbol{q}}(\boldsymbol{q},\boldsymbol{p}_i)~d\boldsymbol{q} \geq 0, $$ where the implication from right to left is by definition and from left to right is a consequence of $k_i \neq 0$. Therefore, $$\left[ \forall~i, ~~ \frac{ds_i}{d \boldsymbol{q}}(\boldsymbol{q}) ~ d\boldsymbol{q} = 0 \Rightarrow d\boldsymbol{q} = \boldsymbol{0} \right] \Leftrightarrow \left[ \forall~i, ~~ \frac{d \psi_i}{d \boldsymbol{q}}(\boldsymbol{q}, \boldsymbol{p}_i) ~ d\boldsymbol{q} \geq 0 \Rightarrow d\boldsymbol{q} = 0 \right],$$ which is precisely a characterization of first-order form closure \cite{prattichizzo2016grasping}. Consequently, first-order form closure is equivalent to full rankness of $JF_G(\boldsymbol{q})$, when considering $F_G$ as the proprioceptive sensor model. \end{proof} \textbf{Corollary 1}: Given a grasp $G$ and the proprioceptive sensor model $F_G$, first-order form closure implies grasp observability. A brief worked example of the degenerate case follows.
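As a worked illustration (ours, not part of the original analysis): suppose every contact normal line passes through a common point $\boldsymbol{c}$. For an infinitesimal rotation $d\theta$ of the object about $\boldsymbol{c}$, the material point at contact $i$ moves along $\hat{z} \times (\boldsymbol{p}_i - \boldsymbol{c})$, so each gap rate satisfies $$ \frac{d \psi_i}{d \boldsymbol{q}}(\boldsymbol{q},\boldsymbol{p}_i)~d\boldsymbol{q} \;=\; \lambda_i \cdot \big( \hat{z} \times (\boldsymbol{p}_i - \boldsymbol{c}) \big)\, d\theta \;=\; \big( (\boldsymbol{p}_i - \boldsymbol{c}) \times \lambda_i \big)\, d\theta \;=\; 0, $$ since $(\boldsymbol{p}_i - \boldsymbol{c})$ is parallel to the facet normal $\lambda_i$ whenever the normal line through $\boldsymbol{p}_i$ contains $\boldsymbol{c}$. Hence $JF_G(\boldsymbol{q})~d\boldsymbol{q} = \boldsymbol{0}$ for this non-zero $d\boldsymbol{q}$, the Jacobian loses rank, and the grasp is neither form-closed nor observable. This is precisely the coincidence case excluded by the constraints of the next subsection.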
\subsection{Certificate Model} \begin{figure}[t] \centering \begin{minipage}{0.48\linewidth} \centering \vspace{15pt} \includegraphics[height=0.32\linewidth]{Figures/f9_obs_examples.pdf} \vspace{6pt} \caption{Examples of proprioceptive observability conditions: (a) Not observable, (b) First-Order Not-Observable, and (c) First-Order Observable.} \label{fig:f9} \end{minipage} ~ \begin{minipage}{0.48\linewidth} \centering \includegraphics[height=0.39\linewidth]{Figures/fx_coincidence.pdf} \caption{Convex combinatorial conditions for non-coincidence in the case of non-parallel (left) and parallel (right) facet assignments.} \label{fig:f10} \end{minipage} \vspace{-12pt} \end{figure} Given the relation between form-closure and observability that we derived above, a planar grasp is first-order observable if there are $4$ unilateral contact constraints on the object \cite{rimon1996force}. This is satisfied if the following conditions hold: \begin{enumerate} \item The object configuration must lie in a singleton of $\mathcal{C}_{free}(\mathcal{O},t)$. This condition is already implied by (CT3). \item There must exist no point of coincidence between all the contact normals. This is required because otherwise, to first-order, the object would be free to rotate infinitesimally around the point of concurrency of the contact normals \cite{rimon1996force}. \end{enumerate} Fig. \ref{fig:f9} shows examples. These are convex-combinatorial constraints on the facet-assignment matrix $\boldsymbol{T}_s$ and manipulator configuration $\mathcal{M}({N_T})$. Algebraically, the non-coincidence condition can be expressed as: $$ \bigcap_{i}{\boldsymbol{p}_{i}(N_T) + \langle \lambda_i \rangle} = \emptyset $$ We note that there are two scenarios for every pair of fingers: 1) Intersecting normals correspond to non-parallel facets and have a single intersection point, and 2) Normal vectors are parallel and thus have infinite intersection points or none. Therefore, if we define the following sets: \begin{itemize} \item $\mathcal{P} = \{(i,j) \in N^2 \ | \ i > j \}$ is the set of all different pairs of facet-assignments. \item $\mathcal{P}_{\parallel} = \{(i,j) \in \mathcal{P} \ | \ \lambda_i \times \lambda_j = 0 \}$ is the set of pairs of facet-assignments with parallel normals. \item $\mathcal{P}_{\nparallel} = \{(i,j) \in \mathcal{P} \ | \ \lambda_i \times \lambda_j \neq 0 \}$ is the set of pairs of facet-assignments with nonparallel normals. \end{itemize} where $\times$ is the ordinary cross-product. Then, we can introduce the binary matrix $\boldsymbol{M} = (\boldsymbol{M}_{i,j})_{(i,j) \in \mathcal{P}} \in \{0,1\}^{|\mathcal{P}|}$, where $| \mathcal{P} |$ is the cardinality of $\mathcal{P}$, reducing the problem to the following set of convex-combinatorial conditions: \begin{eqnarray} \label{obs_1} \boldsymbol{M}_{(i,j) \in \mathcal{P}_{\nparallel}} \Rightarrow \sum_{k = 1}^N | (\alpha_{i,j} - \boldsymbol{p}_{k}(N_T)) \times \lambda_k | > 0 \\ \boldsymbol{M}_{(i,j) \in \mathcal{P}_{\parallel}} \Rightarrow | (\boldsymbol{p}_{i}(N_T) - \boldsymbol{p}_{j}(N_T)) \times \lambda_i | > 0 \\ \sum_{(i,j) \in \mathcal{P}} \boldsymbol{M}_{i,j} \geq 1, \label{obs_f} \end{eqnarray} where $\alpha_{i,j}$ is the intersection point between the lines defined by the normal vectors starting at $\boldsymbol{p}_{i}(N_T)$ and $\boldsymbol{p}_{j}(N_T)$. These conditions guarantee that at least one pair of normals is non-coincident with the rest, providing observability as shown in Fig. \ref{fig:f10}; a small numeric sketch of this test is given below.
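Outside the MIP, the same condition can be checked numerically for a candidate grasp. The sketch below (our own illustration; it evaluates the set-intersection condition $\bigcap_i \boldsymbol{p}_i(N_T) + \langle \lambda_i \rangle = \emptyset$ directly) mirrors the parallel and non-parallel cases above:

\begin{verbatim}
import numpy as np

def cross2d(a, b):
    return a[0] * b[1] - a[1] * b[0]

def normals_non_coincident(p, lam, tol=1e-9):
    """True if the contact-normal lines (p_i + span(lam_i)) have no
    common intersection point, pair by pair as in the text."""
    n = len(p)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(cross2d(lam[i], lam[j])) < tol:      # parallel pair
                if abs(cross2d(p[i] - p[j], lam[i])) > tol:
                    return True  # distinct parallel lines: no common point
            else:                                       # non-parallel pair
                # alpha: intersection of normal lines i and j
                t = cross2d(p[j] - p[i], lam[j]) / cross2d(lam[i], lam[j])
                alpha = p[i] + t * lam[i]
                # if some normal line k misses alpha, no common point exists
                if any(abs(cross2d(alpha - p[k], lam[k])) > tol
                       for k in range(n)):
                    return True
    return False

p = [np.array(v, float) for v in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
lam = [np.array(v, float) for v in [(1, 0), (1, 0), (0, 1), (0, 1)]]
print(normals_non_coincident(p, lam))  # False: all lines meet at the origin
p[0] = np.array([1.0, 0.5])            # offset one contact
print(normals_non_coincident(p, lam))  # True
\end{verbatim}

On a square grasped at the facet midpoints the test fails, since all normal lines meet at the center; offsetting any one contact restores non-coincidence.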
The absolute value functions in these conditions are encoded through slack variables and a big-M formulation \cite{floudas1995nonlinear}. \section{Application to Sensorless Grasping} \label{sec:results} This section describes an optimization problem for grasping planar objects under bounded uncertainty. For this, we formulate a Mixed-Integer Program (MIP) using the constraints described in sections 4, 5 and 6. We validate this approach on different polygonal objects, both with experiments and simulations. All the computations are done in MATLAB R2018b on a MacBook Pro computer with an Intel Core i9 2.9 GHz processor. All optimization problems are solved with Gurobi 8.0 \cite{gurobi}. \subsection{Mixed-Integer Programming Formulation} We propose a formulation which receives as inputs the description of the polygonal object $\mathcal{O}$ and the manipulator $\mathcal{M}$. We incorporate the conditions described through the paper as constraints and add a quadratic cost term on acceleration to smooth the trajectory, resulting in problem \textbf{MIQP1}. \begin{equation}\nonumber \mathbf{MIQP1:} \ \underset{\substack{\mathcal{M}(t)}}{\text{min}} \ \ \int \sum_{i = 1}^{N} \left|\left| \frac{d^2 \boldsymbol{p}_i(t)}{d t^2} \right|\right|^2 dt \end{equation} subject to: \begin{enumerate} \item For $t = 1$ to $t = N_T$: \begin{itemize} \item Caging (CT1)-(CT2). \item Convergence Certificate (CT4)-(CT5). \end{itemize} \item $(t = 1)$ Invariance certificate \eqref{eq:incond}. \item $(t = N_T)$ First-Order Grasp Observability \eqref{obs_1}-\eqref{obs_f}. \end{enumerate} \subsection{Simulated Experiments} \begin{figure}[b] \centering \includegraphics[width=0.99\linewidth]{Figures/fxII_sims.pdf} \caption{\textbf{Simulation results.} 12 random polygons are grasped with trajectories generated with our model. In each case, a set of random initial configurations certified by our model (shown in gray) are driven towards a goal grasp (purple) by using the same trajectory (blue).} \label{fig:f12} \end{figure} We generate a set of 12 random polygons and optimize a trajectory for each using \textbf{MIQP1}. Then, we perform simulations for a set of over 100 different initial conditions, using the open planar manipulation simulator in \cite{zhou2018convex}. We initialize the plan with limit orientations between $-15^\circ$ and $15^\circ$, centered around $\boldsymbol{q} = 0$. This limits certification to configurations with $\theta \in [-15^\circ,15^\circ]$, with no hard guarantees on translational uncertainty.\vspace{6pt} In order to generate random polygons with interesting properties, we rely on the heuristics presented in \cite{auer1996rpg}, which specify parameters such as irregularity and referential radius. We implement this procedure in MATLAB and generate the 12 polygons of Fig. \ref{fig:f12}, with 4 to 6 facets. We segment each object with Delaunay triangulation \cite{fortune1995voronoi} and determine $\mathcal{W} \setminus \mathcal{O}$ with heuristics valid for simple enough shapes. It is worth noting that algorithms other than Delaunay triangulation might be able to find a decomposition with a smaller number of convex polygons \cite{lien2006approximate}.\vspace{6pt} For each initial condition, we execute the trajectory with 4 free disc-shaped fingers. Results for the 12 random objects are reported in Fig. \ref{fig:f12}. Using the same manipulator trajectory, a set of different initial poses (marked in gray) are driven towards $\boldsymbol{q}$ (blue).
For all the objects, a feasible trajectory was found within 25 to 45 seconds. However, the time required to find the optimal trajectory ranged from several seconds to around two minutes, depending on the number of integer variables of the problem. We note that fixing limit orientations usually allows for little translational uncertainty, suggesting the need for (CT5) in the general case. \vspace{-12pt} \subsection{Real Robot Experiments} We demonstrate trajectories generated for four different planar objects in a real experimental set-up with a two-armed robot. We optimize trajectories for each of the objects in Fig. \ref{fig:f12} and use simulations to determine $Q_0$. Each trajectory is designed with $N_T = 5$ time-steps and initial limit orientations between $-22.5^\circ$ and $22.5^\circ$. We perform 10 experiments on each object, initializing each at a random initial configuration within the invariant set $Q_0$.\vspace{6pt} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{Figures/fx1_comparison_v4.png} \caption{\textbf{Experimental results.} Each row shows snapshots from execution of the resulting grasping trajectories for 4 objects, overlaying 10 experiments with initial pose uncertainty (first frame) moving towards a single goal configuration (last frame). Our certification allows for significant rotational uncertainty in the initial object configuration, always converging to the same goal.} \vspace{-12pt} \label{fig:f11} \end{figure} Our robotic platform is an ABB YuMi$^{\tiny{\text{\textregistered}}}$ (IRB-14000) robot, which has two 7 DOF arms with parallel jaw grippers. We work with a Robot Operating System (ROS) setup and an Intel RealSense D415 RGB-D camera calibrated with AprilTag 2 scanning, which we use to place the object within the reach of the robot, and within the invariant set $Q_0$. Additional constraints are added to \textbf{MIQP1} to account for the kinematics of the manipulator. The end-effectors of YuMi$^{\tiny{\textregistered}}$ are modified to have thin cylindrical fingers. To showcase the robustness of this approach, all experiments are run open-loop.\vspace{6pt} Fig. \ref{fig:f11} shows the resulting trajectories for 10 different initial conditions of the four objects. Depending on the shape, the resulting trajectories vary from stretching -- (a) and (b) -- to squeezing (d), and a combination of both (c). In all cases, we are able to handle significant uncertainty in the orientation axis, and varying translational uncertainty (from millimeters to a few centimeters). Videos of the experiments for each of the objects are shown in the supplementary material. \subsection{Comparison with pure force-closure grasping} A natural question is how accounting for certification compares to a naive reaching strategy. In order to provide a quantitative answer to this question, we compare our approach to a naive grasping plan which optimizes a criterion of grasp quality, as commonly done in grasp planning algorithms. We design this naive motion by searching for a force-closure grasp \cite{prattichizzo2016grasping} and approaching each contact with a trajectory perpendicular to the goal facet, starting all fingers with the same separation. \vspace{6pt} \begin{figure}[b] \vspace{-12pt} \centering \includegraphics{Figures/fxiii_comparison_v2.pdf} \caption{\textbf{Certified Grasping vs. Force-Closure Grasping.} We simulate grasps over an object with noisy initial configuration (left).
A traditional grasping strategy that maximizes force closure (top-center) fails to handle uncertainty, resulting in significant error in the final pose of several simulations. In contrast, certified grasping (bottom-center) drives the object to its goal pose, always converging to the same configuration. By comparing the $L_1$ error on the final pose (right), we find that certified grasping is orders of magnitude more accurate than a naive policy.} \label{fig:f13} \end{figure} We simulate both strategies to grasp a T-shaped object from 100 different initial conditions. Certified grasping always drives the object to the goal configuration with proprioceptive observability. We measure the $L_1$ distance to the desired object pose, which we call error, after each grasping strategy is executed and report our results in Fig. \ref{fig:f13}. As can be seen in many of these simulations, the naive force-closure grasp does not drive the object towards the goal nor does it provide observability. \section{Discussion} \label{sec:discussion} In this paper we study certified grasping of planar objects under bounded pose uncertainty. To do this, we extend grasp analysis to include the reaching motion towards the final arrangement of contacts. Under this perspective, we propose three certificates of grasp success: 1) Invariance within an initial set of configurations of the object, 2) convergence from the initial set to a goal grasp, and 3) observability of the final grasp. For each of these certificates, we derive a mathematical model, which can be expressed with convex-combinatorial constraints, and demonstrate their application to synthesize robust sensorless grasps of polygonal objects. We validate these models in simulation and with real robot experiments, showcasing the value of the approach by a direct comparison with force-closure grasping. \myparagraph{Limitations.} The first limitation of this work comes from restricting the analysis to the configuration space of the object. This neglects frictional interaction between the fingers and the object, which could lead to unaccounted stable configurations such as jamming or wedging. Accounting for the role of friction, characterizing undesired scenarios such as in~\cite{haas2018passive}, would allow this framework to provide certification over a larger range of dynamic settings. The second limitation comes from the first-order proprioceptive analysis of observability. Including second-order effects such as curvature of the object~\cite{rimon1996force} as well as accounting for more discriminative sensor models that provide shape, texture, or force information~\cite{donlon2018gelslim}, could certify success without requiring form-closure constraints. \myparagraph{Future Work.} Given the versatility of convex-combinatorial optimization models, we believe that this approach can be extended to the design of finger phalanges with complex shapes beyond point fingers~\cite{rodriguez2013effector}. This would make it possible to certifiably grasp specific objects within a larger set of initial conditions and with a lower number of fingers. Additionally, we are interested in extending this model to invariance sets that are not purely geometrical, for example by considering energy bounds~\cite{mahler2016energy} or other types of dynamic constraints on object mobility. \bibliographystyle{plainnat}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction and Results} In the standard setting originated from von Neumann's book \cite{vN}, a {\em measuring process} for the quantum system on a separable Hilbert space $\Hil$ is described by a quadruple $(\cL,\phi,M,U)$, where $\cL$ is a separable Hilbert space, $\phi$ is a state\footnote{A state $\phi$ of $\K(\cL)$ corresponds to a positive trace class operator $\rho$ on $\cL$ of trace one; $\phi(x)=\Tr(x\rho),\ x\in \K(\cL)$.} of the compact operators $\K(\cL)$, $M$ is a self-adjoint operator on $\cL$, and $U$ is a unitary on $\Hil\otimes\cL$.\footnote{Taken from \cite{Oz} with some notational modifications.} Let $E_M$ denote the spectral measure of $M$ and let $\K(\Hil)^*_+$ denote the cone\footnote{This is the same as the cone of positive trace class operators on $\Hil$.} of positive functionals on $\K(\Hil)$ and $\cS(\K)=\{\varphi\in \K(\Hil)^*_+\ |\ \|\varphi\|=1\}$ the convex state space of $\K=\K(\Hil)$. This process produces an $\E(\Delta,\varphi)\in \K^*_+$ for each Borel subset $\Delta$ of $\R$ and $\varphi\in \cS(\K)$ by $$ \E(\Delta,\varphi)(x)=\varphi\otimes\phi(U^*(x\otimes E_M(\Delta))U),\ \ x\in \K. $$ Note that $\Delta\mapsto \E(\Delta,\varphi)$ is a $\K^*_+$-valued measure on $\R$ with $\E(\R,\varphi)\in \cS(\K)$ and that $\varphi\mapsto \E(\Delta,\varphi)$ is a continuous affine map from $\cS(\K)$ into $\K^*_+$ (in norm). Note also that if $(\Delta_n)$ is an increasing sequence of Borel subsets of $\R$ then $\E(\Delta_n,\varphi)$ converges to $\E(\bigcup_n\Delta_n,\varphi)$. All we get from the quadruple $(\cL,\phi,M,U)$ with $U$ a unitary on $\Hil\otimes\cL$ is the collection $\E(\Delta,\varphi)$, which is called a {\em Davies-Lewis instrument} or DL-instrument for short \cite{DL70}. The self-adjoint operator $M$ is called a {\em meter observable} in \cite{Oz} and a {\em probe observable} in \cite{Oz03}; when the observed system $\K(\Hil)$ is in the state $\varphi$ and this measuring process is applied, we are supposed to be able to perform a {\em precise local measurement}\footnote{It is a bit bizarre to assume that ``measurement'' in some sense is possible at all in order to explain measurement.} of $M$ to get a real number $x$ whose distribution is given by $\Delta\mapsto \E(\Delta,\varphi)(1)$ so that the ensemble of all states of $\K(\Hil)$ after observing $x\in\Delta$ is given by $\E(\Delta,\varphi)/\E(\Delta,\varphi)(1)$ for each $\Delta$ (cf. Section 3 of \cite{Oz84}). Given a self-adjoint operator $Q$ on $\Hil$, observing $Q$ is supposed to amount to choosing a suitable $(\cL,\phi,M,U)$ and applying the above process. This seems to be all well-established except for how to drive the quantum effect on $M$ to the macroscopic level for observation (cf. \cite{Oz97}). Recently Harada and Ojima described such an amplification process as well as the preceding interaction between the observed and the probe systems in terms of certain abelian groups by noting a specific property of the {\em regular representation} (section 3 of \cite{Oj09}). Here we propose another mathematical model for measurements of a quantum system in a C*-algebra setting, which incorporates a mechanism of {\em magnifying quantum effects to the classical level}\footnote{This expression is taken from page 250 of Penrose's book \cite{Pen}.} into the measuring apparatus. In this scheme the state of the quantum system transforms into new ones according to a certain probability law, just as a phase transforms into a new one in the phase transitions we encounter in equilibrium quantum statistical mechanics.
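To fix ideas, here is a minimal finite-dimensional caricature of the von Neumann scheme (ours, for orientation; in the setting above $\Hil$ and $\cL$ are infinite-dimensional). Take $\Hil=\cL=\C^2$ with orthonormal basis $e_1,e_2$, let $\phi$ be the vector state given by $e_1$, let $M$ have eigenprojections $P_\pm$ onto $\C e_1$ and $\C e_2$ for the eigenvalues $\pm1$, and let $U$ be the unitary determined by $U(e_i\otimes e_1)=e_i\otimes e_i$ and $U(e_i\otimes e_2)=e_i\otimes e_{3-i}$. For $\varphi=\lan\xi,\,\cdot\ \xi\ran$ with $\xi=\xi_1e_1+\xi_2e_2$ one computes $$ \E(\{+1\},\varphi)(x)=|\xi_1|^2\lan e_1,x e_1\ran,\qquad \E(\{-1\},\varphi)(x)=|\xi_2|^2\lan e_2,x e_2\ran,\qquad x\in\K, $$ so the two outcomes occur with the Born probabilities $|\xi_1|^2,|\xi_2|^2$, and the normalized functionals $\E(\Delta,\varphi)/\E(\Delta,\varphi)(1)$ are the corresponding collapsed vector states.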
The (microscopic) quantum system is described by the C$^*$-algebra $\K=\K(\Hil)$ of compact operators on a separable Hilbert space $\Hil$ just as above and the (macroscopic) measuring apparatus by a unital separable non-type I nuclear simple C$^*$-algebra $A$ with a certain unital endomorphism and a pure state.\footnote{That the C$^*$-algebra $A$ is non-type I and nuclear is assumed to assure that $A$ has a desired endomorphism \cite{K03}. We may assume that $A$ is the UHF algebra of type $2^\infty$, or the infinite tensor product of $2\times 2$ matrices, which may be considered as the observable algebra for an infinite number of electrons.} We will then specify a unitary on the combined system $\K\otimes A$ to dictate an interaction. After applying the adjoint action of the unitary and the endomorphism we reach a situation similar to the above; instead of $M$ (or the von Neumann algebra generated by $M$) we will obtain an abelian von Neumann algebra with zero intersection with the compact operators, as the center of the observable algebra, as an outcome of this process. Let us explain some basics of states and how the endomorphism works. When $\phi$ is a state of $A$, i.e., a positive linear functional with $\phi(1)=1=\|\phi\|$, we obtain the so-called GNS representation $\pi_\phi$ of $A$ on a Hilbert space $\Hil_\phi$ with a specific unit vector $\Omega_\phi$ such that $\phi(x)=\lan \Omega_\phi,\pi_\phi(x)\Omega_\phi\ran,\ x\in A$ and $\pi_\phi(A)\Omega_\phi$ is dense in $\Hil_\phi$. If $\phi$ is a pure state, i.e., an extreme point of the convex set of states of $A$, the weak closure of $\pi_\phi(A)$ is the algebra $\B(\Hil_\phi)$ of all bounded operators. We call $\phi$ (or $\pi_\phi$) {\em factorial} if the weak closure $\M$ of $\pi_\phi(A)$ is a factor, i.e., the intersection of the commutant of $\M$, $\M'=\{Q\in \B(\Hil_\phi)\ |\ QT=TQ, T\in\M\}$, with $\M$ is $\C1$. In particular a pure state is factorial. If $\gamma$ is a unital endomorphism of $A$ with $\gamma(A)\not=A$ and $\phi$ is a factorial state then $\phi\gamma$ is a state but may not be factorial, i.e., if $\M$ denotes the weak closure of $\pi_{\phi\gamma}(A)$, the center $\M\cap \M'$ of $\M$ may not be $\C1$. (If $\phi\gamma$ is factorial $\pi_\phi\gamma$ may not be factorial, i.e., the weak closure of $\pi_\phi\gamma(A)$ may have non-trivial center. This is because $\pi_{\phi\gamma}$ is the restriction of $\pi_\phi\gamma$ to the subspace defined as the closure of $\pi_\phi\gamma(A)\Omega_\phi$.) In this case $\phi\gamma$ is centrally decomposed in the sense that there is a unique probability measure $\nu$ on the Borel set $\F$ of factorial states in $A^*$ with $$ \phi\gamma=\int_\F \psi {\rm d}\nu(\psi), $$ where $\M\cap \M'$ on the left could be identified with $L^\infty(\F,\nu)$ on the right behind this equality (see 3.1.8 and 3.4.5 of \cite{Sak}). A factorial state is supposed to correspond to a {\em phase} in statistical mechanics and if $\phi$ transforms to $\phi\gamma$ causally but in an irreversible way then it would immediately jump to $\psi\in\F$ acausally according to the probability $\nu$ on $\F$.\footnote{Phases or sectors are also discussed in \cite{Oj09}. See \cite{BR} for backgrounds.} We also assume that $\gamma$ is asymptotically inner.\footnote{This is a misnomer but is widely used among operator algebraists.
It is more like being asymptotically NOT inner and means that $\gamma$ is asymptotically approximated by inner automorphisms.} Namely we assume that there is a continuous unitary path $u_t,\,t\in [0,1)$ in $A$ such that $\gamma(x)=\lim_{t\rightarrow1}u_txu_t^*,\ x\in A$ and $u_0=1$. Then it follows that there is a bounded sequence $(h_n)$ of self-adjoint elements of $A$ such that $\lim_n[h_n,x]=0$ and $\gamma(x)=\lim_n \Ad(e^{ih_1}e^{ih_2}\cdots e^{ih_n})(x)$ for $x\in A$. We regard $\gamma$ as a time development, being a limit of Hamiltonian-induced time developments which cascade quantum effects up to the visible classical level within a small time interval. Thus $\gamma$ describes an irreversible process.\footnote{An ideal measuring apparatus, interacting with the quantum system, should never be interfered with by external forces but has to yield classical information to the outside. This is different from a closed system whose time evolution is described by a group of automorphisms and may be called a decaying closed system (not sustainable indefinitely). It is not an open system either, which is ideally described by a semigroup of CP contractions incorporating external forces.} If $\varphi$ is a state of $\K(\Hil)$ and the measuring apparatus $A$ is in a pure state $\phi$, then we suppose that $\varphi\otimes\phi$ turns to $(\varphi\otimes\phi)\Ad\,U^*$ and then to $(\varphi\otimes\phi)\Ad\,U^*(\id\otimes \gamma)$, which may not be factorial and then will be centrally decomposed as explained above, or we will witness collapse of the wave function. We formally give the definition of a DL instrument, or rather a CP instrument, following \cite{Oz84} and then the definition of our measuring processes (cf. \cite{DL70,Oz84,Oz}). \begin{definition}\label{I} Let $\M$ be an abelian von Neumann algebra with separable predual\footnote{A Banach space $X$ is a predual of $\M$ if $X^*\cong \M$; $\M$ has a unique predual denoted by $\M_*$ (1.13.3 of \cite{Sak}). We know that $\M$ is isomorphic to an $L^\infty$-space on some probability space.} and $\M_+$ the cone of positive elements of $\M$. Let $\Hil$ be an infinite-dimensional separable Hilbert space and $\K=\K(\Hil)$ be the C$^*$-algebra of compact operators on $\Hil$. We call a map $\E$ from $\M\times \K^*$ into $\K^*$ a {\em CP instrument} based on $\M$ if it satisfies \begin{enumerate} \item For each $\varphi\in \K^*_+$ the map $\M\ni Q\mapsto \E(Q,\varphi)\in \K^*$ is a positive continuous linear map on $\M$, \item For each $Q\in\M_+$ the map $\K^*\ni\varphi\mapsto \E(Q,\varphi)\in \K^*$ is a completely positive (CP for short) linear map, \item $\E(1,\varphi)(1)=\varphi(1)$ for all $\varphi\in \K^*$, \end{enumerate} where $\M$ is equipped with the weak$^*$-topology. \end{definition} If we denote by $\E(Q)$ the linear map $\K^*\ni\varphi\mapsto \E(Q,\varphi)\in \K^*$ with $Q\in \M_+$, the dual map $\E(Q)^*: \K^{**}=\B(\Hil)\ra \B(\Hil)$ is completely positive or CP, i.e., the natural extension of $\E(Q)^*$ to a map from $M_k\otimes \B(\Hil)$ into $M_k\otimes \B(\Hil)$ is positive for any $k\in\N$, which follows from the complete positivity of $\E(Q)$. We denote $\E(Q)^*b$ by $\E^*(Q,b)$ for $b\in \B(\Hil)$; then for each $b\in \B(\Hil)_+$ the map $Q\mapsto \E^*(Q,b)$ is a positive continuous linear map\footnote{Since $\M$ is commutative this map is automatically CP.} when $\M$ and $\B(\Hil)$ are endowed with the weak$^*$-topology.
For $Q\in \M_+$ the map $b\mapsto \E^*(Q,b)$ is a CP continuous linear map when $\B(\Hil)$ is endowed with the weak$^*$-topology. The third condition of the above definition is equivalent to $\E^*(1,1)=1$. \begin{definition}\label{D} Let $A$ be a unital separable non-type I nuclear simple C$^*$-algebra. Let $\phi$ be a pure state of $A$ and $\gamma$ an asymptotically inner endomorphism of $A$ such that $\pi_\phi\gamma(A)'$ is a non-trivial abelian von Neumann algebra. Let $\K=\K(\Hil)$ be as in the above definition and let $U$ be a unitary in the multiplier algebra $M(\K\otimes A)$\footnote{Identifying $\K\otimes A$ with $\K\otimes \pi_\phi(A)$ on $\Hil\otimes\Hil_\phi$, the multiplier algebra $M(\K\otimes A)$ is the set of $Q\in \B(\Hil\otimes\Hil_\phi)$ satisfying $Q(\K\otimes A),(\K\otimes A)Q\subset \K\otimes A$. For any unitary $U\in M(\K\otimes A)$ there is a unitary path $U_t,\ t\in [0,1]$ in $M(\K\otimes A)$ such that $U_0=1$, $U_1=U$, and $t\mapsto xU_t, U_tx$ are continuous in $\K\otimes A$ (12.2.2 of \cite{Bl}). Thus we may regard $\Ad\,U^*$ as representing a time development of $\K\otimes A$.} of $\K\otimes A$. We call $(A,\phi,\gamma,U)$ a {\em measuring process} for $\K$.\addednote{We may take a separable C$^*$-algebra $B$ for the observed system instead of $\K$. Then a measuring process for $B$ is defined as $(A,\phi,\gamma,U)$ where $U$ is now a unitary of $M(B\otimes A)$ connected with $1$.} \end{definition} \begin{prop}\label{CP} Let $(A,\phi,\gamma,U)$ be a measuring process and let $\M=\pi_\phi\gamma(A)'$. For each $Q\in\M$ and $\varphi\in \K^*$ define an $\E(Q,\varphi)\in\K^*$ by $$ \E(Q,\varphi)(x)=\overline{\varphi\otimes\phi}(\Ad\,\bar{U}^*)(x\otimes Q),\ \ x\in \K, $$ where $\overline{\varphi\otimes\phi}$ is the unique extension of the positive functional $\varphi\otimes\phi\pi_\phi^{-1}$ of $\K\otimes\pi_\phi(A)$ to a weak$^*$-continuous one of $(\K\otimes\pi_\phi(A))''=\B(\Hil\otimes\Hil_\phi)$ and $\bar{U}=(\id\otimes\pi_\phi)(U)$. Then $\E$ is a CP instrument based on $\M$. We call $\E$ the {\em CP instrument} obtained from $(A,\phi,\gamma,U)$.\addednote{If the observed system is a general separable C$^*$-algebra $B$, we specify an irreducible representation $\pi$ of $B$ and denote by $V_\pi$ the linear space consisting of $\varphi\in B^*$ such that $\varphi=f\pi$ for some $f\in \pi(B)''_*$. Then as $V_\pi^*=\pi(B)''=\B(\Hil_\pi)$ each $\pi$ and a measuring process $(A,\phi,\gamma,U)$ for $B$ define a CP instrument $\E(Q,\varphi)\in V_\pi$ for $\varphi\in V_\pi, Q\in \M$ just as above. If $B=\K$ then there is essentially only one $\pi$.} \end{prop} If $E_\phi$ denotes the conditional expectation of $\B(\Hil\otimes \Hil_\phi)$ onto $\B(\Hil)$ defined by $$ \varphi(E_\phi(T))=\overline{\varphi\otimes \phi}\Ad \bar{U}^*(T),\ \ \varphi\in \K(\Hil)^*=\B(\Hil)_* $$ then it follows that $\E(Q,\varphi)(1)=\varphi(E_\phi(1\otimes Q))$. If $E_\phi|\M$ is a homomorphism, one can say that $(A,\phi,\gamma,U)$ exactly observes the abelian von Neumann algebra $E_\phi(1\otimes\M)$ (or a self-adjoint operator which generates it). In general it only approximately observes an observable residing in $\Hil$. Note that we only use $\M=\pi_\phi\gamma(A)'$ for construction of the CP instrument $\E$, not $\gamma$ directly. In this sense the present scheme is not much different from the original one by von Neumann on the technical level. But we hope that the present model makes a contribution to clarification on the conceptual level.
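The simplest examples of CP instruments (ours, for orientation; they are standard and not spelled out in the text) are of discrete Kraus form: take $\M=\ell^\infty(\{1,\dots,k\})$ and bounded operators $V_1,\dots,V_k$ on $\Hil$ with $\sum_iV_i^*V_i=1$, and set $$ \E(Q,\varphi)(x)=\sum_{i=1}^k Q_i\,\varphi(V_i^*xV_i),\qquad Q=(Q_i)\in\M,\ \varphi\in\K^*,\ x\in\K. $$ The three conditions of Definition \ref{I} are immediate, and $\E^*(Q,b)=\sum_iQ_iV_i^*bV_i$ satisfies $\E^*(1,1)=1$; compare the example in Section 2, where $\M\cong\ell^\infty(\N)$ and the instrument is of this discrete type.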
When $\E_1$ and $\E_2$ are CP instruments based on $\M$ and $(\xi_n)$ is a dense sequence in the unit sphere of $\Hil$ we define $d(\E_1,\E_2)\geq 0$ by $$ d(\E_1,\E_2)=\sum_n2^{-n} \|\E_1(\ \cdot\ ,\psi_n)-\E_2(\ \cdot\ ,\psi_n)\|, $$ where $\psi_n$ is the vector state of $\K$ defined by $\xi_n$. It follows that $d$ is a distance on the set of CP instruments based on $\M$. We can show the following: \begin{prop} Let $\M$ be an abelian von Neumann algebra with separable predual. Then the set of CP instruments obtained from measuring processes $(A,\phi,\gamma,U)$ with $\M=\pi_\phi\gamma(A)'$, in the sense of Proposition \ref{CP}, is dense in the set of all CP instruments based on $\M$. \end{prop} We will sketch how to prove this. First of all we have to show that there is an asymptotically inner endomorphism $\gamma$ and an irreducible representation $\pi$ of some unital separable non-type I nuclear simple C$^*$-algebra $A$ such that $\M\cong \pi\gamma(A)'$. This is indeed possible for any unital separable non-type I nuclear C$^*$-algebra; the proof requires Glimm's result \cite{Gl} (which shows UHF algebras are typical examples of non-type I C$^*$-algebras), the existence result on endomorphisms \cite{K03} (for non-type I nuclear C$^*$-algebras), and the following well-known statement on UHF algebras: For any such $\M$ as above there is a representation $\pi$ such that $\pi(A)'\cong \M$, which will be shown in the same way as the examples of endomorphisms are given in Section 3. Thus we prepare $(A,\gamma)$ and some irreducible representation $\pi$ with $\pi\gamma(A)'\cong \M$. Secondly by Ozawa's results (5.1-3 of \cite{Oz84}) all the CP instruments are realized by the measuring processes in his sense (stated in the beginning). For the proof we use the fact that $\M\times \B(\Hil)\ni (Q,b)\mapsto \E^*(Q,b)\in \B(\Hil)$ is a completely positive, weak$^*$-continuous bilinear map and express this map as the restriction of a {\em faithful} weak$^*$-continuous representation of $\M\otimes \B(\Hil)$ (by extending if necessary the representation obtained by Stinespring's procedure as in the proof of 4.2 of \cite{Oz84}). Namely for a CP instrument $\E$ based on $\M$ one finds a separable Hilbert space $\cL$, a pure state $\phi$ of $\K(\cL)$, a normal unital representation $\rho$ of $\M$ on $\cL$, and a unitary $U$ on $\Hil\otimes\cL$ such that $$ \E(Q,\varphi)(x)=\overline{\varphi\otimes\phi}(\Ad\,U^*)(x\otimes\rho(Q)),\ \ Q\in \M, \ x\in\K. $$ We may assume that $\rho(\M)\cap \K(\cL)=\{0\}$ by tensoring $\cL$ by another infinite-dimensional separable Hilbert space if necessary and making obvious arrangements. Then we construct an irreducible representation $\pi$ of $A$ on $\cL$ such that $\pi\gamma(A)'=\rho(\M)$. Since this is done independently of $U$, we cannot expect that $U\in M(\K\otimes \pi(A))$. But, noting that $(\K\otimes \pi(A))''=\B(\Hil\otimes\cL)$, Kadison's transitivity (\cite{Kad} or 1.21.16 of \cite{Sak}) tells us that one can find a unitary $u\in \K\otimes \pi(A)+\C1$ which equals $U$ on any given finite-dimensional subspace.\footnote{Which shows a slightly stronger statement: For any CP instrument $\E$ and any finite number of pure states $\varphi_1,\ldots,\varphi_n$ on $\K$ there is a measuring process whose CP instrument is equal to $\E$ on $\varphi=\varphi_1,\ldots,\varphi_n$ (and any $Q\in\M$).} Thus we can replace $U$ by a unitary in $M(\K\otimes A)$ so that the resulting CP instrument is arbitrarily close to $\E$.
In the next section we will give an example of a measuring process and explain the above definition of CP instruments in more detail. In section 3 we will show how to construct endomorphisms and irreducible representations in the case of UHF algebras of type $k^\infty$. I wonder if this exposition gives some justification for $\gamma$ being a magnifying glass of quantum effects. \small This endeavor of mine was prompted by Professor M. Ozawa's lectures which brought home his success in placing Heisenberg's uncertainty principle in the right scheme involving inevitable corrections on the principle and simultaneously bewildered me about the idea of the measuring process itself, undoubtedly due to my ignorance, at the conference held in February 2013 organized by Professor T. Teruya for operator algebraists. I want to express my thanks to both of them for this opportunity and to Reiko, my wife, who accompanied me for this trip to Kyoto and then an expedition to Furano in March where an inceptive idea to the present note was conceived on a trail, for her unfailing company and patience in listening to my gibberish. I also want to extend my thanks to Professor I. Ojima for providing me with some information I should have known. \normalsize \section{The case $\pi\gamma(A)'\cong \ell^\infty(\N)$} Let $A$ be a unital separable non-type I nuclear simple C$^*$-algebra and let $\phi$ be a pure state of $A$. Let $\gamma$ be an asymptotically inner endomorphism of $A$ such that $\pi_\phi\gamma(A)'$ is a prescribed abelian von Neumann algebra. The existence of such $\gamma$ follows from Theorem 3.3 of \cite{K03}.\footnote{For example let $\nu_1,\nu_2,\ldots,$ be a sequence of irreducible representations of $A$ such that all $\nu_n$ are mutually disjoint. If $\rho$ is the direct sum $\nu_1\oplus\nu_2\oplus\cdots$ then the weak closure of $\rho(A)$ is equal to $\B(\Hil_1)\oplus \B(\Hil_2)\oplus\cdots$, where $\Hil_n$ is the representation space of $\nu_n$. Then by Theorem 3.3 of \cite{K03} it follows that there is an asymptotically inner endomorphism $\gamma$ of $A$ such that $\pi\gamma$ is unitarily equivalent to $\rho$, which implies that $\pi\gamma(A)'$ is isomorphic to $\C\oplus\C\oplus\cdots$. The condition $u_0=1$ for the choice of $u_t$ is not explicitly mentioned but follows from the proof. We could impose a finite number of conditions on $\gamma$ of similar nature $\pi_i\gamma\cong \rho_i$ with mutually disjoint irreducible $\pi_i$ and arbitrary $\rho_i$.} Let $U$ be a unitary in $M(\K\otimes A)$. We will describe how the system $(A,\phi,\gamma,U)$ works as a measuring apparatus for the observed quantum system $\K$. Let $\varphi$ be a state of $\K$. We denote by $\id$ the identity representation of $\K=\K(\Hil)$ on $\Hil$, where $\varphi$ extends to a normal state of $\B(\Hil)=\K(\Hil)''$. Then through the interaction with $(A,\phi)$ the state $\varphi\otimes\phi$ of the combined system $\K\otimes A$ changes to $(\varphi\otimes\phi)\Ad\,U^*$, and then to $T(\varphi)=(\varphi\otimes \phi)\Ad\,U^*(\id\otimes \gamma)$. Let $\pi_0=(\id\otimes\pi_\phi)\Ad\,U^*(\id\otimes \gamma)$, which is a representation of $\K\otimes A$ on the Hilbert space $\Hil\otimes\Hil_\phi$. Then the commutant $\pi_0(\K\otimes A)'$ is equal to $\Ad\,\bar{U}^*(\C1\otimes \pi_\phi\gamma(A)')$, where $\bar{U}=\id\otimes\pi_\phi(U)$. Note that $\pi_0(\K\otimes A)'=\pi_0(\K\otimes A)'\cap \pi_0(\K\otimes A)''$, the center of $\pi_0(\K\otimes A)''$.
Suppose that $\pi_\phi\gamma(A)'\cong \ell^\infty(\N)$, i.e., it is generated by minimal projections $E_1,E_2,\ldots$ on $\Hil_\phi$. Since $x\mapsto\pi_\phi\gamma(x)E_i$ is an irreducible representation of $A$ on $E_i\Hil_\phi$, $E_i$ is of infinite rank. Let $F_i=\Ad\,\bar{U}^*(1\otimes E_i)$, which is a minimal projection of the center of $\pi_0(\K\otimes A)''$. If $\overline{\varphi\otimes \phi}$ denotes the natural extension to a normal state of $\B(\Hil\otimes \Hil_\phi)$ then $$ T(\varphi)=\sum_{i=1}^\infty \overline{\varphi\otimes \phi}(F_i\pi_0(\ \cdot\ )). $$ Since $F_i$ is a minimal projection in $\pi_0(\K\otimes A)'$, the state $\omega_i=\overline{\varphi\otimes\phi}(F_i\pi_0(\ \cdot\ ))/\overline{\varphi\otimes\phi}(F_i)$ is a pure state of $\K\otimes A$ provided $\overline{\varphi\otimes\phi}(F_i)>0$. Since $F_i$'s are mutually orthogonal central projections, $\omega_i$'s are mutually disjoint.\footnote{$\omega_1$ and $\omega_2$, states of $B=\K\otimes A$, are disjoint if and only if there is a central sequence $(x_n)$ in $B$ such that $\omega_1(x_n)\ra1$ and $\omega_2(x_n)\ra0$, which implies that $(\pi_{\omega_1}\oplus \pi_{\omega_2})(x_n)\ra 1\oplus 0$ in the weak operator topology. $(x_n)$ is a central sequence if it is bounded and $[x_n,y]\ra0$ for any $y\in B$. The C$^*$-algebra consisting of central sequences (divided by some trivial ones) is considered to be the classical observables associated with $B$. We expect they reduce to numbers in a phase.} Hence $T(\varphi)$ is the sum of phases with weights and Nature will pick one according to the probability specified by $(\overline{\varphi\otimes \phi}(F_i))$. Since $\varphi\mapsto \overline{\varphi\otimes\phi}(F_i)$ extends to a continuous positive linear map from $\K^*$ into $\C$ there is a positive operator $P_i$ in $\B(\Hil)=\K(\Hil)^{**}$ such that $\varphi(P_i)=\overline{\varphi\otimes\phi}(F_i)$. Since $\sum_iF_i=1$ it follows that $\sum_iP_i=1$. Note that the restriction of $\overline{\varphi\otimes\phi}(F_i\pi_0(\ \cdot\ ))$ to $\K$ is $\E(E_i,\varphi)=\overline{\varphi\otimes\phi}\Ad\,\bar{U}^*(\ \cdot\ \otimes E_i)$ and $\varphi(P_i)=\E(E_i,\varphi)(1)$ using the notation given in Proposition \ref{CP}. Hence it follows that $$ T(\varphi)|\K=\sum_i\varphi(P_i)\frac{\E(E_i,\varphi)}{\varphi(P_i)}, $$ where the sum is over $i$ with $\varphi(P_i)>0$ and $\varphi_i=\E(E_i,\varphi)/\varphi(P_i)$ is a state of $\K$, not necessarily a pure state. Here is our conclusion: after applying this measuring process to $\K$, Nature will transform $\varphi$ to $\varphi_i$ with probability $\varphi(P_i)$ for each $i=1,2,\ldots$. Note that if $U=1$ then $P_i=\overline{\varphi\otimes\phi}(1\otimes E_i)1$ is independent of $\varphi$. Suppose that $\phi$ is given as a vector state by a unit vector $\psi_1\in E_1\Hil_\phi$. If $U=1$ then $T(\varphi)=\varphi\otimes \phi\gamma$ is pure and $P_1=1$ (and other $P_i=0$); no information is gained. We choose a unit vector $\psi_i\in E_i\Hil_\phi$ for each $i>1$ and choose a unitary $u_i\in A$ (or $A+\C1$ if $A$ is non-unital) for $i\geq 1$ such that $\pi_\phi(u_i)\psi_1=\psi_i$. The existence of such $u_i$ follows from Kadison's transitivity \cite{Kad} since $\pi_\phi$ is irreducible. We set $U=\sum_ie_{ii}\otimes u_i$; the summation converges to a unitary as a multiplier of $\K\otimes A$, where $(e_{ij})$ are matrix units generating $\K$.
Since $\bar{U}\xi_i\otimes \psi_1=\xi_i\otimes \psi_i$ where $(\xi_i)$ is an orthonormal basis of $\Hil$ with $e_{ii}\xi_i=\xi_i$, it follows that $\overline{\varphi\otimes \phi}(F_i)=\overline{\varphi\otimes\phi}(\bar{U}^*(1\otimes E_i)\bar{U})=\varphi(e_{ii})$ and $\varphi_i(x)=\Tr(e_{ii}x)$ for $x\in \K$ (when $\varphi(e_{ii})>0$). Hence for this choice of $\phi$ and $U$ we obtain $$ T(\varphi)|\K=\sum_i\varphi(e_{ii})\Tr(e_{ii}\ \cdot\ ), $$ which is what we expect by measuring, e.g., the unbounded observable $\sum_n ne_{nn}\in M(\K)$. We should note that the von Neumann algebra $\M$ generated by all $E_i$ plays the same role as the von Neumann algebra generated by $M$ for the measuring process $(\cL,\phi,M,U)$ with $\cL=\Hil_\phi$ we discussed in the beginning. Previously $M$ was just an arbitrary self-adjoint operator on $\Hil_\phi$ and so the von Neumann algebra generated by $M$ can contain a non-zero compact operator. But the present $\M$ must satisfy $\M\cap \K(\Hil_\phi)=\{0\}$.\footnote{But this is not important as it is attained by tensoring with an infinite-dimensional Hilbert space.} \section{Endomorphisms} As we have noted, Theorem 3.3 of \cite{K03} serves to produce the desired endomorphisms for any separable non-type I nuclear simple C$^*$-algebra. Here we show a concrete way to construct an asymptotically inner endomorphism $\gamma$ and an irreducible representation $\pi$ for the UHF algebra $A$ of type $k^\infty$ with $k>1$ such that $\pi\gamma(A)'$ is isomorphic to $\C^k$ but $A\cap \gamma(A)'=\C1$.\footnote{Which I do not have a specific reason to require but consider as a condition for $\gamma$ to be close to an automorphism.} We denote by $M_k$ the C$^*$-algebra of $k\times k$ matrices and denote by $v$ the diagonal matrix $1\oplus \omega\oplus \omega^2\oplus\cdots\oplus \omega^{k-1}$ with $\omega=e^{i2\pi/k}$. We define an automorphism $\sigma$ on $A=M_k\otimes M_k\otimes \cdots$ by $$ \sigma=\bigotimes_{n=1}^\infty\Ad\,v. $$ Since $v^k=1$ it follows that $\sigma^k=\id$. The fixed point algebra $A^\sigma=\{x\in A\ |\ \sigma(x)=x\}$ is isomorphic to $A$ (see \cite{St70} and \cite{K77}). This is easy to see if you know of AF algebras and associated Bratteli diagrams \cite{Br72}. We regard $M_k^{\otimes n}$ as the C$^*$-subalgebra of $A$ generated by the first $n$ copies of $M_k$. Since $v^{\otimes n}=\sum_{j=0}^{k-1}\omega^j E_j\in M_k^{\otimes n}$, where $E_j$'s are mutually orthogonal projections of $M_k^{\otimes n}$ of rank $k^{n-1}$, it follows that $(M_k^{\otimes n})^\sigma=\bigoplus_{j=0}^{k-1}E_jM_k^{\otimes n}E_j$ with $E_jM_k^{\otimes n}E_j\cong M_k^{\otimes (n-1)}$. Thus we can embed $M_k^{\otimes (n-1)}$ into $(M_k^{\otimes n})^\sigma$. We construct such embeddings consistently from $M_k\subset M_k^{\otimes 2}\subset M_k^{\otimes 3}\subset \cdots$ into $(M_k^{\otimes 2})^\sigma\subset (M_k^{\otimes 3})^\sigma\subset (M_k^{\otimes 4})^\sigma\subset\cdots$ preserving each level, where the closure of the union of the former (resp. latter) sequence is $A$ (resp. $A^\sigma$); thus we obtain the isomorphism $\gamma$ of $A$ onto $A^\sigma$, which is the endomorphism we aimed at and is asymptotically inner as all the unital endomorphisms of $A$ are. In our case this is also easy to see. Since $\gamma(M_k)\subset M_k^{\otimes 2}$, there is a (continuous) unitary path $u_t^{(1)},\ t\in [0,1]$ in $M_k^{\otimes 2}$ such that $u^{(1)}_0=1$ and $\Ad\,u^{(1)}_1(M_k)=\gamma(M_k)$.
Since $\Ad\,u^{(1)}_1(M_k^{\otimes 2})=\gamma(M_k)\Ad\,u_1^{(1)}(1\otimes M_k)\subset M_k^{\otimes 3}$, it follows that both $\Ad\,u^{(1)}_1(1\otimes M_k)$ and $\gamma(1\otimes M_k)$ are unital subalgebras of $M_k^{\otimes 3}\cap \gamma(M_k)'$. Hence there is a unitary path $u_t^{(2)},\ t\in [0,1]$ in $M_k^{\otimes 3}\cap \gamma(M_k)'\cong M_k\otimes M_k$ such that $u_0^{(2)}=1$ and $\Ad(u_1^{(2)}u^{(1)}_1)(1\otimes M_k)=\gamma(1\otimes M_k)$. In this way we construct a unitary path $u_t^{(n)},\ t\in [0,1]$ in $M_k^{\otimes (n+1)}\cap \gamma(M_k^{(n-1)})'$ such that $u_0^{(n)}=1$ and $\Ad(u^{(n)}_1u^{(n-1)}_1\cdots u_1^{(1)})(1^{\otimes (n-1)}\otimes M_k)=\gamma(1^{\otimes (n-1)}\otimes M_k)$. Combining all these unitary paths $u^{(n)}_t$ into one continuous unitary path $u_t,\ t\in [0,\infty)$ in $A$, it follows that $\gamma(x)=\lim_{t\ra\infty}\Ad\,u_t(x),\ x\in A$. Thus $\gamma$ is an asymptotically inner endomorphism such that $\gamma(A)=A^\sigma$. Let $\phi_0$ be the pure state of $M_k$ defined by $\phi_0(x)=x_{11}$ for $x=(x_{ij})\in M_k$. Since $\phi_0(v)=1$ we have that $\phi_0\Ad\,v=\phi_0$. Let $\phi=\phi_0\otimes \phi_0\otimes\cdots$, which is a $\sigma$-invariant pure state of $A$. Define a unitary $U$ on the GNS representation space $\Hil_\phi$ by $$ U\pi_\phi(x)\Omega_\phi=\pi_\phi\sigma(x)\Omega_\phi,\ x\in A. $$ Then $\Ad\,U\pi_\phi(x)=\pi_\phi\sigma(x),\ x\in A$ and $U^k=1$. We can conclude that $\pi_\phi\gamma(A)'= U''\cong \C^k$ and that $\phi|\gamma(A)$ is a pure state. By using the above fact we can construct more general examples. Since the tensor product of infinitely many copies of $A$ is isomorphic to $A$, one obtains a unital endomorphism of $A$ by $A\cong A\otimes A\otimes\cdots\ra \gamma(A)\otimes\gamma(A)\otimes\cdots \subset A\otimes A\otimes\cdots\cong A$, where the middle map is defined by $\gamma\otimes\gamma\cdots$. We denote this unital endomorphism by $\gamma^\infty$. Let $\phi^\infty$ be the pure state of $A\cong A\otimes A\otimes\cdots$ defined by $\phi\otimes\phi\otimes\cdots$. Since $\phi^\infty$ is invariant under the action $\sigma^\infty=\sigma\otimes\sigma\otimes\cdots$ of the compact group $G=\Z_k\times\Z_k\times\cdots$ on $A\cong A\otimes A\otimes\cdots$ and $\phi^\infty|\gamma^\infty(A)$ is pure, it follows that $\pi_{\phi^\infty}\gamma^\infty(A)'$ is isomorphic to $\ell^\infty(\hat{G})\cong\ell^\infty(\N)$. Define a unit vector $\xi\in \Hil_\phi$ by $\xi=k^{-1/2}\sum_{j=1}^k\pi_\phi(e_{j1}^{(1)})\Omega_\phi$ and a pure state $\psi$ of $A$ by $\psi(x)=\lan \xi, \pi_\phi(x)\xi\ran$, where $(e^{(1)}_{ij})$ is the matrix units of $M_k\subset A$. Then $\psi$ is not $\sigma$-invariant but $\sigma$-covariant (i.e., $\pi_\psi=\pi_\phi$ is $\sigma$-covariant). Let $\psi^\infty$ be the state of $A\cong A\otimes A\otimes \cdots$ defined by $\psi\otimes\psi\otimes\cdots$. Then $\psi^\infty$ is covariant under the action obtained by restricting $\sigma^\infty$ to the discrete subgroup $\hat{G}=\bigcup_{n=1}^\infty \Z_k\times \Z_k\times\cdots \times\Z_k (n\ {\rm factors})$ of $G$. Since there are no $\sigma^\infty|\hat{G}$-invariant states associated with $\pi_{\psi^\infty}$ we can conclude that $\pi_{\psi^\infty}\gamma^\infty(A)'\cong L^\infty(G)$, which is completely non-atomic. (If $\pi_{\psi^\infty}\gamma^\infty(A)'$ has a minimal projection $E$ then a unit vector in $E\Hil_{\psi^\infty}$ defines a $\sigma^\infty|\hat{G}$-invariant state of $A$, which must be $\sigma^\infty$-invariant, leading us to a contradiction.)
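Let us record why each $E_j$ above has rank $k^{n-1}$, which is the combinatorial heart of these identifications (a standard remark, added here for convenience). Labelling the standard basis of $\C^k$ by $\{0,1,\dots,k-1\}$, the eigenvalue of $v^{\otimes n}$ on a tensor basis vector $e_{i_1}\otimes\cdots\otimes e_{i_n}$ is $\omega^{i_1+\cdots+i_n}$, so $E_j$ is spanned by the basis vectors with $i_1+\cdots+i_n\equiv j \pmod k$; since the last index is determined modulo $k$ by the previous ones, there are exactly $k^{n-1}$ such vectors for each $j$.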
Let $\phi_1$ be the pure state of $M_k$ defined by $\phi_1(x)=k^{-1}\sum_{i,j}x_{ij}$ and let $\chi=\phi_1\otimes\phi_1\otimes\cdots$ as a state of $A$. We define $\gamma$ in the most natural way, i.e., regarding $M_k^{\otimes (n-1)}=M_k(M_k^{\otimes (n-2)})$ as a giant matrix algebra $M_{k^{n-1}}$ in the natural way we embed $M_k^{\otimes (n-1)}$ into $(M_k^{\otimes n})^\sigma$ componentwise. Then one can easily see that $\chi\gamma=\chi$. Since $\chi,\chi\gamma,\cdots,\chi\gamma^{k-1}$ are mutually disjoint, it follows that $\pi_\chi\gamma(A)'=\C1$ as $\gamma(A)=A^\sigma$. In particular $A\cap\gamma(A)'=\C1$. In the representation $\pi_\chi$ there is a unitary $U$ such that $\Ad\,U\pi_\chi(x)=\pi_\chi\gamma(x),\ x\in A$, i.e., $\gamma$ is at least implemented by a unitary in some representation, like an automorphism. \medskip {\bf Note added in August 2014} A 1972 paper \cite{Hep} by Klaus Hepp, which manifests an idea behind this note, came to my attention while browsing the internet: disjointness of C$^*$-algebra states, as macroscopic observables, is used there to explain the reduction of wave packets. Thus, a modest contribution of this note since then is to present explicit endomorphisms as a device achieving this.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Benchmark for two-point correlation using bulk criticality}\label{se1} We use an estimator of equal-imaginary-time correlations, which avoids reweighting along the imaginary-time axis and turns out to be computationally cheap. The estimator correctly captures the asymptotic behavior in the $L \rightarrow \infty$ limit. Specifically speaking, in the worm quantum Monte Carlo simulations, we trace the trajectories of the defects $I$ and $M$ on an edge. If the imaginary-time distance between the defects is less than a $1/L$ fraction of the entire axis, the distance $r$ between the two defects along the edge is recorded. The follow-up treatment is similar to the measurement of two-point correlations in a classical model~\cite{lv2021finite} which was based on the original idea in Ref.~\cite{prokof2001worm}. We use the $r=1$ result to normalize the two-point correlation and concentrate on the $r \ne 0$ domain of the correlation function. Hence, the results do not suffer from biased allocation of statistical weight between the original and Green-function state spaces. Finally, we obtain the two-point correlation $g(r)$ as a function of $r$ along the edge. We proceed to benchmark the above-mentioned methodology for the correlation function using bulk criticality. In particular, we apply periodic boundary conditions in both the [10] and [01] directions to eliminate the open edges and sample the correlation functions at $t_c$. We analyze the $r$ dependence of $g(r)$ as well as the $L$-dependent behavior of $g(L/2)$. We quote a precise estimate $\eta = 0.03853(48)$ for the anomalous dimension of the (2+1)-dimensional O(2) criticality~\cite{Xu2019}. As shown in Fig.~\ref{SM_fig1}(a), the $r$-dependent behavior converges to the power law $g(r) \sim r^{2-(d+z)-\eta}$, with $d=2$, $z=1$ and $\eta \approx 0.03853$. From Fig.~\ref{SM_fig1}(b), we verify that $g(L/2)$ scales as $g(L/2) \sim L^{-1.03853}$. \begin{figure} \includegraphics[height=11cm,width=8cm]{sm_1.pdf} \caption{Bulk critical point. (a) Log-log plot of $g(r)$ versus $r$. (b) Log-log plot of $g(L/2)$ versus $L$.}\label{SM_fig1} \end{figure} More quantitative verification can be achieved by least-squares fits. We fit $g(L/2)$ to \begin{equation} g(L/2)=a L^b, \label{eqfitBOK} \end{equation} where $a$ is a constant and $b=-1-\eta$. The results are summarized in Table~\ref{TABLE_B}. We obtain $b=-1.027(6)$ and $\chi^2/{\rm DF} \approx 1.2$ for $L_{\rm min}=48$, $b=-1.03(1)$ and $\chi^2/{\rm DF} \approx 1.5$ for $L_{\rm min}=64$, as well as $b=-1.06(3)$ and $\chi^2/{\rm DF} \approx 1.4$ for $L_{\rm min}=96$. The estimates of $b$ are consistent with $-1-\eta=-1.03853(48)$ of the (2+1)-dimensional O(2) universality. \section{Critical phenomena on open edges}\label{se2} In this section, we perform FSS analyses for the special transition, the Kosterlitz-Thouless-like criticality, the ordinary critical phase and the extraordinary-log critical phase. \begin{table} \begin{center} \caption{Fits of $g(L/2)$ to Eq.~(\ref{eqfitBOK}) at the bulk critical point.} \label{TABLE_B} \begin{tabular}{p{1.2cm}p{1.5cm}p{1.5cm}p{1.3cm}} \hline \hline $L_{\rm min}$ & $\chi^2$/DF &$a$&$b$\\ \hline 32 &20.62/4&2.65(3)&--1.009(3) \\ 48 &3.45/3&2.86(6)&--1.027(6) \\ 64 &3.04/2&2.9(1)&--1.03(1) \\ 96 &1.41/1&3.4(4)&--1.06(3) \\ \hline \hline \end{tabular} \end{center} \end{table} \headline{Special transition} We locate the special transition point by the FSS of the winding probability $R_{[10]}$.
We perform fits according to \begin{equation} R_{[10]} = R_{[10]}^c+a_1 (\kappa-\kappa_c)L^{y_t}, \label{eqfitS} \end{equation} where $R_{[10]}^c$ is the critical dimensionless ratio, $a_1$ represents a fitting parameter, $\kappa_c$ denotes the transition point, and $y_t$ relates to the correlation length exponent $\nu$ by $y_t=1/\nu$. We perform least-squares fits with $\kappa=1.16, 1.18, 1.2$ and $L=48, 64, 96, 128, 192$, and obtain reasonably good results for large $L_{\rm min}$. We also perform fits by fixing $y_t$ at $0.608$ and $0.58$, which were estimated for the special transition in classical O(2) models in the spin~\cite{deng2005surface} and flow~\cite{unpublished} representations, respectively. The results are listed in Table~\ref{TABLE_S}. By comparing the fits, our final estimate of $\kappa_c$ is $\kappa_c=1.18(2)$. \headline{Kosterlitz-Thouless-like criticality} We explore the critical phase at the large-$t$ side of the Kosterlitz-Thouless-like transition for $\kappa=10$. For each $t$ in the set \{$0.027$, $0.03$, $0.035$, $0.04$, $0.045$, $0.05$\}, we perform scaling analyses for $g(L/2)$ according to Eq.~(\ref{eqfitBOK}) with $b=-\eta$, which captures the leading FSS, and the results are summarized in Table~\ref{TABLE_K}. The fits are precise only at large sizes. Moreover, as $t$ increases, the exponent $\eta$ decreases. \headline{Ordinary critical phase} We consider the ordinary critical phase with the hopping enhancement $\kappa=0.4$ at $t_c$. We analyze $g(L/2)$ by Eq.~(\ref{eqfitBOK}) with $b=2y_h-4$. The results are presented in Table~\ref{TABLE_O}. For $L_{\rm min}=48$, $64$ and $96$, we find $b=-2.41(2)$, $-2.45(4)$ and $-2.5(1)$, with $\chi^2$/DF $\approx 1.3$, $1.1$ and $1.4$, respectively. These results are compatible with the exponent $2y_h-4$ with $y_h=0.781(2)$ of the classical O(2) ordinary surface criticality~\cite{deng2005surface}. \headline{Extraordinary critical phase} We analyze the FSS for the extraordinary phase. We fit $g(L/2)$ to \begin{equation} g(L/2) =a [{\rm ln}(L/l_0)]^{-\hat{q}}, \label{eqfitE1} \end{equation} and the results are listed in Table~\ref{TABLE_E1}. If $\hat{q}$ is left free, we obtain $0.3 \lessapprox \hat{q} \lessapprox 0.7$, and find the tendency that $l_0$ drastically decreases upon increasing $\kappa$. To suppress uncertainty, we fix $\hat{q}=0.59$. For each considered $\kappa$, we obtain stable fitting results of $a$ and $l_0$. For $l_0$, representative results are $l_0=0.31(3)$, $0.21(1)$, $0.04(4)$, $0.0108(5)$ and $0.002(1)$ with $\chi^2$/DF $\approx$ $0.3$, $1.8$, $0.9$, $0.7$ and $0.5$, for $\kappa=2$, $3$, $5$, $7$ and $10$, respectively. Assuming a unique critical universality for the extraordinary phase, we analyze the sum of scaled superfluid stiffness $\rho_s L$ over $\kappa=2$, $3$, $5$, $7$ and $10$ by performing fits to \begin{equation} \sum_{\kappa} \rho_s L =A + B {\rm ln}L. \label{eqfitE2} \end{equation} As summarized in Table~\ref{TABLE_E2}, we obtain reasonably good fits with $\chi^2/{\rm DF} \sim 1$ for $L_{\rm max}=192$ and $128$. For $L_{\rm max}=192$, we obtain $A=-5.3(7)$, $B=5.8(2)$ and $\chi^2/{\rm DF} \approx 2.0$ with $L_{\rm min}=64$, as well as $A=-8.8(2.0)$, $B=6.5(4)$ and $\chi^2/{\rm DF} \approx 0.5$ with $L_{\rm min}=96$. For $L_{\rm max}=128$, we obtain $A=-4.6(8)$, $B=5.6(2)$ and $\chi^2/{\rm DF} \approx 0.8$ with $L_{\rm min}=64$.
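As an illustration of how fits of the above type can be set up (our sketch with synthetic data; the actual analysis, error estimation and $L_{\rm min}$ scans are more involved), consider:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Mock data for g(L/2) = a * L**b; replace with measured values.
L = np.array([32., 48., 64., 96., 128., 192.])
g = 2.9 * L**-1.03
dg = 0.01 * g

popt, pcov = curve_fit(lambda L, a, b: a * L**b, L, g,
                       sigma=dg, absolute_sigma=True, p0=(3.0, -1.0))
resid = (g - popt[0] * L**popt[1]) / dg
chi2_df = (resid**2).sum() / (L.size - 2)
print("b = %.3f +- %.3f, chi2/DF = %.2f"
      % (popt[1], np.sqrt(pcov[1, 1]), chi2_df))
\end{verbatim}
The logarithmic fit of Eq.~(\ref{eqfitE2}) can be handled in the same way, e.g., via a linear fit of $\sum_\kappa\rho_sL$ against ${\rm ln}L$ with \texttt{np.polyfit}.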
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction and statement of the result}\label{s1} The Julia set of a non-linear polynomial $P: \Ci\to \Ci$ is the set of points having no neighborhood on which the family of iterates $(P^n)$ is normal. It is a compact non-empty set which is, except for very special polynomials $P$, a fractal set. For $c\in \Ci$ we denote by $f_c$ the polynomial $$f_c(z)=z^d+c,$$ where $d\ge 2$ is an even integer number, which we fix. Denote by $J_c$ the Julia set of $f_c$. In the quadratic case $d=2$, the values of $c$ for which the Hausdorff dimension (HD) of $J_c$ is big have attracted a lot of attention. It has been known since the pioneering work of Douady-Hubbard~\cite{DH} that the Hausdorff dimension of the Julia set is less than $2$ for every hyperbolic polynomial. Thus $HD(J_c)<2$ outside $\partial M$ and the (hypothetical) non-hyperbolic components of (the interior) $M$. We recall that $M$ stands for the Mandelbrot set, that is the compact subset of $c\in \Ci$ such that $J_c$ is connected. Shishikura~\cite{Shi} was the first to find quadratic Julia sets with Hausdorff dimension $2$. He indeed proved that this property holds on a dense $\cal{G}_\delta$ subset of $\partial M$ or even on a dense $\cal{G}_\delta$ subset on the boundary of every hyperbolic component of $M$. More recently, Buff and Ch\'eritat~\cite{BC} have found quadratic Julia sets with positive Lebesgue measure (see also http://annals.math.princeton.edu/articles/3682). Both Shishikura's and Buff-Ch\'eritat's results are based on the phenomenon of parabolic implosion which has been discovered and studied by Douady-Hubbard \cite{DH}. It should be pointed out that Buff-Ch\'eritat's result is very involved and that we will make no use of it. There is no doubt that if they exist, values of $c\in \bf{R}$ such that the Julia set $J_c$ has positive measure must be as hard to find as Buff-Ch\'eritat's ones. The aim of the present note is much more modest. Its starting point is the second author's re-reading of Shishikura's result~\cite{Z}: it states that if one implodes a polynomial with a parabolic cycle having $q$ petals, then the dimension of its Julia set automatically gets bigger than $2q/(q+1)$. Shishikura's result follows from this by a Baire argument. Very little is known about Hausdorff dimension of $J_c$ for real $c$. In particular, it is not known if, for a given degree $d$, $$\sup\{HD(J_c), c\in \bf {R}\}=2.$$ Possible candidates for high dimension are of course (just look at them!) infinitely renormalizable polynomials but the analysis seems to be very delicate and at least no result concerning dimension $2$ has been proven so far (for results in the opposite direction, see~\cite{AL} though). It is for example unknown if the Julia set of the Feigenbaum polynomial has dimension $2$ or not (see~\cite{LS1}-~\cite{LS2} for the Julia set of the Feigenbaum universal map though). The only known result about this set is a very general result of Zdunik \cite{Zd}: it has dimension bigger than $1$. If one tries to use the same ideas as in~\cite{Shi} for real polynomials $f_c$, one immediately faces the problem that if $f_c$, for $c$ real, has a parabolic cycle that may be imploded along the real axis then the number of petals is $1$ and this does not imply more than Zdunik's general result. The only trick of this paper is to make use of a virtual doubling of petals when the critical point is mapped to a parabolic point (by Lavaurs map).
It was inspired by the paper of Douady et al.~\cite{BDDS} and implies the following theorem, which is the main result of this work. \begin{theo}\label{t} Let $f_c(z)=z^d+c$, $d$ even. Let $N$ be the set of parameters $c\in \bf{R}$ such that $f_c$ has a parabolic cycle of period at least $2$ and multiplier $1$. Then there exists an open set $Y$ of $\bf{R}$ whose closure contains $N$ such that $J_c$ is connected and $$HD(J_c)>\frac{2d}{d+1},$$ for every $c\in Y$. \end{theo} \begin{com} In fact, we prove a stronger statement: hyperbolic dimension~\cite{Shi} of $J_c$ is bigger than $2d/(d+1)$. \end{com} \begin{com} By ~\cite{KSS} (~\cite{GS},~\cite{LY} for $d=2$), the set of real $c$ such that $f_c$ is hyperbolic is dense in $\bf{R}$. In particular, hyperbolic parameters $c$ are dense in $Y$. \end{com} {\bf Acknowledgment.} This work was done during the first author's one month's visit at the university of Orl\'eans in 2008. \section{Proof of the theorem} We fix an even integer $d\ge 2$ and consider the family $f_c(x)=x^d+c$, for real $c$. Then the Julia set $J_c$ is connected if and only if $c\in [a, b]$, where $a<0$ is such that $f_a^2(0)$ is a fixed point, and $b>0$ is such that $f_b$ has a fixed point with multiplier $1$. It is sufficient to prove that, given $c_0\in (a,b)$ such that $f_{c_0}$ has a neutral cycle of period $k>1$ with multiplier $1$, that is, with one petal, there is an open set $Z$ accumulating at $c_0$ for which $HD(J_c)>2d/(d+1)$ for $c\in Z$. We begin with three pictures. The first two illustrate how to choose the parameter (for $d=2$) and how the corresponding Julia set looks. The third one has been kindly drawn for us by the referee and shows the case $d=4$. \begin{figure}[!ht] \begin{center} \includegraphics[width=10cm,angle=270]{mand.eps} \caption{Choice of the parameter} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=10cm,angle=270]{Jul.eps} \caption{The corresponding Julia set} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=10cm, angle=270]{d4.eps} \caption{The case d=4} \end{center} \end{figure} Since $c_0\in (a, b)$, the set $J_c$ is connected for $c$ in a small neighborhood $U$ of $c_0$. It is known that $f_c$ has an attracting periodic orbit of period $k$ for $c\in U$ on the left side of $c_0$. Since $k>1$, the corresponding filled-in Julia set $K_{c_0}$ is such that its interior has a component $\Delta$ containing $0$, and infinitely many preimages of $\Delta$. The boundary $\partial \Delta$ contains a parabolic point $\alpha$ of period $k$. In particular, there is a sequence of preimages of $\Delta$, which intersect the real line and accumulate at $\alpha$. For definiteness, one can assume that $\alpha>0$. Then $F=f_{c_0}^k$ has the following local form: \begin{equation}\label{loc} F(z)=z+a(z-\alpha)^2+b(z-\alpha)^3+..., \end{equation} where $a>0$. This implies that a parabolic implosion phenomenon occurs as $\epsilon\to 0$, $\epsilon>0$, for the maps $f_{c_0+\epsilon}$. \\ At this point we digress somewhat and describe briefly the theory of parabolic implosion.\\ Let $\delta>0$ be very small and $D_{\pm }$ be the disks centered at $\alpha\pm \delta$ with radius $\delta$. The map $F$ sends $D_-$ into itself while $D_+\subset F(D_+)$. This defines, after identification of $z$ with $F(z)$ at the boundary, two cylinders $U_-=D_-\backslash F(D_-)$ and $U_+=F(D_+)\backslash D_+$.
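For the reader's convenience, let us record the short computation behind the expansion used below (it is classical and not spelled out in the text). Set $u=z-\alpha$ and $w=-1/(au)$, so that $u=-1/(aw)$; this is the approximate Fatou coordinate introduced in the next paragraph. From (\ref{loc}), $F$ sends $u$ to $u+au^2+bu^3+\cdots$, hence the conjugated map sends $w$ to $$ \frac{-1}{a(u+au^2+bu^3+\cdots)}=w\bigl(1+au+bu^2+\cdots\bigr)^{-1} =w\Bigl(1-au+(a^2-b)u^2+O(u^3)\Bigr). $$ Since $-auw=1$ and $(a^2-b)u^2w=(1-b/a^2)/w$, this equals $w+1+\dfrac{1-b/a^2}{w}+O(1/|w|^2)$, which explains the constant $A=1-b/a^2$ in (\ref{Finf}) below.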
The fact that $U_\pm $ are actual cylinders is best seen in the approximate Fatou coordinate $I: z\mapsto -1/(a(z-\alpha))$ which sends $\alpha$ to $\infty$ and conjugates $F$ to a map which is asymptotically the translation by $1$ at $\infty$: \begin{equation}\label{Finf} F_\infty(w)=I\circ F\circ I^{-1}(w)=w+1+\frac{A}{w}+O(\frac{1}{|w|^2}), \end{equation} where $A=1-b/a^2$. The real number $A$ is a conformal invariant. In the case of a real polynomial $F$ which has a parabolic fixed point $\alpha$ with multiplier $1$ and with a single critical point in its immediate basin $\Delta$, it is known~\cite{Shi} that $A>0$.\\ \begin{figure}[!ht] \begin{center} \includegraphics[width=9cm,angle=270]{Fatou.eps} \caption{Fatou coordinates} \end{center} \end{figure} By the Riemann mapping theorem these two cylinders may be uniformized by "straight" cylinders. In other words there exist maps $\varphi_\pm $ mapping the cylinders $U_\pm $ to vertical strips $V_\pm $ of width $1$ conjugating $F$ to the translation by $1$. For further use we notice that, by symmetry, $c_0$ being real, we may assume that $\varphi_\pm (\overline{z})=\overline{\varphi_\pm (z)}$.\\ We also notice that these maps are unique up to post-composition by a real translation. These two maps are called respectively repelling (+) and attracting (-) Fatou coordinates. We normalize them as follows. For every $\kappa>0$ and $R>0$, consider two sectors $\Sigma_-(\kappa, R)=\{w: Re(w)>R-\kappa |Im(w)|\}$, $\Sigma_+(\kappa, R)=\{w: Re(w)<-R+\kappa |Im(w)|\}$. Then for any $\kappa>0$ there is a big enough $R(\kappa)$ such that, if we introduce two sectors $\Sigma_\pm(\kappa)=\Sigma_\pm(\kappa, R(\kappa))$, then $\varphi_\pm=\Phi_\pm\circ I$, where \begin{equation}\label{phi} \Phi_\pm(w)=w- A \log_\pm(w)+C_\pm+o(1) \end{equation} as $w\to \infty$ within $\Sigma_\pm(\kappa)$ respectively. We specify the constants $C_\pm$ and the log-branches in such a way that $\Phi_\pm$ are real for real $w$. Namely, we choose $C_-=0$ and $\log_-$ to be the standard log-branch in the slit plane $\Ci\setminus \{x\le 0\}$. In turn, let $C_+=i A \pi$ and $\log_+$ be a branch of $\log$ in $\Ci\setminus \{x\ge 0\}$ so that $\log_+(w)=\log|w|+i\pi$ for $w<0$. Now, we extend $\varphi_\pm$ in the following way. Since \begin{equation} \varphi_-(F(z))=T_{1} (\varphi_-(z)) \label{F-}\end{equation} where $T_\sigma$ denotes the translation by $\sigma$, and since every orbit converging to $\alpha$ passes through $U_-$ exactly once, $\varphi_-$ extends uniquely to $\Delta$ to an holomorphic function still satisfying ($\ref{F-}$). It is seen from~(\ref{F-}) that the map $\varphi_-: \Delta\to \Ci$ is a branched covering, with the critical points at $0$ and all its preimages in $\Delta$ by $F^n$, $n>0$, and the critical values at the real numbers $\varphi_-(0)-n$, $n\ge 0$. In particular, there exists a simply-connected domain $\Omega_-\subset \Delta\cap {\bf H}^+$, where ${\bf H}^+$ is the upper half-plane, such that $\varphi_-: \Omega_-\to {\bf H}^+$ is a holomorphic homeomorphism. Moreover, the intersection of the boundary of $\Omega_-$ with $\bf{R}$ is the interval $(0, \alpha)$. Concerning the repelling Fatou coordinate it is best to consider $\psi_+= \varphi_+^{-1}:\, V_+\to U_+$. The functional relation is now \begin{equation}\psi_+(T_1(z))=F(\psi_+(z)) \label{F+}\end{equation} and we can extend $\psi_+$ to an entire function by putting, for $n\in\Z,\,\psi_+(T_n(z))=F^n(\psi_+(z))$ for $z\in V_+$.
There exists a simply connected domain $\Omega_+\subset {\bf H}^+$ such that $\psi_+: \Omega_+\to {\bf H}^+$ is a homeomorphism.\\ Let now $\sigma$ be a real number. We define the Lavaurs map $g_\sigma$ on the component $\Delta$ of the interior of the filled-in Julia set of $f_{c_0}$ by $$g_\sigma=\psi_+\circ T_\sigma\circ \varphi_-.$$ The ``raison d'\^etre'' of this definition is the following theorem due to Douady and Lavaurs (\cite{AD}): \begin{theo} There exists a sequence of positive $\epsilon_n$ converging to $0$ and a sequence of positive integers $N_n$ such that \begin{equation} g_\sigma(z)=\lim_{n\to \infty} f_{c_0+\epsilon_n}^{k N_n}(z) \label{Lavaurs}\end{equation} uniformly on compact sets of $\Delta$. \end{theo} Using~(\ref{phi}) with the constants $C_\pm$ and the log-branches specified as above, it is easy to see that, for every $\kappa$, if $w$ tends to $\infty$ in $\Sigma(\kappa):=\Sigma_-(\kappa)\cap \Sigma_+(\kappa)\cap {\bf H}^+$, then $g_\infty(w):=I\circ g_\sigma\circ I^{-1}(w)=w+(\sigma-i A\pi)+O(\frac{1}{|w|})$, where $A$ is real and positive. Indeed, in these coordinates $g_\infty=\Phi_+^{-1}\circ T_\sigma\circ \Phi_-$; since $\log_+$ and $\log_-$ coincide on ${\bf H}^+$ while $C_+-C_-=iA\pi$, the logarithmic terms cancel up to an error $O(1/|w|)$. Therefore, for every real $\sigma$ and every $\kappa>\kappa(\sigma)$, the inverse map $g_\infty^{-1}$ leaves the sector $\Sigma(\kappa)$ invariant and $g_\infty^{-n}\to \infty$ as $n\to \infty$. Coming back to the $z$-plane, we conclude that the branch $G=I^{-1}\circ g_\infty^{-1}\circ I$ of $g_\sigma^{-1}$ leaves the set $S(\kappa)=I^{-1}(\Sigma(\kappa))$ invariant, and $G^n(z)\to \alpha$ as $n\to \infty$, for $z\in S(\kappa)$ and every $\kappa>\kappa(\sigma)$. We have, for $w\in \Sigma(\kappa)$: \begin{equation}\label{Ginf} G_\infty(w):=I\circ G\circ I^{-1}(w)=w+(-\sigma+i A\pi)+O(\frac{1}{|w|}). \end{equation} Now, from the definition of $g_\sigma(z)$ and the global properties of the maps $\varphi_-$ and $\psi_+$, the existence of a simply connected domain $\Omega_0\subset \Omega_-$ follows, which is mapped by $g_\sigma$ homeomorphically onto ${\bf H}^+$ and such that $\alpha\in \bar \Omega_0$. Moreover, from the above description, $S(\kappa)\subset \Omega_0$, for every $\kappa>\kappa(\sigma)$. Therefore, the branch $G$ of $g_\sigma^{-1}$ defined above extends to a global univalent branch $G: {\bf H}^+\to \Omega_0$ of $g_\sigma^{-1}$. Since $\Omega_0\subset {\bf H}^+$, the iterates $G^n: {\bf H}^+\to \Omega_0$, $n>0$, converge uniformly on compact sets in ${\bf H}^+$ to a unique fixed point in $\bar \Omega_0$, which must be $\alpha$. Let us consider the continuous map $\sigma\mapsto g_\sigma(0)=\psi_+(\varphi_-(0)+\sigma)$: if $\sigma$ runs over the interval $\{\varphi_+(x)-\varphi_-(0):\ x\in U_+\cap \R\}$, then $g_\sigma(0)$ runs over $U_+\cap \R$. It is thus clear, and this is the key point in the proof, that we can choose $\sigma$ in such a way that $g_\sigma(0)$ is a preimage of $\alpha$: there is $j\ge 1$ such that $f^j_{c_0}\circ g_\sigma(0)=\alpha$. Since $0$ is a critical point for $g_\sigma$, taking the inverse image by $f^j_{c_0}\circ g_\sigma$ has the same effect as multiplying the number of petals by $d$, and we may state: \begin{lem}\label{l} There exists an infinite iterated function system defined on a small compact neighborhood $B_0$ of zero and generated by some holomorphic branches of $f_{c_0}^{-1}$ and $g_{\sigma}^{-1}$ such that its limit set has Hausdorff dimension bigger than $2d/(d+1)$. \end{lem} \begin{com}\label{jl} In fact, the limit set is a subset of a so-called Julia-Lavaurs set, denoted by $J_{c_0, \sigma}$. It is defined as follows.
The map $g_\sigma$ can be extended in a natural way from $\Delta$ to the interior of the filled-in Julia set of $f_{c_0}$: if $f_{c_0}^k(z)\in \Delta$, set $g_\sigma(z)=g_\sigma\circ f_{c_0}^k(z)$. Then $J_{c_0, \sigma}$ is simply the closure of the set of points $z$ for which there exists $m\in\N$ such that $g_\sigma^m(z)$ is defined and belongs to $J(f_{c_0})$. \end{com} This lemma together with the above discussion implies the theorem. Indeed, by a general property of iterated function systems~\cite{mu}, there exists a finite subsystem such that the Hausdorff dimension of its limit set is bigger than $2d/(d+1)$. On the other hand, the finite iterated function system persists for $f_{c_0+\epsilon}$ by (\ref{Lavaurs}). To be more precise, if $\{I_j: B_0\to X_j, 1\le j\le j_0\}$ is this finite iterated function system, then each $I_j$ can be extended to a univalent map to a fixed neighborhood $Y$ of $B_0$ as $I_j: Y\to Y_j$. Consider the inverse univalent map $I^{-1}_j: Y_j\to Y$. Since the convergence in (\ref{Lavaurs}) is uniform on compacts in $\Delta$, for every $\epsilon_n$ small enough there are an integer $N_j>0$ and a compact set $X_{j, n}$ such that $f_{c_0+\epsilon_n}^{N_j}: X_{j, n}\to B_0$ is univalent, too. Now it is clear that, for every $\epsilon_n$ small enough, the non-escaping set $K_n$ of the dynamical system which consists of the finitely many maps $f_{c_0+\epsilon_n}^{N_j}: X_{j, n}\to B_0$, $1\le j\le j_0$, has Hausdorff dimension bigger than $2d/(d+1)$. On the other hand, $K_n$ must lie in the Julia set of $f_{c_0+\epsilon_n}$, because some iterate of the map $f_{c_0+\epsilon_n}^{N_1 N_2 \cdots N_{j_0}}$ leaves the set $K_n$ invariant and is expanding on it. \paragraph{Proof of Lemma~\ref{l}.} As the first step, let us fix a small enough closed ball $B_0$ around zero, so that it does not contain points of the postcritical set of $f_{c_0}$. It has a preimage $B'$ under $F$ in $\Delta\cap {\bf H}^+$. Then we can apply to $B'$ the maps $G^n$, $n>0$. By the above, the sets $B'_n=G^n(B')$ are pairwise disjoint, compactly contained in $\Delta$, and $B_n'\to \alpha$ as $n\to \infty$. Now, for every $n\ge n_0$, so that $B_n'$ lies in a small enough neighborhood $U$ of $\alpha$, we make ``clones'' of $B_n'$ in $U\cap \Delta$ by applying to it $F^r$, $r\in\Z$, where $F^{r}$ for $r<0$ is the branch, well defined in $U\cap \Delta$, which fixes $\alpha$. We obtain the sets $B'_{n,r}=F^r(B_n')$. In the second step, we consider the map $f_{c_0}^j\circ g_\sigma$ from a neighborhood $V$ of $0$ onto $U$. This map is a ramified cover whose only ramification point is at $0$, of order $d$. Let $U^*=U\setminus \{x\ge \alpha\}$, and $V^*=V\cap \{z: Arg(z)\in (0, 2\pi/d)\}$. Denote by $h$ a branch of $(f_{c_0}^j\circ g_\sigma)^{-1}$ from $U^*$ onto $V^*$. Let $B_{n,r}=h(B'_{n,r})$. We obtain a system of holomorphic maps ${\bf \Psi}=\{\psi_{n,r}: B_0\to B_{n,r}\}$, where $\psi_{n,r}=h\circ F^r\circ G^n\circ F^{-1}$, $r\in\Z, n\ge n_0$. If the neighborhood $U$ is chosen small enough, the maps $\psi_{n,r}$ extend to univalent maps from a fixed neighborhood $\tilde B$ of $B_0$ into itself. In particular, the compact sets $B_{n,r}$ are pairwise disjoint and compactly contained in $B_0$. Now, it is quite standard to check that ${\bf \Psi}$ forms a conformal infinite iterated function system in the sense of~\cite{mu} (strictly speaking, in the hyperbolic metric of $\tilde B$, which is equivalent to the Euclidean one on $B_0$ though).
Let us calculate the parameter $\theta=\inf\{t: \psi(t)<\infty\}$ of ${\bf \Psi}$, where $\psi(t)=\sum_{(n,r)} \max_{z\in B_0}|\psi_{n,r}'(z)|^t$. The map $\psi_{n,r}=h\circ I^{-1}\circ F_\infty^r\circ G_\infty^n\circ I\circ F^{-1}$. Here $F^{-1}$ is a univalent map of a neighborhood $\tilde B$ of $B_0$ into $\Delta$. Now, routine and well-known calculations based on~(\ref{Finf})-(\ref{Ginf}) show (see e.g.~\cite{Z}) that, for all $r\in\Z, n\ge n_0$ and some $C$, which depends only on a compact set in ${\bf H}^+\cap \Delta$, from which $w$ is taken, $C^{-1}|r+(-\sigma+i\pi A)n|\le |F_\infty^r\circ G_\infty^n(w)|\le C|r+(-\sigma+i\pi A)n|$, and $C^{-1}\le |(F_\infty^r\circ G_\infty^n)'(w)|\le C$. On the other hand, the map $h$ is a composition of a univalent map with a branch of $z^{1/d}$. This gives us: $C_1^{-1} |r+(-\sigma+i\pi A)n|^{-1-1/d}\le |\psi_{n,r}'(z)|\le C_1 |r+(-\sigma+i\pi A)n|^{-1-1/d}$, for some $C_1$ and every $z\in B_0$. Since $A>0$, the numbers $r+(-\sigma+i\pi A)n$, $r\in\Z$, $n\ge n_0$, form part of a genuine two-dimensional lattice, so the number of pairs $(n,r)$ with $m\le |r+(-\sigma+i\pi A)n|<m+1$ grows linearly in $m$; hence $\psi(t)$ is comparable to $\sum_{m\ge 1} m^{1-t(1+1/d)}$. It follows that the series for $\psi(t)$ converges if and only if $t>\theta=2d/(d+1)$, and $\psi(\theta)=\infty$. Hence~\cite{mu}, the Hausdorff dimension of the limit set of ${\bf \Psi}$ is strictly bigger than $2d/(d+1)$. \small{Inst.\ of Math., Hebrew University, Jerusalem 91904, Israel,\\MAPMO, Universit\'e d'Orl\'eans, BP 6759 45067 Orl\'eans Cedex, France}\\ \bibliographystyle{plain}
\section{Introduction} In the year 1978, a groundbreaking result in the theory of homogenisation was found by Fran\c{c}ois Murat and Luc Tartar, the celebrated $\dive$-$\curl$ lemma (\cite{M78} or \cite{T09}): \begin{theorem}\label{t:dcl} Let $\Omega\subseteq \mathbb{R}^d$ be open, and let $(u_n)_n,(v_n)_n$ be weakly convergent sequences in $L^2(\Omega)^d$. Assume that \[ (\dive u_n)_n= \Big(\sum_{j=1}^d \partial_j u_n^{(j)}\Big)_n,\quad (\curl v_n)_n=\left((\partial_j v_n^{(k)}-\partial_k v_n^{(j)})_{j,k}\right)_n \] are relatively compact in $H^{-1}(\Omega)$ and $H^{-1}(\Omega)^{d\times d}$, respectively. Then $(\langle u_n,v_n\rangle_{\mathbb{C}^d})_n$ converges in $\mathcal{D}'(\Omega)$ and we have \[ \lim_{n\to\infty} \langle u_n,v_n\rangle_{\mathbb{C}^d} = \langle \lim_{n\to\infty}u_n,\lim_{n\to\infty}v_n\rangle_{\mathbb{C}^d}. \] \end{theorem} Ever since, there have been many attempts to generalise the latter theorem in several directions; we refer to \cite{BCM09}, \cite{L17}, \cite{GM08}, and \cite{LR12}, just to name a few. It has been observed that the latter theorem has some relationship to the de Rham cohomology, see \cite{T09}. We shall also refer to \cite{X14}, where the Helmholtz decomposition has been used for the proof of the div-curl lemma in the case of 3 space dimensions. We will meet the abstract counterpart of the Helmholtz projection in our abstract approach to the div-curl lemma. In any case, the sequence property of the differential operators involved plays a crucial role in the derivation of the div-curl lemma. Note, however, that there are results that try to weaken this aspect as well, see \cite{G07}. In this note, in operator-theoretic terms, we shall further emphasise the intimate relation between the sequence property of operators from vector analysis and the div-curl lemma. In particular, we will provide a purely functional analytic proof of the $\dive$-$\curl$ lemma. More precisely, we relate the so-called ``global'' form (\cite{S16}) of the div-curl lemma to functional analytic realisations of certain operators from vector analysis, that is, to compact sequences of operators in Hilbert spaces. Moreover, having provided this perspective, we will also obtain new variants of the div-curl lemma, where we apply our abstract findings to the Pauly--Zulehner $\Grad\grad$-sequence, see \cite{PZ17} and \cite{QB15}. With these new results, we have paved the way to obtain homogenisation results for the biharmonic operator with variable coefficients, which, however, will be postponed to future research. The next section contains the functional analytic prerequisites and our main result itself -- the operator-theoretic version of the div-curl lemma. The subsequent section is devoted to the proof of the div-curl lemma with the help of the results obtained in Section \ref{s:adcl}. In the concluding section, we will apply the general result to several examples. \section{An Abstract $\dive$-$\curl$ Lemma}\label{s:adcl} We start out with the definition of a (short) sequence of operators acting in Hilbert spaces. Note that in other sources sequences are also called ``complexes''. We use the usual notation of domain, range, and kernel of a linear operator $A$, that is, $\dom(A)$, $\rge(A)$, and $\kar(A)$. Occasionally, we will write $\dom(A)$ to denote the domain of $A$ endowed with the graph norm. \begin{definition} Let $H_j$ be Hilbert spaces, $j\in\{0,1,2\}$. Let $A_0 \colon \dom(A_0)\subseteq H_0\to H_1$ and $A_1 \colon \dom(A_1)\subseteq H_1 \to H_2$ be densely defined and closed.
The pair $(A_0,A_1)$ is called a \emph{(short) sequence}, if $\rge(A_0)\subseteq \kar(A_1)$. We say that the sequence $(A_0,A_1)$ is \emph{closed}, if both $\rge(A_0)\subseteq H_1$ and $\rge(A_1)\subseteq H_2$ are closed. The sequence $(A_0,A_1)$ is called \emph{compact}, if $\dom(A_1)\cap \dom(A_0^*) \hookrightarrow H_1$ is compact. \end{definition} We recall some well-known results for sequences of operators in Hilbert spaces; we refer to \cite{PZ17} and the references therein for the respective proofs. \begin{theorem}\label{t:tb} Let $(A_0,A_1)$ be a sequence. Then the following statements hold: \begin{enumerate}[label=(\alph*)] \item\label{t1} $(A_1^*,A_0^*)$ is a sequence; \item\label{t2} $(A_0,A_1)$ is closed if and only if $(A_1^*,A_0^*)$ is closed; \item\label{t3} $(A_0,A_1)$ is compact if and only if $(A_1^*,A_0^*)$ is compact; \item\label{t4} if $(A_0,A_1)$ is compact, then $(A_0,A_1)$ is closed; \item\label{t5} $(A_0,A_1)$ is compact if and only if both $\dom(A_0)\cap \kar(A_0)^\bot\hookrightarrow \kar(A_0)^\bot$ and $\dom(A_1^*)\cap \kar(A_1^*)^\bot\hookrightarrow \kar(A_1^*)^\bot$ are compact and $\kar(A_0^*)\cap \kar(A_1)$ is finite-dimensional. \end{enumerate} \end{theorem} Next, we need to introduce some notation. \begin{definition} Let $H_0,H_1$ be Hilbert spaces, $A\colon \dom(A)\subseteq H_0\to H_1$. Then we define the canonical embeddings \begin{enumerate} \item $\iota_{\rge(A)} \colon \rge(A) \hookrightarrow H_1$; \item $\iota_{\kar(A)} \colon \kar(A) \hookrightarrow H_0$; \item $\pi_{\rge(A)}\coloneqq \iota_{\rge(A)}\iota_{\rge(A)}^*$; \item $\pi_{\kar(A)}\coloneqq \iota_{\kar(A)}\iota_{\kar(A)}^*$. \end{enumerate} \end{definition} If a densely defined closed linear operator has closed range, it is possible to continuously invert this operator in an appropriate sense. For the convenience of the reader, and since the operator to be defined in the next theorem plays an important role in the following, we provide the results with the respective proofs. Note that the results are known as well, see for instance again \cite{PZ17}. \begin{theorem}\label{t:tb2} Let $H_0,H_1$ be Hilbert spaces, $A\colon \dom(A)\subseteq H_0\to H_1$ densely defined and closed. Assume that $\rge(A)\subseteq H_1$ is closed. Then the following statements hold: \begin{enumerate}[label=(\alph*)] \item\label{tb2a} $B\coloneqq \iota_{\rge(A)}^*A \iota_{\rge(A^*)}$ is continuously invertible; \item\label{tb2b} $B^*=\iota_{\rge(A^*)}^*A^* \iota_{\rge(A)}$; \item\label{tb2c} the operator $\hat{A^*}\colon H_1 \to \dom(B)^*, \phi\mapsto (v\mapsto \langle \phi,Av\rangle_{H_1})$ is continuous; and $\hat{B^*}\coloneqq \hat{A^*}|_{\rge(A)}$ is an isomorphism that extends $B^*$. \end{enumerate} \end{theorem} \begin{proof} We prove \ref{tb2a}. Note that by the closed range theorem, $\rge(A^*)\subseteq H_0$ is closed. Moreover, since $\kar(A)^\bot=\rge(A^*)$, we have that $B$ is injective, and since $\iota_{\rge(A)}^*$ projects onto $\rge(A)$, we obtain that $B$ is also onto. Next, as $A$ is closed, we infer that $B$ is closed. Thus, $B$ is continuously invertible by the closed graph theorem. For the proof of \ref{tb2b}, we observe that $B^*$ is continuously invertible, as well. Moreover, it is easy to see that $B^*=A^*$ on $\dom(A^*)\cap \kar(A^*)^{\bot}$, see also \cite[Lemma 2.4]{TW14}. Thus, the assertion follows. In order to prove \ref{tb2c}, we note that $\hat{A^*}$ is continuous. Next, it is easy to see that $\hat{B^*}$ extends $B^*$. We show that $\hat{B^*}$ is onto. For this, let $\psi\in \dom(B)^*$.
Then there exists $w\in \dom(B)$ such that \[ \langle w,v\rangle_{H_0}+\langle Bw,Bv\rangle_{H_1} = \psi(v)\quad (v\in \dom(B)). \] Define $\phi\coloneqq (B^{-1})^*w + Bw \in \rge(A)$. Then we compute for all $v\in \dom(B)$ \begin{align*} (\hat{B^*}\phi) (v) & = \langle \phi,Bv\rangle_{H_1} \\ & = \langle (B^{-1})^*w + Bw, Bv\rangle_{H_1} \\ & = \langle w,B^{-1}Bv\rangle_{H_0} + \langle Bw,Bv\rangle_{H_1} \\ & = \psi(v). \end{align*} Hence, $\hat{B^*}\phi = \psi$. We are left with showing that $\hat{B^*}$ is injective. Let $\hat{B^*}\phi=0$. Then, for all $v\in \dom(B)$ we have \[ 0=\langle \phi,Bv\rangle_{H_1}. \] Hence, $\phi\in \dom(B^*)$ and $B^*\phi=0$. Thus, $\phi=0$, as $B^*$ is one-to-one. Hence, $\hat{B^*}$ is one-to-one. \end{proof} \begin{remark}\label{r:dadb} In the situation of the previous theorem, we remark here a small peculiarity in statement \ref{tb2c}: One could also define \[ \tilde{A^*} \colon H_1 \to \dom(A)^*, \phi\mapsto (v\mapsto \langle \phi, Av\rangle_{H_1}) \] to obtain an extension of $A^*$. In the following, we will restrict our attention to the consideration of $\hat{A^*}$. The reason for this is the following fact: \[\dom(A)^*\supseteq\rge(\tilde{A^*})\cong \rge(\hat{A^*})\subseteq \dom(B)^*,\] where the identification is given by \[ \hat{A^*}\phi \mapsto (\tilde{A^*}\phi)|_{\dom(B)}\quad (\phi\in H_1). \] Indeed, let $\phi\in H_1$. Then \begin{align*} \sup_{\substack{v\in \dom(A),\\ \|v\|_{\dom(A)}\leq1}}|(\tilde{A^*}\phi)(v)| & =\sup_{\substack{v\in \dom(A),\\ \|v\|_{\dom(A)}\leq1}}|\langle \phi,Av\rangle_{H_1}| \\ & = \sup_{\substack{v\in \dom(A)\cap \kar(A)^{\bot},\\ \|v\|_{\dom(A)}\leq1}}|\langle \phi,Av\rangle_{H_1}| \\ & = \sup_{\substack{v\in \dom(B), \\ \|v\|_{\dom(B)}\leq1}}|\langle \phi,Av\rangle_{H_1}| \\ & = \sup_{\substack{v\in \dom(B),\\ \|v\|_{\dom(B)}\leq1}}|(\hat{A^*}\phi)(v)|. \end{align*} \end{remark} The latter remark justifies the formulation of the div-curl lemma, which we state next. \begin{theorem}\label{t:mr} Let $(A_0,A_1)$ be a closed sequence. Let $(u_n)_n, (v_n)_n$ in $H_1$ be weakly convergent. Assume \[ (\hat{A_0^*}u_n)_n, (\hat{A_1} v_n)_n \] to be relatively compact in $\dom(A_0)^*$ and $\dom(A_1^*)^*$, respectively. Further, assume that $\kar(A_0^*)\cap \kar(A_1)$ is finite-dimensional. Then \[ \lim_{n\to \infty} \langle u_n,v_n\rangle_{H_1} = \langle \lim_{n\to\infty} u_n, \lim_{n\to\infty} v_n\rangle_{H_1}. \] \end{theorem} We emphasise that in this abstract version of the $\dive$-$\curl$ lemma \emph{no} compactness condition on the operators $A_0$ and $A_1$ is needed. On the other hand, it is possible to formulate a statement of similar type without the usage of (abstract) distribution spaces. For this, however, we have to assume that $(A_0,A_1)$ is a \emph{compact} sequence. The author is indebted to Dirk Pauly for a discussion on this theorem. It is noteworthy that the proofs of both Theorem \ref{t:mr} and Theorem \ref{t:mrcc} follow a commonly known standard strategy for proving the so-called `Maxwell compactness property', see \cite{W74,P84,BPS16}. \begin{theorem}\label{t:mrcc} Let $(A_0,A_1)$ be a compact sequence. Let $(u_n)_n, (v_n)_n$ be weakly convergent sequences in $\dom(A_0^*)$ and $\dom(A_1)$, respectively. Then \[ \lim_{n\to \infty} \langle u_n,v_n\rangle_{H_1} = \langle \lim_{n\to\infty} u_n, \lim_{n\to\infty} v_n\rangle_{H_1}. \] \end{theorem} In order to prove Theorems \ref{t:mr} and \ref{t:mrcc}, we first formulate a corollary of Theorem \ref{t:tb2}.
\begin{corollary}\label{cor:tb3} Let $H_0$, $H_1$ be Hilbert spaces, $A\colon \dom(A)\subseteq H_0\to H_1$ densely defined and closed. Assume that $\rge(A)\subseteq H_1$ is closed. Let $B$ be as in Theorem \ref{t:tb2}. For $(\phi_n)_n$ in $H_1$ the following statements are equivalent: \begin{enumerate}[label=(\roman*)] \item\label{tb3i} $(\hat{A^*}\phi_n)_n$ is relatively compact in $\dom(B)^*$; \item\label{tb3ii} $(\pi_{\rge(A)}\phi_n)_n$ is relatively compact in $H_1$. \end{enumerate} If $(\phi_n)_n$ converges weakly to $\phi$ in $H_1$, then either of the above conditions implies $\pi_{\rge(A)}\phi_n\to \pi_{\rge(A)}\phi$ in $H_1$. \end{corollary} \begin{proof} From $\rge(A)=\kar(A^*)^{\bot}$ and $\kar(\hat{A^*})=\kar(A^*)$, we deduce that $\hat{A^*}\phi=\hat{A^*}\pi_{\rge(A)}\phi$ for all $\phi\in H_1$. Next, $\hat{A^*}\pi_{\rge(A)}\phi= \hat{B^*}\iota_{\rge(A)}^* \phi$ for all $\phi\in H_1$. Thus, as $\hat{B^*}$ is an isomorphism by Theorem \ref{t:tb2}, we obtain that \ref{tb3i} is equivalent to $(\iota_{\rge(A)}^* \phi_n)_n$ being relatively compact in $\rge(A)$. The latter in turn is equivalent to \ref{tb3ii}, since $(\iota_{\rge(A)}^* \phi_n)_n$ being relatively compact is (trivially) equivalent to the same property of $(\iota_{\rge(A)}\iota_{\rge(A)}^* \phi_n)_n=(\pi_{\rge(A)} \phi_n)_n$. The last assertion follows from the fact that $\pi_{\rge(A)}$ is (weakly) continuous. Indeed, weak convergence of $(\phi_n)_n$ to $\phi$ implies weak convergence of $(\pi_{\rge(A)}\phi_n)_n$ to $\pi_{\rge(A)}\phi$. This together with relative compactness implies $\pi_{\rge(A)}\phi_n\to \pi_{\rge(A)}\phi$ with the help of a subsequence argument. \end{proof} \begin{corollary}\label{cor:tb3b} Let $H_0$, $H_1$ be Hilbert spaces, $A\colon \dom(A)\subseteq H_0\to H_1$ densely defined and closed. Assume that $\dom(A)\cap \kar(A)^{\bot_{H_0}}\hookrightarrow H_0$ is compact. Let $(\phi_n)_n$ be weakly convergent to $\phi$ in $\dom(A^*)$. Then $\lim_{n\to\infty} \pi_{\rge(A)}\phi_n=\pi_{\rge(A)}\phi$ in $H_1$. \end{corollary} \begin{proof} We note that -- by a well-known contradiction argument -- the compactness of $\dom(A)\cap \kar(A)^{\bot_{H_0}}\hookrightarrow H_0$ implies the Poincar\'e-type inequality \[ \exists c>0\ \forall \phi\in \dom(A)\cap \kar(A)^\bot:\ \|\phi\|_{H_0}\leq c \|A\phi\|_{H_1}. \] The latter together with the closedness of $A$ implies the closedness of $\rge(A)\subseteq H_1$. Thus, Theorem \ref{t:tb2} is applicable. Let $B$ be as in Theorem \ref{t:tb2}. We observe that the assertion is equivalent to $\lim_{n\to\infty}\iota_{\rge(A)}^*\phi_n=\iota_{\rge(A)}^*\phi$ in $\rge(A)$. We compute, with the help of Theorem \ref{t:tb2}, for $n\in \mathbb{N}$ \begin{align*} \iota_{\rge(A)}^*\phi_n & = (B^*)^{-1}B^*\iota_{\rge(A)}^*\phi_n \\ & = (B^*)^{-1}\iota_{\rge(A^*)}^*A^* \iota_{\rge(A)}\iota_{\rge(A)}^*\phi_n \\ & = (B^*)^{-1}\iota_{\rge(A^*)}^*A^* \pi_{\rge(A)}\phi_n \\ & = (B^*)^{-1}\iota_{\rge(A^*)}^*A^*\phi_n. \end{align*} By hypothesis, $A^*\phi_n\rightharpoonup A^*\phi$ in $H_0$ and so $\iota_{\rge(A^*)}^*A^*\phi_n\rightharpoonup \iota_{\rge(A^*)}^*A^*\phi$ in $\rge(A^*)$ as $n\to\infty$, since $\iota_{\rge(A^*)}^*$ is (weakly) continuous. Next, $B^{-1}$ is compact by assumption, and thus so is $(B^*)^{-1}$. Therefore, $(B^*)^{-1}\iota_{\rge(A^*)}^*A^*\phi_n\to (B^*)^{-1}\iota_{\rge(A^*)}^*A^*\phi$ in $\rge(A)$. The assertion follows from $(B^*)^{-1}\iota_{\rge(A^*)}^*A^*\phi=\iota_{\rge(A)}^*\phi$.
\end{proof} \begin{proof}[Proof of Theorem \ref{t:mr} and Theorem \ref{t:mrcc}] By the sequence property, we deduce that $\pi_{\rge(A_0)}\leq \pi_{\kar(A_1)}$ and $\pi_{\rge(A_1^*)}\leq \pi_{\kar(A_0^*)}$. By Corollary \ref{cor:tb3} (Theorem \ref{t:mr}) or Corollary \ref{cor:tb3b} (Theorem \ref{t:mrcc}), we deduce that $\pi_{\rge(A_0)}u_n\to\pi_{\rge(A_0)}u$ and $\pi_{\rge(A_1^*)} v_n\to \pi_{\rge(A_1^*)}v$ in $H_1$. From $\kar(A_1)\cap \kar(A_0^*)$ being finite-dimensional (cf.\ Theorem \ref{t:tb}), we obtain $\pi_{\kar(A_1)\cap \kar(A_0^*)}u_n\to \pi_{\kar(A_1)\cap \kar(A_0^*)}u$, as $\pi_{\kar(A_1)\cap \kar(A_0^*)}$ is compact. Thus, we obtain for $n\in \mathbb{N}$ \begin{align*} \langle u_n, v_n\rangle_{H_1}& = \langle (\pi_{\rge(A_0)}+\pi_{\kar(A_0^*)\cap \kar(A_1)}+\pi_{\kar(A_0^*)\cap \rge(A_1^*)}) u_n, (\pi_{\rge(A_1^*)}+\pi_{\kar(A_1)}) v_n\rangle_{H_1} \\ & = \langle u_n, \pi_{\rge(A_1^*)}v_n\rangle_{H_1} \\ & \quad +\langle (\pi_{\rge(A_0)}+\pi_{\kar(A_0^*)\cap \kar(A_1)}+\pi_{\kar(A_0^*)\cap \rge(A_1^*)}) u_n,\pi_{\kar(A_1)} v_n\rangle_{H_1} \\ & = \langle u_n, \pi_{\rge(A_1^*)}v_n\rangle_{H_1} \\ & \quad+\langle \pi_{\rge(A_0)}u_n,\pi_{\kar(A_1)} v_n\rangle_{H_1} + \langle \pi_{\kar(A_0^*)\cap \kar(A_1)} u_n,\pi_{\kar(A_1)} v_n\rangle_{H_1} \\ & \to \langle \lim_{n\to\infty} u_n,\lim_{n\to\infty} v_n\rangle_{H_1}.\qedhere \end{align*} \end{proof} A closer look at the proof of our main result reveals the following converse of Theorem \ref{t:mr}: \begin{theorem}\label{t:mrc} Let $(A_0,A_1)$ be a closed sequence. Assume that for all weakly convergent sequences $(u_n)_n, (v_n)_n$ in $\dom(A_0^*)$ and $\dom(A_1)$, respectively, we obtain \[ \lim_{n\to \infty} \langle u_n,v_n\rangle_{H_1} = \langle \lim_{n\to\infty} u_n, \lim_{n\to\infty} v_n\rangle_{H_1}. \] Then $\kar(A_0^*)\cap \kar(A_1)$ is finite-dimensional. \end{theorem} For the proof of the latter, we need the next proposition: \begin{proposition}\label{p:id} Let $H$ be a Hilbert space. Then the following statements are equivalent: \begin{enumerate} \item $H$ is infinite-dimensional; \item there exists $(u_n)_n$ weakly convergent to $0$ such that $c\coloneqq \lim_{n\to\infty}\langle u_n,u_n\rangle$ exists with $c\ne 0$. \end{enumerate} \end{proposition} \begin{proof} Let $H$ be infinite-dimensional. Passing to a closed, separable, infinite-dimensional subspace, which is unitarily equivalent to $L^2(0,2\pi)$, we may assume that $H=L^2(0,2\pi)$. Then $u_n\coloneqq \sin(n\cdot)\to 0$ weakly as $n\to\infty$ and \[ \langle u_n,u_n\rangle=\int_0^{2\pi} (\sin(nx))^2\,dx=\pi>0\quad (n\in\mathbb{N}). \] If $H$ is finite-dimensional, then weak convergence and strong convergence coincide, and the desired sequence cannot exist. \end{proof} \begin{proof}[Proof of Theorem \ref{t:mrc}] Suppose that $\kar(A_0^*)\cap \kar(A_1)$ is infinite-dimensional. Choose $(u_n)_n$ in $\kar(A_0^*)\cap \kar(A_1)$ as in Proposition \ref{p:id}. Then, clearly, $(u_n)_n$ is weakly convergent in $\dom(A_0^*)$ and $\dom(A_1)$. Hence, \[ 0= \langle \lim_{n\to\infty} u_n, \lim_{n\to\infty} u_n\rangle_{H_1} = \lim_{n\to\infty}\langle u_n, u_n\rangle_{H_1} = c \neq 0, \] a contradiction.\qedhere \end{proof} We will need the next abstract results for the proof of the div-curl lemma in the next section. Note that this is only needed for the formulation of the div-curl lemma where the divergence and the curl operators are considered to map into $H^{-1}$. For this, we need some notation. Let $A\in L(H_0,H_1)$. The dual operator $A'\in L(H_1^*,H_0^*)$ is given by \[ (A'\phi)(\psi)\coloneqq \phi(A\psi).
\] We also define $A^\diamond \colon H_1 \to H_0^*$ via $A^\diamond \coloneqq A'R_{H_1}$, where $R_{H_1}\colon H_1\to H_1^*$ denotes the Riesz isomorphism. \begin{proposition}\label{p:dual} Let $H_0$, $H_1$, $D$ be Hilbert spaces and let $A\colon \dom(A)\subseteq H_0\to H_1$ be densely defined and closed. Assume that $D\hookrightarrow\dom(A)$ continuously and that $\rge(A|_D)= \rge(A)\subseteq H_1$ is closed. Define $\mathcal{A}\colon D\to H_1, \phi\mapsto A\phi$. Then $\hat{A^*}=\mathcal{A}^\diamond$, that is, for every $v\in H_1$ the functional $\mathcal{A}^\diamond v$ can be uniquely extended to an element of $\dom(A)^*$, and the extension is given by $\hat{A^*}v$, where $\hat{A^*}$ is as in Theorem \ref{t:tb2}. \end{proposition} \begin{proof} Let $v \in H_1$. Then for all $\phi\in D$ we have \[ \left(\hat{A^*}v\right)(\phi)= \langle v, A\phi\rangle_{H_1}=\langle v, \mathcal{A}\phi\rangle_{H_1} = R_{H_1}v (\mathcal{A}\phi) = (\mathcal{A}'R_{H_1}v)(\phi) = (\mathcal{A}^\diamond v)(\phi). \] Since $\mathcal{A}$ is continuous, it is densely defined and closed, hence $\mathcal{B}\coloneqq \iota_{\rge(\mathcal{A})}^*\mathcal{A}\iota_{\rge(\mathcal{A}^*)}$ is a Hilbert space isomorphism from $D\cap \kar(\mathcal{A})^{\bot_D}$ to $\rge(\mathcal{A})=\rge(A)$, by Theorem \ref{t:tb2}. Note that $\mathcal{A}\mathcal{B}^{-1}=\mathrm{id}_{\rge(\mathcal{A})}=\mathrm{id}_{\rge({A})}$. For $\psi\in \dom(A)$ and $v\in H_1$, we define \[ \left(\mathcal{A}^\diamond v\right)_{\textnormal{e}}(\psi)\coloneqq \left(\mathcal{A}^\diamond v\right)(\mathcal{B}^{-1}A\psi). \] Next, if $\psi\in \dom(A)$, then with the above computations, we obtain \[ \left(\mathcal{A}^\diamond v\right)_{\textnormal{e}}(\psi)= \left(\mathcal{A}^\diamond v\right)(\mathcal{B}^{-1}A\psi)= \langle v, \mathcal{A}\mathcal{B}^{-1}A\psi\rangle_{H_1}=\langle v, A\psi\rangle_{H_1}=\left(\hat{A^*}v\right)(\psi). \] Thus, $\left(\mathcal{A}^\diamond v\right)_{\textnormal{e}}$ indeed extends $\mathcal{A}^\diamond v$ and coincides with $\hat{A^*}v$. We also infer the asserted continuity property for $\mathcal{A}^\diamond v$. The uniqueness property follows from $\rge(\mathcal{A})=\rge(A)$. \end{proof} From Proposition \ref{p:dual} it follows that $\rge(\hat{A^*})=\rge(\mathcal{A}^\diamond)$. This is the actual fact used in the following. \begin{lemma}[{{\cite[Lemma 2.14]{PZ17}}}]\label{l:PZr} Let $H_0$, $H_1$, $H_2$ be Hilbert spaces and let $A\in L(H_1,H_2)$ be onto. Then $\rge(A^\diamond)\subseteq H_1^*$ is closed and $(A^\diamond)^{-1}\in L(\rge(A^\diamond),H_2)$. \end{lemma} \begin{proof} By the Riesz representation theorem, $A^\diamond$ and $A'$ are unitarily equivalent. Thus, it suffices to prove the assertions for $A'$ instead of $A^\diamond$. By the closed range theorem, $\rge(A')$ is closed, since $\rge(A)=H_2$ is. Next, $A$ is onto, hence $A'\in L(H_2^*,H_1^*)$ is one-to-one, and thus, by the closed graph theorem, we obtain that $(A')^{-1}$ maps continuously from $\rge(A')$ into $H_2^*$. \end{proof} \begin{corollary}\label{cor:AB} Let $H_0$, $H_1$ be Hilbert spaces, and let $A\colon \dom(A)\subseteq H_0\to H_1$ and $C\colon \dom(C)\subseteq H_0\to H_1$ be densely defined and closed. Assume that $\rge(A)\subseteq H_1$ is closed and that $\dom(C)\hookrightarrow \dom(A)$ continuously. If \begin{equation}\label{ABeq} \rge(A)=\{A\phi; \phi\in \dom(C)\}, \end{equation} then $\rge(\hat{A^*})=\dom(B)^*\subseteq \dom(C)^*$ is closed, where $B$ is given in Theorem \ref{t:tb2}.
\end{corollary} \begin{proof} Since $\dom(C)\hookrightarrow \dom(A)$ continuously, we obtain that \[ {\mathcal{A}} \colon \dom(C)\to \rge(A)=\rge(B), \phi\mapsto A\phi \] is continuous. Moreover, by \eqref{ABeq}, we infer that ${\mathcal{A}}$ is onto. Hence, by Lemma \ref{l:PZr}, we obtain that $\rge({ {\mathcal{A}}}^\diamond)\subseteq \dom(C)^*$ is closed. Thus, we are left with showing that $\rge({\mathcal{A}}^\diamond)=\dom(B)^*$. By Proposition \ref{p:dual}, we realise that $\rge({\mathcal{A}}^\diamond)=\rge(\hat{A^*})=\rge(\hat{B^*})$. By Theorem \ref{t:tb2}, we get that $\hat{B^*}$ maps onto $\dom(B)^*$. \end{proof} \begin{remark} Corollary \ref{cor:AB} applies in particular to $A=C$. \end{remark} \section{The classical $\dive$-$\curl$ lemma} Before we formulate Theorem \ref{t:dcl2}, the classical $\dive$-$\curl$ lemma, we need to introduce some differential operators from vector calculus. \begin{definition} Let $\Omega\subseteq \mathbb{R}^d$ be open. We define \begin{align*} \grad_\text{c} & \colon C_c^\infty(\Omega) \subseteq L^2(\Omega) \to L^2(\Omega)^d, \phi\mapsto (\partial_j\phi)_{j\in\{1,\ldots,d\}} \\ \dive_\text{c} & \colon C_c^\infty(\Omega)^d \subseteq L^2(\Omega)^d \to L^2(\Omega), (\phi_j)_{j\in\{1,\ldots,d\}}\mapsto \sum_{j=1}^d\partial_j\phi_j \\ \Grad_{\text{c}} & \colon C_c^\infty(\Omega)^d \subseteq L^2(\Omega)^d \to L^2(\Omega)^{d\times d}, (\phi_j)_{j\in\{1,\ldots,d\}}\mapsto (\partial_k\phi_j)_{j,k\in\{1,\ldots,d\}} \\ \Dive_{\text{c}} & \colon C_c^\infty(\Omega)^{d\times d} \subseteq L^2(\Omega)^{d\times d} \to L^2(\Omega)^{d}, (\phi_{j,k})_{j,k\in\{1,\ldots,d\}}\mapsto (\sum_{k=1}^d\partial_k\phi_{j,k})_{j\in\{1,\ldots,d\}} \\ \Curl_{\text{c}} & \colon C_c^\infty(\Omega)^{d} \subseteq L^2(\Omega)^{d} \to L^2(\Omega)^{d\times d}, (\phi_{j})_{j\in\{1,\ldots,d\}}\mapsto (\partial_k\phi_{j}-\partial_j\phi_{k})_{j,k\in\{1,\ldots,d\}} \\& \hspace{10cm}=\Grad \phi-\left(\Grad\phi\right)^T. \end{align*} Moreover, we set $\interior{\grad}\coloneqq \overline{\grad_\text{c}}$ and, similarly, $\interior{\dive},\interior{\Dive},\interior{\Curl},\interior{\Grad}$ as the respective closures. Furthermore, we put $\dive\coloneqq -\interior{\grad}^*$, $\Dive\coloneqq -\interior{\Grad}^*$, $\grad\coloneqq -\interior{\dive}^*$, $\Grad\coloneqq -\interior{\Dive}^*$ and $\Curl\coloneqq (2\interior{\Dive} \skew)^*$, where $\skew A \coloneqq \frac{1}{2}(A-A^T)$ denotes the skew-symmetric part of a matrix $A$. \end{definition} \begin{remark} It is an elementary computation to establish that the operators just introduced with $\interior{\ }$ are restrictions of the ones without. \end{remark} As usual, we define $H^{-1}(\Omega)\coloneqq \dom(\interior{\grad})^*$. We may now formulate the classical $\dive$-$\curl$ lemma. We slightly rephrase the lemma, though. \begin{theorem}[$\dive$-$\curl$ lemma -- global version]\label{t:dcl2} Let $(u_n)_n,(v_n)_n$ be weakly convergent sequences in $L^2(B(0,1))^d$ with \[\overline{\bigcup_{n\in\N}(\spt u_n \cup \spt v_n)}\subseteq B(0,\delta)=\{ x\in \R^d; \|x\| \leq\delta\}\] for some $\delta<1$. Assume that \[ (\dive u_n)_n, (\Curl v_n)_n \] are relatively compact in $H^{-1}(B(0,1))$ and $H^{-1}(B(0,1))^{d\times d}$, respectively. Then \[ \lim_{n\to\infty}\langle u_n, v_n \rangle_{L^2} = \langle \lim_{n\to\infty}u_n,\lim_{n\to\infty}v_n\rangle_{L^2}. \] \end{theorem} We recall here that in \cite{S16}, Theorem \ref{t:dcl2} is called the ``global $\dive$-$\curl$ lemma''. We provide the connection to the classical, ``local'' version of it in the following remark.
\begin{remark}[$\dive$-$\curl$ lemma -- local version]\label{r:dcl12} We observe that the assertions in Theorem \ref{t:dcl} and in Theorem \ref{t:dcl2} are equivalent. For this, observe first that Theorem \ref{t:dcl} implies Theorem \ref{t:dcl2}. Indeed, for $\Omega=B(0,1)$, the assumptions of Theorem \ref{t:dcl2} imply those of Theorem \ref{t:dcl}. Moreover, let $\phi\in C_c^\infty(B(0,1))$ be such that $\phi=1$ on the compact set $\overline{\bigcup_{n\in\N}(\spt u_n \cup \spt v_n)}$. Then, by Theorem \ref{t:dcl} and putting $u\coloneqq \lim_{n\to\infty}u_n$ and $v\coloneqq \lim_{n\to\infty}v_n$, we obtain \[ \langle u_n, v_n \rangle_{L^2}= \int_\Omega \phi \langle u_n,v_n\rangle \to \int_\Omega \phi \langle u,v\rangle = \langle u,v\rangle. \] On the other hand, let the assumptions of Theorem \ref{t:dcl} be satisfied. With the help of Theorem \ref{t:dcl2}, we have to prove that for all $\phi\in C_c^\infty(\Omega)$ we get \begin{equation}\label{e:dcl12} \int_\Omega \phi \langle u_n,v_n\rangle \to \int_\Omega \phi \langle u,v\rangle. \end{equation} To do so, we let $\psi\in C_c^\infty(\Omega)$ be such that $\psi=1$ on $\spt \phi$. Then there exists $R>0$ such that $\spt \psi \subseteq B(0,R)$. By rescaling the arguments, the statement in \eqref{e:dcl12} follows from Theorem \ref{t:dcl2}, once we have proved that \[ (\dive(\psi u_n))_n=(\psi\dive(u_n)+\grad(\psi )u_n)_n,\, (\Curl(\psi v_n))_n=(2\skew(v_n(\grad \psi)^T)+\psi \Curl v_n)_n \] are relatively compact in $H^{-1}(B(0,R+1))$ and $H^{-1}(B(0,R+1))^{d\times d}$, respectively. This, however, follows from the hypothesis and the compactness of the embedding $L^2(B(0,R+1)) \hookrightarrow H^{-1}(B(0,R+1))$, which in turn follows from Rellich's selection theorem. \end{remark} The rest of this section is devoted to the proof of Theorem \ref{t:dcl2} by means of Theorem \ref{t:mr}. We will apply Theorem \ref{t:mr} to the following setting: \begin{alignat}{1}\tag{$*$}\label{eq:setting} \begin{aligned} H_0 & = L^2(B(0,1)), \\ H_1 & = L^2(B(0,1))^d, \\ A_0 &\coloneqq \interior{\grad}, \\ A_1 &\coloneqq \interior{\Curl}. \end{aligned} \end{alignat} \begin{proposition}\label{prop:seq} With the setting in \eqref{eq:setting}, $(A_0,A_1)$ is a sequence. \end{proposition} \begin{proof} By Schwarz's theorem on the symmetry of second derivatives, it follows for all $\phi\in C_c^\infty(B(0,1))$ that \[ \interior{\Curl}\interior{\grad}\phi=\interior{\Curl}(\partial_j\phi)_{j\in\{1,\ldots,d\}}=(\partial_k\partial_j\phi-\partial_j\partial_k\phi)_{j,k\in\{1,\ldots,d\}}=0. \] Thus, $\interior{\Curl}\interior{\grad}\subseteq 0$. \end{proof} Next, we address the compactness property. \begin{theorem}\label{t:comp} With the setting in \eqref{eq:setting}, $(A_0,A_1)$ is compact. \end{theorem} For the proof of Theorem \ref{t:comp}, we could use compact embedding theorems such as Weck's selection theorem (\cite{W74}) or Picard's selection theorem (\cite{P84}). However, due to the simple geometric setting discussed here, it suffices to walk along the classical path of showing compactness by proving Gaffney's inequality and then using Rellich's selection theorem. We emphasise, however, that in the meantime sophisticated tools have been developed which detour around Gaffney's inequality and yield compactness results for very irregular $\Omega$ that do not satisfy Gaffney's inequality.
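Before carrying out this programme, we record a standard example -- included here only for illustration, with an arbitrary nonzero cut-off $\phi$ -- showing that the relative compactness of $(\Curl v_n)_n$ in Theorem \ref{t:dcl2} cannot simply be dropped. \begin{remark} Let $d=2$, let $\delta<1$, and let $\phi\in C_c^\infty(B(0,\delta))$ with $\phi\neq 0$. Put \[ u_n(x)=v_n(x)\coloneqq \phi(x)\,(\sin(n x_2),0)\qquad (x=(x_1,x_2)\in B(0,1)). \] Then $(u_n)_n$ converges weakly to $0$ in $L^2(B(0,1))^2$, and $\dive u_n=(\partial_1\phi)\sin(n x_2)\rightharpoonup 0$ in $L^2(B(0,1))$; since $L^2(B(0,1))\hookrightarrow H^{-1}(B(0,1))$ is compact by Rellich's selection theorem, $(\dive u_n)_n$ even converges in $H^{-1}(B(0,1))$. Nevertheless, \[ \langle u_n,v_n\rangle_{L^2}=\int_{B(0,1)}\phi(x)^2\sin^2(n x_2)\,dx\to \frac{1}{2}\int_{B(0,1)}\phi(x)^2\,dx>0=\langle \lim_{n\to\infty}u_n,\lim_{n\to\infty}v_n\rangle_{L^2}. \] Consequently, $(\Curl v_n)_n$ cannot be relatively compact in $H^{-1}(B(0,1))^{2\times 2}$; indeed, its off-diagonal entries contain the term $n\phi\cos(n x_2)$, which is unbounded in $L^2(B(0,1))$. Compare also Proposition \ref{p:id} and Theorem \ref{t:mrc}. \end{remark}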
For the convenience of the reader, we shall provide a proof of Theorem \ref{t:comp} using the following regularity result for the Laplace operator, see \cite[Teorema 10 and 14]{K64} or, since we use the respective result only for a $d$-dimensional ball, see \cite[Inequality (3,1,1,2)]{G11}. For this, we denote the Dirichlet Laplace operator by $\Delta\coloneqq \dive\interior{\grad}$. \begin{theorem}\label{t:LaplReg}Let $\Omega\subseteq \mathbb{R}^d$ be open, bounded, and convex. Then for all $u\in \dom(\Delta)$, we have $u\in \dom(\Grad\interior{\grad})$ and \[ \|\Grad\interior{\grad}u\|_{L^2(\Omega)^{d\times d}}\leq \|\Delta u\|_{L^2(\Omega)}. \] \end{theorem} Based on the latter estimate, we shall prove Friedrichs' inequality. For its proof, we will follow the exposition of \cite{S82}. Since the exposition in \cite{S82} is restricted to 2 or 3 spatial dimensions only, we provide a proof for the ``multi-$d$'' case in the following. \begin{theorem}[{{\cite[Theorem 2.2]{S82}}}]\label{t:gaff} Let $\Omega\subseteq \mathbb{R}^d$ be open, bounded, and convex. Then $\dom(\interior{\Curl})\cap\dom(\dive)\hookrightarrow \dom(\Grad)$. Moreover, we have \[ \|\Grad u\|_{L^2(\Omega)^{d\times d}}^2\leq \frac12\|\interior{\Curl} u\|^2_{L^2(\Omega)^{d\times d}}+\|\dive u\|^2_{L^2(\Omega)} \] for all $u\in \dom(\interior{\Curl})\cap\dom(\dive)$. \end{theorem} \begin{lemma}[{{\cite[Lemma 2.1]{S82}}}]\label{l:dens} Let $\Omega\subseteq \mathbb{R}^d$ be open and bounded. Denote \[ V\coloneqq \{ \phi; \exists \psi\in C_c^\infty(\Omega)^d:\, \phi=\psi+\interior{\grad}(-\Delta+1)^{-1}\dive \psi \}. \] Then $V$ is dense in $\dom(\interior{\Curl})\cap \dom(\dive)$. \end{lemma} \begin{proof} First of all, note that $V\subseteq X\coloneqq \dom(\interior{\Curl})\cap \dom(\dive)$. Indeed, for $\phi=\psi+\interior{\grad}(-\Delta+1)^{-1}\dive \psi$ with some $\psi\in C_c^\infty(\Omega)^d$, we get $\interior{\Curl}\phi=\interior{\Curl} \psi\in L^2(\Omega)^{d\times d}$, by Proposition \ref{prop:seq}. Moreover, $\dive \phi = (-\Delta+1)^{-1}\dive \psi\in L^2(\Omega)$. Thus, $V\subseteq X$. Next, we show the density property. For this, we endow $X$ with the scalar product \[ \langle u,v\rangle_X\coloneqq \langle \interior{\Curl} u,\interior{\Curl}v\rangle+\langle \dive u,\dive v\rangle + \langle u,v\rangle. \] Let $u\in V^{\bot_X}\subseteq X$. We need to show that $u=0$. For all $\psi\in C_c^\infty(\Omega)^d$ and $w\coloneqq (-\Delta+1)^{-1}\dive \psi$ we have \begin{align*} 0 & = \langle u,\psi + \interior{\grad}w\rangle_X \\& = \langle \interior{\Curl} u, \interior{\Curl} \psi\rangle + \langle \dive u, \dive \psi\rangle + \langle \dive u,\dive\interior{\grad}w\rangle + \langle u,\psi\rangle + \langle u,\interior{\grad}w\rangle \\& = \langle \interior{\Curl} u, \interior{\Curl} \psi\rangle + \langle \dive u, \dive \psi\rangle + \langle \dive u,\Delta w\rangle + \langle u,\psi\rangle - \langle \dive u, w\rangle \\& = \langle \interior{\Curl} u, \interior{\Curl} \psi\rangle + \langle u,\psi\rangle. \end{align*} Thus, $(\interior{\Curl}^*\interior{\Curl} + 1)u=0$, which yields $u=0$. \end{proof} Before we come to the proof of Theorem \ref{t:gaff}, we mention an elementary formula to be used in the forthcoming proof: for all $\psi\in C_c^\infty(\Omega)^d$ we have \[ -\Delta I_{d\times d} \psi = - \Dive\Grad \psi = -\Dive \Curl\psi - \grad \dive \psi. \] \begin{proof}[Proof of Theorem \ref{t:gaff}] By Lemma \ref{l:dens} it suffices to show the inequality for $u\in V$.
For this, let $\psi\in C_c^\infty(\Omega)^d$ and put $u\coloneqq \psi + \interior{\grad}w$ with $w\coloneqq (-\Delta+1)^{-1}\dive\psi$. We compute \begin{multline*} \|\Grad u\|^2=\|\Grad (\psi+\interior{\grad}w)\|^2\\= \langle\Grad \psi,\Grad \psi\rangle + 2\Re \langle \Grad \psi,\Grad \interior{\grad}w\rangle + \|\Grad \interior{\grad}w\|^2. \end{multline*} We aim to discuss every term in the latter expression separately. We have \begin{align*} \langle\Grad \psi,\Grad \psi\rangle & = - \langle\Dive\Grad \psi, \psi\rangle \\ & = -\langle\Dive \Curl\psi,\psi\rangle - \langle\grad \dive \psi,\psi\rangle \\ & = -\langle\Dive\skew\Curl\psi,\psi\rangle + \langle \dive \psi,\dive\psi\rangle \\ & = \frac12\langle\Curl\psi,\Curl\psi\rangle + \langle \dive \psi,\dive\psi\rangle. \end{align*} Next, \begin{align*} \langle \Grad \psi,\Grad \interior{\grad}w\rangle & = -\langle\Dive \Grad \psi,\interior{\grad}w\rangle \\ & = - \langle\Dive \Curl \psi,\interior{\grad}w\rangle-\langle\grad \dive \psi,\interior{\grad}w\rangle \\ & = \langle\dive\Dive \Curl \psi,w\rangle-\langle\grad \dive \psi,\interior{\grad}w\rangle \\ & = -\langle\grad \dive \psi,\interior{\grad}w\rangle. \end{align*} By Theorem \ref{t:LaplReg}, we estimate \[ \|\Grad \interior{\grad}w\|^2\leq \|\Delta w\|^2=\|w-\dive\psi\|^2=\|w\|^2-2\Re\langle w,\dive\psi\rangle+\|\dive\psi\|^2. \] Note that since $\dive\psi\in C_c^\infty(\Omega)$, we obtain from $w=(-\Delta+1)^{-1}\dive\psi$ that \[ \langle \interior{\grad}w,\interior{\grad}\dive\psi\rangle + \langle w,\dive\psi\rangle = \langle \dive \psi,\dive\psi\rangle. \] Thus, all together, \begin{align*} \|\Grad u\|^2 & \leq \frac12\langle\Curl\psi,\Curl\psi\rangle + \langle \dive \psi,\dive\psi\rangle - 2\Re \langle\grad \dive \psi,\interior{\grad}w\rangle \\ & \quad+ \|w\|^2-2\Re\langle w,\dive\psi\rangle+\|\dive\psi\|^2 \\ &= \frac12\langle\Curl\psi,\Curl\psi\rangle + \langle \dive \psi,\dive\psi\rangle \\ &\quad+ 2\Re\langle w,\dive\psi\rangle - 2 \langle \dive \psi,\dive\psi\rangle + \|w\|^2-2\Re\langle w,\dive\psi\rangle+\|\dive\psi\|^2 \\ &= \frac12\langle\Curl\psi,\Curl\psi\rangle + \|w\|^2 \\ &= \frac12\|\Curl u\|^2 + \|\dive u\|^2.\qedhere \end{align*} \end{proof} \begin{proof}[Proof of Theorem \ref{t:comp}] By Theorem \ref{t:gaff} as $B(0,1)$ is convex, we obtain that \[ \dom(A_1)\cap\dom(A_0^*)=\dom(\interior{\Curl})\cap \dom(\dive)\hookrightarrow \dom(\Grad). \] On the other hand $\dom(\Grad)\hookrightarrow L^2(B(0,1))^d$ is compact by Rellich's selection theorem. This yields the assertion. \end{proof} \begin{lemma}\label{l:toptriv} Assume the setting in \eqref{eq:setting}. Then $\kar(\dive)\cap\kar(\interior{\Curl})=\{0\}$. \end{lemma} \begin{proof} The assertion follows from the connectedness of $B(0,1)$. See e.g.~\cite{DS52,P79}. \end{proof} For the next proposition, we closely follow a rationale given by Pauly and Zulehner, see \cite{PZpre}. We also refer to \cite{BPS16} for a similar argument. \begin{proposition}\label{prop:closedminusone} Assume the setting in \eqref{eq:setting}. Then $\rge(\hat{\interior{\Curl}})\subseteq H^{-1}(\Omega)^{d\times d}$ is closed. \end{proposition} \begin{proof} In this proof, we need to consider the differential operators on various domains. To clarify this in the notation, we attach the underlying domain as an index to the differential operators in question, that is, $\grad=\grad_\Omega$ and when the domains are considered we write $\dom(\grad)=\dom(\grad,\Omega)$ and similarly for $\rge$ and $\kar$. 
We apply Corollary \ref{cor:AB} to $A=\interior{\Curl}_{B(0,1)}$, $C=\interior{\Grad}_{B(0,1)}$. Note that $\rge(A)$ is closed by Theorem \ref{t:comp} and Theorem \ref{t:tb}. Thus, we are left with showing that \[ \rge(\interior{\Curl},{B(0,1)})=\{\interior{\Curl}_{B(0,1)}\phi;\phi\in \dom(\interior{\Grad},{B(0,1)})\}. \] From Proposition \ref{prop:seq} and by Theorem \ref{t:gaff}, we infer \begin{align*} \rge(\interior{\Curl}_{B(0,1)})&=\{\interior{\Curl}_{B(0,1)}\phi;\phi\in \kar(\dive,{B(0,1)})\cap\dom(\interior{\Curl},{B(0,1)})\} \\ & =\{\interior{\Curl}_{B(0,1)}\phi;\phi\in \dom(\Grad,{B(0,1)})\cap\dom(\interior{\Curl},{B(0,1)})\}. \end{align*} So, let $\psi=\Curl_{B(0,1)} \phi$ for some $\phi\in \dom(\interior{\Curl},{B(0,1)})\cap \dom(\Grad,{B(0,1)})$. Extend $\phi$ and $\psi$ by zero to $B(0,2)$; we call the extensions $\phi_{\text{e}}$ and $\psi_{\text{e}}$. Note that $\phi_{\text{e}}\in \dom(\interior{\Curl},{B(0,2)})$ and $\interior{\Curl}_{B(0,2)}\phi_{\text{e}}=\psi_{\text{e}}$. By the above applied to $\Omega=B(0,2)$, we find $\phi_{\text{r}}\in \dom(\interior{\Curl},{B(0,2)})\cap\dom(\Grad,{B(0,2)})$ such that $\interior{\Curl}_{B(0,2)}\phi_{\text{r}}=\interior{\Curl}_{B(0,2)}\phi_{\text{e}}=\psi_{\text{e}}$. Thus, \[\phi_{\text{r}}-\phi_{\text{e}} \in \kar(\interior{\Curl},{B(0,2)})=\rge(\interior{\grad},{B(0,2)}),\] by Lemma \ref{l:toptriv}. Thus, we find $u\in \dom(\interior{\grad},{B(0,2)})$ with $\interior{\grad}_{B(0,2)}u=\phi_{\text{r}}-\phi_{\text{e}}$. On $B(0,2)\setminus \overline{B(0,1)}$ we have \[ 0=\phi_{\text{e}}=\phi_{\text{r}}-{\grad}_{B(0,2)\setminus \overline{B(0,1)}}u. \] Therefore, ${\grad}_{B(0,2)\setminus \overline{B(0,1)}}u=\phi_{\text{r}}$ on $B(0,2)\setminus\overline{B(0,1)}$. Hence, \[u\in \dom(\Grad\grad,B(0,2)\setminus \overline{B(0,1)})=H^2(B(0,2)\setminus \overline{B(0,1)}).\] By Calder\'on's extension theorem, there exists \[u_{\text{e}}\in \dom(\Grad\grad,B(0,2))=H^2(B(0,2))\text{ with }u_{\text{e}}=u\text{ on }B(0,2)\setminus\overline{B(0,1)}.\] Next, we observe that $\phi_{\text{r},0}\coloneqq \phi_{\text{r}}-\grad_{B(0,2)} u_{\text{e}} \in \dom(\Grad,B(0,2))$ as well as $u-u_{\text{e}}\in \dom(\grad,B(0,2))$ and \[ \phi_{\text{e}}= \phi_{\text{r},0} - \grad_{B(0,2)}(u-u_{\text{e}}). \] Moreover, on $B(0,2)\setminus \overline{B(0,1)}$, we have $\phi_{\text{r},0}=0$ as well as $u-u_{\text{e}}=0$. Thus, $\phi_{\text{r},0}\in \dom(\interior{\Grad},B(0,1))$ and $u-u_{\text{e}}\in \dom(\interior{\grad},B(0,1))$. Thus, \[ \psi = \Curl_{B(0,1)} \phi =\Curl_{B(0,1)} \phi_\textnormal{e} = \Curl_{B(0,1)}(\phi_{\text{r},0} - \interior{\grad}_{B(0,1)}(u-u_{\text{e}}))=\interior{\Curl}_{B(0,1)} \phi_{\text{r},0}. \] Therefore, \begin{align*} \rge(\interior{\Curl},{B(0,1)}) & =\{\interior{\Curl}_{B(0,1)}\phi;\phi\in \dom(\interior{\Grad},{B(0,1)})\cap\dom(\interior{\Curl},B(0,1))\} \\&=\{\interior{\Curl}_{B(0,1)}\phi;\phi\in \dom(\interior{\Grad},{B(0,1)})\}.\qedhere \end{align*} \end{proof} \begin{lemma}\label{l:distr} Let $\Omega\subseteq \mathbb{R}^d$ be open and bounded, and let $\phi\in L^2(\Omega)^d$ with $\spt\phi$ a compact subset of $\Omega$. Then \[ \dom(\interior{\Dive}\skew)^* \ni \Curl \phi = \interior{\Curl}\phi\in \dom(\Dive\skew)^*. \] \end{lemma} \begin{proof} We have $ \dom({\Dive}\skew)^* \hookrightarrow \dom(\interior{\Dive}\skew)^*$. Let $\eta\in C_c^\infty(\Omega)$ with the property $\eta=1$ on $\spt \phi$.
Then for all $\psi\in\dom(\Dive\skew)$ we have $\eta\psi\in\dom(\interior{\Dive}\skew)$ and so \begin{align*} \langle\interior{\Curl}\phi,\psi\rangle & =\langle\phi,2\Dive\skew\psi\rangle \\&=\langle\phi,2\Dive\skew\eta\psi\rangle \\&=\langle\phi,2\interior{\Dive}\skew\eta\psi\rangle \\&=\langle\Curl\phi,\eta\psi\rangle. \end{align*} Thus, there is $\kappa>0$ such that for all $\psi\in \dom(\Dive \skew)$ \begin{align*} |(\interior{\Curl}\phi)(\psi)|&=|(\Curl\phi)(\eta\psi)| \\ & \leq \kappa \|\psi\|_{\dom({\Dive}\skew)}. \end{align*} This yields the assertion. \end{proof} Finally, we can prove the $\dive$-$\curl$ lemma with operator-theoretic methods. We shall also formulate a simpler version of the $\dive$-$\curl$ lemma, which needs less technical preparation. In fact, the simpler version only uses Theorem \ref{t:mrcc} and Theorem \ref{t:comp}. \begin{proof}[Proof of Theorem \ref{t:dcl2}] We apply Theorem \ref{t:mr} with the setting in \eqref{eq:setting}. For this, by Lemma \ref{l:distr}, we note that $\Curl v_n = \interior{\Curl}v_n=\hat{\interior{\Curl}}\,v_n$. With Theorem \ref{t:mr} at hand, we need to establish that $(\hat{\interior{\Curl}}\,v_n)_n$ is relatively compact in $\dom(\interior{\Curl}^*)^*$. By Corollary \ref{cor:AB} applied to $C=A=\interior{\Curl}^*$, the latter is the same as showing that $(\hat{\interior{\Curl}}\,v_n)_n$ is relatively compact in $\rge(\hat{\interior{\Curl}})$. On the other hand, by Proposition \ref{prop:closedminusone}, $\rge(\hat{\interior{\Curl}})$ is closed in $H^{-1}(\Omega)^{d\times d}$. Thus, since $(\hat{\interior{\Curl}}\,v_n)_n$ is relatively compact in $H^{-1}(\Omega)^{d\times d}$, we get that $(\hat{\interior{\Curl}}v_n)_n$ is relatively compact in $\dom(\interior{\Curl}^*)^*$. This yields the assertion. \end{proof} Theorem \ref{t:mrcc} with the setting in \eqref{eq:setting} reads as follows. Note that the assertion follows from Theorem \ref{t:comp}. \begin{theorem}\label{t:dclcc} Let $(u_n)_n$ in $\dom(\dive)$ and $(v_n)_n$ in $\dom(\interior{\Curl})$ be weakly convergent sequences. Then \[ \lim_{n\to\infty}\langle u_n,v_n\rangle_{L^2(\Omega)^d} = \langle \lim_{n\to\infty} u_n,\lim_{n\to\infty}v_n\rangle_{L^2(\Omega)^d}. \] \end{theorem} It is well known that the sequence property and the compactness of the sequence hold also for submanifolds of $\mathbb{R}^d$ and the covariant derivative on tensor fields of appropriate dimension and its adjoint. We conclude this exposition with a less known sequence: the Pauly--Zulehner $\Grad\grad$-complex, see \cite{PZ17}. \section*{An Example -- the Pauly--Zulehner-$\Grad\grad$-complex} In the whole section, we let $\Omega\subseteq \mathbb{R}^3$ be a bounded Lipschitz domain. We will denote by $\curl$ the usual $3$-dimensional curl operator that maps vector fields to vector fields. Some definitions are in order. \begin{definition}We define \begin{align*} \intersec{\grad_\textnormal{r}\grad} &\colon \inter{H}^2(\Omega) \subseteq L^2(\Omega)\to L_{\sym}^2(\Omega),\phi\mapsto \grad_\textnormal{r}\grad\phi.
\\ \interior{\curl}_{\textnormal{r},\sym} &\colon \dom(\interior{\curl}_{\textnormal{r}})\cap L_{\sym}^2(\Omega)\subseteq L_{\sym}^2(\Omega)\to L_{\dev}^2(\Omega) ,\phi\mapsto \interior{\curl}_{\textnormal{r}}\phi \\ \interior{\dive}_{\textnormal{r},\dev} &\colon \dom(\interior{\dive}_{\textnormal{r}})\cap L_{\dev}^2(\Omega)\subseteq L_{\dev}^2(\Omega)\to L^2(\Omega)^3 ,\phi\mapsto \dive_{\textnormal{r}}\phi \\ \ob{\dive\dive_{\textnormal{r}}}_{,\sym}& \colon \dom(\ob{\dive\dive_{\textnormal{r}}}_{,\sym}) \subseteq L_{\sym}^2(\Omega) \to L^2(\Omega), \phi\mapsto \dive\dive_{\textnormal{r}} \phi, \\ \sym\curl_{\textnormal{r},\dev} & \colon\dom(\curl_\textnormal{r})\cap L_{\dev}^2(\Omega)\subseteq L_{\dev}^2(\Omega)\to L^2_{\sym}(\Omega), \phi\mapsto \sym\curl_\textnormal{r}\phi, \\ \dev\grad_\textnormal{r} & \colon H^1(\Omega)^3 \subseteq L^2(\Omega)^3\to L^2_{\dev}(\Omega),\phi\mapsto \dev\grad_\textnormal{r}\phi. \end{align*} The subscript $\textnormal{r}$ refers to the row-wise application of the vector-analytic operators where it is attached. Moreover, as before, we have attached a ``$\interior{\ }$'' above the differential operators in question if we consider the completion of smooth tensor fields with compact support with respect to the appropriate norm. The operators $\dev$ and $\sym$ are the projections onto the \emph{deviatoric} and \emph{symmetric} parts of $3\times3$-matrices, that is, for a matrix $A\in \mathbb{C}^{3\times 3}$, we put \[ \dev A\coloneqq A-\frac{1}{3}\tr (A)I_{3\times 3},\quad \sym A\coloneqq \frac{1}{2}(A+A^T). \] Moreover, we define $L^2_{\dev}(\Omega)\coloneqq \dev\left[L^2(\Omega)^{3\times 3}\right]$ as well as $L^2_{\sym}(\Omega)\coloneqq \sym\left[L^2(\Omega)^{3\times 3}\right]$. \end{definition} Next, we gather some of the main results of Pauly--Zulehner: \begin{theorem}[{{\cite[Lemma 3.5, Remark 3.8, and Lemma 3.21]{PZ17}}}] The pairs \begin{multline*} \left(\intersec{\grad_\textnormal{r}\grad},\interior{\curl}_{\textnormal{r},\sym}\right),\; \left(\interior{\curl}_{\textnormal{r},\sym},\interior{\dive}_{\textnormal{r},\dev}\right),\\ \left(-\dev\grad_\textnormal{r},\sym\curl_{\textnormal{r},\dev}\right),\;\left(\sym\curl_{\textnormal{r},\dev},\ob{\dive\dive_\textnormal{r}}_{,\sym}\right) \end{multline*} are compact sequences. Moreover, we have $\intersec{\grad_\textnormal{r}\grad}^*=\ob{\dive\dive_\textnormal{r}}_{,\sym}$, $\interior{\curl}_{\textnormal{r},\sym}^*=\sym\curl_{\textnormal{r},\dev}$, and $\interior{\dive}_{\textnormal{r},\dev}^*=-\dev\grad_\textnormal{r}$. \end{theorem} Several div-curl-type theorems are now immediate consequences of our general observation in Theorem \ref{t:mr}. We will formulate the versions corresponding to Theorem \ref{t:mr} only; the analogues of Theorem \ref{t:mrcc} can be written down in a straightforward manner, and we omit them here. \begin{theorem} \begin{enumerate}[label=(\alph*)] \item Let $(u_n)_n,(v_n)_n$ be weakly convergent sequences in $L^2_{\sym}(\Omega)$. Assume that \[ (\ob{\dive\dive_\textnormal{r}}_{,\sym}u_n)_n, (\interior{\curl}_{\textnormal{r},\sym}v_n)_n \] are relatively compact in $\dom(\intersec{\grad_\textnormal{r}\grad})^*$ and $\dom(\sym\curl_{\textnormal{r},\dev})^*$. Then \[ \lim_{n\to\infty}\langle u_n,v_n\rangle = \langle \lim_{n\to\infty} u_n,\lim_{n\to\infty} v_n\rangle. \] \item Let $(u_n)_n,(v_n)_n$ be weakly convergent sequences in $L^2_{\dev}(\Omega)$. Assume that \[ (\sym\curl_{\textnormal{r},\dev}u_n)_n, (\interior{\dive}_{\textnormal{r},\dev}v_n)_n \] are relatively compact in $\dom(\interior{\curl}_{\textnormal{r},\sym})^*$ and $\dom(\dev\grad_\textnormal{r})^*$.
Then \[ \lim_{n\to\infty}\langle u_n,v_n\rangle = \langle \lim_{n\to\infty} u_n,\lim_{n\to\infty} v_n\rangle. \] \end{enumerate} \end{theorem} \section*{Acknowledgements} This work was carried out with the financial support of the EPSRC grant EP/L018802/2: ``Mathematical foundations of metamaterials: homogenisation, dissipation and operator theory''. A great deal of this research was obtained during a research visit of the author at the RICAM for the special semester 2016 on Computational Methods in Science and Engineering, organised by Ulrich Langer, Dirk Pauly, et al. The wonderful atmosphere and the hospitality extended to the author are gratefully acknowledged.
\section{Introduction} \label{introduction} Cyg X-3, V4641 Sgr, V404 Cyg and GRS 1915$+$105 are unique sources even in the fairly non-homogeneous group of X-ray binaries (XRBs). They host very different companion stars: Cyg X-3 harbors a Wolf-Rayet companion \citep{vankerkwijk96,koljonen17}, making it a high-mass XRB, while V4641 Sgr, V404 Cyg, and GRS 1915$+$105 are low-mass XRBs with a late B-type star \citep{orosz01}, a K-type subgiant \citep{casares92,king93,khargharia10}, and a K-type giant star \citep{greiner01a} as donors, respectively. However, they share some similarities that are unique among the XRB population. They are all very powerful X-ray emitters, with Cyg X-3 persistently emitting a luminosity of 10$^{38}$ erg s$^{-1}$ in the X-ray band, V4641 Sgr and V404 Cyg exhibiting luminous outbursts where the X-ray luminosity can exceed the Eddington luminosity for a 10 solar mass black hole \citep{revnivtsev02,motta17a}, and GRS 1915$+$105 being in outburst for the past 27 years \citep{castrotirado92,fender04}, with luminosities reaching and surpassing the Eddington limit \citep[e.g.,][]{done04}. Except for Cyg X-3, all these sources have long orbital periods and thus large accretion disks, and there is evidence of a high orbital inclination. Therefore, any large-scale geometrical change in the accretion disk, such as puffing-up or warping of the accretion flow or an equatorial outflow, can cause local obscuration events. Evidence of this could be seen in the June 2015 outburst of V404 Cyg, which showed highly variable, high column density material absorbing the X-ray continuum that remained hard throughout the outburst \citep{motta17a}. In particular, so-called X-ray plateaus with diminished X-ray luminosity and softer spectra suggested heavy obscuration of the intrinsic emission \citep{motta17b,sanchez17}. Similarly, the high-luminosity active accretion phases of V4641 Sgr can be very rapid, with a heavily absorbed hard X-ray continuum \citep{munozdarias18,revnivtsev02}. In both systems, there is evidence that the intense, likely super-Eddington, X-ray emission drives a strong disk wind, thus expelling a significant amount of mass to surround the systems \citep{king15,munozdarias18}. On the other hand, Cyg X-3 orbits its companion star at a close distance with a short 4.8-hour orbital period \citep{parsignault72}. Because the companion is a Wolf-Rayet star exhibiting a heavy stellar wind that extends much farther than the binary orbit, Cyg X-3 is constantly embedded in a high-density environment that affects its X-ray spectra in all accretion states \citep{szostek08,zdziarski10,koljonen18}. This material is optically thick in X-rays as a result of the absorption by metals and Compton scattering, causing iron absorption edges \citep{koljonen18}, Compton downscattering \citep{zdziarski10}, and/or Compton scattering of the intrinsic X-ray continuum out of the line of sight. Highly ionized iron lines are resolved with {\it Chandra}\/ and reveal a distinct component of gas at much higher ionization, in addition to a component from fluorescence by neutral or near-neutral material. Attempts to unify the iron emission with that of lower-Z elements implied a need for an additional absorption component, possibly associated with a disk wind \citep{kallman19}. GRS 1915$+$105 is known to have an ionized accretion disk wind \citep{neilsen09}, but a heavy obscuration like that in the other three sources has not been observed.
However, GRS 1915$+$105 recently entered a new accretion state that presents lower fluxes throughout its spectral energy distribution than ever before during its 27-year-long outburst. In this state, sporadic X-ray flares have been observed \citep[e.g.,][]{iwakiri19,neilsen19,jithesh19}, in addition to X-ray spectra indicating heavy obscuration \citep{miller19}. Strong radio flares were also observed in the flaring period \citep{motta19,trushkin19,koljonen19}, indicating episodic jet emission that is also similar to the multiwavelength evolution of V404 Cyg and Cyg X-3. The similarity of the X-ray spectra of Cyg X-3 and GRS 1915$+$105 has previously been noted by \citet{vrtilek13} and \citet{zdziarski16}, who studied the color-color-intensity diagrams of XRBs and found that GRS 1915$+$105 and Cyg X-3 occupy an area that is different from that of other black hole or neutron star XRBs. This underlines the connection between the two and their likely `messy' surroundings. In this paper, we study spectra obtained with the \textit{Nuclear Spectroscopic Telescope Array} ({\it NuSTAR}) and the \textit{Rossi X-ray Timing Explorer} ({\it RXTE}) of the four luminous hard X-ray sources Cyg X-3, V404 Cyg, V4641 Sgr, and GRS 1915$+$105, which are all likely surrounded or occluded in the line of sight by dense material that affects the intrinsic X-ray emission at specific times. The data processing of all sources is described in Section \ref{observations}. In Section \ref{results} we show that the X-ray spectra of all four sources are very similar and peculiar at specific times in their evolution, which is caused by the high-density environments in which they are embedded. The mutual spectral characteristics include a low-energy cutoff of the hard X-ray spectra, absorption edges of highly ionized iron, ionized or Doppler-shifted iron emission or absorption lines, and high absorption. We furthermore fit the time-averaged spectra, and also time-resolved spectra in the case of V404 Cyg, with two physically motivated models that describe either a scenario in which all the intrinsic emission is reprocessed in the surrounding matter or one in which the emitter is surrounded by a thick torus with a variable opening angle. This supports the assumption of X-ray obscuration in all sources. Using the results from both fits, we discuss in Section \ref{discussion} that the (outflowing) obscuring matter in V404 Cyg and GRS 1915$+$105 shows a change in geometry that is linked to the radio (jet) evolution observed from the sources, in addition to a change in the intrinsic X-ray emission. Within the framework of the models, the sources display X-ray obscuration at intrinsic luminosities ranging from below 1\% of the Eddington luminosity up to the Eddington limit. This indicates that different factors cause the obscuration. We therefore also discuss the implications of these results for other sources. Finally, we conclude in Section \ref{conclusions}. \section{Observations and data reduction} \label{observations} \begin{table*} \centering \caption{Observation log.} \label{obslog} \begin{tabular}{lccccccc} \toprule Source & Instrument & Pointing & Date$^{a}$ & MJD$^{b}$ & Obs.
length & Exposure & Count rate$^{c}$ \\ & & & & & (s) & (s) & (cts/s) \\ \midrule Cyg X-3 & {\it NuSTAR} & 10102002002 & 2015/11/13 & 57339.53742 & 20789 & 10168 & 113 (51--188) \\ V404 Cyg & {\it NuSTAR} & 90102007002 & 2015/06/24 & 57197.93615 & 64383 & 17721 & 588 (10--8305) \\ GRS 1915 & {\it NuSTAR} & 90501321002 & 2019/05/05 & 58608.30055 & 63813 & 28700 & 31 (9--68) \\ & & 30502008002 & 2019/05/19 & 58622.52845 & 52451 & 25403 & 15 (4--51) \\ & & 30502008004 & 2019/07/31 & 58695.87901 & 53420 & 23243 & 17 (1--81) \\ V4641 Sgr & {\it RXTE}/PCA & 70119-01-01-14 & 2002/05/23 & 52417.81503 & 736 & 736 & 73 (65--87) \\ & & 80054-08-01-01$^{d}$ & 2003/08/06 & 52857.43170 & 2992 & 2992 & 67 (45--90) \\ \bottomrule \end{tabular} \tablefoot{ \tablefoottext{a}{Year/month/day of the data start time.} \tablefoottext{b}{Modified Julian Date of the data start time.} \tablefoottext{c}{Mean count rate in the 3--79 keV band ({\it NuSTAR}) and the 3--40 keV band ({\it RXTE}/PCA), with the full data range in parentheses.} \tablefoottext{d}{Only the second part of the light curve was studied here.} } \end{table*} \subsection{{\it NuSTAR}} In the case of V404 Cyg, we selected the {\it NuSTAR}\/ observation taken during the June 2015 outburst, near the end of the 12-day flaring activity, which contained both a plateau spectral state with slow spectral evolution and low flux density and a flaring state with rapid spectral changes and highly variable flux density (pointing 90102007002). This observation was previously analyzed in \citet{walton17}, although they mostly concentrated on the low-absorption flaring periods with high count rates. For GRS 1915$+$105, we selected observations that were taken when the source descended to a very anomalous low-flux state in June 2019 that was interspersed with very luminous flares (pointings 90501321002, 30502008002, and 30502008004). For Cyg X-3, we selected the only {\it NuSTAR}\/ observation that was taken in the hard state (pointing 10102002002). All observations used in this paper are tabulated in Table \ref{obslog}. We reduced the {\it NuSTAR}\/ data from the two focal plane modules (FPMA and FPMB) using \textsc{nupipeline}. We used a circular source region with a 100 arcsec radius centered on the location of the source, and circular background regions with a 100 arcsec radius that were selected from a sourceless region in the detector image. The source region size was a compromise between including most of the point-spread function and avoiding confusion with the possible contribution from the scattering halo that is present in the data of V404 Cyg \citep{beardmore16,heinz16,vasilopoulos16}. The pipeline was run with the parameters \textsc{tentacle=`yes'} and \textsc{saamode=`optimized'}. The former requires a simultaneous increase in the CdZnTe detector event count rates and the observed shield single rates, and the latter allows identification and flagging of time intervals in which the CdZnTe detector event count rates show an increase when the spacecraft enters the South Atlantic Anomaly (SAA). The data reduction was performed with \textsc{heasoft 6.26.1}. We extracted an averaged spectrum from the two detectors using the whole pointings in the case of Cyg X-3 and GRS 1915$+$105, and spectra over several intervals in the case of V404 Cyg (see below). The broadband (3--79 keV) {\it NuSTAR}\/ count rate ranged between 51 and 188 cts/s for Cyg X-3, which mostly arises from the orbital modulation (a factor of 2--3).
The total exposure ($\sim$21 ks) is longer than the orbit ($\sim$17.3 ks), therefore we can expect the effect of orbital modulation on the spectral components to average out. For GRS 1915$+$105, the {\it NuSTAR}\/ count rate was found to vary between 1 and 81 cts/s during the $\sim$30--60 ks long pointings, which is considerably lower than usually observed. GRS 1915$+$105 is famous for its plethora of X-ray variability states \citep{belloni13}; the flux and hardness vary on short timescales. For the observations considered here, the hardness ratio between 3--5 keV and 10--79 keV remained relatively constant at 0.8$\pm$0.1, 0.22$\pm$0.03, and 0.2$\pm$0.1 for pointings 90501321002, 30502008002, and 30502008004, respectively. We therefore consider the averaged spectrum a relatively accurate representation of the spectral shape, although we cannot rule out fast changes in the source spectrum. For V404 Cyg, we concentrated on analyzing the times preceding and in between the intense X-ray flaring, with the total count rate not exceeding $\sim$1000 cts/s, in order to study the spectral characteristics when the intrinsic X-ray emission was likely obscured (the low-absorption intense flaring spectra were studied in detail in \citealt{walton17}). Because the spectrum changed rapidly during the pointing, we analyzed the spectral evolution in several ways. We divided the pointing into 70 segments, each chosen so that the extracted spectrum contains 30000 counts to ensure sufficient spectral quality. We excluded spectra with count rates exceeding 1050 cts/s from the analysis in order to fully include the preflaring period (the first $\sim$21 ks of the pointing) and the periods with low count rates between the luminous flares. This resulted in 25 individual time bins ranging from 420 s to 3980 s in exposure time. We furthermore combined time-resolved spectra that were similar in shape into spectral epochs in order to increase the spectral quality for detailed modeling. For the X-ray modeling, we binned the data to a minimum signal-to-noise ratio (S/N) of 30 in the full 3--79 keV band. The spectral fitting was performed using the Interactive Spectral Interpretation System \textsc{(isis;} \citealt{houck02}). In the modeling, a constant factor was added to account for the flux difference between the {\it NuSTAR}\/ detectors. In some pointings (e.g., 30502008004 of GRS 1915$+$105), the discrepancies between the FPMA and FPMB data cannot be explained by a simple constant, especially in the 6--10 keV region. This can affect the fit quality significantly. Therefore we also fit the same models to the spectra from a single detector and present its fit quality as well. However, all the model parameters we present here are estimated from fits to the data from both detectors. All the {\it NuSTAR}\/ fluxes are normalized to the FPMA detector. For the X-ray timing analysis, we extracted 2$^{-6}$-s light curves in three energy bands: 3--10 keV, 10--79 keV, and 3--79 keV. The cospectra, which can be used as a proxy for white-noise-subtracted power spectral densities (PSDs), were calculated from 512-s long segments averaged over each good time interval (GTI) using Matteo's Libraries and Tools in Python for {\it NuSTAR}\/ timing (\textsc{maltpynt}; \citealt{bachetti15b}). The cospectrum is used to mitigate instrumental effects in the {\it NuSTAR}\/ light curves \citep{bachetti15}. We used the rms normalization and binned the cospectra geometrically by a factor of 1.1--1.5 before importing them to \textsc{isis} for model fitting.
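To make the timing procedure concrete, the following is a minimal sketch of the cospectrum computation using the open-source \textsc{stingray} package (the successor to \textsc{maltpynt}); the light curves are simulated placeholders, not the actual observations, and the exact API may differ between package versions.
\begin{verbatim}
# Sketch: cospectrum of two NuSTAR light curves (FPMA/FPMB) as a
# proxy for a white-noise-subtracted PSD. Placeholder data only.
import numpy as np
from stingray import Lightcurve, AveragedCrossspectrum

dt = 2**-6                           # time resolution (s), as in the text
t = np.arange(0.0, 1024.0, dt)       # two 512-s segments for illustration
rng = np.random.default_rng(0)
counts_a = rng.poisson(2.0, t.size)  # stand-in for FPMA counts per bin
counts_b = rng.poisson(2.0, t.size)  # stand-in for FPMB counts per bin

lc_a = Lightcurve(t, counts_a, dt=dt)
lc_b = Lightcurve(t, counts_b, dt=dt)

# Average the cross spectrum over 512-s segments with rms (fractional)
# normalization; the real part of the cross power is the cospectrum.
acs = AveragedCrossspectrum(lc_a, lc_b, segment_size=512, norm="frac")
cospectrum = acs.power.real

# Geometric (logarithmic) rebinning, comparable to a factor of ~1.1.
acs_rebinned = acs.rebin_log(f=0.1)
\end{verbatim}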
\subsection{{\it RXTE}} We downloaded all the proportional counter array ({\it RXTE}/PCA) data from the High Energy Astrophysics Science Archive Research Center (HEASARC) taken during the outbursts of V4641 Sgr in 1999, 2002, 2003, and 2005, and selected for spectral modeling two representative observations whose spectra resemble those of Cyg X-3 and V404 Cyg (pointings 80054-08-01-01 and 70119-01-01-14). The former was taken during the outburst of 2003 and was analyzed in \citet{maitra06}. However, only the first $\sim$2 ks were studied in their paper, and here we concentrate on the latter part of the light curve. The other selected pointing was taken during the outburst of 2002, and we are not aware that this spectrum has been studied in detail elsewhere. The pointing 80054-08-01-01 was taken at MJD 52857.37 with an exposure of 2.9 ks and a mean count rate of the proportional counter unit 2 (PCU2) of 67 cts/s, while the pointing 70119-01-01-14 was taken at MJD 52417.81 with an exposure of 0.7 ks and a mean PCU2 count rate of 73 cts/s. Because neither pointing showed significant spectral changes, we extracted the average spectrum for spectral modeling in both cases. The {\it RXTE}/PCA\/ data were reduced using the methods described in the {\it RXTE}\/ cookbook with \textsc{heasoft 6.26.1}. The 128-channel energy spectra were extracted from the standard-2 data using all available PCUs and all layers. For the spectral fitting, we ignored bins below 3 keV and above 40 keV, binned the data to S/N=5.5, and added 0.5\% systematics to each channel. For the timing analysis, we extracted 0.125-s light curves from the standard-1 data. We calculated the averaged PSD using 512-s light-curve segments and binned them geometrically by a factor of 1.1 before importing them to \textsc{isis} for model fitting. \section{Results} \label{results} \subsection{X-ray spectra: Overview} \begin{table*} \centering \caption{Source parameters.} \label{sourceparam} \begin{tabular}{lccccc} \toprule Source & Distance & Mass & Period & Inclination & ISM abs. \\ & (kpc) & (M$_{\odot}$) & (days) & (deg) & (10$^{22}$ cm$^{-2}$) \\ \midrule Cyg X-3 & 7.4$\pm$1.1 (1) & 2.4$^{+2.1}_{-1.1}$ (2) & 0.2 (3) & 30--50 (2,4,5) & 3.5 (6,7) \\ V404 Cyg & 2.39$\pm$0.14 (8) & 9.0$^{+0.2}_{-0.6}$ (9) & 6.5 (10) & 67$^{+3}_{-1}$ (9) & 0.83 (11) \\ GRS 1915$+$105 & 8.6$^{+2.0}_{-1.6}$ (12) & 12.4$^{+2.0}_{-1.8}$ (12) & 33.9 (13) & 60$\pm$5 (12) & 3.5 (14) \\ V4641 Sgr & 6.2$\pm$0.7 (15) & 6.4$\pm$0.6 (15) & 2.8 (16) & 72 (15) & 0.23 (17) \\ \bottomrule \end{tabular} \tablebib{ (1) \citet{mccollough16}; (2) \citet{zdziarski13}; (3) \citet{parsignault72}; (4) \citet{vilhu09}; (5) \citet{zdziarski12}; (6) \citet{koljonen18}; (7) \citet{kallman19}; (8) \citet{millerjones09}; (9) \citet{khargharia10}; (10) \citet{casares92}; (11) \citet{motta17b}; (12) \citet{reid14}; (13) \citet{steeghs13}; (14) \citet{chapuis04}; (15) \citet{macdonald14}; (16) \citet{orosz01}; (17) \citet{maitra06}. } \end{table*} Fig. \ref{spectra} shows some of the X-ray spectra from the observations tabulated in Table \ref{obslog}. In the top panel, the hard-state spectra of V404 Cyg (from the beginning of the pointing, before the spectral softening; see Section \ref{v404}) and Cyg X-3, and a 2002 outburst peak spectrum of V4641 Sgr, show strikingly similar spectral shapes, with a similar absorption profile, a broad peak in the iron line region, a sharp drop at the iron edge energies, and a low-energy cutoff in the hard X-rays.
This type of spectrum is not observed from any other XRB. The closest comparison can be found in Compton-thick active galactic nuclei \citep[AGN; e.g.,][]{balokovic14,bauer15}. The GRS 1915$+$105 spectrum from the anomalous low-luminosity X-ray state before the X-ray and radio flaring shows similar features, but with a prominent iron absorption line and a higher X-ray cutoff energy. The spectra in Fig. \ref{spectra} (top panel) display different flux densities: V404 Cyg is a factor of $\sim$2, $\sim$6, and $\sim$20 brighter than Cyg X-3, GRS 1915$+$105, and V4641 Sgr, respectively. However, when the distances (see Table \ref{sourceparam}) are taken into account, the X-ray luminosity of V404 Cyg is a factor of $\sim$3 higher than that of V4641 Sgr, but a factor of $\sim$2 and $\sim$4 lower than those of GRS 1915$+$105 and Cyg X-3, respectively, in this state. In the bottom panel of Fig. \ref{spectra}, a harder X-ray spectrum is shown for V404 Cyg (during the X-ray flaring) and V4641 Sgr (2003 outburst peak spectrum), which share approximately the same shape, in addition to a spectrum of GRS 1915$+$105 after a high-intensity X-ray and radio-flaring period in the low-luminosity state. However, no similar spectrum can be found for Cyg X-3 because all its other spectral states are softer (see the spectra from the different accretion states in \citealt{koljonen10}). This might therefore indicate that these accretion states are less absorbed or obscured, and the spectral shape might be explained by strong reflection from the accretion disk surface. However, we show here that the observed spectral evolution is also compatible with a change in the geometry of the obscuring material, or with a change in the ionization structure. \begin{figure} \centering \includegraphics[width=\linewidth]{plot_all_nustar.pdf} \caption{Collection of X-ray spectra from the four sources ordered into two panels according to their spectral shape. \textit{Top:} {\it NuSTAR}\/ spectra (FPMA/FPMB data both included) of V404 Cyg during the outburst period of 2015 (preceding the intense X-ray flaring period), Cyg X-3 in the hard state, GRS 1915$+$105 during the anomalous low-luminosity state of 2019--2020 (preceding a period of sporadic X-ray and radio flaring), and the {\it RXTE}/PCA spectrum of V4641 Sgr during the outburst period of 2002. \textit{Bottom:} {\it NuSTAR}\/ spectra of V404 Cyg during the outburst period of 2015 (in between intense X-ray flaring), the {\it RXTE}/PCA spectrum of V4641 Sgr during the outburst period of 2003, and {\it NuSTAR}\/ spectra of GRS 1915$+$105 from the anomalous low-luminosity state (after the intense X-ray and radio flaring period). Note the spectral similarity between the sources. The spectra of GRS 1915$+$105 and V4641 Sgr are renormalized by the amount shown for illustrative purposes.} \label{spectra} \end{figure} All the spectra show a strong iron line with a line width above 200 eV. This line has been resolved with {\it Chandra}\/ into a combination of a neutral iron K$\alpha$ line (6.4 keV) and ionized iron lines, with the strongest component arising from the Fe XXV K$\alpha$ line (6.7 keV) and possibly from the Fe XXVI Ly$\alpha$ line (7.0 keV), in the cases of Cyg X-3 \citep{paerels00,kallman19} and V404 Cyg \citep{king15}. The energy band from $\sim$7 keV to $\sim$10 keV appears to be affected by absorption from the above-mentioned species of iron in all sources. The corresponding ionization energies are 7.1 keV (neutral Fe), 8.8 keV (Fe XXV), and 9.2 keV (Fe XXVI).
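The comparison with Compton-thick AGN can be made quantitative: the surrounding medium becomes optically thick to Thomson scattering when
\[
\tau_{\mathrm{T}} = N_{\mathrm{H}}\,\sigma_{\mathrm{T}} \gtrsim 1
\quad\Longleftrightarrow\quad
N_{\mathrm{H}} \gtrsim \sigma_{\mathrm{T}}^{-1} \simeq 1.5\times10^{24}\ \mathrm{cm^{-2}},
\]
so the column densities inferred below ($N_{\mathrm{H}}\sim10^{23}$--$10^{24}$ cm$^{-2}$) place these environments near or at the Compton-thick regime, where Compton scattering competes with photoelectric absorption in shaping the observed continuum.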
In addition, there is a strong, clearly visible iron absorption line at around 6.5 keV in the preflare spectrum of GRS 1915$+$105, and a weaker line in the preflare spectrum of V404 Cyg. Moreover, there might be an indication of emission from the Fe XXV K$\beta$ line (7.8 keV) or the Ni XXVII K$\alpha$ line (7.8 keV) in the Cyg X-3 spectrum, and from the Fe XXVI Ly$\beta$ line (8.3 keV) in the GRS 1915$+$105 preflare spectrum. Curiously, some of the V404 Cyg spectra show iron line centroids close to 6.3 keV, similar to the findings of \citet{motta17a}, indicating that the redshifted neutral iron K$\alpha$ line arises either very close to the compact object (gravitational redshift) or in a medium moving away from us (Doppler redshift). The strong photoionized emission and absorption lines point to a significant amount of absorbing matter in the line of sight to all sources. All spectra exhibit a strong curvature or a cutoff in the hard X-rays around 20--30 keV. This is atypical for a hard-state XRB. This cutoff might be caused by heavy absorption of the intrinsic (cutoff) power-law spectrum, or by Compton downscattering in the accretion disk (i.e., a reflection spectrum) or in the surrounding medium. In addition, the spectra shown in Fig. \ref{spectra} (top panel) present a rather sharp upturn at $\sim$10 keV, possibly indicating the energy where two model components with very different spectral slopes meet. In the following, we consider these scenarios by fitting the data with appropriate spectral models. \subsection{X-ray spectra: Initial modeling} \label{inimod} In the hard spectral state, the hard X-ray spectrum of XRBs can typically be fit with a cutoff power law or a Comptonized continuum with a power-law photon index in the range of $\Gamma=$1.5--2.0 and a cutoff energy of $\sim$100 keV. The spectra presented in Fig. \ref{spectra}, especially in the top panel, are quite unlike typical XRB hard-state spectra. Thus, we started by fitting curved models to the hard X-ray data (above 10 keV) of V404 Cyg (the preflare or plateau spectrum, as shown in Fig. \ref{spectra}, top panel). The resulting model parameters are tabulated in Table \ref{phenom} and the models are shown in Fig. \ref{initial} (left panel). To model the strong spectral curvature, we first tried a cutoff power-law model (C1 in Fig. \ref{initial} and Table \ref{phenom}). The best-fit model is highly inverted with a low-energy cutoff (photon index $\Gamma\sim-1.6$, cutoff energy $E_{\mathrm{cut}}\sim6$ keV), and the fit is poor ($\chi^{2}_{\mathrm{red}}=2.6$). Obviously, the power-law index is too low for any reasonable physical scenario. Next, we tried a thermal Comptonization model in a spherical plasma cloud (\textsc{compTT}; \citealt{titarchuk94}). The resulting model (C2 in Fig. \ref{initial} and Table \ref{phenom}) has an electron temperature of $kT_{e}\sim24$ keV, a seed photon temperature of $kT_{s}\sim5$ keV, and an optical depth of $\tau\sim0.7$. Model C2 provides a better fit to the data ($\chi^{2}_{\mathrm{red}}=1.8$) than model C1, but it is still poor. A tendency for a high seed-photon temperature in fitting thermal Comptonization models to the hard X-ray data of V404 Cyg has been noted earlier by \citet{jenke16}, \citet{roques15}, and \citet{natalucci15}, who discussed that the high-temperature seed photons might arise from synchrotron emission in the jet base.
However, the radio luminosity during the time of the V404 Cyg preflare observation was very low, $\sim$10 mJy \citep{munozdarias16,gandhi17}, with a steep spectrum that indicates optically thin emission from the jet ejecta \citep{millerjones19}. The core jet was therefore likely quenched, and the synchrotron scenario is not plausible. The considerations above show why simple models are not sufficient for modeling the highly unusual hard X-ray spectrum. \citet{natalucci15} also fit the hard X-ray data from the \textit{INTErnational Gamma-Ray Astrophysics Laboratory} with a pure reflection model and a partially covering absorber model. The former was found to fit most of the data only by assuming very high values for the reflection factor, and the latter only by assuming very high column densities for the absorber. The authors regarded these models as unphysical. However, based on the analyses of the X-ray and optical data detailed in Section 1, the spectral similarity to the Cyg X-3 hard-state spectrum and the strong X-ray emission lines indicate that the absorption, reprocessing, and reflection scenarios are plausible at least for the low-luminosity phases. We therefore continued to fit the V404 Cyg preflare hard X-ray data with absorption and reflection models. We first tried an absorbed power-law model, but found that a fully absorbed model (\textsc{phabs} $\times$ \textsc{powerlaw}) is completely inadequate to fit the data ($\chi^{2}_{\mathrm{red}}=8.2$). Instead, a partially absorbed power law (\textsc{pcfabs} $\times$ \textsc{powerlaw}; model A1 in Fig. \ref{initial} and Table \ref{phenom}) fits the hard X-ray data much better ($\chi^{2}_{\mathrm{red}}=1.5$), although the intrinsic spectrum is very soft ($\Gamma\sim3.9$) and partially absorbed ($f_{\mathrm{cov}}\sim0.93$) by a dense medium ($N_{\mathrm{H}} \sim 3.6 \times 10^{24}$ cm$^{-2}$). An even better fit ($\chi^{2}_{\mathrm{red}}=1.1$) can be found by changing the intrinsic spectrum to a cutoff power-law model (A2 in Fig. \ref{initial} and Table \ref{phenom}) with a photon index more in line with that of a typical XRB in the hard X-ray state ($\Gamma\sim2.2$) and a cutoff energy of $E_{\mathrm{cut}}\sim18$ keV. The parameters of the absorption component are slightly lower than those of model A1, but still comparable. A similar fit quality is achieved by changing the intrinsic emission component to a thermal Comptonization component (model A3 in Fig. \ref{initial} and Table \ref{phenom}). The best-fit model has an electron temperature of $kT_{e}\sim$10 keV, a seed-photon temperature fixed at 0.1 keV, and an optical depth of $\tau\sim3.6$. The absorption component has parameter values similar to those of models A1 and A2. For the reflection scenario, we began by fitting the data with \textsc{pexrav} \citep{magdziarz95}: a cutoff power-law continuum reprocessed in a neutral medium (model R1 in Fig. \ref{initial} and Table \ref{phenom}). We fixed the inclination to the value presented in Table \ref{sourceparam} and the abundances to solar. The resulting model has a power-law photon index of $\Gamma\sim1.8$, a cutoff energy of $E_{\mathrm{cut}}\sim20$ keV, and a reflection factor of $R_{f}\gtrsim400$. The fit quality is moderate ($\chi^{2}_{\mathrm{red}}=1.3$). We also fit the V404 Cyg hard X-ray spectrum with a relativistic reflection model \citep[\textsc{relxill};][]{garcia14,dauser14}. The resulting model (R2 in Fig.
\ref{initial} and Table \ref{phenom}) also has a very high reflection factor ($R_{f}\sim90$) in moderately ionized matter (log $\xi\sim2.3$). The intrinsic spectrum is a very hard cutoff power law with $\Gamma\sim1.1$ and a cutoff energy of $E_{\mathrm{cut}}\sim$20 keV. The spin of the black hole is low: $a\sim0.2$. Fixing the inclination to the value presented in Table \ref{sourceparam} did not result in a good fit; therefore, we left it free to vary, which returned rather low values of $\theta_{\mathrm{inc}}\lesssim10^\circ$. All the other parameters were fixed at their default values. The fit quality is slightly better than in model R1 ($\chi^{2}_{\mathrm{red}}=1.1$). In the reflection models, the amount of radiation that ionizes the reflecting material is typically defined as the ratio of the photon intensity that illuminates it to the direct photon intensity that reaches the observer. When the reflecting medium surrounds the X-ray source instead of being an accretion disk, the maximum reflection factor can be much higher than in the unobscured case, where usually $R_{f} \lesssim 1$. We note, however, that high values of the reflection factor, $R_{f}\lesssim 10$, can be accommodated in the unobscured case as well when strong light bending deep in the gravitational potential of the compact object is assumed \citep{dauser16}. Clearly, the reflection factors in the fits are much higher, and we can assume that the majority of the incident emission is reprocessed in a surrounding medium. From the \textsc{relxill} model family, we also tried the lamppost geometry (\textsc{relxilllp}), which resulted in parameters similar to those of model R2 (coronal geometry), but with an even higher reflection factor. Finally, we selected a model with a Comptonized incident spectrum: \textsc{relxillCp} (model R3 in Fig. \ref{initial} and Table \ref{phenom}). Because the reflection factor found above is very high, we fit only the reflection component to the data. The parameters and fit quality are very similar to those of the cutoff power-law \textsc{relxill} model, except for the power-law index, which is $\Gamma\sim2$. This initial model fitting shows that the absorption and reprocessing models describe the hard X-ray data better than normal continuum models. In addition, when we retain only the models in which the incident continuum is neither too soft nor too hard for a hard X-ray state, we are left with models A2, A3, R1, and R3. \begin{table*} \centering \caption{Model parameters from the initial fits to the V404 Cyg preflare hard X-ray spectrum (10--79 keV).} \label{phenom} \begin{tabular}{lccccccccc} \toprule & & \multicolumn{2}{c}{Pure continuum model} & \multicolumn{3}{c}{Absorbed continuum model} & \multicolumn{3}{c}{Reprocessed continuum model} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} & & \textsc{cutoffpl} & \textsc{compTT} & \textsc{powerlaw} & \textsc{cutoffpl}& \textsc{compTT} & \textsc{pexrav} & \textsc{relxill} & \textsc{relxillCp} \\ Param.
& Unit & (C1) & (C2) & (A1) & (A2) & (A3) & (R1) & (R2) & (R3) \\ \midrule N$_{\mathrm{H}}$ & 10$^{22}$ cm$^{-2}$ & & & 363$\pm$7 & 290$\pm$15 & 299$\pm$13 & \\ f$_{\mathrm{cov}}$ & & & & 0.931$\pm$0.002 & 0.86$\pm$0.01 & 0.89$\pm$0.01 & \\ $\Gamma$ & & -1.64$\pm$0.05 & & 3.85$\pm$0.03 & 2.2$\pm$0.3 & & 1.84$\pm$0.04 & 1.11$^{+0.07}_{-0.05}$ & 1.96$^{+0.05}_{-0.04}$ \\ E$_{\mathrm{cut}}$ & keV & 5.6$\pm$0.1 & & & 18$\pm$3 & & 20$\pm$1 & 23.3$\pm$0.6 \\ kT$_{\mathrm{seed}}$ & keV & & 4.6$^{+0.06}_{-0.08}$ & & & 0.1 & & \\ kT$_{e}$ & keV & & 24$^{+42}_{-10}$ & & & 10$\pm$1 & & & 13.7$^{+0.8}_{-0.5}$ \\ $\tau$ & & & 0.7$^{+1.0}_{-0.6}$ & & & 3.6$\pm$0.5 & & \\ R$_{f}$ & & & & & & & $>$411 & $>$480 & -2 \\ $\theta_{\mathrm{inc}}$ & deg & & & & & & 67 & $<$11 & $<8$ \\ $a$ & & & & & & & & 0.2$\pm$0.2 & 0.6$^{+0.1}_{-0.2}$ \\ log $\xi$ & & & & & & & & 2.30$^{+0.02}_{-0.26}$ & 2.00$^{+0.01}_{-0.06}$\\ \midrule $\chi^{2}$/d.o.f & & 713/279 & 491/278 & 429/278 & 293/277 & 299/277 & 356/278 & 314/275 & 340/276 \\ $\chi_{\mathrm{red}}^{2}$ & & 2.56 & 1.77 & 1.54 & 1.06 & 1.08 & 1.28 & 1.14 & 1.23 \\ \bottomrule \end{tabular} \end{table*} \begin{figure*} \centering \includegraphics[width=\linewidth]{plot_model_initial2.pdf} \caption{Initial modeling of the V404 Cyg preflare spectrum. \textit{Left:} Best-fit models fit to the hard X-ray data (10--79 keV), but plotted in the full data range. Different models are labeled, and the parameters are tabulated in Table \ref{phenom}. Solid lines refer to pure continuum models, dashed lines to absorbed continuum models, and dotted lines to reprocessed continuum models. \textit{Middle:} Fitting an absorbed cutoff power-law model to the full data range. The dot-dashed blue line corresponds to model A2 fit to the hard X-ray data (see left panel), the dotted green line corresponds to absorption and disk blackbody components added to the model, the dashed yellow line corresponds to partial absorption and smeared edge components added to the model, and the solid red line shows emission and absorption lines added to the model (parameters of the final model are tabulated in Table \ref{phenom2}). See the text for more details. \textit{Right:} Fitting a reprocessed thermal Comptonization model to the full data range. The dot-dashed blue line corresponds to model R3 fit to the hard X-ray data (see the left panel), the dotted green line shows an increased value of the ionization parameter, the dashed yellow line corresponds to absorption and smeared edge components added to the model, and the solid red line shows partial absorption and an absorption line added to the model (parameters of the final model are tabulated in Table \ref{phenom2}). 
See the text for more details.} \label{initial} \end{figure*} \begin{table*} \centering \caption{Model parameters from the initial fits to the V404 Cyg preflare X-ray spectrum (3--79 keV).} \label{phenom2} \begin{tabular}{cccccccccc} \toprule \multicolumn{9}{c}{Model: \textsc{phabs} $\times$ \textsc{smedge} $\times$ (\textsc{pcfabs1} $\times$ \textsc{cutoffpl} + \textsc{pcfabs2} $\times$ \textsc{diskbb} + \textsc{gauss1} + \textsc{gauss2})} \\ \midrule \textsc{phabs} & \multicolumn{3}{c}{\textsc{smedge}} & \multicolumn{2}{c}{\textsc{pcfabs1}} & \multicolumn{3}{c}{\textsc{cutoffpl}} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-6} \cmidrule(lr){7-9} N$_{\mathrm{H}}$ & E & $\tau$ & $\sigma$ & N$_{\mathrm{H}}$ & f$_{\mathrm{cov}}$ & norm & $\Gamma$ & E$_{\mathrm{cut}}$ \\ (10$^{22}$ cm$^{-2}$) & (keV) & & (keV) & (10$^{24}$ cm$^{-2}$) & & & & (keV) \\ 2.8$\pm$0.7 & 8.7$\pm$0.1 & 0.30$^{+0.07}_{-0.05}$ & 0.5$\pm$0.2 & 2.4$\pm$0.2 & 0.92$\pm$0.01 & 33$^{+26}_{-15}$ & 2.2$\pm$0.2 & 19$\pm$3\\ \addlinespace \multicolumn{2}{c}{\textsc{pcfabs2}} & \multicolumn{2}{c}{\textsc{diskbb}} & \multicolumn{3}{c}{\textsc{gauss1}} & \multicolumn{3}{c}{\textsc{gauss2}} \\ \cmidrule(lr){1-2} \cmidrule(lr){3-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} N$_{\mathrm{H}}$ & f$_{\mathrm{cov}}$ & norm & kT & E$_{1}$ & $\sigma_{1}$ & norm & E$_{2}$ & $\sigma_{2}$ & norm \\ (10$^{22}$ cm$^{-2}$) & & & (keV) & (keV) & (keV) & ($\times$10$^{-3}$) & (keV) & (keV) & ($\times$10$^{-3}$) \\ 46$\pm$3 & 0.92$\pm$0.01 & 518$^{+105}_{-95}$ & 1.34$\pm$0.03 & 6.4 & 0.46$\pm$0.02 & 36$\pm$3 & 6.5 & 0.002 & -3.6$\pm$0.7 \\ \midrule \multicolumn{9}{c}{Both {\it NuSTAR}\/ detectors: $\chi^{2}$/d.o.f = 871/617 \hspace{0.2cm} $\chi_{\mathrm{red}}^{2}$ = 1.41 \hspace{0.2cm} FPMA-only: $\chi^{2}$/d.o.f = 347/305 \hspace{0.2cm} $\chi_{\mathrm{red}}^{2}$ = 1.14} \\ \bottomrule \addlinespace \multicolumn{9}{c}{Model: \textsc{phabs} $\times$ \textsc{smedge} $\times$ \textsc{pcfabs} $\times$ (\textsc{gauss} + \textsc{relxillcp})} \\ \midrule \textsc{phabs} & \multicolumn{3}{c}{\textsc{smedge}} & \multicolumn{2}{c}{\textsc{pcfabs}} & \multicolumn{3}{c}{\textsc{gauss}} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-6} \cmidrule(lr){7-9} N$_{\mathrm{H}}$ & E & $\tau$ & $\sigma$ & N$_{\mathrm{H}}$ & f$_{\mathrm{cov}}$ & E & $\sigma$ & norm \\ (10$^{22}$ cm$^{-2}$) & (keV) & & (keV) & (10$^{22}$ cm$^{-2}$) & & (keV) & (keV) & ($\times$10$^{-3}$) \\ 2.7$^{+0.6}_{-0.7}$ & 7.4$\pm$0.07 & 0.32$^{+0.05}_{-0.04}$ & 1 & 39$\pm$2 & 0.82$\pm$0.01 & 6.5 & 0.002 & -11$\pm$2 \\ \addlinespace \multicolumn{9}{c}{\textsc{relxillcp}} \\ \cmidrule(lr){1-9} norm & $\Gamma$ & kT$_{e}$ & $\theta_{\mathrm{inc}}$ & R$_{f}$ & R$_{\mathrm{in}}$ & $a$ & log $\xi$ & A$_{\mathrm{Fe}}$\\ ($\times$10$^{-3}$) & & (keV) & (deg) & & & & & (solar) \\ 27$\pm$2 & 1.85$\pm$0.01 & 3.9$\pm$0.2 & 20$^{+2}_{-4}$ & -2 & 2.6$^{+13.8}_{-0.4}$ & -0.998--0.998 & 3.43$\pm$0.03 & 1.8$\pm$0.2 \\ \\ \midrule \multicolumn{9}{c}{Both {\it NuSTAR}\/ detectors: $\chi^{2}$/d.o.f = 873/617 \hspace{0.2cm} $\chi_{\mathrm{red}}^{2}$ = 1.41 \hspace{0.2cm} FPMA-only: $\chi^{2}$/d.o.f = 335/306 \hspace{0.2cm} $\chi_{\mathrm{red}}^{2}$ = 1.09} \\ \bottomrule \end{tabular} \end{table*} Next, we included the soft X-ray (3--10 keV) data in the model fitting. Fig. \ref{initial} (left panel) shows the above models for the whole data range. 
It is clear that models with partially absorbed but soft intrinsic spectra would need further absorption components to bring the spectrum down to match the data in the soft X-rays, while models with a hard intrinsic spectrum need an additional soft component to account for the data. We selected two models to continue fitting the full data range: an absorbed cutoff power-law continuum (A2) and a fully reprocessed thermal Comptonization continuum (R3). The reasoning behind this selection is that models A2 and A3 are likely very similar; we therefore selected the slightly better fitting model A2. Model R3 was selected because it includes an ionization parameter, accounts for relativistic effects on the spectral shape, and gives a slightly better fit quality. For the absorbed cutoff power-law continuum (i.e., model A2), we first added a soft component that we modeled with an absorbed disk blackbody component (\textsc{phabs}$\times$\textsc{diskbb}). However, any model producing a Planckian-type spectrum, for instance, thermal Comptonization or bremsstrahlung, produced equally good fits. The absorbed disk with $kT\sim1.1$ keV and $N_{\mathrm{H}} \sim 6.2 \times 10^{23}$ cm$^{-2}$ can adequately model the soft X-rays, but the soft X-ray slope below 6 keV is not well fit, and large residuals can be found in the 9--10 keV region as well (Fig. \ref{initial}, middle panel). We therefore added another partial covering absorber and a smeared edge to improve the model. The resulting fit is much better in the soft X-rays, but residuals remain in the 5--10 keV energy range, likely due to missing line components. Therefore we added two Gaussian lines to the model: an emission line fixed at 6.4 keV (neutral iron), and an absorption line fixed at 6.5 keV (ionized iron). This resulted in a fit quality of $\chi^{2}_{\mathrm{red}}=1.4$. All the parameters of this model can be found in Table \ref{phenom2}. A very similar but physically more accurate model was fit to all datasets and is discussed in more detail in the following section. For the fully reprocessed thermal Comptonization continuum (i.e., model R3), we first fit the same model again over the whole data range. The soft X-rays are better taken into account by increasing the value of the ionization parameter from log $\xi\sim2$ to log $\xi\sim3$ (Fig. \ref{initial}, right panel). Thus, there is no need to add a soft component. Clear residuals are left in the soft X-ray region below 4 keV and between 7--10 keV. We then added an absorption component (\textsc{phabs}) and a smeared edge (\textsc{smedge}) to bring the model down in these regions. The resulting fit was already much better, with $\chi^{2}_{\mathrm{red}}=2.4$, but some small residuals remained below 4 keV and around $\sim$6.5 keV. Therefore we further added a partial covering absorption component (\textsc{pcfabs}) and an absorption line (\textsc{egauss}) to the model, resulting in a fit quality of $\chi^{2}_{\mathrm{red}}=1.4$. A very similar model was fit to all datasets and is discussed in more detail in the following section. \subsection{X-ray spectra: Physical modeling} \label{modeling} Because of the observational evidence of a high-density environment described above, we considered the possibility that all sources are embedded in a dense medium ($N_{\mathrm{H}} \gtrsim 10^{23}$--$10^{25}$ cm$^{-2}$) that surrounds the X-ray source and causes significant absorption and scattering that affects the X-ray spectrum up to $\sim$30--40 keV.
To facilitate this scenario for spectral fitting, we considered two models: model A, consisting of a partially absorbed thermal Comptonization component reprocessed in a highly ionized plasma (\textsc{xillverCp}, or \textsc{relxillCp} when relativistic effects are important), and model B, consisting of an intrinsic cutoff power-law component (mimicking the thermal Comptonization process) reprocessed in a surrounding neutral uniform-density sphere with polar cutouts of various sizes, resembling a torus with different opening angles (\textsc{borus02}; \citealt{balokovic18}). Model B is similar to the Compton-thick AGN scenario, in which the X-ray source is surrounded by a thick torus and the emission is received from a highly absorbed line-of-sight component and a component reflected from the surface of the torus (to follow the discussion of the resulting torus geometries for each source, we refer to Fig. \ref{drawing}). In model A, the reprocessing occurs in a shell or shells of ionized matter surrounding the X-ray source. Model A corresponds to a scenario in which the spectra are dominated by a reflected or scattered component, and the contribution of the incident continuum is severely diminished. This model consists of one or two partially absorbed reflection models (\textsc{xillverCp} and/or \textsc{relxillCp}), where the incident photons arise from thermal Comptonization (\textsc{nthComp}). Following the indication from the initial modeling in Section \ref{inimod} that the majority of the incident photons are reprocessed in a medium encompassing the incident photon source, we fixed the reflection factor to a negative value in the model (all radiation is reprocessed). The parameters of the two reflection components (if needed) were kept at the same values, except for the ionization parameter and the normalization, which were left free to vary separately for both components. This accounts for changes in the ionization parameter of the scattering component, as is evident in the high-resolution X-ray spectra observed from V404 Cyg, Cyg X-3, and GRS 1915$+$105, which show neutral as well as ionized iron lines. For V404 Cyg and GRS 1915$+$105, we allowed the redshift to vary freely for the scattering components in order to fit the iron line centroids of $\sim$6.3 keV, indicating either a gravitational or a Doppler redshift of the neutral iron line (for Cyg X-3, we fixed this to 1000 km/s, which is approximately the wind speed of the Wolf-Rayet companion, and to zero for V4641 Sgr because the spectral resolution is far lower). In addition, a narrow iron absorption line and a smeared iron edge with a variable absorption energy are needed to successfully fit the first two epochs of V404 Cyg and epoch 1 of GRS 1915$+$105. This might indicate an additional absorbing medium during these epochs. Thus, the total model can be described as \textsc{constant} $\times$ \textsc{phabs} $\times$ \textsc{smedge} $\times$ \textsc{pcfabs} $\times$ (\textsc{xillverCp$_{1}$}/\textsc{relxillCp} + \textsc{xillverCp$_{2}$} + \textsc{gauss}), where \textsc{constant} is the instrument cross-normalization; a minimal fitting sketch is given below. We did not fix the inclination of the scattering components because the reflecting surface might be inclined away from the disk inclination angle, for example, for an equatorial outflow disk wind with an opening angle of several tens of degrees; in the case of spherical obscuration, the reflection angle would correspond to some mean angle over all scattering processes. To reduce the parameter space, we fixed the black hole spin to 0.
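As a concrete illustration, the following is a minimal \textsc{PyXspec} sketch of how a model A-like composition could be set up; it assumes that \textsc{heasoft}'s \textsc{PyXspec} and the \textsc{relxill} local-model package are installed, the file name and parameter values are placeholders rather than the published best-fit results, and the actual fits in this paper were performed in \textsc{isis}.
\begin{verbatim}
# Minimal sketch of a model A-like setup in PyXspec (placeholder
# values; the fits in the paper were done in ISIS, not PyXspec).
from xspec import AllData, AllModels, Fit, Model

AllModels.lmod("relxill")         # load the relxill local models
AllData("fpma.pha")               # placeholder spectrum file

m = Model("constant*phabs*smedge*pcfabs*"
          "(relxillCp + xillverCp + gaussian)")

m.phabs.nH = 0.83                 # interstellar column (10^22 cm^-2)
m.pcfabs.nH = 40.0                # local partial-covering absorber
m.pcfabs.CvrFract = 0.8
m.relxillCp.refl_frac = -1        # return the reprocessed component only
m.relxillCp.refl_frac.frozen = True
m.relxillCp.a = 0.0               # black hole spin fixed to 0 (see text)
m.relxillCp.a.frozen = True
m.gaussian.LineE = 6.5            # narrow ionized-iron line; a negative
# normalization turns the Gaussian into an absorption line
m.gaussian.norm.values = [-1e-3, 1e-5, -1.0, -1.0, 0.0, 0.0]

Fit.statMethod = "chi"
Fit.perform()
\end{verbatim}
Model B would be composed analogously, with the reflection components replaced by redshifted and blueshifted \textsc{borus02} tables and an absorbed cutoff power law for the line-of-sight continuum.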
If the spin is left free, the spectral fits do not constrain the parameter well, which was also found by \citet{walton17} in the case of V404 Cyg, where the spin was estimated as $a>-0.1$. It can also be expected that the reflecting medium lies farther away than the innermost stable circular orbit (ISCO) when reprocessing in the surrounding media is assumed. We therefore fixed the outer radius to 1000 gravitational radii in all sources. For model B, we used the \textsc{borus02} model component \citep{balokovic18}, which, instead of disk reprocessing, allows a variety of geometries from a uniform sphere to torus-like shapes through polar cutouts. The reprocessing torus in \textsc{borus02} is considered to be cold, neutral, and static. To take a moving reprocessor into account, we therefore allowed the redshift of the scattering component to vary. As in model A, we also introduced a highly ionized iron edge component and an ionized iron absorption or emission line component to find acceptable fits to the data. We can expect the reprocessed photons to be either redshifted (behind the source, i.e., moving away from us) or blueshifted (in front of the source, i.e., moving toward us) because they arise in the fast outflow or wind or in the accretion flow. The total model can be described as \textsc{constant1 $\times$ phabs1 $\times$ smedge1 $\times$ (constant2 $\times$ borus02(red) + constant3 $\times$ borus02(blue) + phabs2 $\times$ cabs1 $\times$ cutoffpl + constant4 $\times$ cutoffpl)}. \textsc{constant1} is the instrument cross-normalization, \textsc{constant2} is the relative normalization of the redshifted scattered component, \textsc{constant3} is the relative normalization of the blueshifted scattered component, \textsc{constant4} is the relative normalization of the leaked (unabsorbed) intrinsic spectrum, \textsc{phabs2 $\times$ cabs1} is the line-of-sight absorption including beam scattering with the column densities linked between the components, and \textsc{cutoffpl} represents the intrinsic continuum of the accretion flow (mimicking a thermal Comptonization spectrum). For the scattered component, we fixed the inclination angle according to Table \ref{sourceparam} because the scattering angle is now taken into account in the model. We also fixed the iron abundance to solar. All the parameters are linked between \textsc{borus02(red)} and \textsc{borus02(blue)}, except for the redshift of \textsc{borus02(blue)}, which is set to the negative of the value in \textsc{borus02(red)}. In addition, for the first two epochs of V404 Cyg, an additional soft component is needed, which we modeled with a partially absorbed blackbody component. However, any component resembling a Planckian spectrum could be inserted instead, such as a low-temperature thermal Comptonization component (cf. model A), a disk blackbody component, or a Wien spectrum. The physical interpretation of this component is discussed in Sects. \ref{v404} and \ref{v404_soft}. In addition, as mentioned above, we added an emission line component to the preflare spectrum of GRS 1915$+$105 and to the Cyg X-3 spectrum. In the following, we concentrate on the fitting results of these two models for the individual sources. \subsubsection{Cyg X-3} \label{cygx3} The peculiarity of the hard-state spectrum of Cyg X-3 has been known for more than a decade \citep{hjalmarsdotter04}.
The observed low-energy cutoff has previously been attributed to strong absorption in the stellar wind, pure Compton reflection in a medium that covers the emitting source, or nonthermal Comptonization of a steep electron population by \citet{hjalmarsdotter08}. They preferred the latter scenario: although the former two produce better fits, they require either an unusual accretion state or a very massive black hole as the primary. Later, \citet{zdziarski10} showed that a low-energy cutoff in the X-ray spectrum can be obtained when Compton downscattering is considered in an optically thick plasma cloud, likely arising from the interaction of the accretion disk and the strong stellar wind of the Wolf-Rayet companion. In addition, there is evidence that the primary in Cyg X-3 is a black hole with a relatively low mass \citep{zdziarski13,koljonen17}. Because of the different types of companion stars in V404 Cyg, V4641 Sgr, GRS 1915$+$105, and Cyg X-3, it seems unlikely that the similar X-ray spectra result from the accretion mechanism itself (wind versus Roche-lobe accretion). In addition, the hard X-ray state of Cyg X-3 is a relatively stable state that can last up to several years; it therefore seems unlikely that the source would present a peculiar accretion state for such a long time. Rather, the spectral similarity to V404 Cyg, V4641 Sgr, and GRS 1915$+$105 likely arises from some type of radiation reprocessing. The interstellar absorption toward Cyg X-3 is relatively high as a result of its location in the plane of the Galaxy, and likely because it is located behind two spiral arms and the Cygnus X star-forming region \citep{mccollough16}. We fixed the lower limit of the hydrogen column to 3.5$\times$10$^{22}$ atoms cm$^{-2}$, which is approximately the value found in studies where instruments with a softer X-ray response were used \citep{koljonen18,kallman19}. Because the reprocessing matter in the model is neutral, we added a smeared edge in model B to account for the absorption of highly ionized iron. The energy of the smeared-edge component is about 9 keV, indicating that Fe XXVI (with an ionization energy of 9.2 keV) is the dominant absorber. We found that including a narrow line at an energy of 7.8 keV (either from Fe XXV K$\beta$ and/or Ni XXVII K$\alpha$) improves the fit as well. We also fixed the redshift in both models to 1000 km/s ($z = 0.003$), which is approximately the wind speed of the Wolf-Rayet companion \citep{koljonen17}, although this has only a slight effect on the fit. The resulting parameters of the model A fits can be found in Table \ref{modela2}. The Cyg X-3 hard-state spectrum can be adequately described by a rather soft, thermally Comptonized ($\Gamma\sim2.4$, kT$_{\mathrm{e}}\sim33$ keV) spectrum reprocessed in a highly ionized medium (log $\xi\sim3.8$) and further in a lower-ionization medium (log $\xi\sim2.9$). This probably represents a scattering cloud with decreasing ionization. Similar modeling with two ionized absorbers was successfully used by \citet{kallman19} to fit the X-ray emission lines in {\it Chandra}\/ data, with a medium-ionization component having log $\xi\sim2.9$ and a high-ionization component fixed at log $\xi\sim4.2-5.0$. The emission is further partially absorbed in a relatively dense environment (N$_{\mathrm{H}}\sim4\times10^{23}$ atoms cm$^{-2}$) with a covering fraction of f$_{\mathrm{cov}}\sim0.5$.
This is consistent with the estimates of the wind column of $\sim10^{23}$ atoms cm$^{-2}$ acquired from fitting the X-ray emission lines \citep{kallman19}. Furthermore, the covering fraction can be understood as half of the emission being back-scattered into and passing through the cloud. Model B (parameters can be found in Table \ref{modelb2}) delivers similar results, with the surrounding dense torus essentially a sphere ($\cos(\theta_{\mathrm{tor}})>0.96$, N$_{\mathrm{H,tor}}\sim10^{24}$ atoms cm$^{-2}$; see also Fig. \ref{drawing}). The incident spectrum is very similar to that of model A ($\Gamma\sim2.4$), with a cutoff energy of 28 keV (although if the mechanism is thermal Comptonization, this corresponds to an electron temperature of kT$_{e}\sim$9--14 keV). The line-of-sight component is absorbed by a column of N$_{\mathrm{H}}\sim3\times10^{23}$ atoms cm$^{-2}$ and comprises 65\% of the flux received, while 17\% and 18\% of the flux come from the scattered and the leaked (or direct) component, respectively. The unabsorbed 3--79 keV luminosity is similar for both models and corresponds to $\sim$10\% of the Eddington luminosity\footnote{Because most of the matter accreted from the Wolf-Rayet companion is helium, the Eddington limit is twice the value of the pure hydrogen Eddington limit.} for a 2.5 solar mass black hole, which is fairly high and corresponds to the values found for V404 Cyg. However, because Cyg X-3 orbits the Wolf-Rayet star inside its photosphere, there is always ample matter to accrete and sustain the high luminosity. On the other hand, assuming a 10 solar mass black hole (the upper limit for the allowed mass of the compact object; \citealt{koljonen17}), the luminosity would be $\sim$2--3\% of the Eddington luminosity, which is similar to what has been observed from Cyg X-1 \citep{basak17}. \subsubsection{V404 Cyg} \label{v404} \begin{figure*} \centering \includegraphics[width=\linewidth]{plot_nustar_gti_time.pdf} \caption{{\it NuSTAR}\/ 3--79 keV light curve of V404 Cyg divided into GTIs that total 30000 counts each (numbered). The numbers of GTIs that include count rates exceeding 1050 cts/s are not shown (except for GTIs 13 and 14), and the corresponding data points are shown in gray with a reduced point size. The coloring scheme corresponds to the spectral shape shown in Fig. \ref{groups}, with every other GTI shown in a different hue for clarity.} \label{v404_gti} \end{figure*} \begin{figure} \centering \includegraphics[width=\linewidth]{plot_time_nustar.pdf} \caption{Changes in the spectral shape of V404 Cyg during the GTIs shown in Fig. \ref{v404_gti}. All the spectra have been normalized to the 7 keV flux in the first spectrum to show the difference in the spectral shape.} \label{groups} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{plot_time_transit.pdf} \caption{Spectra of V404 Cyg from GTIs 6--9 in Fig. \ref{v404_gti}, showing the change from a harder (6) to a softer spectrum (7--9) preceding the flaring period. The spectra are not normalized, thus showing the variable regimes below 6 keV and above 10 keV.} \label{transit} \end{figure} V404 Cyg (GS 2023$+$338) is one of the closest XRBs (2.39$\pm$0.14 kpc; \citealt{millerjones09}) with a relatively long orbital period (6.473$\pm$0.001 d; \citealt{casares92}), implying a large accretion disk and a long outburst recurrence time.
Overall, two outbursts have been detected from V404 Cyg with X-ray instruments, in 1989 \citep{kitamoto89,oosterbroek97,zycki99} and 2015 (\citealt{rodriguez15,motta17a,motta17b,sanchez17,walton17,kajava18}; considering the June and December 2015 flaring events to be parts of the same outburst), although in retrospect, additional optical outbursts have been detected in 1938 and 1956 \citep{richter89,wagner91}. The two X-ray outbursts in 1989 and 2015 were both hard-state outbursts with no excursion into a soft X-ray state, although the luminosities reached or exceeded the Eddington luminosity. During the 2015 outbursts, a plethora of X-ray spectral behavior was observed from V404 Cyg, some attributable to absorption events, with intermediate flux densities and X-ray spectra not consistent with a Comptonization model, and some to intrinsic variations, with very high or low flux densities and spectra consistent with a Comptonization model \citep{motta17a,motta17b,sanchez17,kajava18,walton17,hynes19}. In both cases, the X-ray spectra exhibit fast changes from one state to another in a matter of seconds to minutes \citep{motta17a,kajava18,sanchez17,walton17}. We identify three different spectral states in the set of 30000-count spectra during the {\it NuSTAR}\/ pointing (Figs. \ref{v404_gti}, \ref{groups}). At the beginning of the pointing, the source presents spectra similar to the Cyg X-3 hard state, that is, spectra that likely exhibit strong absorption or reprocessing, as evidenced by the curved hard X-ray spectra (epoch 1 in Fig. \ref{v404_gti}, blue spectra in Fig. \ref{groups}). This spectrum was used in the initial model fitting in Section \ref{inimod}. The spectrum further evolves to a softer state, with an increase in the fluxes below 6 keV and a decrease in the fluxes above 10 keV (epoch 2 in Fig. \ref{v404_gti}, yellow spectra in Fig. \ref{groups}). Both epochs are approximately 10 ks long, and the change takes place in a gap between spectra 6 and 7 (Fig. \ref{transit}). During the transition, the energy region between 7--10 keV remains remarkably constant. This might be interpreted as the soft and hard parts of the spectrum arising from two different components that are anticorrelated (spectral pivoting is not enough to explain the whole spectral change). After epochs 1 and 2, V404 Cyg entered a high-count-rate flaring period and presented significantly harder spectra and a variable cutoff energy that is higher than that of the preflare spectra (epochs 3--5 in Fig. \ref{v404_gti}, green spectra in Fig. \ref{groups}). We fit the averaged spectra from epochs 1--5 and the individual GTIs with models A and B. The model parameters, corresponding fluxes, and the fit quality for the averaged spectra can be found in Table \ref{modela1} for model A and in Table \ref{modelb1} for model B, while a selection of the model parameters and fluxes is shown for the individual GTIs in Figs. \ref{v404_params_A} and \ref{v404_params_B} (with a fit quality ranging between $\chi^{2}_{\mathrm{red}}$ = 0.9--1.6 and a mean of $\chi^{2}_{\mathrm{red}}$ = 1.2). In addition, the average spectra and the corresponding model B fits divided into the different model components (both absorbed and unabsorbed) are shown in Fig. \ref{models}.
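The fixed-count segmentation underlying Figs. \ref{v404_gti} and \ref{groups} is straightforward to reproduce; the following is a minimal \textsc{numpy} sketch with simulated placeholder event times (in practice, the times would be read from the cleaned event file, e.g., with \textsc{astropy.io.fits}).
\begin{verbatim}
# Sketch: split an event list into consecutive segments of ~30000
# counts each, as used for the time-resolved spectroscopy of V404 Cyg.
# The event times below are uniform placeholders, not real data.
import numpy as np

def count_segments(event_times, counts_per_segment=30000):
    """Return (start, stop) times of consecutive fixed-count segments."""
    t = np.sort(np.asarray(event_times))
    edges = t[::counts_per_segment]     # every N-th event time
    edges = np.append(edges, t[-1])     # close the final segment
    return list(zip(edges[:-1], edges[1:]))

rng = np.random.default_rng(1)
times = rng.uniform(0.0, 64383.0, size=2_100_000)  # ~70 x 30000 events
segments = count_segments(times)
print(len(segments), "segments; first spans",
      round(segments[0][1] - segments[0][0]), "s")
\end{verbatim}
In the real analysis, each (start, stop) pair would define a GTI for spectral extraction, and segments overlapping the bright flares would be excluded as described above.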
\begin{figure} \centering \includegraphics[width=1.0\linewidth]{plot_time_parameters_rx.pdf} \caption{Model A parameter values for the V404 Cyg spectra (individual GTIs in black, epochs in color).} \label{v404_params_A} \end{figure} Considering the model A fits, the spectra of all epochs can be fit with fully reprocessed thermal Comptonization emission; this model is essentially very similar to the basic model used in \citet[][their Table 3]{walton17}, except that here we used the version of \textsc{relxill} with a spherically symmetric corona and intrinsic emission arising from thermal Comptonization (\textsc{relxillCp}) instead of a lamppost geometry and intrinsic emission modeled as a cutoff power-law spectrum (\textsc{relxilllp}). As shown in Section \ref{inimod}, a lamppost model fit to the epoch 1 hard X-ray spectrum (essentially very similar to the fit with the coronal geometry) would require a very steep incident power-law spectrum. In addition, we used a simple absorption component instead of an \textsc{xstar} reprocessor. We also let the ionization parameter and the redshift of the \textsc{xillverCp} component vary freely instead of fixing them to 1 and 0, respectively, in order to fit the redshifted iron line with energies of $\sim$6.3 keV. The results of the model A fits are very similar to those in \citet{walton17} for epochs 3--5 (the flaring state), as expected, with low intrinsic absorption, an inclination close to 30$^{\circ}$, a photon index of $\sim$1.6 on average, and an ionization parameter of $\sim$1000. Some differences do arise, however, with the iron abundance close to solar in our fits, except for a few GTIs in epoch 4, compared to twice the solar value in \citet{walton17}. On another note, our fits for the epoch 1--2 averaged spectra also show elevated abundances (for the individual GTIs, the abundances were fixed to the averaged value of the epoch because they were not well constrained in the fits). Because the geometry and intrinsic emission were modeled differently, the remaining parameters are more difficult to compare, but \citet{walton17} reported very high values for the reflection factor ($R_{f}\sim$1--3). This might also indicate reprocessing in the surrounding medium, as speculated in this paper, instead of strong gravitational bending and a scenario with a high black hole spin. On the other hand, epochs 1--2 display much softer spectra, as shown in Fig. \ref{groups}, which is reflected in the fits by increased values of the photon index ($\Gamma\sim$ 2) and ionization parameter (log $\xi\sim$ 3.5), and very low values of the electron temperature ($kT_{e}\sim$3--4 keV, corresponding to an optical depth of $\tau\sim10$). In addition, a partially covering absorption component with a high column density ($4\times10^{23}$ cm$^{-2}$, f$_{\mathrm{cov}}\sim$ 0.8), a narrow iron absorption line at 6.5 keV, and a smeared iron edge at 7.4 keV, likely arising from an additional variable absorption component, are needed to fit the spectra successfully with \textsc{relxillCp} (an additional \textsc{xillverCp} component is not needed in these epochs). The physical explanation of the low electron temperature is a challenge for this model. Because the cutoff energy in the epoch 1--2 spectra is low ($<$20 keV), all models with thermal Comptonization are expected to give electron temperatures lower than approximately 6--10 keV (assuming E$_{\mathrm{cut}}\sim$ 2--3 kT$_{e}$). We recall that the electron temperature in the model is given in the frame of the observer.
A very efficient cooling mechanism, such as radiative cooling by soft photons from the strong outburst or reprocessing in an optically thick medium, can therefore thermalize and Compton downscatter the intrinsic emission to lower energies (similar to what has been proposed for Cyg X-3 in \citealt{zdziarski10}, see also Section \ref{cygx3}). The increase in soft X-ray emission between epochs 1 and 2, as mentioned above, is mirrored in the model A parameter evolution as a change in the absorption parameters (a decrease in the column density and covering fraction), an increase in the power-law photon index and normalization, and a decrease in the electron temperature. In addition, there is an increase in R$_{\mathrm{in}}$, log $\xi$, $z$, and inclination. In Section \ref{v404_soft} we discuss the hypothesis that this parameter evolution can arise from a geometry change gearing toward jet ejection. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{plot_time_parameters_borus.pdf} \caption{Model B parameter values for V404 Cyg spectra (individual GTIs in black, epochs in color).} \label{v404_params_B} \end{figure} \begin{figure*} \centering \includegraphics[width=\linewidth]{plot_model_nustar_borus.pdf} \caption{Averaged {\it NuSTAR}\/ FPMA data from V404 Cyg in the preflare (epochs 1--2) and outburst (epochs 3--5) stages as shown in Fig. \ref{groups}, together with absorbed (top row) and intrinsic (bottom row) model B components. The total model (solid red line) consists of a sum of a blackbody component (dashed green line), a cutoff power-law component absorbed in and scattered off from the material in the line of sight (dotted blue line), scattered into the line of sight by the surrounding medium (dot-dashed light blue line), and an unabsorbed or direct component (long-dashed yellow line). The middle panels show the residuals of the models to the data.} \label{models} \end{figure*} In epochs 1--2, model B fits are consistent with the torus completely covering the source (cos($\theta_{\mathrm{tor}}$) pegged to 1; see also Fig. \ref{drawing}), with an average density of the torus of $N_{\mathrm{H,tor}}\sim10^{24}$ atoms cm$^{-2}$ and a similar, if slightly higher (by a factor of two), line-of-sight column density through the torus. The flux from the line-of-sight component dominates the spectrum during these epochs, comprising 75--90\% of the total flux, while the scattered and leaked fluxes contribute only up to 2\% (the remaining flux comes from the blackbody component discussed below). In epochs 3--5, the geometry changes from spherical to more disk-like with cos($\theta_{\mathrm{tor}}$)$\sim$0.55 and an increase by a factor of four in $N_{\mathrm{H,tor}}$, while the line-of-sight column density decreases by a factor of about six (see also Fig. \ref{drawing}). The flux from the line-of-sight component decreases to 25--50\% of the total flux, while the scattered flux increases to 40--70\% of the total flux and causes the majority of the spectral variability in this state. Moreover, the flux from the leaked component increases significantly, up to 10\% of the total flux, indicating that the geometry has changed so that the direct intrinsic emission is partly visible. Similar to the parameter evolution in model A, the intrinsic cutoff power-law spectrum is soft in epochs 1--2, with the power-law photon index $\Gamma$ pegged to 2.6 and cutoff energy values of about 21--26 keV.
The spectrum hardens in epochs 3--5 with $\Gamma\sim2$ and much higher cutoff energies, although overall the intrinsic spectrum is softer than in model A. In contrast to model A, no absorption line is required in the model for epochs 1--2 because the double-horned line profile can be explained by reprocessing in an outflowing medium, which is seen as both redshifted and blueshifted ($z\sim0.05$, corresponding to $\sim$15000 km/s). Previously, \citet{motta17a} showed that the iron line exhibits redshifts and blueshifts with velocities $v<0.1c$, in line with the {\it NuSTAR}\/ values. For epochs 3--5, the resulting line speeds ($\sim$4000 km/s) are comparable to the values measured from the P Cygni profiles of He I $\lambda$5876 (a well-known accretion wind tracer), while for epochs 1--2, they are still well within what can be achieved with super-Eddington accretion (usually on the order of $0.1c-0.2c$; e.g., \citealt{pinto19}). In epochs 3--5, the back-illumination dominates the scattering (i.e., c2 $>$ c3 in Table \ref{modelb1}), so that the scattered emission is seen through material that is more translucent than that of the line of sight. The scattered line therefore exhibits mainly redshift and reduces the observed iron K$\alpha$ line energy to 6.3 keV without producing the blueshifted line. The strong absorption of the intrinsic emission in model B requires an additional component to account for the elevated soft X-ray emission in epochs 1--2. We modeled this as a partially absorbed blackbody component, as discussed in Section \ref{modeling}. The temperature of the blackbody component decreases from 1.1 keV to 0.95 keV, while its normalization (and flux) increases from epoch 1 to epoch 2, indicating an increase in the size of the emitting medium. While the temperature of this component might arise from the hot disk, it does not seem plausible, if the emission comes from an accretion event, that the emitting area would increase and the temperature decrease while the source is gearing toward the flaring state. \citet{zycki99} briefly discussed that the blackbody component needed to provide the soft excess in the 1989 outburst might arise from incident disk photons that are thermalized in multiple scattering processes in the surrounding medium. Thus, a dense medium, whether a disk with a large scale-height or a stellar or accretion disk wind, might then explain both the large column and the soft component, similar to what was discussed for model A above. For both models, the unabsorbed luminosities in the 3--79 keV {\it NuSTAR}\/ band are sub-Eddington for all epochs (2--4\% for model A, and 2--14\% for model B), although because the intrinsic spectrum is steep in epochs 1--2, the Eddington limit would be reached by extrapolating the spectrum down to $\sim$0.2 keV (for model B at least). The luminosity of epochs 1--2 is much higher for model B because the highly absorbed intrinsic emission is included, while model A presents only the flux for the scattered component. When only the scattered flux is taken into account in model B, the resulting luminosities agree with those for model A. \subsubsection{V4641 Sgr} \begin{figure} \centering \includegraphics[width=\linewidth]{plot_time_rxte.pdf} \caption{Changes in the {\it RXTE}/PCA\/ spectral shape of V4641 Sgr during the outburst periods of 1999, 2002, 2003, and 2005. All the spectra have been normalized to the 7 keV flux in the first spectrum to show the difference in spectral shape.
The top panel shows outburst spectra reminiscent of the spectra with high count rate and low absorption of V404 Cyg \citep{walton17}, while the bottom panel shows spectra reminiscent of the flaring spectra with lower count rate and higher absorption of V404 Cyg in Fig. \ref{groups}.} \label{v4641_groups} \end{figure} V4641 Sgr is a very peculiar source, presenting a dynamically confirmed black hole with a high-mass companion \citep{orosz01}, but showing transient outbursts similar to low-mass XRBs. The outbursts of V4641 Sgr can be very short (about a week) and intense, reaching super-Eddington levels, as in the 1999 outburst \citep{revnivtsev02}, or longer but much weaker \citep[e.g.,][]{uemura02,maitra06,munozdarias18}. There is some evidence that the inner accretion disk is misaligned with respect to the orbital plane of the binary ($i \sim 70^{\circ}$; \citealt{orosz01,macdonald14,pahari15}), and V4641 Sgr might instead be a low-inclination source in X-rays and radio ($i \sim 10^{\circ}$; \citealt{hjellming00,orosz01,gallo14}). It has previously been suggested that a {\it Chandra}\/ spectrum observed during an outburst decline is very similar to those observed from Seyfert-2 AGN \citep{morningstar14}. The {\it RXTE}/PCA\/ spectra of V4641 Sgr gathered from the 1999, 2002, 2003, and 2005 outbursts are plotted in Fig. \ref{v4641_groups}. In the top panel, the spectra from 1999 and the latter part of 2003 are very reminiscent of the spectra with low absorption and high count rate of V404 Cyg that were studied in \citet{walton17}, while in the bottom panel, the spectra from the 2002, 2003, and 2005 outbursts are more reminiscent of the flaring spectra with lower count rate of V404 Cyg (Fig. \ref{groups}). The only outburst containing both types of spectra is the 2003 outburst, which began with those presented in the bottom panel (epochs 1 and 2 in \citealt{maitra06}, including the spectra shown in Fig. \ref{spectra}) and continued with those presented in the top panel (epochs 3 and 4 in \citealt{maitra06}). The two models (A and B) were fit to the spectra from two pointings observed during the 2002 (hereafter epoch 1) and 2003 (hereafter epoch 2) outbursts (Fig. \ref{spectra}; the top corresponds to epoch 1, and the bottom to epoch 2), and the resulting parameters are shown in Tables \ref{modela2} and \ref{modelb2}. Both observations were taken during the middle of the outburst peak with similar optical magnitudes ($\sim$11.5 mag in V and R band; \citealt{uemura02,maitra06}) and X-ray count rates. Because the statistics of {\it RXTE}/PCA\/ are much lower than those of {\it NuSTAR}\/, we fixed the inclination to $i = 70^{\circ}$ in both models to reduce the number of free parameters. In the case of model A, a successful fit could be achieved with a single \textsc{relxillCp} model, while for model B, the spectra are dominated by the (back-)scattered component (75--80\% of the total flux) for both epochs. The main difference in the parameters of both models for the two epochs lies in the intrinsic spectrum: the power-law photon index is higher for epoch 1 ($\Gamma_{A} = 2.2$, $\Gamma_{B} = 2.6$) with no high-energy cutoff, while for epoch 2, the power-law photon index is much lower ($\Gamma_{A} = 1.5$, $\Gamma_{B} = 1.6$) and the spectrum has a cutoff at a low energy ($kT_{e} \sim$ 9 keV, E$_{\mathrm{cut}} \sim$ 20 keV). Unlike in other sources, the covering fraction and the line-of-sight absorption column in model B are low for both epochs.
The scattered component dominates the flux in model B, while according to model A, the spectra are consistent with pure scattering in a plasma with a single ionization parameter. In contrast to V404 Cyg epochs 1--2 and GRS 1915+105 epoch 1 (discussed in the next section), there is evidence of a strong jet in the two V4641 Sgr epochs. The 8.5 GHz radio observations of the Very Large Array show flux densities of 80--170 mJy \citep{rupen02} and 550--570 mJy \citep{rupen03} coinciding with epochs 1 and 2, respectively. This means that during both observations, any surrounding matter was likely evacuated by the jet. Because the inclination of the system is likely high, we might observe the X-ray source through a disk wind or geometrically thick accretion flow (the column density of the torus is the highest of all four sources for model B fits; N$_{\mathrm{H,tor}}$ $\sim$ 10$^{25}$ cm$^{-2}$), and most of the emission received is from backscattering (i.e., c2 = 0 in Table \ref{modelb2}; see also Fig. \ref{drawing}). The luminosities of the two V4641 Sgr epochs are much lower ($\sim$5$\times$10$^{36}$ erg/s, corresponding to $\sim$0.5\% of the Eddington luminosity) than the luminosities for the other sources, although the observations are from the peak of the outbursts. The 2002 and 2003 outbursts were weaker than the much more luminous outburst in 1999, where the peak luminosity was at or greater than the Eddington luminosity \citep{hjellming00,revnivtsev02}. These weak outbursts do not have any predictable periodicity, but they seem to occur roughly at intervals of 500--600 days \citep{negoro18}. On the other hand, an optical counterpart as bright as in the 2002 and 2003 outbursts has not been detected for any other weak outbursts since the 1999 outburst, which marks them as different and probably means that they included enhanced reprocessing of the X-ray emission in the accretion disk. \subsubsection{GRS 1915$+$105} GRS 1915$+$105 is one of the brightest XRBs in our Galaxy in outburst because it has the longest orbital period known among low-mass XRBs \citep[33.9 days;][]{steeghs13} and thus the largest accretion disk size because of the largest tidal truncation radius. The huge mass reservoir has already lasted three decades, powering the outburst until the drop in the X-ray flux in 2018. We fit all three {\it NuSTAR}\/ observations (labeled epochs 1--3) from this anomalous state with models A and B. The resulting model parameters are shown in Tables \ref{modela2} and \ref{modelb2}. With model A, epoch 1 is best fit with a single \textsc{relxillCp} model, while epochs 2 and 3 can be fit with two \textsc{xillverCp} models, one corresponding to a plasma with a higher (log $\xi \sim$ 3.4) and the other to a plasma with a lower ionization parameter (log $\xi \sim$ 2). Epoch 1 includes a prominent narrow (unresolved at the detector resolution) absorption line at 6.56 keV that might be the Fe XXV K$\alpha$ line, although it is redshifted by 6000--7000 km/s, or the Fe K$\alpha$ line blueshifted by the same amount. In addition, an edge at 7.4 keV is needed to fit the spectra adequately. In the case of blueshifted neutral iron absorption, this implies a velocity of the absorbing material of $\sim$13000 km/s. However, the redshift needed to model the spectrum is even higher, $z\sim$0.12, which agrees with what is observed from ultraluminous X-ray sources (ULXs). For epochs 2--3, the redshift required is much lower ($\sim$ 3000 km/s).
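These velocity estimates follow from the nonrelativistic Doppler relation applied to the fitted line and edge energies; a worked sketch, assuming rest-frame energies of 6.70 keV for Fe XXV K$\alpha$ and 7.11 keV for the neutral Fe K edge:
\begin{verbatim}
# Nonrelativistic Doppler velocities implied by the fitted line and
# edge energies (the rest-frame energies are assumed as noted above).
C_KMS = 299792.458

def velocity_kms(e_rest_keV, e_obs_keV):
    # positive = redshift (recession), negative = blueshift
    return (e_rest_keV / e_obs_keV - 1.0) * C_KMS

print(velocity_kms(6.70, 6.56))  # Fe XXV line:  ~ +6400 km/s
print(velocity_kms(7.11, 7.40))  # neutral edge: ~ -12000 km/s,
                                 # cf. the ~13000 km/s quoted above
\end{verbatim}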
In contrast to the decreasing redshift, the iron abundance increases from solar in epoch 1 to around twice the solar value in later epochs. The incident cutoff power-law spectrum changes from epoch 1 to epochs 2--3, displaying a decrease in the power-law photon index from 1.9 to 1.6, with the latter two also showing a cutoff at 14 keV, while in epoch 1, the cutoff is unconstrained ($\gtrsim$160 keV, when left free to vary). The inclination is only constrained for epoch 1, corresponding to 60--80 degrees, while for epochs 2 and 3, we fixed it to 70 degrees. The models are absorbed with N$_{\mathrm{H}}$ $\sim$ 5$\times$10$^{22}$ atoms cm$^{-2}$, which is comparable with the interstellar value of N$_{\mathrm{H}}$ $\sim$ 3.5$\times$10$^{22}$ atoms cm$^{-2}$. Additional partial absorption is needed for epochs 2 and 3, with N$_{\mathrm{H}}$ $\sim$ 5$\times$10$^{23}$ atoms cm$^{-2}$ and a covering fraction of $\sim$0.5. With model B, epoch 1 is consistent with the torus covering the source completely, while cos($\theta_{\mathrm{tor}}$) decreases to 0.8 and 0.7 in epochs 2 and 3, respectively (Fig. \ref{drawing}). The density of the torus remains the same in all epochs ($N_{\mathrm{H,tor}}\sim2.5\times10^{24}$ atoms cm$^{-2}$). The line-of-sight component contributes about equally in all epochs (52--59\%) with a similar column density ($N_{\mathrm{H,los}}\sim2.5-5\times10^{23}$ atoms cm$^{-2}$). The scattered flux increases from 17\% to 32--35\%, while the leaked flux decreases from 31\% to 9--10\% when transitioning from epoch 1 to epochs 2 and 3. The scattering component is dominated by the redshifted scattering, with epoch 1 showing high wind speeds of 0.057$c$ ($\sim$17000 km/s, which is more in line with the velocity of the absorption line if it is due to a blueshifted neutral iron line) that reduce to much lower values (600--2700 km/s) in epochs 2 and 3. The incident spectrum in epoch 1 is a steep power law ($\Gamma\sim2.3$) similar to V404 Cyg epochs 1, 2, and 5, and Cyg X-3 spectra. In epochs 2 and 3, the power-law index of the incident spectrum decreases to 1.6--1.8 and the spectrum exhibits a cutoff at 21--24 keV. Interestingly, the unabsorbed flux remains similar in all epochs and corresponds to 1--2\% of the Eddington flux. The average spectra and the corresponding model B fits divided into different model components (both absorbed and unabsorbed) are shown in Fig. \ref{1915_params_B}. The inclination is fairly well constrained in epochs 2 and 3 and corresponds to 40--60 degrees. For epoch 1, it is not well-constrained in the fit, and we froze it to 53 degrees (cos($\theta_{\mathrm{inc}}$)=0.6). Based on measurements of the jet inclination of the system, \citet{reid14} estimated the disk inclination angle as 60$^{\circ}\pm$5$^{\circ}$, which is consistent with the values derived here. While their parameters differ slightly, the two models show a similar parameter evolution, consistent with a scenario of obscured emission through a fast (spherical) outflowing wind in epoch 1, which flattens and decelerates in later epochs (Fig. \ref{drawing}). Epoch 1, which is observed immediately after the sudden decrease of the hard X-ray emission, shows higher wind velocities but neutral absorbing material, higher incident power-law photon indices, and no spectral cutoffs. This is consistent with a scenario where the inner accretion flow is obscured by matter that is first seen as a scatterer and absorber. Later, it is mostly seen in reflection, and the regions with higher ionization are exposed.
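The fitted covering parameters translate directly into torus half-opening angles. A short sketch of the conversion, assuming that cos($\theta_{\mathrm{tor}}$) equals the covering factor of the torus, with $\theta_{\mathrm{tor}}$ measured from the polar axis:
\begin{verbatim}
# Torus half-opening angles implied by the fitted cos(theta_tor)
# values, assuming cos(theta_tor) equals the covering factor.
import math

for c in (1.0, 0.8, 0.7, 0.55):   # GRS 1915+105 and V404 Cyg values
    print(f"cos(theta_tor) = {c:4.2f}: "
          f"opening angle = {math.degrees(math.acos(c)):4.1f} deg")
# 1.0 -> 0 deg (fully covered); 0.55 -> 56.6 deg (disk-like)
\end{verbatim}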
\begin{figure} \centering \includegraphics[width=\linewidth]{plot_model_nustar_borus_1915.pdf} \caption{Averaged {\it NuSTAR}\/ FPMA data from GRS 1915$+$105 from the three epochs together with absorbed (top row) and intrinsic (bottom row) model B components. The total model (solid red line) consists of the sum of a cutoff power-law component absorbed in and scattered off from the material in the line of sight (dotted blue line), scattered into the line of sight by the surrounding medium (dot-dashed light blue line), and an unabsorbed or direct component (long-dashed yellow line). The middle panels show the residuals of the models to the data.} \label{1915_params_B} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{drawing.pdf} \caption{Geometry of the torus-shaped obscuring matter around the intrinsic X-ray source according to the parameter $\theta_{\mathrm{tor}}$ in model B fits for all sources, in addition to a graphical representation of the column densities (gray, darker for a higher column density), the wind speeds (arrows), and the lines of sight (dashed lines). The thick dashed line around V404 Cyg epochs 1 and 2 corresponds to the partially absorbed blackbody component. The model parameters for each source can be found in Tables \ref{modelb1} and \ref{modelb2}.} \label{drawing} \end{figure} \subsection{X-ray timing} \label{timing} \begin{figure*} \centering \includegraphics[width=\linewidth]{plot_v404_pds.pdf} \caption{Cospectra of V404 Cyg, Cyg X-3, and GRS 1915$+$105 for the {\it NuSTAR}\/ epochs. The soft- and hard-band cospectra are shown separately when they display any differences. The best-fit single or broken power-law model is also plotted for all data. Additional Lorentzian functions are needed for the QPO and harmonics in the GRS 1915$+$105 epoch 1 data.} \label{fig:cospectra} \end{figure*} While we did not concentrate on the X-ray timing properties of the sources in detail, in the following we provide a quick analysis of the {\it NuSTAR}\/ X-ray cospectra (a proxy for the PSD). The cospectra for the {\it NuSTAR}\/ data for all sources are shown in Fig. \ref{fig:cospectra}. As described in Section \ref{observations}, we extracted two cospectra for the soft X-ray band (3--10 keV) and the hard X-ray band (10--79 keV), and when they did not present any differences from each other, we extracted the cospectra from the full {\it NuSTAR}\/ range 3--79 keV. We can assume that the reprocessing or scattering in the obscuring matter smears out most of the high-frequency timing information of the intrinsic emission. For Cyg X-3, it has been shown that the PSD is close to a power law with an index of --2.0, independent of the accretion state of the source \citep{axelsson09,koljonen11}, and this has been speculated to be due to a suppression of the high-frequency variations by scattering in the stellar wind surrounding the X-ray source, mimicking a red noise process \citep{koljonen18}. Likewise, for the {\it NuSTAR}\/ pointing considered here, we obtain a power-law cospectrum with an index of --1.9$\pm$0.1. For V404 Cyg, a timing analysis has been performed on a subset of the {\it NuSTAR}\/ data used in this paper in \citet{gandhi17}. The authors found that the X-ray cospectrum is consistent with a power-law spectrum with an index of --1.6; this is not as steep as in Cyg X-3 or V4641 Sgr, but steeper than the flicker noise that is typically observed from hard states of XRBs, indicating the suppression of high-frequency variability.
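Cospectra of the kind shown in Fig. \ref{fig:cospectra} can be computed, for example, with the stingray timing package (not necessarily the tool used for the published analysis); a minimal sketch follows, with synthetic light curves standing in for the screened FPMA and FPMB data. Because the Poisson noise of the two detectors is uncorrelated, it averages out of the real part of the cross spectrum.
\begin{verbatim}
# Cospectrum (real part of the FPMA x FPMB cross spectrum) with
# stingray; Poisson light curves stand in for the screened data.
import numpy as np
from stingray import Lightcurve, AveragedCrossspectrum

dt = 0.01                                  # s, time resolution
t = np.arange(0.0, 1024.0, dt)
lc_a = Lightcurve(t, np.random.poisson(20, t.size))  # stand-in FPMA
lc_b = Lightcurve(t, np.random.poisson(20, t.size))  # stand-in FPMB

cs = AveragedCrossspectrum(lc_a, lc_b, segment_size=256.0, norm="frac")
cospectrum = cs.power.real                 # detector noise averages out
print(cs.freq[:3], cospectrum[:3])
\end{verbatim}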
We studied here the epoch-by-epoch variations in the cospectrum and found that a single or broken power law fits all the data sufficiently well. In all epochs, the slope of the high-frequency spectra agrees roughly with an index of $\sim$--1.6 (epoch 1: --2.0$^{+0.4}_{-0.2}$/--1.41$\pm$0.06, epoch 2: --1.58$\pm$0.08/--1.41$^{+0.08}_{-0.03}$, epoch 3: --1.6$^{+0.2}_{-0.1}$, epoch 4: --1.7$\pm$0.1, epoch 5: --1.4$\pm$0.1; when two numbers are given, they correspond to the 3--10 and 10--79 keV band), while there is some evidence of a break to flatter indices at lower frequencies. Interestingly, the epoch 2 cospectrum shows a diminished rms in the soft X-ray band compared to the hard X-ray band, which might indicate that the soft spectral component dilutes the rms. In addition, the spectral break shifts to lower frequencies from 0.2 Hz to 0.06 Hz, and if this is related to the size of the varying soft component, it indicates an increasing emitting area that is consistent with the blackbody parameter evolution of model B fits. GRS 1915$+$105 is famous for its complex X-ray variability. The PSDs typically consist of a band-limited noise component with one or more peaks, indicating quasi-periodic oscillations \citep[QPOs; e.g.][]{morgan97}. The cospectra from epochs 1--3 are consistent with power-law noise (epoch 1: --1.21$\pm$0.02/--0.91$\pm$0.05, epoch 2: --1.02$\pm$0.06, and epoch 3: --1.96$\pm$0.08; when two numbers are given, these correspond to the 3--10 and 10--79 keV band), with the epoch 1 cospectrum also showing a low-frequency QPO with harmonics at 3.3 Hz (at least for the soft X-ray band). Clearly, the epoch 1 cospectrum differs from epochs 2--3, which do not present power above 0.1 Hz, and we likely have a more direct view of the accretion flow in epoch 1. In addition, the soft X-ray band where the incident (leaked) spectral component dominates (Fig. \ref{1915_params_B}) is more variable than the hard X-ray band. For the model B fit, the leaked component presents the highest luminosity fraction in epoch 1, comprising 31\% of the total flux, and in this pointing, we might be observing a patchy outflow. For epochs 2--3, the amount of flux observed from the leaked component is diminished and the total flux is dominated by the line-of-sight and scattered components, which accounts for the loss of high-frequency power. The low-frequency power varies from flicker noise in epochs 1--2 to red noise in epoch 3. For V4641 Sgr, \citet{maitra06} studied the X-ray timing properties during the 2003 outburst and found a red-noise-dominated PSD below 1 Hz and Poisson noise above it. Slightly more structure was found in the much brighter 1999 outburst, with the PSD showing a broken power-law shape with indices of $\sim$--1 and $\sim$--2 below and above 5 Hz, respectively \citep{wijnands00}. However, no other structure, such as QPOs, was found in the PSD out to 100 Hz, and again the higher frequency spectrum is consistent with a red-noise process. For the {\it RXTE}\/ pointings of V4641 Sgr considered here, the PSDs are consistent with pure Poisson noise in the case of the 2002 data, and with red noise below 0.03 Hz and Poisson noise above it for the 2003 data.
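The single and broken power-law fits quoted above can be reproduced with standard least-squares tools; a sketch with scipy, applied here to synthetic data with an assumed break at 0.1 Hz:
\begin{verbatim}
# Broken power-law fit to a binned cospectrum (synthetic example
# with an assumed break frequency of 0.1 Hz).
import numpy as np
from scipy.optimize import curve_fit

def broken_pl(f, norm, f_break, a1, a2):
    # index a1 below the break, a2 above; continuous at f_break
    return norm * np.where(f < f_break,
                           (f / f_break) ** a1, (f / f_break) ** a2)

rng = np.random.default_rng(1)
freq = np.logspace(-2, 1, 40)
power = broken_pl(freq, 1e-2, 0.1, -1.0, -1.6) * rng.normal(1, 0.05, 40)
popt, pcov = curve_fit(broken_pl, freq, power,
                       p0=[1e-2, 0.2, -1.2, -1.4], sigma=0.05 * power)
print("break = %.2f Hz, indices = %.2f / %.2f" % tuple(popt[1:]))
\end{verbatim}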
\section{Discussion} \label{discussion} We have shown that the X-ray spectra of V404 Cyg, Cyg X-3, GRS 1915$+$105, and V4641 Sgr share similarities at certain evolutionary times that correspond to low X-ray flux periods during outburst events (considering that the persistent source Cyg X-3 is always `on', that is, in outburst, and that V4641 Sgr exhibits low-luminosity outbursts). We modeled these spectra successfully with two models consisting of a fully reprocessed spectral component and/or a heavily absorbed spectral component, with the intrinsic spectra arising from a thermal Comptonization process. It is well known that Cyg X-3 orbits its Wolf-Rayet companion star inside a high-density stellar wind and that this wind can strongly affect the X-ray spectra. Based on the spectral similarity, we can therefore presume that similar surroundings affect the X-ray spectra of V404 Cyg and V4641 Sgr during the outbursts, and the recent anomalous accretion state of GRS 1915$+$105. Because the companion stars of V404 Cyg, V4641 Sgr, and GRS 1915$+$105 are not expected to present high-density stellar winds, we attribute this medium to either a large scale-height accretion flow (possibly due to a super-Eddington accretion rate) or an optically thick equatorial outflow or envelope, arising either from the radiation pressure of the intrinsic (super-Eddington) emission of the accretion flow or from the base of the jet. Both models (A and B) indicate a change in geometry in the system in the evolution of V404 Cyg and GRS 1915$+$105. Epochs 1--2 for V404 Cyg and epoch 1 for GRS 1915$+$105 are consistent with the X-ray source being obscured in an outflowing (spherical) plasma cloud that transforms into a more disk-like geometry in subsequent epochs (Fig. \ref{drawing}). This change also coincides with the start of the radio activity and X-ray flaring. The two V4641 Sgr epochs are consistent with a disk-like geometry, and in both cases, strong radio emission was detected at the same time. The geometry change of the obscuring component therefore seems to be linked to the radio evolution. Interestingly, a very similar evolution took place in the 1999 outburst of V4641 Sgr \citep{revnivtsev02}, with the source luminosity dropping by an order of magnitude followed by optical and subsequently radio emission and a change in the X-ray spectrum from softer (epoch 1-type) to harder (epoch 2-type). In V404 Cyg and GRS 1915$+$105, the intrinsic spectra first resemble those of an intermediate state of XRBs, with power-law photon indices $\Gamma\sim 2.0-2.6$, and then change to lower values, $\Gamma\sim 1.4-2.0$, more typical of an XRB hard-state spectrum. However, their accretion history is very different. V404 Cyg epochs 1--2 are the softest state that the source goes through during the outburst, while for GRS 1915$+$105, epochs 1--3 are the hardest so far observed from the outburst (epoch 3 being much harder than epochs 1--2). Therefore, it does not seem likely that a single accretion state would explain the similar-looking spectra. Rather, the similarity has to do with the reprocessing of the intrinsic emission. In the following, we discuss this issue further. \subsection{Was V404 Cyg in a ``soft state'' during epochs 1--2?} \label{v404_soft} The outburst spectra of V404 Cyg do not show a soft blackbody emission component, except for the 1989 outburst, when a 0.3 keV disk component was seen; see \citet{zycki99}.
We discussed in Section \ref{v404} that the parameter evolution of the blackbody component is not consistent with arising directly from the accretion disk. Instead, the soft component could arise from incident thermal (disk) photons scattered multiple times in the surrounding changing medium. On the other hand, the fits with the reflection model in Section \ref{v404} showed that the soft component could also be modeled with an increasing ionization parameter that would be consistent with the implied very high intrinsic luminosity of the source in this state. In either case, it is unlikely that the soft component needed to model epochs 1 and 2 in V404 Cyg comes from an accretion disk. In addition, the V404 Cyg epoch 1 spectrum corresponds to the hardest X-ray state of Cyg X-3 \citep[e.g.,][]{koljonen10}. Cyg X-3 is a persistent wind-accreting source and thus always accreting at (fairly) similar mass accretion rates. The source likely stays in an intermediate state all the time, with an average power-law spectral index $\Gamma\geq2.0$. Thus, the spectral variation is at least partly due to changes in the surrounding medium. It has been suggested that the jet pressure can play a role in reducing the wind density in the line of sight, producing drastic variability in the spectral evolution of Cyg X-3 \citep{koljonen18}. However, in Cyg X-3, there is further evidence of a very soft state (hypersoft state), where the X-ray spectrum is dominated by a partially absorbed blackbody component with a very weak and flat hard X-ray tail \citep{koljonen18}. This might be (partly) similar to what is observed in V404 Cyg epoch 2, with the increased soft X-ray emission. The soft state in Cyg X-3 always precedes jet ejection episodes \citep{koljonen10}, which indicates that this might be the case for V404 Cyg as well. \citet{koljonen18} argued that in the hypersoft state, the jet turns off, allowing the stellar wind from the Wolf-Rayet companion to fill in the cavity created by the jet. This increases the density of the wind close to the X-ray source, providing a medium where multiple scattering can take place. When the jet is turned on later, it encounters a dense medium where efficient energy dissipation can take place to boost the jet emission to Jy levels. \citet{gandhi17} showed the multiwavelength evolution of V404 Cyg during the {\it NuSTAR}\/ observation (their Fig. 1 and supplementary Fig. 4; their epoch 1 corresponds to our GTIs 10--14). They showed that during the change from the preflare state to the flaring state, there is a brightening in the 15 GHz flux by at least a factor of five and a change in the radio spectral index from negative values to above zero, indicating the start of the jet ejection. In model A, during epoch 2, there is a rise in the parameter evolution of $R_{\mathrm{in}}$, $\Gamma$, log $\xi$, $z$, and inclination (Fig. \ref{v404_params_A}). This indicates that the geometry of the reprocessor changes and moves farther away (the increase in inclination and $R_{\mathrm{in}}$) with higher speeds (the increase in $z$), most likely caused by increased radiation pressure (the increase in log $\xi$) that also cools the electrons in the corona (the increase in $\Gamma$, and low values of kT$_{e}$). In model B, a similar evolution can be seen in $z$ and in the blackbody flux (Fig. \ref{v404_params_B}).
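For a blackbody, the observed flux is proportional to the emitting radius squared times the temperature to the fourth power ($F\propto R^{2}T^{4}$). A short sketch of the implied change in emitter size for the fitted temperature decrease, with the epoch-to-epoch flux ratio treated as an assumed input:
\begin{verbatim}
# Relative blackbody emitter size implied by F ~ R^2 T^4, using the
# fitted temperatures; the flux ratios here are assumed, not fitted.
def radius_ratio(flux_ratio, t1_keV, t2_keV):
    return flux_ratio ** 0.5 * (t1_keV / t2_keV) ** 2

for fr in (1.0, 2.0):    # hypothetical epoch 2 / epoch 1 flux ratios
    print(f"F2/F1 = {fr}: R2/R1 = {radius_ratio(fr, 1.10, 0.95):.2f}")
# even at constant flux the emitter grows by ~34%; an increasing
# flux makes the growth larger still
\end{verbatim}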
This scaling, together with the decrease in the blackbody temperature from epoch 1 to epoch 2 while the normalization (and flux) increases (Table \ref{modelb1}), implies an increase in the size of the blackbody emitter in addition to the increase in the speed of the scattering medium (even more than for model A). This means that what we may be seeing here is a jet ejection event following an accretion event that starts by pushing the reprocessing medium farther out, at the same time changing its geometry from sphere-like to disk-like (Fig. \ref{drawing}) and exposing the inner accretion disk that is seen in the following high-luminosity flaring period \citep{walton17,gandhi17}. A similar spectral sequence leading up to a high-luminosity flare can be seen as well (see \citealt{walton17}; their Fig. 14), beginning from the $\Gamma\sim2$ low-cutoff spectrum with high absorption and evolving to a harder spectrum with a higher cutoff and low absorption (see \citealt{walton17}; their Fig. 15), but with a much faster evolutionary time (tens of seconds compared to $\sim$10000 seconds, as shown in Fig. 2). The intrinsic luminosity in epochs 1--2 is very high in model B, with an Eddington fraction of 0.2--0.3 in the 3--79 keV band, and when the model is extrapolated to lower energies, the model luminosity reaches the Eddington limit roughly in the 0.2--100 keV band. In the case of model A, we see only the scattered emission, while the intrinsic emission would be completely absorbed or scattered, and thus there is no reliable way to estimate the intrinsic emission. Therefore we assume that model B gives a better indication of the intrinsic luminosity. Based on the evolution of the model parameters outlined above, the super-Eddington luminosities, and the radio evolution, the likeliest scenario for V404 Cyg epochs 1--2 is a super-Eddington accretion rate event that resulted in a large scale-height accretion flow and a powerful optically thick accretion disk wind that was launched with mildly relativistic speeds and led to a jet ejection event. \subsection{Is GRS 1915$+$105 in the hard state?} \begin{figure} \centering \includegraphics[width=\linewidth]{plot_asm.pdf} \caption{ 2--20 keV daily light curve of GRS 1915$+$105 from \textit{The Monitor of All-sky X-ray Image}/Gas Slit Camera ({\it MAXI}/GSC) since February 2019, with {\it NuSTAR}\/ observations marked as arrows and radio flare detections \citep{motta19,trushkin19,koljonen19} as vertical dotted lines. The data are colored and marked according to the spectral hardness shown in Fig. \ref{1915_hid}. } \label{1915_lc} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{hid_1915.pdf} \caption{{\it MAXI}/GSC\/ hardness-intensity diagram of GRS 1915$+$105 from daily monitoring observations since August 2009. The blue data points (dark triangles and light blue squares) indicate the recent low-luminosity state with increased spectral hardness. The light blue squares correspond to the anomalous state with occasional strong X-ray flares (red diamonds) and highly variable radio emission. The numbered green boxes correspond to the {\it NuSTAR}\/ epochs.} \label{1915_hid} \end{figure} In July 2018, GRS 1915$+$105 entered an extended, unusually low-flux X-ray phase followed by a change to a state with even lower average X-ray fluxes that had not been seen before during the 27-year-long outburst (Fig. \ref{1915_lc}; light blue squares) but that presented renewed flaring activity in radio as well as X-rays (Fig.
\ref{1915_lc}; red diamonds and dotted lines). After the change to this peculiar state, radio monitoring data showed significant radio flaring \citep{motta19} that started approximately at the time of the renewed X-ray activity. The radio flaring has since continued and is still ongoing at the time of writing \citep{trushkin20}. While this radio behavior is consistent with what has frequently been observed in the past \citep[e.g.,][]{fender99,punsly13}, this is the first time that significant radio activity does not seem to be associated with a strong X-ray counterpart. It is therefore not entirely clear whether the outburst of GRS 1915$+$105 is nearing its end or if the source is just highly obscured. The prolonged low-luminosity state since July 2018 does indicate that continuous super-Eddington accretion and Compton-thick wind production is unlikely. There seems to be no X-ray flare during the time of the flux drop (only later), which means that super-Eddington accretion and subsequent mass expulsion probably did not take place and cover the source. In addition, the X-ray spectrum is harder than ever observed from the source (blue points in Fig. \ref{1915_hid}), indicating that the source might have reached a regular hard state on its way toward quiescence. On the other hand, the radio emission from the source has become more variable and presents flux densities that are among the strongest flares ever observed from GRS 1915$+$105 \citep{trushkin20}. In addition, sporadic X-ray flares display softer spectra and reach X-ray hardness values similar to those before the anomalous low-luminosity state (red points in Figs. \ref{1915_lc} and \ref{1915_hid}), which at least in principle is consistent with varying absorption. While more detailed studies of the radio to X-ray correlation are needed, it seems that the X-ray emission remains at a rather constant level (Fig. \ref{1915_lc}), while large-amplitude variations are evident in the radio monitoring data \citep{trushkin19,trushkin20}. The enhanced radio luminosity and variability can arise from additional energy dissipation either in merging shocks in the jet (as has been suggested as an explanation of the radio behavior of GRS 1915$+$105 in \citealt{vadawale03}) or from the jet interacting with expelled matter from the accretion flow. The unabsorbed luminosities of model A fits for V404 Cyg and GRS 1915$+$105 in the preflare state are similar: $\sim$2$\times$10$^{37}$ erg/s, indicating that a similar fraction of reflected emission is received from both sources. We have speculated that epochs 1--2 of V404 Cyg might be due to super-Eddington accretion, which might then be the case for GRS 1915$+$105 as well if the intrinsic emission is completely obscured to us. However, model B fits, where the absorbed line-of-sight component is taken into account, give an unabsorbed luminosity for GRS 1915$+$105 similar to that of the model A fits, while for V404 Cyg, it is an order of magnitude higher. The resulting Eddington fraction of 1--2\% for both models of GRS 1915$+$105 is consistent with a regular XRB hard-state luminosity. In addition, the cospectrum for GRS 1915$+$105 in epoch 1 is markedly different from the later epochs (or the cospectra of V404 Cyg and Cyg X-3; Fig. \ref{fig:cospectra}), displaying a low-frequency QPO and spectral power at least up to 10 Hz, while in others, there appears to be no power above 0.1 Hz, indicating that we have at least a partial view of the accretion flow in epoch 1.
This is also consistent with the amount of the leaked or intrinsic emission received in the model B fits. It therefore seems more likely that GRS 1915$+$105 has reached a genuine hard X-ray state. However, it is clear from the X-ray spectra and the model fits above that the source is (partly) obscured in all {\it NuSTAR}\/ epochs and likely continuously after MJD 58610. This may indicate that the accretion flow has changed from geometrically thin to thick, and that due to the high inclination, it blocks the view of the central parts of the flow. \subsection{Implications for other sources} We have argued that the intrinsic X-ray emission from V404 Cyg, Cyg X-3, V4641 Sgr, and GRS 1915$+$105 is significantly affected by the surrounding medium through scattering processes. While it is clear that these four sources are unique among the XRB population with large accretion disks and high inclination angles, there is evidence that similar scattering takes place in other sources as well: \textbf{i)} Strong and variable absorption has been found in the X-ray spectrum of Swift J1858.6$-$0814 during X-ray flaring that bears similarity to the flaring spectra from V404 Cyg and V4641 Sgr \citep{hare20}. In addition, P Cygni profiles were observed in the optical spectra, indicating a high-velocity wind \citep{munozdarias19}. \textbf{ii)} The well-known accretion disk wind source GRO J1655--40 displays X-ray spectra similar to those of Cyg X-3 \citep{uttley15}. \textbf{iii)} The hard X-ray emission from SS 433 has been speculated to be heavily reprocessed intrinsic X-ray emission from a supercritical accretion flow viewed through the optically thick accretion wind cone \citep{middleton18}. \textbf{iv)} Other super-Eddington accretors, such as extragalactic ULXs, typically have soft X-ray spectra that might be similar to the epoch 2 spectrum of V404 Cyg. \textbf{v)} Other high-mass XRBs, such as Cyg X-1, might exhibit reprocessing in the companion wind, and the dense environment around XRB jets can affect the properties of the jet emission through shocking, thus increasing the radiative efficiency and enhancing the radio luminosity. Recently, puzzling observations of a low-luminosity soft state ($<$0.01 L/L$_{\mathrm{Edd}}$) in XRBs have been reported, including V4641 Sgr \citep{pahari15}, Swift J1753.5--0127 \citep{shaw16}, and 4U 1630--47 \citep{tomsick14}. \citet{pahari15} showed that during a low-luminosity outburst or a renewed X-ray activity period in January--February 2014 \citep{tachibana14,uemura14}, the X-ray spectrum presented a soft state with reflection features including the ionized iron line and iron edges similar to the spectra presented in this paper. Swift J1753.5--0127 and 4U 1630--47 exhibited a low-luminosity soft state at the end of their outbursts in March--May 2015 and July 2010, respectively. With the low luminosity, it is difficult to attribute this component to a regular soft-state disk. The system parameters for these sources are not all well constrained, but there is some evidence for a high orbital inclination from the optical variability of Swift J1753.5--0127 \citep{neustroev14} and dipping phenomena in 4U 1630--47 \citep{kuulkers98,tomsick98}, while the orbital inclination of V4641 Sgr is fairly well constrained at 72$^{\circ}\pm$4$^{\circ}$ \citep{macdonald14}.
When we assume a high inclination, a significant change in the scale height of the accretion flow, for example, by disk warping or precession, might intercept our line of sight, resulting in strong absorption and reprocessing of the intrinsic emission that could thermalize the intrinsically hard spectrum. A misalignment between the black hole and accretion disk spins has been suspected for V4641 Sgr \citep{maccarone02,gallo14}. This might cause disk precession and perhaps explain the semi-regular interval of the weak outbursts \citep{negoro18}. \section{Conclusions} \label{conclusions} We have studied the {\it NuSTAR}\/ and {\it RXTE}/PCA\/ spectra of four unique XRBs, V404 Cyg, Cyg X-3, V4641 Sgr, and GRS 1915$+$105, which are known to present complex spectral evolution distinct from other XRBs. We showed that all sources have similar X-ray spectra at certain times that can be modeled by assuming that a Compton-thick medium surrounds the central X-ray source. This assumption is further supported by the fact that Cyg X-3 orbits its Wolf-Rayet companion star inside a high-density stellar wind that strongly affects the X-ray spectra. While the companion stars of V404 Cyg, V4641 Sgr, and GRS 1915$+$105 are not expected to present a high-density stellar wind, we attribute the obscuring medium to either a large scale-height accretion flow or to an optically thick equatorial outflow or envelope. The results from fitting two physically motivated scattering models suggest that a low-luminosity phase preceding a flaring episode in the 2015 outburst of V404 Cyg is a heavily obscured, but intrinsically very bright (super-Eddington) accretion state. In this state, a dense medium fully covers the X-ray source, and the majority of the received flux comes from heavily absorbed intrinsic emission. After the fully obscured phase, the geometry changes to resemble a disk-like wind, and the majority of the received emission comes from the scattered or reflected component. During this time, large-amplitude flares are observed in X-rays and radio, indicating that some of the obscuring material has been removed either by a change in the accretion flow geometry, in the accretion wind geometry, or by the jet pressure. A part of the elevated emission may be due to the interaction of the jet with the obscuring matter. The shift of the iron line energy below 6.4 keV suggests that the scattering medium is in motion. Similar spectral evolution to that of V404 Cyg is observed from the unusual low-luminosity state of GRS 1915$+$105, with the difference that the unabsorbed luminosity remains at a few percent of the Eddington luminosity. It is therefore more likely that the source has declined in flux and reached a regular hard X-ray state. Along with the state transition, the accretion flow must have thickened because there is evidence that the source is absorbed. The weaker 2002 and 2003 outbursts of V4641 Sgr likewise present similar spectra, although the unabsorbed luminosities are very low, below 1\% of the Eddington luminosity. In these cases, we might be seeing the sources through a disk wind or a geometrically thick accretion flow. Thus, this work highlights the importance of taking into account the reprocessing of the X-ray emission by the surrounding medium when modeling X-ray spectra, a process that may well take place in multiple sources. \section*{Acknowledgements} We thank Sara Motta for enlightening discussions and the anonymous referee for useful comments.
KIIK was supported by the Academy of Finland project 320085. JAT acknowledges partial support from NASA under grant 80NSSC18K0574. This research has made use of data and software provided by HEASARC, which is a service of the Astrophysics Science Division at NASA/GSFC. The {\it MAXI}/GSC\/ data have been provided by RIKEN, JAXA, and the \textit{MAXI} team. \bibliographystyle{aa}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} Thin shells are a key component of engineered structures such as aircraft fuselages, architectural domes, and textile fabrics. It is well known that thin shells can be modeled using the Kirchhoff-Love kinematical assumption, but this assumption yields a vector-valued fourth-order partial differential equation (PDE) to be solved. Unfortunately, $C^0$-continuous finite element methods cannot be directly applied to the Galerkin approximation of fourth-order PDEs, so thick shell formulations based on Reissner-Mindlin kinematics are typically preferred in finite element shell analysis \cite{bischoff2018models}. However, in recent years, isogeometric analysis \cite{Hughes2005} has rekindled interest in Kirchhoff-Love shell formulations. Splines exhibit the requisite $C^1$-continuity to directly solve fourth-order PDEs, and isogeometric discretizations based on Non-Uniform Rational B-splines (NURBS) \cite{kiendl2009isogeometric,echter2013hierarchic}, multi-patch NURBS \cite{kiendl2010bending}, hierarchical NURBS \cite{coradellohierarchically}, T-splines \cite{bazilevs2012isogeometric,casquero2017arbitrary}, PHT- and RHT-splines \cite{nguyen2011rotation,nguyen2017isogeometric}, and subdivision surfaces \cite{Cirak2000,cirak2001fully} have been successfully applied to the Galerkin approximation of the Kirchhoff-Love shell equations. Isogeometric discretizations have also been combined with immersed and embedded methods to treat trimmed NURBS \cite{guo2015weak,guo2017parameter,guo2018variationally} and implicitly defined geometries \cite{schollhammer2019kirchhoff}. Stable and accurate numerical modeling of Kirchhoff-Love shells requires proper enforcement of all types of boundary conditions. In a classical Galerkin method, Dirichlet boundary conditions are enforced strongly, but this is a difficult task for Kirchhoff-Love shells because both displacement (functional) and normal rotation (derivative) boundary conditions must be applied\footnote{If a given discretization method interpolates both function values and derivatives at specified points in the domain, then both homogeneous and non-homogeneous displacement and normal rotation boundary conditions may be easily enforced in a strong manner. Unfortunately, state-of-the-art isogeometric discretization methods do not interpolate either function values or derivatives, so strong enforcement of non-homogeneous displacement and normal boundary conditions is much more difficult using these methods.}. This has inspired the development of weak boundary condition enforcement strategies, the most common approach of which is the classical penalty method wherein penalty terms are added to the underlying variational formulation \cite{lei2015c0,breitenberger2015analysis,duong2017new,herrema2019penalty}. However, the penalty method is quite inaccurate unless parameters associated with the penalty terms are chosen sufficiently large, but large penalty parameters in turn yield an overly stiff, ill-conditioned linear system after discretization. A second common approach to weak boundary condition enforcement is to introduce Lagrange multiplier fields \cite{apostolatos2015domain,schuss2019multi}. The primary disadvantages of this approach are that it leads to a discrete saddle-point problem and stability can only be ensured if the approximation spaces for the primal and Lagrange multiplier fields satisfy the Babu\v{s}ka-Brezzi inf-sup condition \cite{brezzi2012mixed}. 
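To make the latter point concrete, a Lagrange multiplier discretization yields, after a choice of bases, an indefinite linear system of the block form
\begin{equation*}
\left[ \begin{array}{cc} \mathbf{A} & \mathbf{B}^T \\ \mathbf{B} & \mathbf{0} \end{array} \right] \left[ \begin{array}{c} \mathbf{u} \\ \boldsymbol{\lambda} \end{array} \right] = \left[ \begin{array}{c} \mathbf{f} \\ \mathbf{g} \end{array} \right],
\end{equation*}
where $\mathbf{A}$ is the stiffness matrix, $\mathbf{B}$ is the constraint matrix, and $\mathbf{u}$ and $\boldsymbol{\lambda}$ collect the degrees of freedom of the primal and Lagrange multiplier fields; the zero diagonal block is precisely what the inf-sup condition must control.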
Nitsche's method is an alternative approach for the weak enforcement of Dirichlet boundary conditions. Nitsche's method was first proposed in 1971 \cite{Nitsche1971}, but it did not grow in popularity until the recent emergence of meshless \cite{FernandezMendez2004}, extended \cite{annavarapu2012robust,hansbo2002unfitted}, immersed \cite{kamensky2015immersogeometric,ruess2013weakly,schillinger2012isogeometric}, and isogeometric \cite{embar2010imposing,Apostolatos2014,nguyen2014nitsche,ruess2014weak,harari2015unified,guo2015nitsche} finite element methods. For these emerging finite element methods, strong enforcement of boundary and interface conditions is quite difficult due to the non-interpolatory nature of the primal field approximation space along domain boundaries and interfaces. Nitsche's method involves the addition of consistency, symmetry, and penalty terms to the underlying variational formulation. The design of the consistency and symmetry terms is guided by the Euler-Lagrange equations for the problem of interest, while the design of the penalty terms is guided by trace inequalities. Nitsche's method is variationally consistent and stable by construction, and it provides optimal convergence rates. Moreover, for self-adjoint elliptic PDEs, Nitsche's method yields a relatively well-conditioned symmetric positive-definite linear system after discretization. Nitsche's method is particularly appealing for Kirchhoff-Love shells since it can be used to enforce both displacement and normal rotation boundary conditions. It comes as no surprise, then, that a number of Nitsche-based formulations have been proposed in the literature for Kirchhoff-Love shells, most commonly for isogeometric finite element shell analysis \cite{nguyen2017isogeometric,guo2015weak,guo2018variationally,guo2015nitsche}. However, a comprehensive error analysis or verification has not yet been conducted for any of these formulations. In fact, as we demonstrate later in this paper, the formulations proposed in \cite{guo2015weak,guo2015nitsche} for the linear Kirchhoff-Love shell are variationally inconsistent and provide sub-optimal convergence rates when used with common boundary condition specifications. This variational inconsistency is due to the fact that existing Nitsche-based formulations are based upon Euler-Lagrange equations typically presented in the literature, and these equations are incorrect for general sets of admissible boundary conditions. In particular, the so-called ersatz force that appears in one of the Euler-Lagrange boundary conditions is incorrect. We believe this fact has been missed previously in the literature as state-of-the-art verification tests, such as the so-called ``shell obstacle course'' \cite{Belytschko1985}, are unable to assess order of accuracy. Instead, these verification tests only gauge convergence of displacement or stress fields to reference values at particular spatial locations. In this paper, we present a new Nitsche-based formulation for the linear Kirchhoff-Love shell that is provably stable and optimally convergent for general sets of admissible boundary conditions. To arrive at our formulation, we first present a framework for constructing Nitsche's method for an abstract variational constrained minimization problem admitting a generalized Green's identity. 
Our construction follows that of \cite{stenberg1995some} in that we first construct a stabilized Lagrange multiplier method for the abstract variational problem and then statically condense the Lagrange multiplier field. With the guidance of generalized trace and Cauchy-Schwarz inequalities, we are able to establish conditions under which the resulting method is both stable and convergent. We then apply this abstract framework to the construction of a stable and convergent Nitsche-based formulation for the linear Kirchhoff-Love shell. The resulting formulation has not appeared previously in the literature. Most notably, it involves consistency and symmetry terms associated with corner forces and penalty terms associated with corner displacement boundary conditions, similar to the Nitsche-based formulation proposed in \cite{harari2012embedded} for the Kirchhoff-Love plate. To arrive at our formulation, we derive the Euler-Lagrange equations for general sets of admissible boundary conditions and discover, as previously noted, that the equations typically presented in the literature are incorrect. For a NURBS-based isogeometric discretization of the linear Kirchhoff-Love shell, we establish \textit{a priori} error estimates for the $H^2$-, $H^1$-, and $L^2$-norms of the error in the displacement field, and we confirm these estimates using a new suite of manufactured solutions that covers a wide variety of geometric configurations and boundary conditions. To the best of our knowledge, this suite is the first comprehensive verification test bed capable of assessing convergence rates for Kirchhoff-Love shell discretizations, and we are aware of only one manufactured solution test case for the linear Kirchhoff-Love shell in the literature \cite{gfrerer2018code}. While the focus of this paper is weak enforcement of boundary conditions for the linear Kirchhoff-Love shell, the abstract framework presented here can be employed to construct Nitsche-based formulations for other linear problems arising from energy minimization. Moreover, given the close connection between the method of stabilized Lagrange multipliers, Nitsche's method, and the symmetric interior penalty Galerkin method \cite{arnold1982interior}, the framework can also be used to construct discontinuous Galerkin \cite{hansbo2002discontinuous,Noels2008} and continuous/discontinuous Galerkin methods \cite{engel2002continuous} for membranes, plates, shells, and other problems of interest. For example, the Nitsche-based Kirchhoff-Love formulation presented here can be easily modified to weakly enforce continuity of displacement and normal rotation along patch interfaces for non-conforming multi-patch NURBS geometries and along trimming curves for trimmed NURBS geometries. Finally, while the framework presented in this paper is strictly for linear problems arising from energy minimization, it is easily extended to nonlinear and nonsymmetric problems, including those involving contact, damage, and fracture. In fact, the only reason we consider linear problems arising from energy minimization in this paper is the simplicity in establishing stability and convergence results for these problems in an abstract setting, and we plan to extend our formulation to nonlinear Kirchhoff-Love shells in future work \cite{kiendl2015isogeometric,tepole2015isogeometric}. The remainder of this paper proceeds as follows. 
In Section~\ref{sec:Nitsche}, Nitsche's method is constructed for an abstract variational constrained minimization problem, and this framework is applied to the construction of a Nitsche-based formulation for the linear Kirchhoff-Love shell in Section~\ref{sec:KL_Shell}. In Section~\ref{sec:apriori}, \textit{a priori} error estimates for the $H^2$-, $H^1$-, and $L^2$-norms of the error in the displacement field are established for NURBS-based isogeometric discretizations of the linear Kirchhoff-Love shell problem, and these estimates are confirmed using a suite of manufactured solutions in Section~\ref{sec:num_results}. Finally, concluding remarks and future research directions are presented in Section~\ref{sec:conclusion}. \section{Nitsche's Method for an Abstract Variational Constrained Minimization Problem} \label{sec:Nitsche} This section develops an abstract framework and theory that is applied later to the linear Kirchhoff-Love shell. To this end, let $\mathcal{V}$ and $\mathcal{Q}$ be two Hilbert spaces with respective inner products $(\cdot,\cdot)_\mathcal{V}$ and $(\cdot,\cdot)_\mathcal{Q}$ and induced norms $\| \cdot \|_\mathcal{V} = (\cdot,\cdot)^{1/2}_\mathcal{V}$ and $\| \cdot \|_\mathcal{Q} = (\cdot,\cdot)^{1/2}_\mathcal{Q}$. We also use the notation $| \cdot |$ to refer to the absolute value for scalar quantities and the Euclidean norm for vector quantities. Let $\mathcal{V}^*$ and $\mathcal{Q}^*$ be the respective dual spaces of $\mathcal{V}$ and $\mathcal{Q}$, and let ${}_{\mathcal{V}^*}\langle \cdot, \cdot \rangle_{\mathcal{V}}$ be the duality pairing between $\mathcal{V}$ and its dual and ${}_{\mathcal{Q}^*}\langle \cdot, \cdot \rangle_{\mathcal{Q}}$ the duality pairing between $\mathcal{Q}$ and its dual. Let $\mathcal{T}: \mathcal{V} \rightarrow \mathcal{Q}$ be a bounded, surjective linear map, and given $g \in \mathcal{Q}$, define \begin{equation*} \mathcal{V}_g := \left\{ v \in \mathcal{V}: \mathcal{T}v = g \right\}. \end{equation*} Finally, let $a : \mathcal{V} \times \mathcal{V} \rightarrow \mathbb{R}$ be a bounded, symmetric, positive semi-definite bilinear form satisfying the following coercivity condition on the kernel of $\mathcal{T}$: \begin{equation*} a(v,v) \geq C \| v \|^2_\mathcal{V} \hspace{10pt} \forall v \in \mathcal{V}_0 \label{eqn:coerA} \end{equation*} for some constant $C \in \mathbb{R}_+$. \begin{remark} In the context of structural mechanics, $\mathcal{V}$ is the space of admissible displacements free of boundary conditions and $\mathcal{Q}$ is the space of admissible essential boundary conditions (e.g., displacement and rotation boundary conditions in the context of a Kirchhoff-Love shell). The map $\mathcal{T}$ then gives the trace of the displacement field (e.g., the displacement and normal rotation in the context of a Kirchhoff-Love shell) along portions of the boundary where essential boundary conditions are being enforced. Consequently, $\mathcal{V}_g$ denotes the space of admissible displacements satisfying prescribed essential boundary conditions and $\mathcal{V}_0$ denotes the corresponding space of virtual displacements. 
\end{remark} We are interested in the following constrained minimization problem: $$ (M) \left\{ \hspace{5pt} \parbox{6.00in}{ \noindent Given $f \in \mathcal{V}^*$ and $g \in \mathcal{Q}$, find $u \in \mathcal{V}_g$ that minimizes the total energy \begin{eqnarray*} E_{\textup{total}}(u) = E_{\textup{int}}(u) + E_{\textup{ext}}(u) \end{eqnarray*} where the \textbf{\textit{internal energy}} is defined by \begin{eqnarray*} E_{\textup{int}}(u) = \frac{1}{2} a(u,u) \end{eqnarray*} and the \textbf{\textit{external energy}} is defined by \begin{eqnarray*} E_{\textup{ext}}(u) = -{}_{\mathcal{V}^*}\langle f, u \rangle_{\mathcal{V}}. \end{eqnarray*} } \right. $$ Note that the G\^{a}teaux derivative of the total energy associated with a solution is zero for any variation $\delta u \in \mathcal{V}_0$. Consequently, Problem $(M)$ is equivalent to the following variational problem: $$ (V) \left\{ \hspace{5pt} \parbox{6.0in}{ \noindent Given $f \in \mathcal{V}^*$ and $g \in \mathcal{Q}$, find $u \in \mathcal{V}_g$ such that \begin{eqnarray} a(u, \delta u) = {}_{\mathcal{V}^*}\langle f, \delta u \rangle_{\mathcal{V}} \label{eq:virtual_work} \end{eqnarray} for every $\delta u \in \mathcal{V}_0$. } \right. $$ The Lax-Milgram theorem guarantees that Problem $(V)$ has a unique solution $u \in \mathcal{V}$ that depends continuously on the input data $f \in \mathcal{V}^*$ and $g \in \mathcal{Q}$ \cite{EvansPDEs}. \begin{remark} The quantity $E_{\textup{int}}(u)$ denotes the internal energy of a system displaced by $u$ due to internal stresses and strains, while the quantity $E_{\textup{ext}}(u)$ denotes the external energy of the same system due to external forces, tractions, and moments. The quantity $a(u,\delta u)$ represents the virtual work due to internal stresses as the system undergoes a virtual displacement $\delta u$, while the quantity ${}_{\mathcal{V}^*}\langle f, \delta u \rangle_{\mathcal{V}}$ represents the virtual work done to the system by external forces, tractions, and moments. Therefore, \eqref{eq:virtual_work} is often referred to as the principle of virtual work since it states that in equilibrium the external and internal virtual work must be in balance. The kernel of the bilinear form $a(\cdot,\cdot)$ consists of rigid body modes. \end{remark} Let $\mathcal{V}_h \subset \mathcal{V}$ be a finite-dimensional approximation space and $\mathcal{V}_{g,h} = \mathcal{V}_h \cap \mathcal{V}_g$ for every $g \in \mathcal{Q}$. The Bubnov-Galerkin approximation of Problem $(V)$ then reads as follows: $$ (V_h) \left\{ \hspace{5pt} \parbox{6.00in}{ \noindent Given $f \in \mathcal{V}^*$ and $g \in \mathcal{Q}$, find $u_h \in \mathcal{V}_{g,h}$ such that \begin{eqnarray*} a(u_h, \delta u_h) = {}_{\mathcal{V}^*}\langle f, \delta u_h \rangle_{\mathcal{V}} \end{eqnarray*} for every $\delta u_h \in \mathcal{V}_{0,h}$. } \right. $$ The Lax-Milgram theorem guarantees that Problem $(V_h)$ has a unique solution $u_h \in \mathcal{V}_h$ that depends continuously on the input data $f \in \mathcal{V}^*$ and $g \in \mathcal{Q}$, and it is also easily shown that the solution to Problem $(V_h)$ best approximates the solution to Problem $(V)$ with respect to the norm induced by the bilinear form $a(\cdot,\cdot)$. The difficulty associated with Problem $(V_h)$ is the need for strong enforcement of the condition $\mathcal{T}u_h = g$. 
This is straightforward for simple approximation spaces (e.g., piecewise linear finite elements), simple applications (e.g., linear elasticity), and simple constraints (e.g., displacement boundary conditions). However, for complex approximation spaces (e.g., B-splines and subdivision surfaces), complex applications (e.g., Kirchhoff-Love shells), and complex constraints (e.g., rotation boundary conditions), this condition is much more difficult to enforce. Alternatively, we may turn to the method of Lagrange multipliers for weak enforcement of $\mathcal{T}u_h = g$. It is well known that the solution of Problem $(M)$ may be found by solving the following saddle point problem: $$ (\mathit{SP}) \left\{ \hspace{5pt} \parbox{5.90in}{ \noindent Given $f \in \mathcal{V}^*$ and $g \in \mathcal{Q}$, find the saddle point $(u,\lambda) \in \mathcal{V} \times \mathcal{Q}^*$ of the Lagrangian \begin{eqnarray*} \mathcal{L}(u,\lambda) = E_{\textup{total}}(u) + {}_{\mathcal{Q}^*}\langle \lambda, \mathcal{T}u - g \rangle_{\mathcal{Q}}. \end{eqnarray*} } \right. $$ Note that the G\^{a}teaux derivative of the Lagrangian at the solution is zero for any direction $(\delta u, \delta \lambda) \in \mathcal{V} \times \mathcal{Q}^*$. Consequently, Problem $(\mathit{SP)}$ is equivalent to the following variational problem: $$ (L) \left\{ \hspace{5pt} \parbox{6.00in}{ \noindent Given $f \in \mathcal{V}^*$ and $g \in \mathcal{Q}$, find $(u,\lambda) \in \mathcal{V} \times \mathcal{Q}^*$ such that \begin{eqnarray} a(u, \delta u) + {}_{\mathcal{Q}^*}\langle \lambda, \mathcal{T}\delta u \rangle_{\mathcal{Q}} + {}_{\mathcal{Q}^*}\langle \delta \lambda, \mathcal{T} u \rangle_{\mathcal{Q}} = {}_{\mathcal{V}^*}\langle f, \delta u \rangle_{\mathcal{V}} + {}_{\mathcal{Q}^*}\langle \delta \lambda, g \rangle_{\mathcal{Q}} \label{eq:Lagrange} \end{eqnarray} for every $(\delta u,\delta \lambda) \in \mathcal{V} \times \mathcal{Q}^*$. } \right. $$ By the surjectivity and boundedness of $\mathcal{T}$, Problem $(L)$ has a unique solution $(u,\lambda) \in \mathcal{V} \times \mathcal{Q}^*$ that depends continuously on the input data $f \in \mathcal{V}^*$ and $g \in \mathcal{Q}$. The variable $\lambda$ is commonly referred to as the Lagrange multiplier associated with the constraint $\mathcal{T}u = g$. \begin{remark} In the context of structural mechanics, $\lambda$ comprises the reaction or constraint forces, tractions, and moments that result from application of essential boundary conditions. \end{remark} The discretization of Problem $(L)$ requires approximations of both $\mathcal{V}$ and $\mathcal{Q}^*$. To this end, let $\mathcal{V}_h \subset \mathcal{V}$ and $\mathcal{Q}^*_h \subset \mathcal{Q}^*$ be two finite-dimensional approximation spaces. The Galerkin approximation of Problem $(L)$ then reads as follows: $$ (L_h) \left\{ \hspace{5pt} \parbox{6.00in}{ \noindent Given $f \in \mathcal{V}^*$ and $g \in \mathcal{Q}$, find $(u_h,\lambda_h) \in \mathcal{V}_h \times \mathcal{Q}^*_h$ such that: \begin{eqnarray*} a(u_h, \delta u_h) + {}_{\mathcal{Q}^*}\langle \lambda_h, \mathcal{T}\delta u_h \rangle_{\mathcal{Q}} + {}_{\mathcal{Q}^*}\langle \delta \lambda_h, \mathcal{T} u_h \rangle_{\mathcal{Q}} = {}_{\mathcal{V}^*}\langle f, \delta u_h \rangle_{\mathcal{V}} + {}_{\mathcal{Q}^*}\langle \delta \lambda_h, g \rangle_{\mathcal{Q}} \end{eqnarray*} for every $(\delta u_h,\delta \lambda_h) \in \mathcal{V}_h \times \mathcal{Q}_h^*$. } \right. 
$$ The advantage of the formulation given by Problem ($L_h$), which is commonly referred to as the \textit{\textbf{method of Lagrange multipliers}}, over the formulation given by Problem ($V_h$), which is commonly referred to as \textit{\textbf{Galerkin's method}}, is that the condition $\mathcal{T} u_h = g$ need not be directly embedded into the solution space $\mathcal{V}_h$. However, the disadvantage of the method of Lagrange multipliers is that two approximation spaces $\mathcal{V}_h \subset \mathcal{V}$ and $\mathcal{Q}^*_h \subset \mathcal{Q}^*$ are needed and, moreover, they must be chosen intelligently in order to arrive at a stable and convergent method. Namely, the two approximation spaces $\mathcal{V}_h \subset \mathcal{V}$ and $\mathcal{Q}^*_h \subset \mathcal{Q}^*$ must satisfy the so-called \textit{\textbf{Babu\v{s}ka-Brezzi inf-sup condition}} \cite{babuvska1973finite}. For simple approximation spaces and applications, selecting inf-sup stable approximation spaces is reasonably straightforward, but for complex problems, it is more difficult. An alternative is to use stabilization to bypass the inf-sup condition entirely. This is a rather elegant solution first proposed by Franca and Hughes for enforcing incompressibility for Stokes flow \cite{hughes1986new} and later proposed by Barbosa and Hughes for enforcing essential boundary conditions for contact problems \cite{barbosa1991finite}. To this end, we make the following assumption: \begin{assumption} \label{assumption1} There exists a dense subspace $\tilde{\mathcal{V}} \subset \mathcal{V}$ and linear maps $\mathcal{L} : \tilde{\mathcal{V}} \rightarrow \mathcal{V}^*$ and $\mathcal{B}: \tilde{\mathcal{V}} \rightarrow \mathcal{Q}^*$ such that the following \textit{\textbf{generalized Green's identity}} holds: \begin{equation} a(w,v) = {}_{\mathcal{V}^*}\langle \mathcal{L}w, v \rangle_{\mathcal{V}} + {}_{\mathcal{Q}^*}\langle \mathcal{B}w, \mathcal{T}v \rangle_{\mathcal{Q}} \label{eq:int_by_parts} \end{equation} for all $w \in \tilde{\mathcal{V}}$ and $v \in \mathcal{V}$, and the solution $u$ of Problem $(M)$ satisfies $\mathcal{L} u = f$ whenever $f \in \mathcal{V}^*$ and $g \in \mathcal{Q}$ are such that $u \in \tilde{\mathcal{V}}$. \end{assumption} \begin{remark} In the context of structural mechanics, \eqref{eq:int_by_parts} results from the application of integration by parts to the original variational formulation in order to arrive at the Euler-Lagrange equations. Thus, in this context, the map $\mathcal{L}$ encodes the differential-algebraic operators associated with the governing system of PDEs in their strong form as well as those associated with the natural boundary conditions. The map $\mathcal{B}$ encodes the energetically conjugate essential boundary conditions that result from the application of integration by parts. In the context of linear elasticity, the quantity $\mathcal{B}u$, where $u$ is the solution to Problem $(M)$, is the traction field along portions of the boundary where essential boundary conditions are being enforced. In the context of Kirchhoff-Love shells, the quantity $\mathcal{B}u$ consists of shears, moments, and corner forces. Note that in order for the Euler-Lagrange equations to hold, the solution to Problem $(M)$ must be sufficiently smooth. This is why we introduced an additional subspace $\tilde{\mathcal{V}} \subset \mathcal{V}$, one for which \eqref{eq:int_by_parts} holds. 
Generally, $\tilde{\mathcal{V}}$ is significantly more regular than $\mathcal{V}$, raising concerns about the numerical practicality of a method requiring such a level of regularity. However, as discussed below, we can extend the necessary operators to the enlarged sum space $\tilde{\mathcal{V}} + \mathcal{V}_h$, where $\mathcal{V}_h \subset \mathcal{V}$ is a finite-dimensional approximation space. This permits discretizations of far less regularity and should alleviate these initial concerns. \label{remark:V_tilde} \end{remark} \begin{remark} To distinguish the Green's identity given in \eqref{eq:int_by_parts} from Green's first, second, and third identities, we have used the clarifier ``generalized''. For the Poisson problem subject to homogeneous Dirichlet boundary conditions, the Green's identity given in \eqref{eq:int_by_parts} coincides with Green's first identity. \end{remark} With Assumption~\ref{assumption1} in hand, we establish an important result giving an expression for the Lagrange multiplier $\lambda \in \mathcal{Q}^*$, provided the solution $u$ of Problem $(M)$ satisfies $u \in \tilde{\mathcal{V}}$. \begin{theorem} \label{theorem:L_is_Bu} Suppose that Assumption~\ref{assumption1} holds and the solution $u$ of Problem $(M)$ satisfies $u \in \tilde{\mathcal{V}}$. Then the solution $(u,\lambda)$ of Problem $(L)$ satisfies $\lambda = - \mathcal{B}u$. \begin{proof} By \eqref{eq:Lagrange}, we know that \begin{align*} 0 & = a(u, \delta u) + {}_{\mathcal{Q}^*}\langle \lambda, \mathcal{T}\delta u \rangle_{\mathcal{Q}} - {}_{\mathcal{V}^*}\langle f, \delta u \rangle_{\mathcal{V}} \nonumber \\ & = {}_{\mathcal{V}^*}\langle \mathcal{L}u - f, \delta u \rangle_{\mathcal{V}} + {}_{\mathcal{Q}^*}\langle \mathcal{B}u + \lambda, \mathcal{T}\delta u \rangle_{\mathcal{Q}} \\ & = {}_{\mathcal{Q}^*}\langle \mathcal{B}u + \lambda, \mathcal{T}\delta u \rangle_{\mathcal{Q}} \nonumber \end{align*} for all $\delta u \in \mathcal{V}$. Since $\mathcal{T}$ is surjective, it follows that $\lambda = - \mathcal{B}u$. \end{proof} \end{theorem} \begin{remark} In the context of structural mechanics, Theorem~\ref{theorem:L_is_Bu} comes as no surprise. It says that, along the portions of the boundary where essential boundary conditions are being enforced, the reaction forces, tractions, and moments that result from the application of these conditions are balanced by the traction field in the context of linear elasticity and by the shears, moments, and corner forces in the context of Kirchhoff-Love shells. Thus, Theorem~\ref{theorem:L_is_Bu} is simply a re-statement of Newton's third law: For every action, there is an equal and opposite reaction. \end{remark} Given Theorem~\ref{theorem:L_is_Bu}, we can now construct a \textit{\textbf{stabilized Lagrange multiplier method}}. First, let $\epsilon: \textup{dom}(\epsilon) \subseteq \mathcal{Q}^* \rightarrow \mathcal{Q}$ be a densely defined, positive, surjective, self-adjoint linear map. Note that since $\epsilon$ is linear, positive, and surjective, it is also invertible. We assume that $\mathcal{Q}^*_h \subset \textup{dom}(\epsilon)$. Provided that the domain of definition of the operator $\mathcal{B}: \tilde{\mathcal{V}} \rightarrow \mathcal{Q}^*$ can be extended to the enlarged space $\tilde{\mathcal{V}} + \mathcal{V}_h$, we also assume that $\left\{ \mathcal{B}v: v \in \tilde{\mathcal{V}} + \mathcal{V}_h \right\} \subset \textup{dom}(\epsilon)$.
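For orientation, it is helpful to instantiate the abstract objects introduced thus far for the Poisson problem with Dirichlet boundary conditions; the identifications below are a standard illustration and are independent of the shell developments in Section~\ref{sec:KL_Shell}. Taking
\begin{equation*}
\mathcal{V} = H^1(\Omega), \hspace{10pt} \mathcal{Q} = H^{1/2}(\Gamma), \hspace{10pt} \mathcal{T}v = \left. v \right|_{\Gamma}, \hspace{10pt} a(w,v) = \int_{\Omega} \nabla w \cdot \nabla v \ d \Omega, \hspace{10pt} \tilde{\mathcal{V}} = H^2(\Omega),
\end{equation*}
the generalized Green's identity \eqref{eq:int_by_parts} is precisely Green's first identity with
\begin{equation*}
\mathcal{L}w = -\Delta w \hspace{10pt} \text{and} \hspace{10pt} \mathcal{B}w = \frac{\partial w}{\partial n},
\end{equation*}
and Theorem~\ref{theorem:L_is_Bu} recovers the familiar fact that the Lagrange multiplier is the negative boundary flux, $\lambda = -\partial u / \partial n$. In this setting, a common mesh-dependent choice for the map $\epsilon$ appearing below is $\epsilon \mu = (h/\gamma_{\textup{pen}}) \mu$, where $h$ is the mesh size and $\gamma_{\textup{pen}}$ is a sufficiently large dimensionless constant, so that the resulting penalty term scales like $\gamma_{\textup{pen}}/h$.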
We associate with $\epsilon$ a symmetric, positive-definite \textit{\textbf{stabilization bilinear form}} $S: \textup{dom}(\epsilon) \times \textup{dom}(\epsilon) \rightarrow \mathbb{R}$ satisfying \begin{equation*} S(\mu,\xi) = {}_{\mathcal{Q}^*}\langle \xi, \epsilon \mu \rangle_{\mathcal{Q}} \end{equation*} for all $\mu, \xi \in \textup{dom}(\epsilon)$. If the solution $u$ of Problem $(M)$ satisfies $u \in \tilde{\mathcal{V}}$, then by Theorem~\ref{theorem:L_is_Bu}, since $\lambda + \mathcal{B}u = 0$, the solution $(u,\lambda) \in \mathcal{V} \times \mathcal{Q}^*$ of Problem $(L)$ satisfies \begin{equation*} S(\lambda + \mathcal{B}u,\delta \lambda) = 0 \end{equation*} for all $\delta \lambda \in \textup{dom}(\epsilon)$. This then inspires the following stabilized Lagrange multiplier method: $$ (L^s_h) \left\{ \hspace{5pt} \parbox{6.00in}{ \noindent Given $f \in \mathcal{V}^*$ and $g \in \mathcal{Q}$, find $(u_h,\lambda_h) \in \mathcal{V}_h \times \mathcal{Q}^*_h$ such that \begin{equation} B_h\left( (u_h, \lambda_h), (\delta u_h, \delta \lambda_h) \right) = {}_{\mathcal{V}^*}\langle f, \delta u_h \rangle_{\mathcal{V}} + {}_{\mathcal{Q}^*}\langle \delta \lambda_h, g \rangle_{\mathcal{Q}} \label{eq:stabilized} \end{equation} for every $\left( \delta u_h, \delta \lambda_h \right) \in \mathcal{V}_h \times \mathcal{Q}^*_h$, where $B_h: \left( \mathcal{V}_h \times \mathcal{Q}^*_h \right) \times \left( \mathcal{V}_h \times \mathcal{Q}^*_h \right) \rightarrow \mathbb{R}$ is the bilinear form defined by \begin{equation*} B_h\left( (w_h, \theta_h), (v_h, \mu_h) \right) = a(w_h, v_h) + {}_{\mathcal{Q}^*}\langle \theta_h, \mathcal{T}v_h \rangle_{\mathcal{Q}} + {}_{\mathcal{Q}^*}\langle \mu_h, \mathcal{T} w_h \rangle_{\mathcal{Q}} - S(\theta_h + \mathcal{B} w_h,\mu_h + \mathcal{B} v_h) \end{equation*} for every $(w_h,\theta_h), (v_h,\mu_h) \in \mathcal{V}_h \times \mathcal{Q}_h^*$. } \right. $$ The stabilization bilinear form acts to improve the stability of the method of Lagrange multipliers. In fact, provided that $\epsilon$ is chosen appropriately, the stabilized Lagrange multiplier method can restore stability for an otherwise unstable choice of $\mathcal{V}_h$ and $\mathcal{Q}^*_h$ \cite{hughes1986new,barbosa1991finite}. Nitsche's method corresponds to the formal selection of $\mathcal{Q}_h^*$ as the entire space $\textup{dom}(\epsilon)$ rather than a finite dimensional subspace in the stabilized Lagrange multiplier method. This selection generally yields an infinite-dimensional linear system since $\textup{dom}(\epsilon)$ is dense in $\mathcal{Q}^*$. However, the Lagrange multiplier variable $\lambda_h$ may be statically condensed from the system, resulting in a finite-dimensional linear system for the primal variable $u_h$. To see this, take $\delta u_h = 0$ in \eqref{eq:stabilized} to obtain \begin{equation*} {}_{\mathcal{Q}^*}\langle \delta \lambda_h, \mathcal{T} u_h - g - \epsilon \left( \lambda_h + \mathcal{B} u_h \right) \rangle_{\mathcal{Q}} = 0 \end{equation*} for all $\delta \lambda_h \in \mathcal{Q}^*_h$. Since $\mathcal{Q}_h^* = \textup{dom}(\epsilon)$, $\textup{dom}(\epsilon)$ is dense in $\mathcal{Q}^*$, and $\epsilon$ is invertible, it follows that \begin{equation*} \lambda_h = -\mathcal{B} u_h + \epsilon^{-1} \left( \mathcal{T} u_h - g \right). 
\label{eqn:NitscheLM} \end{equation*} Inserting the above expression for $\lambda_h$ into \eqref{eq:stabilized} and taking $\delta \lambda_h = 0$, we obtain the following reduced formulation: \begin{mybox}[\emph{Nitsche's Method for an Abstract Variational Constrained Minimization Problem}] \vspace{-7pt} $$ (N_h) \left\{ \hspace{5pt} \parbox{6.00in}{ Given $f \in \mathcal{V}^*$ and $g \in \mathcal{Q}$, find $u_h \in \mathcal{V}_h$ such that \begin{equation*} a_h(u_h,\delta u_h) = {}_{\mathcal{V}^*}\langle f, \delta u_h \rangle_{\mathcal{V}} \ {\color{ForestGreen} \underbrace{ - {}_{\mathcal{Q}^*}\langle \mathcal{B} \delta u_h, g \rangle_{\mathcal{Q}} }_\text{Symmetry Term} } \ {\color{Orchid} \underbrace{ + {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} \delta u_h, g \rangle_{\mathcal{Q}} }_\text{Penalty Term} } \label{eq:nitsche} \end{equation*} for every $\delta u_h \in \mathcal{V}_h$, where $a_h: \left( \tilde{\mathcal{V}} + \mathcal{V}_h \right) \times \left( \tilde{\mathcal{V}} + \mathcal{V}_h \right) \rightarrow \mathbb{R}$ is the bilinear form defined by \begin{equation*} a_h(w, v) = a(w, v) {\color{Cerulean} \ \underbrace{ - {}_{\mathcal{Q}^*}\langle \mathcal{B} w, \mathcal{T}v \rangle_{\mathcal{Q}} }_\text{Consistency Term} } \ {\color{ForestGreen} \underbrace{ - {}_{\mathcal{Q}^*}\langle \mathcal{B} v, \mathcal{T} w \rangle_{\mathcal{Q}} }_\text{Symmetry Term} } \ {\color{Orchid} \underbrace{ + {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v, \mathcal{T} w \rangle_{\mathcal{Q}} }_\text{Penalty Term} } \end{equation*} for all $w, v \in \tilde{\mathcal{V}} + \mathcal{V}_h$. } \right. $$ \end{mybox} \noindent We refer to the above formulation as \textit{\textbf{Nitsche's method}} since it is a generalization of Nitsche's method for second-order elliptic boundary value problems to arbitrary variational constrained minimization problems. \begin{remark} Note that we can interpret Nitsche's method as a Lagrange multiplier method in which the Lagrange multiplier field $\lambda$ is approximated as \begin{equation*} \lambda_h = -\mathcal{B} u_h + \epsilon^{-1} \left( \mathcal{T} u_h - g \right). \end{equation*} Since the Lagrange multiplier field often represents one or more physical quantities of interest (e.g., $\lambda$ comprises the reaction or constraint forces, tractions, and moments in the context of structural mechanics), this formula provides a means of recovering such quantities in a variationally consistent manner \cite{hughes2000continuous,van2012flux}. \end{remark} Nitsche's method exhibits several important properties that give rise to its stability and convergence. Namely, it is \textit{\textbf{consistent}}, its bilinear form $a_h(\cdot,\cdot)$ is \textit{\textbf{symmetric}}, and, provided the map $\epsilon: \textup{dom}(\epsilon) \subseteq \mathcal{Q}^* \rightarrow \mathcal{Q}$ is chosen appropriately (see Assumption~\ref{assumption2} below), its bilinear form $a_h(\cdot,\cdot)$ is also \textit{\textbf{coercive}} on the discrete space $\mathcal{V}_h$. \begin{lemma}[Consistency] \label{lemma:abstract_consistency} Suppose that Assumption~\ref{assumption1} holds and the solution $u$ of Problem $(M)$ satisfies $u \in \tilde{\mathcal{V}}$. Then \begin{equation*} a_h(u,\delta u_h) = {}_{\mathcal{V}^*}\langle f, \delta u_h \rangle_{\mathcal{V}} - {}_{\mathcal{Q}^*}\langle \mathcal{B} \delta u_h, g \rangle_{\mathcal{Q}} + {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} \delta u_h, g \rangle_{\mathcal{Q}} \end{equation*} for all $\delta u_h \in \mathcal{V}_h$. 
\begin{proof} Since Assumption~\ref{assumption1} holds and $u$ is the solution to Problem $(M)$, it follows that \begin{align*} a_h(u,\delta u_h) &= a(u, \delta u_h) - {}_{\mathcal{Q}^*}\langle \mathcal{B} u, \mathcal{T}\delta u_h \rangle_{\mathcal{Q}} - {}_{\mathcal{Q}^*}\langle \mathcal{B} \delta u_h, \mathcal{T} u \rangle_{\mathcal{Q}} + {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} \delta u_h, \mathcal{T} u \rangle_{\mathcal{Q}} \nonumber \\ &= {}_{\mathcal{V}^*}\langle \mathcal{L}u, \delta u_h \rangle_{\mathcal{V}} + {}_{\mathcal{Q}^*}\langle \mathcal{B}u, \mathcal{T}\delta u_h \rangle_{\mathcal{Q}} - {}_{\mathcal{Q}^*}\langle \mathcal{B} u, \mathcal{T}\delta u_h \rangle_{\mathcal{Q}} - {}_{\mathcal{Q}^*}\langle \mathcal{B} \delta u_h, \mathcal{T} u \rangle_{\mathcal{Q}} \nonumber \\ & \hspace{8pt} + {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} \delta u_h, \mathcal{T} u \rangle_{\mathcal{Q}} \nonumber \\ &= {}_{\mathcal{V}^*}\langle f, \delta u_h \rangle_{\mathcal{V}} - {}_{\mathcal{Q}^*}\langle \mathcal{B} \delta u_h, g \rangle_{\mathcal{Q}} + {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} \delta u_h, g \rangle_{\mathcal{Q}} \end{align*} for all $\delta u_h \in \mathcal{V}_h$. \end{proof} \end{lemma} \begin{lemma}[Symmetry] \label{lemma:abstract_symmetry} It holds that \begin{equation*} a_h(w, v) = a_h(v, w) \end{equation*} for all $w, v \in \tilde{\mathcal{V}} + \mathcal{V}_h.$ \begin{proof} The result follows by direct computation. \end{proof} \end{lemma} To establish a coercivity result for Nitsche's method, we must make another assumption. \begin{assumption} \label{assumption2} There exists a densely defined, positive, self-adjoint linear map $\eta: \textup{dom}(\eta) \subseteq \mathcal{Q}^* \rightarrow \mathcal{Q}$ with the following properties: \begin{enumerate} \item The space $\left\{ \mathcal{B}v: v \in \tilde{\mathcal{V}} + \mathcal{V}_h \right\}$ is a subset of $\textup{dom}(\eta)$. \item The following \textit{\textbf{generalized trace inequality}} holds: \begin{equation*} {}_{\mathcal{Q}^*}\langle \mathcal{B}v_h, \eta \mathcal{B}v_h \rangle_{\mathcal{Q}} \leq a(v_h,v_h) \label{eqn:gen_trace} \end{equation*} for all $v_h \in \mathcal{V}_h$. \item The following \textit{\textbf{generalized Cauchy-Schwarz inequality}} holds: \begin{equation*} \left| {}_{\mathcal{Q}^*}\langle \mathcal{B} v, \mathcal{T} w \rangle_{\mathcal{Q}} \right| \leq \frac{1}{\gamma} {}_{\mathcal{Q}^*}\langle \mathcal{B}v, \eta \mathcal{B}v \rangle^{1/2}_{\mathcal{Q}} {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} w, \mathcal{T} w \rangle^{1/2}_{\mathcal{Q}} \label{eqn:gen_CS} \end{equation*} for all $v, w \in \tilde{\mathcal{V}} + \mathcal{V}_h$, where $\gamma \in (1,\infty)$. \end{enumerate} \end{assumption} Now, defining an energy norm $\vvvertiii{\cdot}: \tilde{\mathcal{V}} + \mathcal{V}_h \rightarrow \mathbb{R}$ via \begin{equation*} \vvvertiii{v}^2 := a(v,v) + {}_{\mathcal{Q}^*}\langle \mathcal{B}v, \eta \mathcal{B}v \rangle_{\mathcal{Q}} + 2 {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v, \mathcal{T} v \rangle_{\mathcal{Q}}, \end{equation*} we can derive a coercivity result for Nitsche's method. \begin{lemma}[Coercivity] \label{lemma:abstract_coercivity} Suppose that Assumption~\ref{assumption2} holds. Then \begin{equation*} a_h(v_h,v_h) \geq \frac{1}{2} \left(1 - \frac{1}{\gamma} \right) \vvvertiii{v_h}^2 \end{equation*} for all $v_h \in \mathcal{V}_h$. 
\begin{proof} Since Assumption~\ref{assumption2} holds, it follows that \begin{align*} a_h(v_h,v_h) &= a(v_h, v_h) - 2{}_{\mathcal{Q}^*}\langle \mathcal{B} v_h, \mathcal{T}v_h \rangle_{\mathcal{Q}} + {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v_h, \mathcal{T} v_h \rangle_{\mathcal{Q}} \nonumber \\ & \geq a(v_h, v_h) - 2 \left| {}_{\mathcal{Q}^*}\langle \mathcal{B} v_h, \mathcal{T}v_h \rangle_{\mathcal{Q}} \right| + {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v_h, \mathcal{T} v_h \rangle_{\mathcal{Q}} \nonumber \\ & \geq a(v_h, v_h) - \frac{2}{\gamma} {}_{\mathcal{Q}^*}\langle \mathcal{B}v_h, \eta \mathcal{B}v_h \rangle^{1/2}_{\mathcal{Q}} {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v_h, \mathcal{T} v_h \rangle^{1/2}_{\mathcal{Q}} + {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v_h, \mathcal{T} v_h \rangle_{\mathcal{Q}} \nonumber \\ & \geq a(v_h, v_h) - \frac{1}{\gamma} \left( {}_{\mathcal{Q}^*}\langle \mathcal{B}v_h, \eta \mathcal{B}v_h \rangle_{\mathcal{Q}} + {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v_h, \mathcal{T} v_h \rangle_{\mathcal{Q}} \right) + {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v_h, \mathcal{T} v_h \rangle_{\mathcal{Q}} \nonumber \\ & \geq a(v_h, v_h) - \frac{1}{\gamma} \left( a(v_h, v_h) + {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v_h, \mathcal{T} v_h \rangle_{\mathcal{Q}} \right) + {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v_h, \mathcal{T} v_h \rangle_{\mathcal{Q}} \nonumber \\ & \geq \left( 1 - \frac{1}{\gamma} \right) a(v_h, v_h) + \left( 1 - \frac{1}{\gamma} \right) {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v_h, \mathcal{T} v_h \rangle_{\mathcal{Q}} \nonumber \\ & \geq \frac{1}{2} \left( 1 - \frac{1}{\gamma} \right) \left( a(v_h, v_h) + {}_{\mathcal{Q}^*}\langle \mathcal{B}v_h, \eta \mathcal{B}v_h \rangle_{\mathcal{Q}} \right) + \left( 1 - \frac{1}{\gamma} \right) {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v_h, \mathcal{T} v_h \rangle_{\mathcal{Q}} \nonumber \\ & = \frac{1}{2} \left( 1 - \frac{1}{\gamma} \right) \vvvertiii{v_h}^2\nonumber \end{align*} for all $v_h \in \mathcal{V}_h$. Young's inequality ($|ab| \leq \frac{1}{2}(a^2 + b^2)$ for $a,b \in \mathbb{R}$) is used here from line three to four. \end{proof} \end{lemma} We need one more result before we can establish an error estimate for Nitsche's method. \begin{lemma}[Continuity] \label{lemma:abstract_continuity} Suppose that Assumption~\ref{assumption2} holds. Then \begin{equation*} |a_h(w,v)| \leq \vvvertiii{w} \cdot \vvvertiii{v} \end{equation*} for all $w, v \in \tilde{\mathcal{V}} + \mathcal{V}_h$. 
\begin{proof} Since Assumption~\ref{assumption2} holds, it follows that \begin{align*} \left| a_h(w,v) \right| & \leq \left| a(w, v) \right| + \left| {}_{\mathcal{Q}^*}\langle \mathcal{B} w, \mathcal{T}v \rangle_{\mathcal{Q}} \right| + \left| {}_{\mathcal{Q}^*}\langle \mathcal{B} v, \mathcal{T}w \rangle_{\mathcal{Q}} \right| + \left| {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v, \mathcal{T} w \rangle_{\mathcal{Q}} \right| \nonumber \\ & \leq a(w, w)^{1/2} a(v,v)^{1/2} + {}_{\mathcal{Q}^*}\langle \mathcal{B}w, \eta \mathcal{B}w \rangle^{1/2}_{\mathcal{Q}} {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v, \mathcal{T} v \rangle^{1/2}_{\mathcal{Q}} + {}_{\mathcal{Q}^*}\langle \mathcal{B}v, \eta \mathcal{B}v \rangle^{1/2}_{\mathcal{Q}} {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} w, \mathcal{T} w \rangle^{1/2}_{\mathcal{Q}} \nonumber \\ &\phantom{\leq} + {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} v, \mathcal{T} v \rangle^{1/2}_{\mathcal{Q}} {}_{\mathcal{Q}^*}\langle \epsilon^{-1} \mathcal{T} w, \mathcal{T} w \rangle^{1/2}_{\mathcal{Q}} \nonumber \\ & \leq \vvvertiii{w} \cdot \vvvertiii{v} \end{align*} for all $w,v \in \tilde{\mathcal{V}} + \mathcal{V}_h$. Here, the first line follows from the triangle inequality; the second line follows from the Cauchy-Schwarz inequality for the positive semi-definite forms associated with $a(\cdot,\cdot)$ and $\epsilon^{-1}$, together with the generalized Cauchy-Schwarz inequality and the fact that $\gamma > 1$; and the final line follows from the standard Cauchy-Schwarz inequality $(|(x,y)| \leq \| x \|_2 \|y \|_2$ for $x,y \in \mathbb{R}^n)$ applied to the sum of products, recalling that the penalty term enters the energy norm with a factor of two. \end{proof} \end{lemma} We are now ready to prove well-posedness and an error estimate for Nitsche's method. \begin{theorem}[Well-Posedness and Error Estimate] \label{theorem:error_estimate} Suppose that Assumptions~\ref{assumption1} and~\ref{assumption2} hold. Then there exists a unique discrete solution $u_h \in \mathcal{V}_h$ to Problem $(N_h)$. Moreover, if the continuous solution $u \in \mathcal{V}$ to Problem $(M)$ satisfies $u \in \tilde{\mathcal{V}}$, then the discrete solution $u_h$ satisfies the error estimate \begin{equation*} \vvvertiii{u - u_h} \leq \left( 1+ \frac{2}{1-\frac{1}{\gamma}} \right) \min_{v_h \in \mathcal{V}_h} \vvvertiii{u - v_h}. \end{equation*} \begin{proof} Well-posedness is a direct result of the Lax-Milgram Theorem and coercivity and continuity as established by Lemmas~\ref{lemma:abstract_coercivity} and \ref{lemma:abstract_continuity}. To prove the error estimate, let $v_h \in \mathcal{V}_h$ be an arbitrary function. Since Assumption~\ref{assumption2} holds, by Lemma~\ref{lemma:abstract_coercivity}, we have that \begin{align*} \vvvertiii{u_h - v_h}^2 \leq \frac{2}{1-\frac{1}{\gamma}} a_h\left(u_h - v_h, u_h - v_h\right). \end{align*} By Assumption~\ref{assumption1} and Lemma~\ref{lemma:abstract_consistency}, Galerkin orthogonality holds for Nitsche's method, so we have that \begin{align*} \vvvertiii{u_h - v_h}^2 \leq \frac{2}{1-\frac{1}{\gamma}} a_h\left(u - v_h, u_h - v_h\right). \end{align*} By Assumption~\ref{assumption2} and Lemma~\ref{lemma:abstract_continuity}, we have that \begin{align*} \vvvertiii{u_h - v_h}^2 \leq \frac{2}{1-\frac{1}{\gamma}} \vvvertiii{u - v_h} \vvvertiii{u_h - v_h} \end{align*} and, hence, \begin{align*} \vvvertiii{u_h - v_h} \leq \frac{2}{1-\frac{1}{\gamma}} \vvvertiii{u - v_h}.
\end{align*} By the triangle inequality, we have that \begin{align*} \vvvertiii{u - u_h} &\leq \vvvertiii{u - v_h} + \vvvertiii{u_h - v_h} \nonumber \leq \left( 1+ \frac{2}{1-\frac{1}{\gamma}} \right) \vvvertiii{u - v_h} \end{align*} and, since $v_h \in \mathcal{V}_h$ is arbitrary, the final result holds. \end{proof} \end{theorem} Note that the above theorem applies to any formulation and problem setup for which Assumptions~\ref{assumption1} and~\ref{assumption2} hold. Consequently, constructing Nitsche-based formulations for a new problem class should proceed according to the following steps:\\ \noindent \textbf{Step 1:} Construct an appropriate variational formulation (including specification of the Hilbert spaces $\mathcal{V}$ and $\mathcal{Q}$, the map $\mathcal{T}: \mathcal{V} \rightarrow \mathcal{Q}$, and the bilinear form $a : \mathcal{V} \times \mathcal{V} \rightarrow \mathbb{R}$) such that Assumption~\ref{assumption1} is satisfied, and determine the space $\tilde{\mathcal{V}}$ and the linear maps $\mathcal{L} : \tilde{\mathcal{V}} \rightarrow \mathcal{V}^*$ and $\mathcal{B} : \tilde{\mathcal{V}} \rightarrow \mathcal{Q}^*$ associated with Assumption~\ref{assumption1}. Note that the relevant operators will ultimately be defined over the extended domain $\tilde{\mathcal{V}} + \mathcal{V}_h$ for discretization.\\ \noindent \textbf{Step 2:} Construct suitable linear maps $\epsilon: \textup{dom}(\epsilon) \subseteq \mathcal{Q}^* \rightarrow \mathcal{Q}$ and $\eta: \textup{dom}(\eta) \subseteq \mathcal{Q}^* \rightarrow \mathcal{Q}$ such that Assumption~\ref{assumption2} is satisfied.\\ \noindent \textbf{Step 3:} Pose Nitsche's method according to Problem $(N_h)$.\\ In the following, we complete the above three steps to construct a new Nitsche-based formulation for the linear Kirchhoff-Love shell. Note that we do not need to conduct a full stability and convergence analysis, since we can readily employ the abstract framework presented here. It should further be mentioned that symmetry can be employed to arrive at error estimates in norms other than the energy norm using the well-known Aubin-Nitsche trick \cite{ciarlet1991basic}. To complete our analysis, and in particular to construct the maps $\epsilon$ and $\eta$ such that Assumption~\ref{assumption2} is satisfied, we need to use functional-analytic results such as trace inequalities. We discuss later how to compute trace constants in a practical manner. Finally, to ensure that our results are invariant with respect to scaling, we take special care in constructing the maps $\epsilon$ and $\eta$ so that the resulting energy norm is dimensionally consistent. This requires a little finesse and rigor, but we believe that arriving at scale-invariant error estimates is worth the added effort. \section{Nitsche's Method for the Linear Kirchhoff-Love Shell Problem} \label{sec:KL_Shell} Our abstract framework provides a convenient means for constructing and analyzing Nitsche-based formulations for problems of interest, regardless of their complexity. In this section, we apply our framework to the vector-valued, fourth-order PDE that governs the linear Kirchhoff-Love shell to arrive at a provably convergent Nitsche-based formulation. We also derive and discuss what are known as the \textbf{\emph{ersatz forces}}, or modified boundary shear forces used to maintain variational consistency of the Nitsche formulation. The expressions for these forces reported in the existing literature are either incorrect or incomplete.
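Before developing the shell formulation, we pause to illustrate Steps 1--3 with a self-contained Python sketch for the one-dimensional Poisson problem $-u'' = f$ on $(0,1)$, with Dirichlet boundary conditions enforced weakly via Problem $(N_h)$. This is a minimal illustration of our own making; the piecewise-linear discretization, the problem data, and the penalty constant \texttt{gamma} are hypothetical choices, with the scaling $\epsilon^{-1} = \gamma/h$ reflecting the usual inverse-estimate considerations behind Assumption~\ref{assumption2}.
\begin{verbatim}
import numpy as np

# Nitsche's method (N_h) for -u'' = f on (0,1) with u(0) = g0 and u(1) = g1,
# discretized with piecewise-linear finite elements on a uniform mesh. Here
# T v = v at the endpoints and B v = dv/dn (the outward normal derivative),
# consistent with the generalized Green's identity for this problem.
def nitsche_poisson_1d(n_el=32, gamma=10.0, g0=0.0, g1=0.0,
                       f=lambda x: np.pi**2 * np.sin(np.pi * x)):
    h = 1.0 / n_el
    x = np.linspace(0.0, 1.0, n_el + 1)
    K = np.zeros((n_el + 1, n_el + 1))
    F = np.zeros(n_el + 1)
    # Interior terms: a(w, v) = int w' v' dx and int f v dx (two-point Gauss).
    for e in range(n_el):
        K[e:e+2, e:e+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        for xi in (-1.0/np.sqrt(3.0), 1.0/np.sqrt(3.0)):
            xq = x[e] + 0.5 * h * (1.0 + xi)
            N = np.array([(x[e+1] - xq) / h, (xq - x[e]) / h])
            F[e:e+2] += 0.5 * h * f(xq) * N
    # Boundary terms of a_h with eps^{-1} = gamma/h at each Dirichlet point.
    for node, g, sgn in ((0, g0, -1.0), (n_el, g1, 1.0)):
        dofs = (0, 1) if node == 0 else (n_el - 1, n_el)
        dN = sgn * np.array([-1.0, 1.0]) / h   # B phi = d(phi)/dn at the node
        tr = np.array([1.0, 0.0]) if node == 0 else np.array([0.0, 1.0])
        for i in range(2):
            for j in range(2):
                K[dofs[i], dofs[j]] += (-dN[j] * tr[i]     # consistency term
                                        - dN[i] * tr[j]    # symmetry term
                                        + (gamma / h) * tr[i] * tr[j])  # penalty
            F[dofs[i]] += -dN[i] * g + (gamma / h) * tr[i] * g
    return x, np.linalg.solve(K, F)

x, u = nitsche_poisson_1d()
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small, decreases under refinement
\end{verbatim}
Note how the consistency, symmetry, and penalty terms of Problem $(N_h)$ appear verbatim in the boundary loop, and how the boundary data $g$ enters only through the symmetry and penalty contributions to the right-hand side.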
Because we only consider the linear case in what follows, we drop ``linear'' from ``linear Kirchhoff-Love shell'' in this and subsequent sections. In the following, underline and double underline ($\surfVec{\bullet}$ and $\surfTens{\bullet}$) are used to denote manifold quantities, that is, quantities that can be expressed through a linear combination of tensorial quantities lying in the tensor bundle of the manifold, with the number of underlines indicating the order of the tensor. By contrast, \fullTens{bold-faced text} denotes quantities residing in three-dimensional space. The concepts presented in this section, and those that follow, rely heavily on differential geometry and continuum mechanics posed over differentiable manifolds. For a brief discussion of the necessary differential-geometric subjects, see \ref{sec:Appendix_Diff_Geo}, and for a review of continuum mechanics, see \ref{sec:Appendix_Cont_Mech}. \begin{figure}[ht!] \includegraphics[width=\textwidth]{shell_config.pdf} \caption{An arbitrary shell domain. All positive conventions for degrees of freedom and applied loadings are depicted.} \label{fig:KL_shell_domain} \end{figure} Shell models simulate the structural response of curved, load-bearing members subject to both in-plane and out-of-plane loadings. They are idealized through a midsurface model with linearized through-thickness displacement profiles, where we use $\zeta$ to denote the thickness variable. The midsurface is chosen to be the surface midway through the thickness of the shell body. In special cases, namely, small strains and displacements and shells comprised of an isotropic material, the midsurface coincides with the neutral plane, that is, the plane that undergoes no compressive or tensile forces due to bending. A general shell model employs a displacement variable, denoted ${\bf u}$, as well as a rotational degree of freedom, denoted $\surfVec{\theta}$. The Kirchhoff-Love shell displacement field is assumed to be free of transverse shear strain. This assumption introduces a constraint between the rotational and displacement degrees of freedom, namely, $\surfVec{\theta}({\bf u}) = - \undef{\fullTens{\FFF}}_{3} \cdot \surfVec{\nabla} {\bf u}$, which appears later in our derivations. We integrate through-thickness before discretization, which introduces a $\zeta$-dependence in the expression for membrane action and a $\zeta^3$-dependence in the expression for bending action due to the zero transverse shear strain constraint imposed on the displacement variable. It is this discrepancy in thickness-dependence, in the presence of intrinsic curvature coupling, that gives rise to \textbf{\emph{membrane locking}}, a parasitic numerical phenomenon that causes little-to-no displacement for thin shells in specific configurations until sufficient mesh resolution is attained. \subsection{The Variational Formulation} Let $\Omega \subset \mathbb{R}^3$ be an immersed two-dimensional manifold with Lipschitz-continuous boundary $\Gamma = \partial \Omega$. Assume that $\Omega$ is smooth enough that the derivatives of the curvature are finite. Note that less-smooth manifolds are admissible to the methodology we present in this section; however, special care must be taken in regions without appropriate smoothness and, hence, we invoke this assumption for simplicity of exposition.
Since the Kirchhoff-Love shell accommodates both prescribed displacements and rotations as well as their energetically conjugate shears and moments on the boundary, we partition the boundary accordingly. In particular, let $\Gamma_{D_1}$ and $\Gamma_{N_1}$ be the Dirichlet-1 and Neumann-1 boundaries associated with prescribed transverse displacements and applied transverse shears, respectively. Let $\Gamma_{D_2}$ and $\Gamma_{N_2}$ be the Dirichlet-2 and Neumann-2 boundaries associated with prescribed normal rotations and applied bending moments, respectively. For a well-posed PDE, we require that $\Gamma = \overline{\Gamma_{D_\alpha} \cup \Gamma_{N_\alpha}}$, $\Gamma_{D_\alpha} \cap \Gamma_{N_\alpha} = \emptyset$, and $\Gamma_{D_\alpha} \neq \emptyset$ for $\alpha = 1,2$. Note that there are no constraints between the 1- and 2-boundaries because there is no energetic exchange between the two sets. In general, $\Gamma_{D_\alpha} \neq \emptyset$ for $\alpha = 1,2$ is not necessary, e.g., $\Gamma_{D_2}$ can be empty provided sufficient conditions are imposed on $\Gamma_{D_1}$; however, for simplicity, we make this stronger assumption. We further introduce the set $\cornerSet{} \subset \Gamma$ as the set of ``corners'', that is, the non-differentiable loci with zero Lebesgue measure, along the boundary. We then decompose this set into $\cornerSet{D} := \cornerSet{} \cap \overline{\Gamma_{D_1}}$ and $\cornerSet{N} := \cornerSet{} \cap \Gamma_{N_1}$ and note that, by construction, $\cornerSet{} = \cornerSet{D} \cup \cornerSet{N}$ and $\cornerSet{D} \cap \cornerSet{N} = \emptyset$. We denote corners as $C \in \cornerSet{}$. Given a geometric mapping ${\bf x}$ from a parametric domain to the midsurface $\Omega$, we are able to construct a \textbf{\emph{covariant}} coordinate basis through differentiation with respect to the convected coordinates. In particular, the $\alpha^{th}$ covariant basis vector is given by $\undef{\fullTens{\FFF}}_\alpha = {\bf x}_{,\alpha}$, where the comma notation refers to differentiation of the geometric mapping with respect to the $\alpha^{th}$ coordinate. The midsurface normal director $\undef{\fullTens{\FFF}}_3$ can be constructed through a normalized cross product of these in-plane vectors and, provided the geometric mapping ${\bf x}$ is non-degenerate, the resulting covariant set can be shown to form a basis of $\mathbb{R}^3$. According to differential-geometric theory, we can uniquely construct an algebraically dual set of \textbf{\emph{contravariant}} basis vectors to this set denoted $\undef{\fullTens{\FFF}}^\alpha$, which, by definition, satisfy the Kronecker relationship $\undef{\fullTens{\FFF}}_\alpha \cdot \undef{\fullTens{\FFF}}^\beta = \delta^\beta_\alpha$. Note by construction that $\undef{\fullTens{\FFF}}_3 = \undef{\fullTens{\FFF}}^3$. For a deeper discussion of the required differential-geometric tools, see \ref{sec:Appendix_Diff_Geo}. These contravariant basis vectors allow us to effectively combine the in-plane and out-of-plane behaviors via ${\bf w} = \surfVec{w} + w_3 \undef{\fullTens{\FFF}}^3$, where $\surfVec{w} = w_\alpha \undef{\fullTens{\FFF}}^\alpha$.
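These constructions are mechanical enough to automate symbolically. As a concrete illustration, the following short \texttt{sympy} sketch (our own; the cylindrical mapping, radius symbol, and variable names are hypothetical choices) builds the covariant basis, the unit normal director, and the algebraically dual contravariant basis for a cylindrical midsurface, verifies the Kronecker relationship, and assembles the in-plane projector invoked in the next paragraph.
\begin{verbatim}
import sympy as sp

# Hypothetical cylindrical midsurface x(theta1, theta2) of radius R; the
# mapping and symbol names are illustrative choices only.
th1, th2 = sp.symbols('theta1 theta2', real=True)
R = sp.Symbol('R', positive=True)
x = sp.Matrix([R*sp.cos(th1), R*sp.sin(th1), th2])

# Covariant in-plane basis vectors a_alpha = x_{,alpha}
a1, a2 = x.diff(th1), x.diff(th2)

# Unit midsurface normal director a_3 = (a_1 x a_2) / |a_1 x a_2|
a3 = a1.cross(a2)
a3 = sp.simplify(a3 / a3.norm())

# First fundamental form (surface metric) a_{alpha beta} = a_alpha . a_beta
A = sp.Matrix([[a1.dot(a1), a1.dot(a2)],
               [a2.dot(a1), a2.dot(a2)]])

# Contravariant basis via the inverse metric: a^alpha = a^{alpha beta} a_beta
Ainv = A.inv()
a1_con = sp.simplify(Ainv[0, 0]*a1 + Ainv[0, 1]*a2)
a2_con = sp.simplify(Ainv[1, 0]*a1 + Ainv[1, 1]*a2)

# Kronecker relationship a_alpha . a^beta = delta_alpha^beta
assert sp.simplify(a1.dot(a1_con)) == 1 and sp.simplify(a1.dot(a2_con)) == 0
assert sp.simplify(a2.dot(a2_con)) == 1 and sp.simplify(a2.dot(a1_con)) == 0

# In-plane projector P = I - a^3 (x) a_3 (note a^3 = a_3); it annihilates a_3
P = sp.simplify(sp.eye(3) - a3*a3.T)
assert sp.simplify(P*a3) == sp.zeros(3, 1)
\end{verbatim}
The same recipe applies to any non-degenerate mapping ${\bf x}$; only the cost of symbolic simplification changes.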
Later in this section, we will invoke the \textit{\textbf{in-plane projector}} $\surfTens{P} := \textbf{I} - \undef{\fullTens{\FFF}}^3 \otimes \undef{\fullTens{\FFF}}_3$ that, when acting on a vector, returns the in-plane part of that vector, where $\textbf{I}$ is the identity tensor. Note that $\surfTens{P}$ is symmetric and thus also satisfies the definition $\surfTens{P} = {\bf I} - \undef{\fullTens{\FFF}}_3 \otimes \undef{\fullTens{\FFF}}^3$. Finally, we present various quantities defined over manifolds with their required regularities in terms of Sobolev embeddings; since these spaces are defined over manifolds, we defer a more rigorous account of what this entails to Section~\ref{sec:apriori}. Let $\applied{\textbf{\textup{f}}} \in \left( L^2(\Omega) \right)^3$ be the applied body loading, let $\applied{\bf u} = \applied{\surfVec{u}} + \applied{u}_3 \undef{\fullTens{\FFF}}^3$ such that $(\applied{u}_1,\applied{u}_2) \in \left( H^{1/2}(\Gamma_{D_1}) \right)^2$ and $\applied{u}_3 \in H^{3/2}(\Gamma_{D_1})$ be the prescribed displacement, and let $\applied{\theta}_n \in H^{1/2}(\Gamma_{D_2})$ be the prescribed normal rotation. In general, ``hat'' notation $(\applied{\bullet})$ is used to denote a quantity that is prescribed or applied. Note that, by Sobolev embedding, $\applied{u}_3 \in C^0(\Gamma_{D_1})$. Given an applied traction $\applied{\bm{\tau}} \colon \Gamma_{N_1} \rightarrow \mathbb{R}^3$ and an applied twisting moment $\applied{B}_{nt} \colon \Gamma_{N_1} \rightarrow \mathbb{R}$, define the \textbf{\emph{ersatz traction}} via \begin{equation} \applied{\bf T} = \underbrace{ \applied{\surfVec{\tau}} - \applied{B}_{nt} \surfTens{b} \cdot \surfVec{t} }_{\displaystyle \surfVec{\applied{\textup{T}}}} + \underbrace{ \left[ \outOfPlane{\applied{\traction}} + \frac{\partial \applied{B}_{nt}}{\partial t} \right] }_{\displaystyle \applied{\textup{T}}_3} \undef{\fullTens{\FFF}}^{3}, \label{eqn:Ersatz_traction_KLS} \end{equation} where the term $\surfTens{b}$ is the second fundamental form, or \textbf{\emph{curvature tensor}} \eqref{eqn:secondFF}, associated with the manifold. The corresponding \textbf{\emph{corner forces}} are defined via \begin{equation*} \applied{\textup{S}} = \llbracket \applied{B}_{nt} \rrbracket, \end{equation*} where \begin{equation} \llbracket \applied{B}_{nt} \rrbracket = \lim_{\epsilon \rightarrow 0} \left( \applied{B}_{nt}(\textbf{x} + \epsilon \surfVec{t}) - \applied{B}_{nt}(\textbf{x} - \epsilon \surfVec{t}) \right) \label{eqn:jumpDef} \end{equation} and $\surfVec{t}$ is the positively oriented, counter-clockwise unit tangent vector to $\Gamma$. The corner forces and the ersatz traction arise from the integration-by-parts formula \begin{equation*} \begin{aligned} \int_{\Gamma_{N_1}} \applied{B}_{nt} \theta_t({\bf v}) \ d \Gamma &= \int_{\Gamma_{N_1}} \outOfPlane{v} \frac{\partial \applied{B}_{nt}}{\partial t} \ d \Gamma + \sum_{C \in \cornerSet{N}} \left. \left( \llbracket \applied{B}_{nt} \rrbracket \outOfPlane{v} \right) \right|_{C} - \int_{\Gamma_{N_1}} \left( \applied{B}_{nt} \ \surfTens{b} \cdot \surfVec{t} \right) \cdot \surfVec{v} \ d \Gamma \end{aligned} \label{eqn:Ersatz_IBP_KLS} \end{equation*} for any ${\bf v} \colon \Gamma_{N_1} \rightarrow \mathbb{R}^3$ with $\left.
v_3 \right|_{\partial \Gamma_{N_1}} = 0$, where $\theta_t({\bf v}) = - \left( \undef{\fullTens{\FFF}}_3 \cdot \surfVec{\nabla} {\bf v} \right) \cdot \surfVec{t}$ is the \textbf{\emph{twisting rotation}} and $\surfVec{t}$ is again the positively oriented unit tangent vector to $\Gamma$. In contrast to the boundary traction and twisting moment, the ersatz traction and corner forces are energetically conjugate to the boundary displacement, so they are the natural entities to use in our derivation of Nitsche's method for the Kirchhoff-Love shell through our abstract framework (see Remark~\ref{alternative_f_KLS} below). Assume that $\applied{\bf T} \in \left( L^2(\Gamma_{N_1}) \right)^3$ and $\left\{ \applied{\textup{S}}|_{C} \right\}_{C \in \cornerSet{N}} \in \mathbb{R}^{\#\cornerSet{N}}$. Finally, let $\applied{B}_{nn} \in L^2(\Gamma_{N_2})$ be the applied bending moment that is energetically conjugate to the boundary rotation. Throughout the remainder of the paper, we use superscript $S$ to denote quantities associated with the Kirchhoff-Love shell problem to differentiate them from those in the abstract framework. In order to apply the abstract results from Section~\ref{sec:Nitsche} to the Kirchhoff-Love shell, let \begin{equation*} \mathcal{V}^{S} := \left\{ {\bf v} = \surfVec{v} + v_3 \undef{\fullTens{\FFF}}^3 \ \colon \ (v_1,v_2) \in \left( H^1(\Omega) \right)^2 \hspace{5pt} \text{and} \hspace{5pt} v_3 \in H^2(\Omega) \right\} \end{equation*} and \begin{equation*} \mathcal{Q}^{S} := \left\{ ({\bf v},\mu_n) \ \colon \ (v_1,v_2) \in \left( H^{1/2}(\Gamma_{D_1}) \right)^2, \ v_3 \in H^{3/2}(\Gamma_{D_1}), \hspace{5pt} \text{and} \hspace{5pt} \mu_n \in H^{1/2}(\Gamma_{D_2}) \right\}. \end{equation*} These spaces are selected in this way to accommodate the required smoothness of a weak solution to the underlying PDE. In particular, $\mathcal{V}^{S}$ is constructed such that its members have one integrable derivative in-plane and two integrable derivatives out-of-plane. Accordingly, $\mathcal{Q}^{S}$ is the corresponding trace space that outlines the necessary smoothness for the applied displacement field and normal rotation field along the Dirichlet boundary. As such, define the trace operator $\mathcal{T}^{S} \colon \mathcal{V}^{S} \rightarrow \mathcal{Q}^{S}$ via its action on the displacement field ${\bf v} \in \mathcal{V}^{S}$, i.e., $\mathcal{T}^{S} {\bf v} = \left. \left( {\bf v}, \theta_n({\bf v}) \right) \right|_{\Gamma_D}$, where $\theta_n({\bf v}) = - \left( \undef{\fullTens{\FFF}}_3 \cdot \surfVec{\nabla} {\bf v} \right) \cdot \surfVec{n}$ is the \textbf{\emph{normal rotation}} and $\surfVec{n}$ is the outward-facing unit normal to $\Gamma$. Given $\left( \applied{\bf u}, \applied{\theta}_n \right) \in \mathcal{Q}^{S}$, define \begin{equation*} \mathcal{V}^{S}_{\applied{\bf u},\applied{\theta}_n} := \left\{ {\bf v} \in \mathcal{V}^{S} \colon \mathcal{T}^{S} {\bf v} = \left( \applied{\bf u},\applied{\theta}_n \right) \right\} \end{equation*} as the trial space of displacement fields satisfying the prescribed Dirichlet boundary conditions. Correspondingly, $\mathcal{V}^{S}_{{\bf 0},0}$ denotes the test space of displacement fields satisfying homogeneous Dirichlet boundary conditions, namely $\applied{\bf u} = {\bf 0}$ and $\applied{\theta}_n = 0$.
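Before introducing strain and stress measures, it is worth recording what the ersatz quantities reduce to in a familiar special case; the following specialization is included purely for illustration. For a flat Kirchhoff-Love plate, the curvature tensor $\surfTens{b}$ vanishes, and the ersatz traction \eqref{eqn:Ersatz_traction_KLS} collapses to
\begin{equation*}
\applied{\bf T} = \applied{\surfVec{\tau}} + \left[ \outOfPlane{\applied{\traction}} + \frac{\partial \applied{B}_{nt}}{\partial t} \right] \undef{\fullTens{\FFF}}^{3},
\end{equation*}
whose transverse component is the classical Kirchhoff effective shear, while the corner forces $\applied{\textup{S}} = \llbracket \applied{B}_{nt} \rrbracket$ reduce to the classical plate corner forces appearing, e.g., in the Nitsche-based plate formulation of \cite{harari2012embedded}. The in-plane coupling $- \applied{B}_{nt} \surfTens{b} \cdot \surfVec{t}$ is thus a genuinely shell-specific contribution driven by curvature.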
Given external loadings and boundary conditions, we introduce the corresponding strain and stress measures that serve as our proxy for the resulting displacement field. For ${\bf w} \in \mathcal{V}^{S}$, the midsurface rotation is given by the negative gradient of this displacement variable projected onto the midsurface normal director through the Kirchhoff-Love kinematical assumption \eqref{eqn:KL_constraint}, namely, $\surfVec{\theta}({\bf w}) = - \undef{\fullTens{\FFF}}_3 \cdot \surfVec{\nabla} {\bf w}$. This is readily seen by setting the transverse shear strain in Table~\ref{table:VariousStrains} to zero and solving algebraically for $\surfVec{\theta}$ in terms of ${\bf w}$. The \textbf{\emph{membrane strain}} \eqref{eqn:mem_strain} is defined as $\surfTens{\alpha}({\bf w}) := \surfTens{P} \cdot \text{Sym}\left( \surfVec{\nabla} {\bf w} \right) \cdot \surfTens{P}$, where the operator $\text{Sym}(\cdot)$ returns the symmetric part of the displacement gradient, in particular, $\text{Sym} \left( \surfVec{\nabla} \ {\bf w} \right) := \frac{1}{2}\left[ \left( \surfVec{\nabla} \ {\bf w} \right) + \left( \surfVec{\nabla} \ {\bf w} \right)^T \right]$. The \textbf{\emph{membrane stress}} \eqref{eqn:mem_stress} is defined via $\surfTens{A}({\bf w}) := \zeta \mathbb{C} \colon \surfTens{\alpha}({\bf w})$, that is, the composition of the membrane strain with the elasticity tensor. Analogously, the \textbf{\emph{bending strain}} \eqref{eqn:bend_strain} is defined as $\surfTens{\beta}({\bf w}) := - \surfTens{P} \cdot \text{Sym}\left( \undef{\fullTens{\FFF}}_3 \cdot \surfVec{\nabla} \ \surfVec{\nabla} {\bf w} \right) \cdot \surfTens{P}$ and the \textbf{\emph{bending stress}} \eqref{eqn:bend_stress} is defined as $\surfTens{B}({\bf w}) := \frac{\zeta^3}{12}\mathbb{C} \colon \surfTens{\beta}({\bf w})$. The \textbf{\emph{surface gradient}}, which we denote $\surfVec{\nabla}$, is defined in \eqref{eqn:surfGradDef}. \begin{remark} It can be shown that the magnitude of $\mathbb{C}$ is given by \begin{equation*} | \mathbb{C} |^2 = \mathbb{C}^{\alpha\beta\lambda\mu} \mathbb{C}_{\alpha\beta\lambda\mu} = \frac{3 \nu^2 - 2 \nu + 3}{\left( 1-\nu^2 \right)^2} E^2, \end{equation*} where $E$ is Young's modulus and $\nu$ is Poisson's ratio. Since $0 \le \nu \le \frac{1}{2}$ and the right-hand side attains its maximum over this range at $\nu = \frac{1}{2}$, it follows that $| \mathbb{C} |^2 \le \frac{44}{9} E^2$. \end{remark} We are interested in the following variational constrained minimization problem for the Kirchhoff-Love shell: $$ (M^{S}) \left\{ \hspace{5pt} \parbox{6.00in}{ \noindent Find ${\bf u} \in \mathcal{V}^{S}_{\applied{\bf u},\applied{\theta}_n}$ that minimizes the total energy \begin{eqnarray*} E^{S}_\text{total}({\bf u}) = E^{S}_\text{int}({\bf u}) + E^{S}_\text{ext}({\bf u}), \end{eqnarray*} where \begin{equation} E^{S}_\text{int}({\bf v}) = \underbrace{ \frac{1}{2} \int_{\Omega} \surfTens{A}({\bf v}) \colon \surfTens{\alpha}({\bf v}) \ d \Omega}_{\text{Membrane Energy}} + \underbrace{ \frac{1}{2} \int_{\Omega} \surfTens{B}({\bf v}) \colon \surfTens{\beta}({\bf v}) \ d \Omega}_{\text{Bending Energy}} \label{eqn:E_int_S} \end{equation} is the internal strain energy due to both membrane and bending effects and \begin{equation} E^{S}_\text{ext}({\bf v}) = -\int_{\Omega} \applied{\textbf{\textup{f}}} \cdot {\bf v} \ d \Omega - \int_{\Gamma_{N_1}} \applied{\bf T} \cdot {\bf v} \ d \Gamma - \sum_{C \in \cornerSet{N}} \left.
\left( \applied{\textup{S}} \outOfPlane{v} \right) \right|_{C} - \int_{\Gamma_{N_2}} \applied{B}_{nn} \theta_n({\bf v}) \ d \Gamma \label{eqn:E_ext_S} \end{equation} is the external energy due to applied loadings. } \right. $$ \noindent We define an associated bilinear form $a^{S}(\cdot,\cdot) \colon \mathcal{V}^{S} \times \mathcal{V}^{S} \rightarrow \mathbb{R}$ as twice the shell strain energy \begin{equation*} a^{S}({\bf w},{\bf v}) := \int_{\Omega} \surfTens{A}({\bf w}) \colon \surfTens{\alpha}({\bf v}) \ d \Omega + \int_{\Omega} \surfTens{B}({\bf w}) \colon \surfTens{\beta}({\bf v}) \ d \Omega \end{equation*} for all ${\bf w},{\bf v} \in \mathcal{V}^{S}$. The linear functional $f^{S} \in \left( \mathcal{V}^{S} \right)^*$ is defined via \begin{equation*} \begin{aligned} \left\langle f^{S}, {\bf v} \right\rangle = \int_{\Omega} \applied{\textbf{\textup{f}}} \cdot {\bf v} \ d \Omega + \int_{\Gamma_{N_1}} \applied{\bf T} \cdot {\bf v} \ d \Gamma + \sum_{C \in \cornerSet{N}} \left. \left( \applied{\textup{S}} \outOfPlane{v} \right) \right|_{C} + \int_{\Gamma_{N_2}} \applied{B}_{nn} \theta_n({\bf v}) \ d \Gamma \end{aligned} \end{equation*} for all ${\bf v} \in \mathcal{V}^{S}$. Therefore, the solution to Problem $(M^{S})$ is also the solution to the following variational problem: $$ (V^{S}) \left\{ \hspace{5pt} \parbox{6.00in}{ \noindent Find ${\bf u} \in \mathcal{V}^{S}_{\applied{\bf u},\applied{\theta}_n}$ such that \begin{eqnarray*} a^{S}({\bf u},\delta {\bf u}) = \left\langle f^{S}, \delta {\bf u} \right\rangle \label{eqn:Shell_weak} \end{eqnarray*} \noindent for every $\delta {\bf u} \in \mathcal{V}^{S}_{{\bf 0},0}$. } \right. $$ Note that the bilinear form $a^{S}(\cdot,\cdot)$ is symmetric and positive semi-definite, and its kernel consists of the constant and linear functions that represent the rigid-body modes of the shell. Furthermore, note that $a^{S}(\cdot,\cdot)$ is coercive on $\mathcal{V}^{S}_{{\bf 0},0}$ (i.e., the kernel of $\mathcal{T}^{S}$) with respect to the induced norm (see \cite[Theorem 4.3-4]{Ciarlet2005}). The Lax-Milgram Theorem guarantees that Problem $(V^{S})$ has a unique solution ${\bf u} \in \mathcal{V}^{S}$ that depends continuously on the external loading $f^{S} \in \left( \mathcal{V}^{S} \right)^*$ and the boundary data $\left( \applied{\bf u}, \applied{\theta}_n \right) \in \mathcal{Q}^{S}$, $\applied{\bf T} \in \left[ L^2(\Gamma_{N_1}) \right]^3$, $\left\{ \applied{\textup{S}}|_{C} \right\}_{C \in \cornerSet{N}} \in \mathbb{R}^{\#\cornerSet{N}}$, and $\applied{B}_{nn} \in L^2(\Gamma_{N_2})$. \begin{remark} Often when dealing with homogeneous boundary conditions, it is convenient to split the domain boundary into four disjoint sets, i.e., $\Gamma = \Gamma_C \cup \Gamma_{SS} \cup \Gamma_S \cup \Gamma_F$, where $\Gamma_C$ is the clamped portion, $\Gamma_{SS}$ is the simply supported portion, $\Gamma_S$ is the symmetric portion, and $\Gamma_F$ is the free portion.
Physically, these boundary segments are summarized in the following: \begin{equation*} \begin{array}{llllll} \text{(clamped)} & \hat{\bf u} = {\bf 0} ,& \hat{\theta}_n = 0 & & &\text{on} \ \Gamma_{C} := \Gamma_{D_1} \cap \Gamma_{D_2}\\ \text{(simply supported)}&\hat{\bf u} = {\bf 0} ,& & & \hat{B}_{nn} = 0 &\text{on} \ \Gamma_{SS} := \Gamma_{D_1} \cap \Gamma_{N_2}\\ \text{(symmetric)}& & \hat{\theta}_n = 0, &\hat{\bf T} = {\bf 0} & &\text{on} \ \Gamma_{S} := \Gamma_{N_1} \cap \Gamma_{D_2}\\ \text{(free)} & & & \hat{\bf T} = {\bf 0} ,& \hat{B}_{nn} = 0 &\text{on} \ \Gamma_{F} := \Gamma_{N_1} \cap \Gamma_{N_2}\\ \end{array} \end{equation*} \label{remark:shellBCs} \end{remark} \begin{remark} \label{alternative_f_KLS} The linear functional $f^{S} \in \left( \mathcal{V}^{S} \right)^*$ we employ in this section may be replaced by its more common definition \begin{equation*} \begin{aligned} \left\langle f^{S}, {\bf v} \right\rangle = \int_{\Omega} \applied{\textbf{\textup{f}}} \cdot {\bf v} \ d \Omega + \int_{\Gamma_{N_1}} \applied{\bm{\tau}} \cdot {\bf v} \ d \Gamma + \int_{\Gamma_{N_1}} \applied{B}_{nt} \theta_t({\bf v}) \ d \Gamma + \int_{\Gamma_{N_2}} \applied{B}_{nn} \theta_n({\bf v}) \ d \Gamma \end{aligned} \end{equation*} without changing the solution of the Kirchhoff-Love shell problem. This is because both linear functionals return the same result when acting on ${\bf v} \in \mathcal{V}^{S}_{{\bf 0},0}$. However, they do not return the same result when acting on arbitrary ${\bf v} \in \mathcal{V}^{S}$. In fact, it turns out that Assumption~\ref{assumption1} from our abstract framework does not hold for the above definition of $f^{S} \in \left( \mathcal{V}^{S} \right)^*$ since the transverse shearing and twisting moment are not energetically conjugate to the boundary displacement and, hence, are not the Lagrange multiplier fields associated with enforcing the displacement boundary condition. Instead, the ersatz traction and corner forces are the Lagrange multiplier fields associated with enforcing the displacement boundary condition. \end{remark} \subsection{A Generalized Green's Identity} Now that we have stated a suitable variational formulation for the Kirchhoff-Love shell problem, we present a generalized Green's identity to be used later in constructing Nitsche's method. We first have the following lemma regarding integration by parts along a manifold. \begin{lemma}[Green's Theorems for In-Plane Vector and Tensor Fields on Manifolds] \label{lemma:gen_greens_man} Let $\phi$ be a differentiable scalar field, $\fullTens{v} = \surfVec{v} + v_3 \undef{\fullTens{\FFF}}^3$ be a differentiable vector field, and $\surfTens{M}$ be a differentiable in-plane tensor field.
Then \begin{equation*} \begin{aligned} \int_{\Omega} \surfVec{\nabla} \phi \cdot \surfVec{v} \ d \Omega = \int_{\Gamma} \phi \left( \surfVec{v} \cdot \surfVec{n} \right) \ d \Gamma - \int_{\Omega} \phi \left( \surfVec{\nabla} \cdot \surfVec{v} \right) \ d \Omega \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} \int_{\Omega} \left( \surfVec{\nabla} \fullTens{v} \right) \colon \surfTens{M} \ d \Omega = \underbrace{\int_{\Gamma} \surfVec{v} \cdot \surfTens{M} \cdot \surfVec{n} \ d \Gamma - \int_{\Omega} \surfVec{v} \cdot \left( \surfVec{\nabla} \cdot \surfTens{M} \right) \ d \Omega}_{\text{in-plane}} - \underbrace{\int_{\Omega} \outOfPlane{v} \left( \surfTens{M} \colon \surfTens{\undef{\SFF}} \right) \ d \Omega}_{\text{out-of-plane}}. \end{aligned} \end{equation*} \begin{proof} By the product rule, we can write \begin{equation*} \int_{\Omega} \surfVec{\nabla} \cdot \left( \phi \ \surfVec{v} \right) \ d \Omega = \int_{\Omega} \surfVec{\nabla} \phi \cdot \surfVec{v} \ d \Omega + \int_{\Omega} \phi \left( \surfVec{\nabla} \cdot \surfVec{v} \right) \ d \Omega \end{equation*} and by the divergence theorem, it follows that \begin{equation*} \int_{\Omega} \surfVec{\nabla} \cdot \left( \phi \ \surfVec{v} \right) \ d \Omega = \int_{\Gamma} \phi \left( \surfVec{v} \cdot \surfVec{n} \right) \ d \Gamma. \end{equation*} Combining these two expressions yields the first result. To establish the second result, we begin by expressing the vector field as $\fullTens{v} = \surfVec{v} + v_3 \undef{\fullTens{\FFF}}^3$ and observe that \begin{equation*} \int_{\Omega} \left( \surfVec{\nabla} \fullTens{v} \right) \colon \surfTens{M} \ d \Omega = \int_{\Omega} \left( \surfVec{\nabla} \ \surfVec{v} \right) \colon \surfTens{M} \ d \Omega + \int_{\Omega} \left( \surfVec{\nabla} \outOfPlane{v} \otimes \undef{\fullTens{\FFF}}^3 \right) \colon \surfTens{M} \ d \Omega + \int_{\Omega} v_3 \left( \surfVec{\nabla} \undef{\fullTens{\FFF}}^3 \right) \colon \surfTens{M} \ d \Omega. \end{equation*} The second-to-last integral in the above expression vanishes by the orthogonality between $\undef{\fullTens{\FFF}}^{3}$ and $(\undef{\fullTens{\FFF}}^{1},\undef{\fullTens{\FFF}}^{2})$, and the last integral can be rewritten as \begin{equation*} \int_{\Omega} v_3 \left( \surfVec{\nabla} \undef{\fullTens{\FFF}}^3 \right) \colon \surfTens{M} \ d \Omega = - \int_{\Omega} \outOfPlane{v} \left( \surfTens{M} \colon \surfTens{\undef{\SFF}} \right) \ d \Omega \end{equation*} by the relationship $\surfVec{\nabla} \undef{\fullTens{\FFF}}^{3} = - \surfTens{\undef{\SFF}}$, ultimately arriving at the out-of-plane expression in the result of the Green's identity.
By the product rule, it follows that \begin{equation*} \int_{\Omega} \surfVec{\nabla} \cdot \left( \surfVec{v} \cdot \surfTens{M} \right) \ d \Omega = \int_{\Omega} \left( \surfVec{\nabla} \ \surfVec{v} \right) \colon \surfTens{M} \ d \Omega + \int_{\Omega} \surfVec{v} \cdot \left( \surfVec{\nabla} \cdot \surfTens{M} \right) \ d \Omega, \end{equation*} and by the divergence theorem, it follows that \begin{equation*} \int_{\Omega} \surfVec{\nabla} \cdot \left( \surfVec{v} \cdot \surfTens{M} \right) \ d \Omega = \int_{\Gamma} \surfVec{v} \cdot \surfTens{M} \cdot \surfVec{n} \ d \Gamma. \end{equation*} Combining these two expressions yields the in-plane result of the Green's identity. \end{proof} \end{lemma} With the ability to perform vector integration by parts along manifolds, we are ready to state and prove our generalized Green's identity for the Kirchhoff-Love shell. Let \begin{equation} \tilde{\mathcal{V}}^{S} := \left\{ {\bf v} = \surfVec{v} + v_3 \undef{\fullTens{\FFF}}^3 \ \colon \ (v_1,v_2) \in \left[ H^2(\Omega) \right]^2 \hspace{5pt} \text{and} \hspace{5pt} v_3 \in H^4(\Omega) \right\} \label{eqn:V_tilde_KLS} \end{equation} and note that $\tilde{\mathcal{V}}^{S} \subset \mathcal{V}^{S}$ is indeed a subspace by Sobolev embedding \cite{EvansPDEs}. Recall from Remark~\ref{remark:V_tilde} that $\tilde{\mathcal{V}}^{S}$ is more regular than what is ultimately required for discretization, a point to be addressed in the next subsection. Then the following generalized Green's identity holds for the Kirchhoff-Love shell: \begin{lemma}[Generalized Green's Identity for the Kirchhoff-Love Shell] For ${\bf w} \in \tilde{\mathcal{V}}^{S}$ and ${\bf v} \in \mathcal{V}^{S}$, the following Green's identity holds: \begin{equation} \begin{aligned} a^{S}&({\bf w},{\bf v}) = \\ &\clipbox{-2 0 495 0}{$\underbrace{ \int_{\Omega} \surfVec{v} \cdot \left[ \surfVec{\nabla} \cdot \left( \surfTens{b} \cdot \surfTens{B}({\bf w}) \right) + \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf w}) \right) \cdot \surfTens{b} - \surfVec{\nabla} \cdot \surfTens{A}({\bf w}) \right] \ d \Omega + \int_{\Omega} \outOfPlane{v} \left[ \surfTens{B}({\bf w}) \colon \surfTens{c} - \surfVec{\nabla} \cdot \left( \surfTens{P} \cdot \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf w}) \right) \right) - \surfTens{A}({\bf w}) \colon \surfTens{b} \right] \ d \Omega \hspace{50em}}$} \\ &\phantom{=} \clipbox{10 0 -2 0}{$\underbrace{ \hspace{1em} + \int_{\Gamma_{N_2}} B_{nn}({\bf w}) \theta_n({\bf v}) \ d \Gamma + \sum_{C \in \cornerSet{N}} \left. \left( \llbracket B_{nt}({\bf w}) \rrbracket \outOfPlane{v} \right) \right|_{C} + \int_{\Gamma_{N_1}} \fullTens{v} \cdot {\bf T}({\bf w}) \ d \Gamma }_{\displaystyle \langle \mathcal{L}^{S} {\bf w}, {\bf v} \rangle}$}\\ &\phantom{=} + \underbrace{ \int_{\Gamma_{D_2}} B_{nn}({\bf w}) \theta_n(\fullTens{v}) \ d \Gamma + \sum_{C \in \cornerSet{D}} \left.
\left( \llbracket B_{nt}({\bf w}) \rrbracket \outOfPlane{v} \right) \right|_{C} + \int_{\Gamma_{D_1}} \fullTens{v} \cdot {\bf T}({\bf w}) \ d \Gamma }_{ \displaystyle \langle \mathcal{B}^{S} {\bf w}, \mathcal{T}^{S} {\bf v} \rangle}, \end{aligned} \label{eqn:Greens_ID_KLS} \end{equation} where $\surfTens{c} = \surfTens{b} \cdot \surfTens{b}$ is the third fundamental form of $\Omega$, $B_{nn}({\bf w}) = \surfVec{n} \cdot \surfTens{B}({\bf w})\cdot \surfVec{n}$ is the \textbf{bending moment}, $B_{nt}({\bf w}) = \surfVec{n} \cdot \surfTens{B}({\bf w})\cdot \surfVec{t}$ is the \textbf{twisting moment}, and \begin{equation} {\bf T}({\bf w}) = \underbrace{ \overbrace{ \surfTens{A}({\bf w}) \cdot \surfVec{n} }^{\surfVec{\textup{T}}^{(A)}({\bf w})} \overbrace{ - \surfTens{b} \cdot \left( \surfTens{B}({\bf w}) \cdot \surfVec{n} + \surfVec{t} B_{nt}({\bf w}) \right) }^{\surfVec{\textup{T}}^{(B)}({\bf w})} }_{\surfVec{\textup{T}}({\bf w})} + \underbrace{ \left[ \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf w}) \right) \cdot \surfVec{n} + \frac{\partial B_{nt}({\bf w})}{\partial t} \right] }_{\outOfPlane{\textup{T}}({\bf w})} \undef{\fullTens{\FFF}}^{3} \label{eqn:KL_Shell_ersatz} \end{equation} is the \textbf{ersatz force}. Moreover, the solution ${\bf u}$ of Problem $(V^{S})$ satisfies $\mathcal{L}^{S} {\bf u} = f^{S}$ provided the problem parameters are smooth enough that ${\bf u} \in \tilde{\mathcal{V}}^{S}$. \begin{proof} The Green's identity follows immediately by one application of reverse integration by parts on the membrane contribution and two applications of reverse integration by parts on the bending contribution through the results of Lemma~\ref{lemma:gen_greens_man}. Beginning with the membrane portion of the variational form, we have \begin{equation*} \begin{aligned} \int_{\Omega} \surfTens{A}({\bf w}) \colon \surfTens{\alpha}({\bf v}) \ d \Omega &= \int_{\Omega} \surfTens{A}({\bf w}) \colon \text{Sym} \left( \surfVec{\nabla} {\bf v} \right) \ d \Omega\\ &= \int_{\Gamma} \surfVec{v} \cdot \surfTens{A}({\bf w}) \cdot \surfVec{n} \ d \Gamma - \int_{\Omega} \surfVec{v} \cdot \left( \surfVec{\nabla} \cdot \surfTens{A}({\bf w}) \right) \ d \Omega - \int_{\Omega} v_3 \left( \surfTens{A}({\bf w}) \colon \surfTens{\undef{\SFF}} \right) \ d \Omega. \end{aligned} \end{equation*} For the bending portion of the variational form, we begin by employing the decomposition for the bending strain found in Table~\ref{table:VariousStrains}, in particular, \begin{equation*} \int_{\Omega} \surfTens{B}({\bf w}) \colon \surfTens{\beta}({\bf v}) \ d \Omega = - \int_{\Omega} \left( \surfTens{b} \cdot \surfTens{B}({\bf w}) \right) \colon \text{Sym} \left( \surfVec{\nabla} {\bf v} \right) \ d \Omega + \int_{\Omega} \surfTens{B}({\bf w}) \colon \text{Sym} \left( \surfVec{\nabla} \ \surfVec{\theta}({\bf v}) \right) \ d \Omega. \end{equation*} We handle each of these integrals individually.
Beginning with the first, we apply the results of Lemma~\ref{lemma:gen_greens_man} once to obtain \begin{equation*} \int_{\Omega} \left( \surfTens{b} \cdot \surfTens{B}({\bf w}) \right) \colon \text{Sym} \left( \surfVec{\nabla} {\bf v} \right) \ d \Omega = \int_{\Gamma} \surfVec{v} \cdot \left( \surfTens{b} \cdot \surfTens{B}({\bf w}) \right) \cdot \surfVec{n} \ d \Gamma - \int_{\Omega} \surfVec{v} \cdot \left[ \surfVec{\nabla} \cdot \left( \surfTens{b} \cdot \surfTens{B}({\bf w}) \right) \right] \ d \Omega - \int_{\Omega} v_3 \left( \surfTens{B}({\bf w}) \colon \surfTens{c} \right) \ d \Omega. \end{equation*} The second integral proceeds as follows: \begin{equation*} \begin{aligned} \int_{\Omega} \surfTens{B}({\bf w}) \colon \text{Sym} \left( \surfVec{\nabla} \ \surfVec{\theta}({\bf v}) \right) \ d \Omega &= \int_{\Gamma} B_{nn}({\bf w}) \theta_n({\bf v}) \ d \Gamma + \int_{\Gamma} B_{nt}({\bf w}) \theta_t({\bf v}) \ d \Gamma + \int_{\Omega} \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf w}) \right) \cdot \left( \undef{\fullTens{\FFF}}_3 \cdot \surfVec{\nabla} {\bf v} \right) \ d \Omega\\ &= \int_{\Gamma} B_{nn}({\bf w}) \theta_n({\bf v}) \ d \Gamma + \int_{\Gamma} B_{nt}({\bf w}) \theta_t({\bf v}) \ d \Gamma + \int_{\Omega} \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf w}) \right) \cdot \surfVec{\nabla} v_3 \ d \Omega \\ &\phantom{=} - \int_{\Omega} \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf w}) \right) \cdot \left( {\bf v} \cdot \surfVec{\nabla} \undef{\fullTens{\FFF}}_3 \right) \ d \Omega\\ &= \int_{\Gamma} B_{nn}({\bf w}) \theta_n({\bf v}) \ d \Gamma + \int_{\Gamma} v_3 \frac{\partial B_{nt}({\bf w})}{\partial t} \ d \Gamma + \sum_{C \in \cornerSet{}} \left. \left( \llbracket B_{nt}({\bf w}) \rrbracket \outOfPlane{v} \right) \right|_{C} - \int_{\Gamma} B_{nt}({\bf w}) \left( \surfVec{t} \cdot \surfTens{b} \cdot \surfVec{v} \right) \ d \Gamma\\ &\phantom{=} + \int_{\Omega} \left( \surfTens{P} \cdot \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf w}) \right) \right) \cdot \surfVec{\nabla} v_3 \ d \Omega + \int_{\Omega} \surfVec{v} \cdot \left( \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf w}) \right) \cdot \surfTens{b} \right) \ d \Omega \\ &= \int_{\Gamma} B_{nn}({\bf w}) \theta_n({\bf v}) \ d \Gamma + \int_{\Gamma} v_3 \frac{\partial B_{nt}({\bf w})}{\partial t} \ d \Gamma + \sum_{C \in \cornerSet{}} \left.
\left( \llbracket B_{nt}({\bf w}) \rrbracket \outOfPlane{v} \right) \right|_{C} - \int_{\Gamma} B_{nt}({\bf w}) \left( \surfVec{t} \cdot \surfTens{b} \cdot \surfVec{v} \right) \ d \Gamma\\ &\phantom{=} + \int_{\Gamma} v_3 \left[ \surfVec{n} \cdot \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf w}) \right) \right] \ d \Gamma - \int_{\Omega} v_3 \left[ \surfVec{\nabla} \cdot \left( \surfTens{P} \cdot \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf w}) \right) \right) \right] \ d \Omega\\ &\phantom{=} + \int_{\Omega} \surfVec{v} \cdot \left( \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf w}) \right) \cdot \surfTens{b} \right) \ d \Omega \end{aligned} \end{equation*} for ${\bf w} \in \tilde{\mathcal{V}}^{S}$ and ${\bf v} \in \mathcal{V}^{S}$. Note that $\surfTens{P}$ arises in the above relations to make explicit that the tensor contraction between $\surfVec{\nabla} \cdot \surfTens{B}({\bf w})$ and ${\bf v} \cdot \surfVec{\nabla} \undef{\fullTens{\FFF}}_3$ only contains in-plane quantities. In general, $\surfVec{\nabla} \cdot \surfTens{B}({\bf w})$ contains both in-plane and out-of-plane components and, in order to apply Lemma~\ref{lemma:gen_greens_man} correctly, $\surfTens{P}$ is needed. Combining these relationships for the membrane and bending contributions, utilizing the definition of the ersatz forces \eqref{eqn:KL_Shell_ersatz}, and splitting the boundary along the 1- and 2-portions yields the presented Green's identity for the linearized Kirchhoff-Love shell. Note that all integrals present in this relationship are well defined since ${\bf T}({\bf w}) \in \left( L^2(\Gamma_{D_1}) \right)^3$ and $B_{nn}({\bf w}) \in L^2(\Gamma_{D_2})$ by the Trace theorem for Sobolev spaces. Now suppose that the problem parameters are sufficiently smooth such that ${\bf u} \in \tilde{\mathcal{V}}^{S}$.
We can then write \begin{equation} \begin{aligned} 0 &= \left\langle f^{S}, \delta {\bf u} \right\rangle - a^{S}({\bf u}, \delta {\bf u}) \\ &= \int_{\Omega} \delta \surfVec{u} \cdot \left( \applied{\surfVec{\textup{f}}} - \surfVec{\nabla} \cdot \left( \surfTens{b} \cdot \surfTens{B}({\bf u}) \right) - \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf u}) \right) \cdot \surfTens{b} + \surfVec{\nabla} \cdot \surfTens{A}({\bf u}) \right) \ d \Omega + \int_{\Gamma_{N_1}} \delta \surfVec{u} \cdot \left( \applied{\surfVec{\textup{T}}} - \surfVec{\textup{T}}({\bf u}) \right) \ d \Gamma \\ &\phantom{=} + \int_{\Omega} \delta u_3 \left( \applied{\textup{f}}_3 - \surfTens{B}({\bf u}) \colon \surfTens{c} + \surfVec{\nabla} \cdot \left( \surfTens{P} \cdot \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf u}) \right) \right) + \surfTens{A}({\bf u}) \colon \surfTens{b} \right) \ d \Omega + \int_{\Gamma_{N_1}} \delta u_3 \left( \applied{\textup{T}}_3 - \textup{T}_3({\bf u}) \right) \ d \Gamma\\ &\phantom{=} + \int_{\Gamma_{N_2}} \left( \applied{B}_{nn} - B_{nn}({\bf u}) \right) \theta_n(\delta {\bf u}) \ d \Gamma + \sum_{C \in \cornerSet{N}} \left( \left( \applied{\textup{S}} - \llbracket B_{nt}({\bf u}) \rrbracket \right) \delta u_3 \right) \Big|_{C} \label{eq:g_inv_1} \end{aligned} \end{equation} for all $\delta {\bf u} \in \mathcal{V}^{S}_{{\bf 0},0}$ as a consequence of the generalized Green's identity. Since $\left( C^\infty_0 (\Omega) \right)^3 \subset \mathcal{V}^{S}_{{\bf 0},0}$, \begin{equation*} \begin{aligned} 0 &= \int_{\Omega} \delta \surfVec{u} \cdot \left( \applied{\surfVec{\textup{f}}} - \surfVec{\nabla} \cdot \left( \surfTens{b} \cdot \surfTens{B}({\bf u}) \right) - \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf u}) \right) \cdot \surfTens{b} + \surfVec{\nabla} \cdot \surfTens{A}({\bf u}) \right) \ d \Omega\\ &\phantom{=} + \int_{\Omega} \delta u_3 \left( \applied{\textup{f}}_3 - \surfTens{B}({\bf u}) \colon \surfTens{c} + \surfVec{\nabla} \cdot \left( \surfTens{P} \cdot \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf u}) \right) \right) + \surfTens{A}({\bf u}) \colon \surfTens{b} \right) \ d \Omega \end{aligned} \end{equation*} for all infinitely smooth, compactly supported test functions $\delta {\bf u} \in \left( C^\infty_0 (\Omega) \right)^3$.
Since $\left( C^\infty_0 (\Omega) \right)^3$ is dense in $\left( L^2(\Omega) \right)^3$, $\applied{\bf f} \in \left( L^2(\Omega) \right)^3$, $\left[ \surfVec{\nabla} \cdot \left( \surfTens{b} \cdot \surfTens{B}({\bf u}) \right) + \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf u}) \right) \cdot \surfTens{b} - \surfVec{\nabla} \cdot \surfTens{A}({\bf u}) \right] \in \left( L^2(\Omega) \right)^3$, and $\surfTens{B}({\bf u}) \colon \surfTens{c} - \surfVec{\nabla} \cdot \left( \surfTens{P} \cdot \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf u}) \right) \right) - \surfTens{A}({\bf u}) \colon \surfTens{b} \in L^2(\Omega)$, it follows that \begin{equation} \applied{\surfVec{\textup{f}}} = \surfTens{P} \cdot \left[ \surfVec{\nabla} \cdot \left( \surfTens{b} \cdot \surfTens{B}({\bf u}) \right) + \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf u}) \right) \cdot \surfTens{b} - \surfVec{\nabla} \cdot \surfTens{A}({\bf u}) \right] \label{eq:g_inv_2} \end{equation} and \begin{equation} \applied{\textup{f}}_3 = \surfTens{B}({\bf u}) \colon \surfTens{c} - \surfVec{\nabla} \cdot \left( \surfTens{P} \cdot \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf u}) \right) \right) - \surfTens{A}({\bf u}) \colon \surfTens{b} \label{eq:g_inv_3} \end{equation} almost everywhere in $\Omega$. Inserting \eqref{eq:g_inv_2} and \eqref{eq:g_inv_3} into \eqref{eq:g_inv_1} in turn yields \begin{equation} \begin{aligned} 0 &= \int_{\Gamma_{N_1}} \delta {\bf u} \cdot \left( \applied{\bf T} - {\bf T}({\bf u}) \right) \ d \Gamma + \int_{\Gamma_{N_2}} \left( \applied{B}_{nn} - B_{nn}({\bf u}) \right) \theta_n(\delta {\bf u}) \ d \Gamma + \sum_{C \in \cornerSet{N}} \left( \left( \applied{\textup{S}} - \llbracket B_{nt}({\bf u}) \rrbracket \right) \delta u_3 \right) \Big|_{C} \label{eq:g_inv_4} \end{aligned} \end{equation} for all $\delta {\bf u} \in \mathcal{V}^{S}_{{\bf 0},0}$. To proceed, let \begin{equation*} \mathcal{Q}^{S}_1 := \left\{ {\bf q} \in \left( L^{2}\left(\Gamma_{N_1}\right) \right)^3: \left( \mathscr{E}_1^{S} q_1, \mathscr{E}_1^{S} q_2 \right) \in \left(H^{1/2}(\Gamma)\right)^2, \mathscr{E}_1^{S} q_3 \in H^{3/2}(\Gamma), \textup{ and } q_3 |_{\chi_N} = 0 \right\} \end{equation*} where $\mathscr{E}_1^{S} : L^2(\Gamma_{N_1}) \rightarrow L^2(\Gamma)$ is an extension-by-zero operator. By the surjectivity of the trace operator, we can define a linear and bounded lifting operator $\mathscr{L}_1^{S} : \mathcal{Q}^{S}_1 \rightarrow \mathcal{V}^{S}_{{\bf 0},0}$ such that $\mathscr{L}_1^{S} {\bf q} |_{\Gamma} = \left(\mathscr{E}_1^{S} q_i\right) \undef{\fullTens{\FFF}}^i$ and $\theta_n(\mathscr{L}_1^{S} {\bf q})|_{\Gamma} = 0$ for all ${\bf q} \in \mathcal{Q}^{S}_1$. Then, for ${\bf q} \in \mathcal{Q}^{S}_1$, we can choose $\delta {\bf u} = \mathscr{L}_1^{S} {\bf q}$ in \eqref{eq:g_inv_4}, yielding \begin{equation*} \begin{aligned} 0 &= \int_{\Gamma_{N_1}} {\bf q} \cdot \left( \applied{\bf T} - {\bf T}({\bf u}) \right) \ d \Gamma.
\end{aligned} \end{equation*} Since $\mathcal{Q}^{S}_1$ is dense in $\left( L^{2}\left(\Gamma_{N_1}\right) \right)^3$, $\applied{\bf T} \in \left( L^{2}\left(\Gamma_{N_1}\right) \right)^3$, and ${\bf T}({\bf u}) \in \left( L^{2}\left(\Gamma_{N_1}\right) \right)^3$, it follows that \begin{equation} \applied{\bf T} = {\bf T}({\bf u}) \label{eq:g_inv_5} \end{equation} almost everywhere on $\Gamma_{N_1}$. Next, let \begin{equation*} \mathcal{Q}^{S}_2 := \left\{ q \in L^2\left(\Gamma_{N_2}\right): \mathscr{E}_2^{S} q \in H^{1/2}(\Gamma) \right\} \end{equation*} where $\mathscr{E}_2^{S} : L^2\left(\Gamma_{N_2}\right) \rightarrow L^2(\Gamma)$ is an extension-by-zero operator. By the surjectivity of the trace operator, we can define a linear and bounded lifting operator $\mathscr{L}_2^{S} : \mathcal{Q}^{S}_2 \rightarrow \mathcal{V}^{S}_{{\bf 0},0}$ such that $\mathscr{L}_2^{S} q |_{\Gamma} = {\bf 0}$ and $\theta_n(\mathscr{L}_2^{S} q)|_{\Gamma} = \mathscr{E}_2^{S} q$ for all $q \in \mathcal{Q}^{S}_2$. Then, for $q \in \mathcal{Q}^{S}_2$, we can choose $\delta {\bf u} = \mathscr{L}_2^{S} q$ in \eqref{eq:g_inv_4}, yielding \begin{equation*} \begin{aligned} 0 &= \int_{\Gamma_{N_2}} \left( \applied{B}_{nn} - B_{nn}({\bf u}) \right) q \ d \Gamma. \end{aligned} \end{equation*} Since $\mathcal{Q}^{S}_2$ is dense in $L^{2}\left(\Gamma_{N_2}\right)$, $\applied{B}_{nn} \in L^2\left(\Gamma_{N_2}\right)$, and $B_{nn}({\bf u}) \in L^2\left(\Gamma_{N_2}\right)$, it follows that \begin{equation} \applied{B}_{nn} = B_{nn}({\bf u}) \label{eq:g_inv_6} \end{equation} almost everywhere on $\Gamma_{N_2}$. Finally, inserting \eqref{eq:g_inv_5} and \eqref{eq:g_inv_6} into \eqref{eq:g_inv_4} yields \begin{equation*} \begin{aligned} 0 &= \sum_{C \in \cornerSet{N}} \left( \left( \applied{\textup{S}} - \llbracket B_{nt}({\bf u}) \rrbracket \right) \delta u_3 \right) \Big|_{C} \end{aligned} \end{equation*} for all $\delta {\bf u} \in \mathcal{V}^{S}_{{\bf 0},0}$. For each $C \in \cornerSet{N}$, there exists a $\delta {\bf u} \in \mathcal{V}^{S}_{{\bf 0},0}$ such that $\delta u_3|_C = 1$ and $\delta u_3|_{C'} = 0$ for $C' \in \cornerSet{N}$ such that $C' \neq C$. It follows that \begin{equation} \applied{\textup{S}} = \llbracket B_{nt}({\bf u}) \rrbracket \label{eq:g_inv_7} \end{equation} on $\cornerSet{N}$. Combining \eqref{eq:g_inv_2}, \eqref{eq:g_inv_3}, \eqref{eq:g_inv_5}, \eqref{eq:g_inv_6}, and \eqref{eq:g_inv_7} yields the desired result that $\mathcal{L}^{S} {\bf u} = f^{S}$.
\end{proof} \label{lemma:Greens_ID_KLS} \end{lemma} \begin{remark} The Euler-Lagrange equations of Problem $(V^{S})$ give rise to the following strong formulation: $$ (S^{S}) \left\{ \hspace{5pt} \parbox{6.00in}{ \noindent \textup{Find ${\bf u} \colon \overline{\Omega} \rightarrow \mathbb R^3$ such that:} \begin{equation*} \begin{aligned} \begin{array}{rll} \surfTens{P} \cdot \left[ \surfVec{\nabla} \cdot \left( \surfTens{b} \cdot \surfTens{B}({\bf u}) \right) + \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf u}) \right) \cdot \surfTens{b} - \surfVec{\nabla} \cdot \surfTens{A}({\bf u}) \right] &= \surfVec{\applied{\textup{f}}} \hspace{10pt} &\text{in} \ \Omega\\ \surfTens{B}({\bf u}) \colon \surfTens{c} - \surfVec{\nabla} \cdot \left( \surfTens{P} \cdot \left( \surfVec{\nabla} \cdot \surfTens{B}({\bf u}) \right) \right) - \surfTens{A}({\bf u}) \colon \surfTens{b} & = \applied{\textup{f}}_3 \hspace{10pt} &\text{in} \ \Omega\\ {\bf u} &= \applied{\bf u} \hspace{10pt} &\text{on} \ \Gamma_{D_1}\\ \theta_n({\bf u}) &= \applied{\theta}_n \hspace{10pt} &\text{on} \ \Gamma_{D_2}\\ {\bf T}({\bf u}) &= \applied{\bf T} \hspace{10pt} &\text{on} \ \Gamma_{N_1}\\ B_{nn}({\bf u}) &= \applied{B}_{nn} \hspace{10pt} &\text{on} \ \Gamma_{N_2}\\ \llbracket B_{nt}({\bf u}) \rrbracket &= \applied{\textup{S}} &\text{on} \ \chi_N. \end{array} \end{aligned} \label{eqn:KLS_Strong} \end{equation*} } \right. $$ \noindent This result follows immediately from the relationship $\mathcal{L}^{S} {\bf u} = f^{S}$ that was proved in Lemma~\ref{lemma:Greens_ID_KLS}. \end{remark} \begin{remark} \label{rem:IncorrectErsatz} The Euler-Lagrange equations presented above differ from those commonly presented in the literature, for example, from those presented in \cite[p.155]{Ciarlet2005} and in \cite[p.156]{Koiter1973foundations}. In those references, the in-plane bending contribution to the ersatz force is reported to be (in our notation) ``$-2 \surfTens{b} \cdot \surfTens{B}({\bf w}) \cdot \surfVec{n}$'', which does not agree with our derived forces in \eqref{eqn:KL_Shell_ersatz}, i.e., $\surfVec{\textup{T}}^{(B)}({\bf w})$. However, we believe that the ones presented here are correct for several reasons. First of all, the Euler-Lagrange equations presented here derive directly from the Green's identity presented in Lemma \ref{lemma:Greens_ID_KLS}. Furthermore, we later use the Euler-Lagrange equations presented here to derive the required applied forces, tractions, and bending moments for a set of manufactured solutions. These manufactured solutions are then employed to numerically confirm convergence rates for our proposed Nitsche formulation in conjunction with an isogeometric Kirchhoff-Love shell discretization. By contrast, when using the equations in \cite{Ciarlet2005} to derive applied forces, tractions, and bending moments, we do not see convergence in the corresponding numerical results to the manufactured solutions. Although the origin of the erroneous term is unclear, we have traced several references back to Koiter's early work \cite[(3.10)]{Koiter1970foundation}, which does not include the full derivation. Later work by Koiter and his student, van der Heijden, \cite[p.20]{van1976modified} states that these incorrect boundary terms arise ``after fairly lengthy algebra'' and cites a paper listed in the references section as ``to be published''.
As such, we have been unable to trace exactly where the algebra leading to the incorrect result went awry. \end{remark} \begin{remark} The decomposition of the in-plane ersatz force into membrane and bending contributions, i.e., $\surfVec{\textup{T}}^{(A)}({\bf u})$ and $\surfVec{\textup{T}}^{(B)}({\bf u})$, respectively, is done for later convenience in order to establish trace inequality and penalty constants that are independent of thickness. \end{remark} \subsection{Generalized Trace and Cauchy-Schwarz Inequalities} With a Green's identity in place, we are ready to provide generalized trace and Cauchy-Schwarz inequalities satisfying Assumption~\ref{assumption2}, the final pieces required before presenting Nitsche's method for the Kirchhoff-Love shell. We establish a mesh $\mathcal{K}$ of non-overlapping (mapped) polygons, referred to henceforth as elements, associated with $\Omega$ such that $\Omega = \text{int}(\overline{\cup_{K \in \mathcal{K}} K})$. Next, we assume that the approximation space $\mathcal{V}^{S}_h$ consists of (at least) $C^1$-continuous piecewise polynomial or rational approximations over the mesh $\mathcal{K}$. For each element $K \in \mathcal{K}$, we associate an element size $h_K = \text{diam}(K)$, and we associate with the entire mesh $\mathcal{K}$ a mesh size $h = \max_{K \in \mathcal{K}} h_K$. We collect the boundary edges into an edge mesh $\mathcal{E}$. In the case of the Kirchhoff-Love shell, we must construct two additional edge meshes, $\mathcal{E}_{D_1}$ and $\mathcal{E}_{D_2}$. We associate the members of $\mathcal{E}_{D_1}$ with elements whose edges belong to $\Gamma_{D_1}$ and likewise for members of $\mathcal{E}_{D_2}$, i.e., for $\alpha = 1,2$, \begin{equation*} \mathcal{E}_{D_\alpha} = \left\{ E \in \mathcal{E} \colon E \subset \Gamma_{D_\alpha} \right\}. \end{equation*} To ensure that each edge in $\mathcal{E}$ belongs to either the Neumann or Dirichlet boundaries, we assume that $\Gamma_{D_\alpha} = \text{int}(\overline{\cup_{E \in \mathcal{E}_{D_\alpha}} E})$ for $\alpha = 1,2$. We associate an edge size $h_E = h_K$ with each edge $E \in \mathcal{E}$, where $K \in \mathcal{K}$ is the element of which $E$ is an edge. This is not the only size we can associate with the edge, but it is the simplest. For anisotropic meshes, other prescriptions may be more appropriate (see, e.g., \cite{bazilevs2007weak}). Note that when it is necessary to differentiate between edges in $\mathcal{E}_{D_1}$ and $\mathcal{E}_{D_2}$, we will introduce subscripts on the edge variable, e.g., $E_1 \in \mathcal{E}_{D_1}$. Lastly, since each $C \in \cornerSet{}$ is associated with an element $K \in \mathcal{K}$, we define $h_C = h_K$.
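To fix ideas, the following Python sketch illustrates this bookkeeping; the arrays and the two-element quadrilateral mesh are hypothetical stand-ins, not data from any example in this paper. It computes the element sizes $h_K = \text{diam}(K)$, the global mesh size $h$, and the inherited edge sizes $h_E = h_K$.
\begin{verbatim}
import numpy as np

def diameter(pts):
    # diam(K): the largest pairwise distance between element vertices.
    d = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((d ** 2).sum(axis=-1)).max()

# Hypothetical two-element quadrilateral mesh on [0,2] x [0,1]:
verts = np.array([[0., 0.], [1., 0.], [2., 0.],
                  [0., 1.], [1., 1.], [2., 1.]])
elems = {0: [0, 1, 4, 3], 1: [1, 2, 5, 4]}    # element -> vertex indices
edges = {("bottom", 0): 0, ("bottom", 1): 1}  # boundary edge -> parent element

h_K = {K: diameter(verts[ids]) for K, ids in elems.items()}
h = max(h_K.values())                         # global mesh size h
h_E = {E: h_K[K] for E, K in edges.items()}   # edge size h_E = h_K
\end{verbatim}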
With these definitions in place, we have the following lemma: \begin{lemma}[Trace Inequalities] There exist five positive, dimensionless constants $\cTrace{,1}^{S}, \cTrace{,2}^{S}, \cTrace{,3}^{S}, \cTrace{,4}^{S}, \cTrace{,5}^{S} > 0$ such that \begin{equation} \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}^3}{\cTrace{,1}^{S} \zeta^3 | \mathbb{C} |} \left( \textup{T}_3({\bf v}_h) \right)^2 \ d \Gamma \le \frac{1}{5} a^{S}({\bf v}_h,{\bf v}_h) \label{eqn:TI_KLS_1} \end{equation} \begin{equation} \sum_{C \in \cornerSet{D}} \frac{h_C^2}{\cTrace{,2}^{S} \zeta^3 | \mathbb{C} |} \llbracket B_{nt}({\bf v}_h) \rrbracket^2 \Big|_C \le \frac{1}{5} a^{S}({\bf v}_h,{\bf v}_h) \label{eqn:TI_KLS_2} \end{equation} \begin{equation} \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} \frac{h_{E_2}}{\cTrace{,3}^{S} \zeta^3 | \mathbb{C} |} \left( B_{nn}({\bf v}_h) \right)^2 \ d \Gamma \le \frac{1}{5} a^{S}({\bf v}_h,{\bf v}_h) \label{eqn:TI_KLS_3} \end{equation} \begin{equation} \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}}{\cTrace{,4}^{S} \zeta^3 | \mathbb{C} |} \left| \surfVec{\textup{T}}^{(B)}({\bf v}_h) \right|^2 \ d \Gamma \le \frac{1}{5} a^{S}({\bf v}_h,{\bf v}_h) \label{eqn:TI_KLS_4} \end{equation} \begin{equation} \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}}{\cTrace{,5}^{S} \zeta | \mathbb{C} |} \left| \surfVec{\textup{T}}^{(A)}({\bf v}_h) \right|^2 \ d \Gamma \le \frac{1}{5} a^{S}({\bf v}_h,{\bf v}_h) \label{eqn:TI_KLS_5} \end{equation} for all ${\bf v}_h \in \mathcal{V}^{S}_h$. \begin{proof} We prove \eqref{eqn:TI_KLS_1} and remark that the proofs for \eqref{eqn:TI_KLS_2}, \eqref{eqn:TI_KLS_3}, \eqref{eqn:TI_KLS_4}, and \eqref{eqn:TI_KLS_5} follow in an identical manner. We begin by denoting the space of rigid body modes associated with the Kirchhoff-Love shell by \begin{equation*} \text{Rig}^{S}(\Omega) := \left\{ {\bf v}_h \in \mathcal{V}_h^{S} \colon \surfTens{\alpha}({\bf v}_h) = \surfTens{\beta}({\bf v}_h) = \surfTens{0} \right\}\footnote{The nomenclature for this space refers to the fact that it contains the \textbf{\emph{rigid body modes}} associated with the various strain tensors.}. \end{equation*} We then denote the orthogonal complement of this space by \begin{equation} \mathring{\mathcal{V}}^{S}_h := \left\{ {\bf v} \in \mathcal{V}_h^{S} \colon ({\bf v}, {\bf r})_{L^2} = 0 \ \forall \ {\bf r} \in \text{Rig}^{S}(\Omega) \right\}. \end{equation} Since $\text{Rig}^{S}(\Omega)$ is the kernel of $\surfTens{\alpha}$ and $\surfTens{\beta}$, it follows that $\surfTens{\alpha}\left( \mathcal{V}_h^{S} \right) = \surfTens{\alpha}\left( \mathring{\mathcal{V}}^{S}_h \right)$ and $\surfTens{\beta}\left( \mathcal{V}_h^{S} \right) = \surfTens{\beta}\left( \mathring{\mathcal{V}}^{S}_h \right)$; hence, for any ${\bf v}_h \in \mathcal{V}^{S}_h$, there exists $\mathring{{\bf v}}_h \in \mathring{\mathcal{V}}^{S}_h$ such that $\surfTens{\alpha}({\bf v}_h) = \surfTens{\alpha}(\mathring{\bf v}_h)$ and $\surfTens{\beta}({\bf v}_h) = \surfTens{\beta}(\mathring{\bf v}_h)$. Consequently, if there exists a positive dimensionless constant $\cTrace{,1}^{S} > 0$ such that \eqref{eqn:TI_KLS_1} holds for all ${\bf v}_h \in \mathring{\mathcal{V}}^{S}_h$, then \eqref{eqn:TI_KLS_1} holds with the same constant $\cTrace{,1}^{S}$ for all ${\bf v}_h \in \mathcal{V}^{S}_h$.
Now consider the generalized eigenproblem: Find $({\bf u}_h, \lambda_h) \in \mathring{\mathcal{V}}^{S}_h \times \mathbb{R}$ such that \begin{equation} \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}^3}{ \zeta^3 | \mathbb{C} |} \textup{T}_3({\bf u}_h) \textup{T}_3(\delta {\bf u}_h) \ d \Gamma = \lambda_h a^{S}({\bf u}_h, \delta {\bf u}_h) \label{eq:Gen_EP_KLS} \end{equation} for all $\delta {\bf u}_h \in \mathring{\mathcal{V}}^{S}_h$. Since the bilinear form $a^{S}(\cdot,\cdot)$ is coercive on $\mathring{\mathcal{V}}^{S}_h$, all eigenvalues of the above generalized eigenproblem are non-negative and finite, and they are finite in number. Moreover, the min-max theorem states that the maximal eigenvalue satisfies \begin{equation} \lambda^{S}_{\textup{max}} = \sup_{\substack{ {\bf v}_h \in \mathring{\mathcal{V}}^{S}_h \\ {\bf v}_h \neq {\bf 0} }} \frac{ \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}^3}{ \zeta^3 | \mathbb{C} |} \left( \textup{T}_3({\bf v}_h) \right)^2 \ d \Gamma}{a^{S}({\bf v}_h, {\bf v}_h)}. \nonumber \end{equation} It is easily seen then that the lemma is satisfied for $\cTrace{,1}^{S} = 5 \lambda^{S}_{\textup{max}}$. \end{proof} \label{lemma:TI_KLS} \end{lemma} \begin{remark} From its proof, we see that Lemma~\ref{lemma:TI_KLS} is satisfied for $C^{S}_{\textup{tr},1} = 5 \lambda^{S}_{\textup{max},1}$, where $\lambda^{S}_{\textup{max},1}$ is the largest eigenvalue of the generalized eigenproblem \eqref{eq:Gen_EP_KLS}. Unfortunately, it is very difficult to construct a basis for the space $\mathring{\mathcal{V}}^{S}_h$. Fortunately, $\lambda^{S}_{\textup{max}}$ is also the largest finite eigenvalue of this simpler generalized eigenproblem: Find $({\bf u}_h, \lambda_h) \in \mathcal{V}^{S}_h \times \mathbb{R}$ such that \begin{equation*} \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}^3}{ \zeta^3 | \mathbb{C} |} \textup{T}_3({\bf u}_h) \textup{T}_3(\delta {\bf u}_h) \ d \Gamma = \lambda_h a^{S}({\bf u}_h, \delta {\bf u}_h) \end{equation*} for all $\delta {\bf u}_h \in \mathcal{V}^{S}_h$. Given a basis $\{ N_i {\bf e}_j \}$, $i = 1,\ldots,n$, $j = 1,2,3$, for the space $\mathcal{V}^{S}_h$, enumerated as $\{ \varphi_I \}_{I=1}^{3n}$, it then follows that $\lambda^{S}_{\textup{max}}$ may be computed as the largest finite eigenvalue of the generalized matrix eigenproblem $\left({\bf A} - \lambda {\bf B} \right) {\bf x} = {\bf 0}$, where \begin{equation*} \left[ {\bf A} \right]_{IJ} = \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}^3}{\zeta^3 | \mathbb{C} |} \textup{T}_3(\varphi_J) \textup{T}_3(\varphi_I) \ d \Gamma \end{equation*} and \begin{equation*} \left[ {\bf B} \right]_{IJ} = a^{S}(\varphi_J, \varphi_I). \end{equation*} Thus, it is tractable to compute an explicit value for the trace constant $\cTrace{,1}^{S}$. In a similar vein, $C^{S}_{\textup{tr},2} = 5 \lambda^{S}_{\textup{max},2}$, $C^{S}_{\textup{tr},3} = 5 \lambda^{S}_{\textup{max},3}$, $C^{S}_{\textup{tr},4} = 5 \lambda^{S}_{\textup{max},4}$, and $C^{S}_{\textup{tr},5} = 5 \lambda^{S}_{\textup{max},5}$, where $\lambda^{S}_{\textup{max},i}$, for $i = 1,2,\ldots,5$, correspond to the largest finite eigenvalues of generalized eigenproblems derived from \eqref{eqn:TI_KLS_1}, \eqref{eqn:TI_KLS_2}, \eqref{eqn:TI_KLS_3}, \eqref{eqn:TI_KLS_4}, and \eqref{eqn:TI_KLS_5}, respectively. The associated eigenproblems for these constants can likewise be constructed and solved explicitly.
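For concreteness, the following Python sketch carries out this computation. It assumes the matrices ${\bf A}$ and ${\bf B}$ above have already been assembled for some concrete basis; the small matrices below are illustrative stand-ins, not shell matrices.
\begin{verbatim}
import numpy as np
from scipy.linalg import eig

def max_finite_eigenvalue(A, B):
    # Generalized eigenvalues of the pencil (A, B). Because B inherits
    # the kernel of a^S (rigid body modes), scipy may report infinite
    # or nan eigenvalues; these correspond to the singular part of the
    # pencil and are discarded before taking the maximum.
    lam = eig(A, B, right=False).real
    return lam[np.isfinite(lam)].max()

# Illustrative stand-in matrices (not actual shell matrices):
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[2.0, 0.0], [0.0, 1.0]])
lam_max = max_finite_eigenvalue(A, B)  # = 0.5 for this toy pencil
C_tr1 = 5.0 * lam_max                  # trace constant per the lemma
\end{verbatim}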
\end{remark} To construct Nitsche's method for the Kirchhoff-Love shell, we must specify suitable linear maps $\epsilon^{S}$ and $\eta^{S}$ such that the generalized trace and Cauchy-Schwarz inequalities appearing in Assumption~\ref{assumption2} are satisfied. We begin by extending the domain of definition of the boundary operator $\mathcal{B}^{S} \colon \tilde{\mathcal{V}}^{S} \rightarrow \left( \mathcal{Q}^{S} \right)^*$, defined in \eqref{eqn:Greens_ID_KLS}, to the enlarged space $\tilde{\mathcal{V}}^{S} + \mathcal{V}^{S}_h$. We accomplish this by expressing this boundary operator as a summation of integrals and point evaluations over the edge meshes and corners, rather than as a single integration and function evaluation in the continuous setting. In particular, \begin{equation} \begin{aligned} \left\langle \mathcal{B}^{S} {\bf w}, \mathcal{T}^{S} {\bf v} \right\rangle &= \int_{\Gamma_{D_1}} {\bf T}({\bf w}) \cdot \fullTens{v} \ d \Gamma + \sum_{C \in \cornerSet{D}} \left( \llbracket B_{nt}({\bf w}) \rrbracket \outOfPlane{v} \right) \Big|_{C} + \int_{\Gamma_{D_2}} B_{nn}({\bf w}) \theta_n(\fullTens{v}) \ d \Gamma\\ &= \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} {\bf T}({\bf w}) \cdot \fullTens{v} \ d \Gamma + \sum_{C \in \cornerSet{D}} \left( \llbracket B_{nt}({\bf w}) \rrbracket \outOfPlane{v} \right) \Big|_{C} + \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} B_{nn}({\bf w}) \theta_n(\fullTens{v}) \ d \Gamma \end{aligned} \label{eqn:bdy_integrals_KLS} \end{equation} for all ${\bf w} \in \tilde{\mathcal{V}}^{S}$ and ${\bf v} \in \mathcal{V}^{S}$. Expressing the duality pairing in this manner permits a trivial extension of the domain of definition of $\mathcal{B}^{S}$ to the enlarged space $\tilde{\mathcal{V}}^{S} + \mathcal{V}^{S}_h$. This extension is necessary because the integrals in the first line of the above equation may not be well defined for arbitrary ${\bf w} \in \mathcal{V}_h^{S}$: the Kirchhoff-Love shell requires third derivatives along the boundary, rendering low-continuity discretizations inadmissible. However, the element-wise integrals in the second line are well defined for any piecewise $C^1$-continuous polynomial or rational approximation over the mesh $\mathcal{K}$, since such discretizations are $C^\infty$ over each edge. Next, we define the linear map $\epsilon^{S}: \textup{dom}(\epsilon^{S}) \subseteq \left( \mathcal{Q}^{S} \right)^* \rightarrow \mathcal{Q}^{S}$ through its action: \begin{equation*} \begin{aligned} \left\langle \left( \epsilon^{S} \right)^{-1} {\bf w}, {\bf v} \right\rangle &= \zeta^3 | \mathbb{C} | \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,1}^{S}}{h^3_{E_1}} w_3 v_3 \ d \Gamma + \sum_{C \in \cornerSet{D}} \frac{\cPen{,2}^{S}}{h^2_{C}} ( w_3 v_3 ) \Big|_{C} + \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} \frac{\cPen{,3}^{S}}{h_{E_2}} \theta_n({\bf w}) \theta_n({\bf v}) \ d \Gamma \right)\\ &\phantom{=} + \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,4}^{S} \zeta | \mathbb{C} |}{h_{E_1}} \surfVec{w} \cdot \surfVec{v} \ d \Gamma \end{aligned} \end{equation*} for all ${\bf w}, {\bf v} \in \mathcal{Q}^{S}$, where $\cPen{,1}^{S} > \cTrace{,1}^{S}$, $\cPen{,2}^{S} > \cTrace{,2}^{S}$, $\cPen{,3}^{S} > \cTrace{,3}^{S}$, and $\cPen{,4}^{S} > \cTrace{,4}^{S} + \cTrace{,5}^{S}$ are positive dimensionless constants.
\begin{remark} The choice of penalty constants presented here is not the only stable choice. For user-specified dimensionless constants $\alpha_1 > 0$, $\alpha_2 > 0$, $\alpha_3 > 0$, $\alpha_4 > 0$, and $\alpha_5 > 0$, we can alternatively select $\cPen{,1}^{S} > \alpha_1 \cTrace{}^{S}$, $\cPen{,2}^{S} > \alpha_2 \cTrace{}^{S}$, $\cPen{,3}^{S} > \alpha_3 \cTrace{}^{S}$, and $\cPen{,4}^{S} > \left( \alpha_4 + \alpha_5 \right) \cTrace{}^{S}$, where $\cTrace{}^{S} > 0$ is a dimensionless constant such that \begin{equation*} \begin{aligned} \frac{1}{\zeta^3 | \mathbb{C} |} &\left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}^3}{\alpha_1} \left( \textup{T}_3({\bf v}_h) \right)^2 \ d \Gamma + \sum_{C \in \cornerSet{D}} \frac{h_C^2}{\alpha_2} \llbracket B_{nt}({\bf v}_h) \rrbracket^2 \Big|_C + \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} \frac{h_{E_2}}{\alpha_3} \left( B_{nn}({\bf v}_h) \right)^2 \ d \Gamma \right. \\ &\phantom{=} \left. + \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}}{\alpha_4} \left| \surfVec{\textup{T}}^{(B)}({\bf v}_h) \right|^2 \ d \Gamma \right) + \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}}{\zeta | \mathbb{C} | \alpha_5} \left| \surfVec{\textup{T}}^{(A)}({\bf v}_h) \right|^2 \ d \Gamma \le \cTrace{}^{S} a^{S}({\bf v}_h, {\bf v}_h) \end{aligned} \end{equation*} for all ${\bf v}_{h} \in \mathcal{V}^{S}_h$. The advantage of this approach is that only one trace constant, namely $\cTrace{}^{S}$, must be estimated. The disadvantage is that $\alpha_1$, $\alpha_2$, $\alpha_3$, $\alpha_4$, and $\alpha_5$ must be specified; these control the relative weightings of the out-of-plane displacement boundary condition along $\Gamma_{D_1}$, the displacement boundary condition at corners in $\cornerSet{D}$, the rotation boundary condition along $\Gamma_{D_2}$, and the bending and membrane contributions to the in-plane displacement boundary condition along $\Gamma_{D_1}$, respectively. \label{remark:eig_KLS} \end{remark} Let $\eta^{S} \colon \text{dom}(\eta^{S}) \subseteq \left( \mathcal{Q}^{S} \right)^* \rightarrow \mathcal{Q}^{S}$ be a densely defined, positive, self-adjoint linear map that is defined on the enlarged space \begin{equation*} \left\{ \mathcal{B}^{S} {\bf v} \colon {\bf v} \in \tilde{\mathcal{V}}^{S} + \mathcal{V}^{S}_h \right\} \end{equation*} and satisfies \begin{equation*} \begin{aligned} \left\langle \mathcal{B}^{S} {\bf w}, \eta^{S} \mathcal{B}^{S} {\bf v} \right\rangle &= \frac{1}{\zeta^3 | \mathbb{C} |} \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}^3}{\cTrace{,1}^{S}} \textup{T}_3({\bf w}) \textup{T}_3({\bf v}) \ d \Gamma + \sum_{C \in \cornerSet{D}} \frac{h_C^2}{\cTrace{,2}^{S}} ( \llbracket B_{nt}({\bf w}) \rrbracket \llbracket B_{nt}({\bf v}) \rrbracket ) \Big|_C \right.\\ &\phantom{=} \left.
+ \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} \frac{h_{E_2}}{\cTrace{,3}^{S}} B_{nn}({\bf w}) B_{nn}({\bf v}) \ d \Gamma + \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}}{\cTrace{,4}^{S}} \surfVec{\textup{T}}^{(B)}({\bf w}) \cdot \surfVec{\textup{T}}^{(B)}({\bf v}) \ d \Gamma \right)\\ &\phantom{=} + \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}}{\zeta | \mathbb{C} | \cTrace{,5}^{S}} \surfVec{\textup{T}}^{(A)}({\bf w}) \cdot \surfVec{\textup{T}}^{(A)}({\bf v}) \ d \Gamma \end{aligned} \end{equation*} for all ${\bf w}, {\bf v} \in \tilde{\mathcal{V}}^{S} + \mathcal{V}^{S}_h$. After these choices of linear maps have been made, the generalized trace and Cauchy-Schwarz inequalities appearing in Assumption~\ref{assumption2} are satisfied. \begin{lemma}[Generalized Trace Inequality for the Kirchhoff-Love Shell] It holds that \begin{equation*} \left\langle \mathcal{B}^{S} {\bf v}_h, \eta^{S} \mathcal{B}^{S} {\bf v}_h \right\rangle \leq a^{S}({\bf v}_h, {\bf v}_h) \end{equation*} for all ${\bf v}_h \in \mathcal{V}^{S}_h$. \begin{proof} The proof follows immediately from Lemma~\ref{lemma:TI_KLS} and the definition of $\eta^{S}$. \end{proof} \label{lemma:TI_KLS_gen} \end{lemma} \begin{lemma}[Generalized Cauchy-Schwarz Inequality for the Kirchhoff-Love Shell] Let $\cPen{,1}^{S} = \gamma_1^2 \cTrace{,1}^{S}$, $\cPen{,2}^{S} = \gamma_2^2 \cTrace{,2}^{S}$, $\cPen{,3}^{S} = \gamma_3^2 \cTrace{,3}^{S}$, and $\cPen{,4}^{S} = \gamma_4^2 \max(\cTrace{,4}^{S},\cTrace{,5}^{S})$, where $\gamma_1, \gamma_2, \gamma_3, \gamma_4 \in (1,\infty)$. Then \begin{equation*} \left| \left\langle \mathcal{B}^{S} {\bf v}, \mathcal{T}^{S} {\bf w} \right\rangle \right| \le \frac{1}{\gamma} \left\langle \mathcal{B}^{S} {\bf v}, \eta^{S} \mathcal{B}^{S} {\bf v} \right\rangle^{1/2} \left\langle \left( \epsilon^{S} \right)^{-1} \mathcal{T}^{S} {\bf w}, \mathcal{T}^{S} {\bf w} \right\rangle^{1/2} \end{equation*} for all ${\bf v}, {\bf w} \in \tilde{\mathcal{V}}^{S} + \mathcal{V}^{S}_h$, where $\gamma = \min(\gamma_1,\gamma_2,\gamma_3,\gamma_4)$. \begin{proof} Recall \eqref{eqn:bdy_integrals_KLS} and the ersatz force decomposition presented in \eqref{eqn:KL_Shell_ersatz}. We then write \begin{equation} \begin{aligned} \left\langle \mathcal{B}^{S} {\bf w}, \mathcal{T}^{S} {\bf v} \right\rangle &= \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \textup{T}_3({\bf w}) v_3 \ d \Gamma + \sum_{C \in \cornerSet{D}} \left( \llbracket B_{nt}({\bf w}) \rrbracket \outOfPlane{v} \right) \Big|_{C} + \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} B_{nn}({\bf w}) \theta_n(\fullTens{v}) \ d \Gamma\\ &\phantom{=} + \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \surfVec{\textup{T}}^{(B)}({\bf w}) \cdot \surfVec{v} \ d \Gamma + \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \surfVec{\textup{T}}^{(A)}({\bf w}) \cdot \surfVec{v} \ d \Gamma. \end{aligned} \label{eqn:KLS_BvTw_CS_proof} \end{equation} We individually bound these five terms in \eqref{eqn:KLS_BvTw_CS_proof} by utilizing standard continuous ($(f, g)_{L^2(D)} \leq \| f \|_{L^2(D)} \| g \|_{L^2(D)}$ for $f, g \in L^2(D)$) and discrete ($|(x,y)| \leq \| x \|_2 \| y \|_2$ for $x, y \in \mathbb{R}^n$) Cauchy-Schwarz inequalities.
The first term is bounded according to \begin{equation*} \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \textup{T}_3({\bf w}) v_3 \ d \Gamma \le \frac{1}{\gamma_1} \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}^3}{\cTrace{,1}^{S} \zeta^3 | \mathbb{C} |} \left( \textup{T}_3({\bf w}) \right)^2 \ d \Gamma \right)^{1/2} \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,1}^{S} \zeta^3 | \mathbb{C} |}{h_{E_1}^3} v_{3}^2 \ d \Gamma \right)^{1/2}. \end{equation*} The second term is bounded according to the following relationship: \begin{equation*} \sum_{C \in \cornerSet{D}} \left( \llbracket B_{nt}({\bf w}) \rrbracket \outOfPlane{v} \right) \Big|_{C} \le \frac{1}{\gamma_2} \left( \sum_{C \in \cornerSet{D}} \frac{h_C^2}{\cTrace{,2}^{S} \zeta^3 | \mathbb{C} |} \llbracket B_{nt}({\bf w}) \rrbracket^2 \Big|_{C} \right)^{1/2} \left( \sum_{C \in \cornerSet{D}} \frac{\cPen{,2}^{S} \zeta^3 | \mathbb{C} |}{h_C^2} ( \outOfPlane{v} )^2 \Big|_{C} \right)^{1/2}. \end{equation*} The third term is bounded according to \begin{equation*} \begin{aligned} \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} B_{nn}({\bf w}) \theta_n({\bf v}) \ d \Gamma &\le \frac{1}{\gamma_3} \left( \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} \frac{h_{E_2}}{\cTrace{,3}^{S} \zeta^3 | \mathbb{C} |} ( B_{nn}({\bf w}) )^2 \ d \Gamma \right)^{1/2} \left( \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} \frac{\cPen{,3}^{S} \zeta^3 | \mathbb{C} |}{h_{E_2}} ( \theta_n({\bf v}) )^2 \ d \Gamma \right)^{1/2}. \end{aligned} \end{equation*} The fourth term is bounded according to \begin{equation*} \begin{aligned} \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \surfVec{\textup{T}}^{(B)}({\bf w}) \cdot \surfVec{v} \ d \Gamma &\le \frac{1}{\gamma_4} \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}}{\cTrace{,4}^{S} \zeta^3 | \mathbb{C} |} \left| \surfVec{\textup{T}}^{(B)}({\bf w}) \right|^2 \ d \Gamma \right)^{1/2} \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,4}^{S} \zeta^3 | \mathbb{C} |}{h_{E_1}} \left| \surfVec{v} \right|^2 \ d \Gamma \right)^{1/2}, \end{aligned} \end{equation*} and finally, the fifth term is bounded according to \begin{equation*} \begin{aligned} \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \surfVec{\textup{T}}^{(A)}({\bf w}) \cdot \surfVec{v} \ d \Gamma &\le \frac{1}{\gamma_4} \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}}{\cTrace{,5}^{S} \zeta | \mathbb{C} |} \left| \surfVec{\textup{T}}^{(A)}({\bf w}) \right|^2 \ d \Gamma \right)^{1/2} \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,4}^{S} \zeta | \mathbb{C} |}{h_{E_1}} \left| \surfVec{v} \right|^2 \ d \Gamma \right)^{1/2}.
\end{aligned} \end{equation*} Summing these bounds yields the following: \begin{equation*} \begin{aligned} \left\langle \mathcal{B}^{S} {\bf w}, \mathcal{T}^{S} {\bf v} \right\rangle &\le \frac{1}{\gamma_1} \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}^3}{\cTrace{,1}^{S} \zeta^3 | \mathbb{C} |} \left( \textup{T}_3({\bf w}) \right)^2 \ d \Gamma \right)^{1/2} \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,1}^{S} \zeta^3 | \mathbb{C} |}{h_{E_1}^3} v_{3}^2 \ d \Gamma \right)^{1/2}\\ &\phantom{\le} + \frac{1}{\gamma_2} \left( \sum_{C \in \cornerSet{D}} \frac{h_C^2}{\cTrace{,2}^{S} \zeta^3 | \mathbb{C} |} \llbracket B_{nt}({\bf w}) \rrbracket^2 \Big|_{C} \right)^{1/2} \left( \sum_{C \in \cornerSet{D}} \frac{\cPen{,2}^{S} \zeta^3 | \mathbb{C} |}{h_C^2} ( \outOfPlane{v} )^2 \Big|_{C} \right)^{1/2}\\ &\phantom{\le} + \frac{1}{\gamma_3} \left( \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} \frac{h_{E_2}}{\cTrace{,3}^{S} \zeta^3 | \mathbb{C} |} ( B_{nn}({\bf w}) )^2 \ d \Gamma \right)^{1/2} \left( \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} \frac{\cPen{,3}^{S} \zeta^3 | \mathbb{C} |}{h_{E_2}} ( \theta_n({\bf v}) )^2 \ d \Gamma \right)^{1/2}\\ &\phantom{\le} + \frac{1}{\gamma_4} \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}}{\cTrace{,4}^{S} \zeta^3 | \mathbb{C} |} \left| \surfVec{\textup{T}}^{(B)}({\bf w}) \right|^2 \ d \Gamma \right)^{1/2} \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,4}^{S} \zeta^3 | \mathbb{C} |}{h_{E_1}} \left| \surfVec{v} \right|^2 \ d \Gamma \right)^{1/2} \\ &\phantom{\le} + \frac{1}{\gamma_4} \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}}{\cTrace{,5}^{S} \zeta | \mathbb{C} |} \left| \surfVec{\textup{T}}^{(A)}({\bf w}) \right|^2 \ d \Gamma \right)^{1/2} \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,4}^{S} \zeta | \mathbb{C} |}{h_{E_1}} \left| \surfVec{v} \right|^2 \ d \Gamma \right)^{1/2}. \end{aligned} \end{equation*} Taking absolute values of both sides, bounding each $1/\gamma_i$ by $1/\gamma$ with $\gamma = \min(\gamma_1,\gamma_2,\gamma_3,\gamma_4)$, and applying the discrete Cauchy-Schwarz inequality yields the desired result upon exchanging the roles of ${\bf v}$ and ${\bf w}$. \end{proof} \label{lemma:CS_KLS} \end{lemma} \subsection{Nitsche's Method} Following the abstract variational framework of Section~\ref{sec:Nitsche} and with the appropriate definitions of $\epsilon^{S}$, $\eta^{S}$, and $\mathcal{B}^{S}$ in place, our Nitsche-based formulation for the Kirchhoff-Love shell is posed as: \begin{mybox}[\emph{Nitsche's Method for the Kirchhoff-Love Shell}] \vspace{-7pt} $$ (N^{S}_h) \left\{ \hspace{5pt} \parbox{6.00in}{ Given $f^{S} \in \left( \mathcal{V}^{S} \right)^*$ and $\left( \hat{\bf u}, \hat{\theta}_n \right) \in \mathcal{Q}^{S}$, find ${\bf u}_h \in \mathcal{V}^{S}_h$ such that \begin{equation*} \begin{aligned} a^{S}_h({\bf u}_h, \delta {\bf u}_h) &= \underbrace{ \int_{\Omega} \applied{\textbf{\textup{f}}} \cdot \delta {\bf u}_h \ d \Omega + \int_{\Gamma_{N_1}} \applied{\bf T} \cdot \delta {\bf u}_h \ d \Gamma + \sum_{C \in \cornerSet{N}} \left.
\left( \applied{\textup{S}} \delta u_{3,h} \right) \right|_{C} + \int_{\Gamma_{N_2}} \applied{B}_{nn} \theta_n(\delta {\bf u}_h) \ d \Gamma}_{\left\langle f^{S}, \delta {\bf u}_h \right\rangle}\\ &\phantom{=} {\color{ForestGreen} \underbrace{ - \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} {\bf T}(\delta {\bf u}_h) \cdot \applied{\bf u} \ d \Gamma - \sum_{C \in \cornerSet{D}} \left( \llbracket B_{nt}(\delta {\bf u}_h) \rrbracket \applied{u}_3 \right) \Big|_{C} - \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} B_{nn}(\delta {\bf u}_h) \applied{\theta}_n \ d \Gamma}_{\text{Symmetry Terms}} }\\ &\phantom{=} \clipbox{-2 0 400 0}{${\color{Orchid} \underbrace{ + \zeta^3 | \mathbb{C} | \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,1}^{S}}{h^3_{E_1}} \delta u_{3,h} \applied{u}_{3} \ d \Gamma + \sum_{C \in \cornerSet{D}} \frac{\cPen{,2}^{S}}{h^2_{C}} ( \delta u_{3,h} \applied{u}_{3} ) \Big|_{C} \right. \hspace{40em} }}$} \\ &\phantom{=} \hspace{2pt} \clipbox{10 0 -2 0}{$ {\color{Orchid} \underbrace{ \hspace{1em} \left. + \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} \frac{\cPen{,3}^{S}}{h_{E_2}} \theta_n(\delta {\bf u}_h) \applied{\theta}_n \ d \Gamma \right) + \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,4}^{S} \zeta | \mathbb{C} |}{h_{E_1}} \delta \surfVec{u}_h \cdot \surfVec{\applied{u}} \ d \Gamma}_{\text{Penalty Terms}} }$} \end{aligned} \label{eqn:KL_Shell_Weak_Nitsche} \end{equation*} for every $\delta {\bf u}_h \in \mathcal{V}^{S}_h$, where $a^{S}_h \colon \left( \tilde{\mathcal{V}}^{S} + \mathcal{V}^{S}_h \right) \times \left( \tilde{\mathcal{V}}^{S} + \mathcal{V}^{S}_h \right) \rightarrow \mathbb{R}$ is the bilinear form defined by \begin{equation*} \begin{aligned} a^{S}_h({\bf u}_h, \delta {\bf u}_h) &= \underbrace{ \int_{\Omega} \surfTens{A}({\bf u}_h) \colon \surfTens{\alpha}(\delta {\bf u}_h) \ d \Omega + \int_{\Omega} \surfTens{B}({\bf u}_h) \colon \surfTens{\beta}(\delta {\bf u}_h) \ d \Omega }_{a^{S}({\bf u}_h, \delta {\bf u}_h)}\\ &\phantom{=} {\color{Cerulean} \underbrace{ - \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} {\bf T}({\bf u}_h) \cdot \delta {\bf u}_h \ d \Gamma - \sum_{C \in \cornerSet{D}} \left( \llbracket B_{nt}({\bf u}_h) \rrbracket \delta u_{3,h} \right) \Big|_{C} - \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} B_{nn}({\bf u}_h) \theta_n(\delta {\bf u}_h) \ d \Gamma }_{\text{Consistency Terms}} }\\ &\phantom{=} {\color{ForestGreen} \underbrace{ - \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} {\bf T}(\delta {\bf u}_h) \cdot {\bf u}_h \ d \Gamma - \sum_{C \in \cornerSet{D}} \left( \llbracket B_{nt}(\delta {\bf u}_h) \rrbracket u_{3,h} \right) \Big|_{C} - \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} B_{nn}(\delta {\bf u}_h) \theta_n({\bf u}_h) \ d \Gamma }_{\text{Symmetry Terms}} }\\ &\phantom{=} \clipbox{-2 0 395 0}{${\color{Orchid} \underbrace{ + \zeta^3 | \mathbb{C} | \left( \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,1}^{S}}{h^3_{E_1}} \delta u_{3,h} u_{3,h} \ d \Gamma + \sum_{C \in \cornerSet{D}} \frac{\cPen{,2}^{S}}{h^2_{C}} ( \delta u_{3,h} u_{3,h} ) \Big|_{C} \right. \hspace{40em} }}$} \\ &\phantom{=} \hspace{2pt} \clipbox{10 0 -2 0}{$ {\color{Orchid} \underbrace{ \hspace{1em} \left.
+ \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} \frac{\cPen{,3}^{S}}{h_{E_2}} \theta_n(\delta {\bf u}_h) \theta_n({\bf u}_h) \ d \Gamma \right) + \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,4}^{S} \zeta | \mathbb{C} |}{h_{E_1}} \delta \surfVec{u}_h \cdot \surfVec{u}_h \ d \Gamma}_{\text{Penalty Terms}} }$}. \end{aligned} \end{equation*} } \right. $$ \end{mybox} Now that we have constructed a Nitsche-based formulation for the Kirchhoff-Love shell that satisfies Assumptions~\ref{assumption1} and~\ref{assumption2} according to Lemmas~\ref{lemma:Greens_ID_KLS},~\ref{lemma:TI_KLS_gen}, and~\ref{lemma:CS_KLS}, we have the following theorem stating well-posedness and an error estimate for our formulation. \begin{theorem}[Well-Posedness and Error Estimate for the Kirchhoff-Love Shell] Let $\cPen{,1}^{S} = \gamma_1^2 \cTrace{,1}^{S}$, $\cPen{,2}^{S} = \gamma_2^2 \cTrace{,2}^{S}$, $\cPen{,3}^{S} = \gamma_3^2 \cTrace{,3}^{S}$, and $\cPen{,4}^{S} = \gamma_4^2 \max(\cTrace{,4}^{S}, \cTrace{,5}^{S})$, where $\gamma_1, \gamma_2, \gamma_3, \gamma_4 \in (1,\infty)$. Then there exists a unique discrete solution ${\bf u}_h \in \mathcal{V}^{S}_h$ to the Nitsche-based formulation of the Kirchhoff-Love shell Problem $(N_h^{S})$. Moreover, if the continuous solution ${\bf u} \in \mathcal{V}^{S}$ to Problem $(V^{S})$ satisfies ${\bf u} \in \tilde{\mathcal{V}}^{S}$, then the discrete solution ${\bf u}_h$ satisfies the error estimate \begin{equation*} \vvvertiii{{\bf u} - {\bf u}_h}_{S} \leq \left( 1+ \frac{2}{1-\frac{1}{\gamma}} \right) \min_{{\bf v}_h \in \mathcal{V}^{S}_h} \vvvertiii{{\bf u} - {\bf v}_h}_{S}, \label{eq:KLS_Error} \end{equation*} where $\gamma = \min(\gamma_1,\gamma_2,\gamma_3,\gamma_4)$ and $\vvvertiii{\cdot}_{S}: \tilde{\mathcal{V}}^{S} + \mathcal{V}^{S}_h \rightarrow \mathbb{R}$ is the energy norm defined by \begin{equation*} \vvvertiii{\bf v}_{S}^2 := a^{S}({\bf v},{\bf v}) + \left\langle \mathcal{B}^{S} {\bf v}, \eta^{S} \mathcal{B}^{S} {\bf v} \right\rangle + 2 \left\langle \left( \epsilon^{S}\right)^{-1} \mathcal{T}^{S} {\bf v}, \mathcal{T}^{S} {\bf v} \right\rangle. \end{equation*} \begin{proof} Note that the presented Nitsche-based formulation for the Kirchhoff-Love shell precisely fits into the abstract variational framework presented in Section~\ref{sec:Nitsche} with $\mathcal{V} = \mathcal{V}^{S}$, $\mathcal{Q} = \mathcal{Q}^{S}$, $a(\cdot,\cdot) = a^{S}(\cdot,\cdot)$, $f = f^{S}$, $\mathcal{T} = \mathcal{T}^{S}$, $\tilde{\mathcal{V}} = \tilde{\mathcal{V}}^{S}$, $\mathcal{L} = \mathcal{L}^{S}$, $\mathcal{B} = \mathcal{B}^{S}$, $\mathcal{V}_h = \mathcal{V}^{S}_h$, $\epsilon = \epsilon^{S}$, and $\eta = \eta^{S}$. Moreover, Assumption~\ref{assumption1} of the abstract variational framework is satisfied due to Lemma~\ref{lemma:Greens_ID_KLS}, and Assumption~\ref{assumption2} is satisfied due to Lemmas~\ref{lemma:TI_KLS_gen} and~\ref{lemma:CS_KLS}. Then well-posedness is a direct result of the Lax-Milgram theorem together with the coercivity and continuity established in Lemmas~\ref{lemma:abstract_coercivity} and~\ref{lemma:abstract_continuity}, and the error estimate follows directly from Theorem~\ref{theorem:error_estimate}.
\end{proof} \label{thm:KLS_Error} \end{theorem} The above result indicates that our Nitsche-based formulation is quasi-optimal in the energy norm (in the sense that the error in the discrete solution is bounded by a constant multiple of the best approximation error) when the continuous solution ${\bf u} \in \mathcal{V}^{S}$ to Problem $(V^{S})$ satisfies ${\bf u} \in \tilde{\mathcal{V}}^{S}$. However, the above result does not reveal the rates of convergence of the energy norm error, nor does it reveal the rates of convergence for other norms one may care about (for instance, the $L^2$-norm). \begin{remark} The presented Nitsche-based formulation for the Kirchhoff-Love shell as well as the presented well-posedness and error estimate results are new to the best of our knowledge, though the presented formulation is quite similar to the formulations presented in \cite{guo2015weak,guo2015nitsche}. However, the formulations diverge in two important ways. First, the formulation presented in this paper includes corner forces, while the formulations presented in \cite{guo2015weak,guo2015nitsche} do not. Second, following \cite[p.155]{Ciarlet2005} and \cite[p.156]{Koiter1973foundations}, the formulations presented in \cite{guo2015weak,guo2015nitsche} employ incorrect in-plane bending contributions to the ersatz force (see Remark \ref{rem:IncorrectErsatz}). Consequently, the formulations presented in \cite{guo2015weak,guo2015nitsche} are actually variationally inconsistent and yield sub-optimal convergence rates when used with common boundary condition specifications. We demonstrate this through numerical examples later in Section \ref{sec:num_results}. \end{remark} \begin{remark} Note that, according to our analysis, a practitioner may select any $\gamma_1, \gamma_2, \gamma_3, \gamma_4 \in (1,\infty)$. Generally speaking, Dirichlet boundary conditions are enforced more strongly for larger $\gamma_1, \gamma_2, \gamma_3, \gamma_4$ as opposed to smaller $\gamma_1, \gamma_2, \gamma_3, \gamma_4$. However, the condition number of the linear system associated with Nitsche's method scales linearly with $\max\left( \gamma_1, \gamma_2, \gamma_3, \gamma_4 \right)$ \cite{juntunen2009nitsche} and, in certain circumstances, the discrete solution becomes over-constrained and boundary locking occurs as $\max\left( \gamma_1, \gamma_2, \gamma_3, \gamma_4 \right) \rightarrow \infty$, resulting in a loss of solution accuracy \cite{lew2008discontinuous}. On the other hand, as $\min\left( \gamma_1, \gamma_2, \gamma_3, \gamma_4 \right) \rightarrow 1$, Lemma~\ref{lemma:abstract_coercivity} suggests that the linear system associated with Nitsche's method may lose definiteness, and Theorem~\ref{thm:KLS_Error} suggests that the energy norm error may blow up in the limit $\min\left( \gamma_1, \gamma_2, \gamma_3, \gamma_4 \right) \rightarrow 1$. It is advisable, then, to choose moderate values for $\gamma_1, \gamma_2, \gamma_3, \gamma_4$. Based on our collective experience, we recommend setting $\gamma_1 = \gamma_2 = \gamma_3 = \gamma_4 = 2$. \end{remark} Now that we have derived, presented, and proved well-posedness and an error estimate for our Nitsche-based formulation for the Kirchhoff-Love shell, we proceed with a discussion of the spline discretization to be employed for our numerics followed by a discussion of the associated \textit{a priori} error estimates for the Kirchhoff-Love shell.
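The recommended parameter choice is straightforward to realize in code. The following Python sketch (the function and variable names are ours, and the trace constants are assumed to have been estimated from the generalized eigenproblems discussed previously) implements the penalty selection prescribed by Theorem~\ref{thm:KLS_Error} with $\gamma_1 = \gamma_2 = \gamma_3 = \gamma_4 = 2$:
\begin{verbatim}
# Penalty selection per the theorem: C_pen,i = gamma_i^2 * C_tr,i for
# i = 1,2,3 and C_pen,4 = gamma_4^2 * max(C_tr,4, C_tr,5), with the
# recommended gamma_i = 2 throughout.
def penalty_constants(c_tr, gamma=(2.0, 2.0, 2.0, 2.0)):
    c1, c2, c3, c4, c5 = c_tr
    return (gamma[0]**2 * c1,
            gamma[1]**2 * c2,
            gamma[2]**2 * c3,
            gamma[3]**2 * max(c4, c5))

# Example with illustrative trace constants:
c_pen = penalty_constants((0.8, 1.2, 0.9, 1.5, 0.7))  # (3.2, 4.8, 3.6, 6.0)
\end{verbatim}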
\section{NURBS-Based Isogeometric Kirchhoff-Love Shell Discretizations} \label{sec:apriori} In this section, we provide a brief discussion of the discretization we employ for our numerical results, namely, \textbf{\emph{Non-Uniform Rational B-Splines}}, or NURBS. After presenting a brief introduction to NURBS, we provide \textit{a priori} error estimates for the Kirchhoff-Love shell under a NURBS discretization. It is worth noting that although we choose to employ NURBS for our numerical results, our theoretical exposition is not limited to such discretizations. In fact, since we have not discussed discretization until this point, the abstract Nitsche framework discussed in Section~\ref{sec:Nitsche} is amenable to any discretization, as long as it provides sufficient smoothness. \subsection{B-splines and NURBS} The $i^{\text{th}}$ univariate B-spline basis function of degree $p$, herein denoted by $\hat{N}_{i,p}(\xi)$, is generated recursively over a \textbf{\emph{parametric domain}}, denoted herein by $\hat{\Omega}$. This parametric domain is defined by a \textbf{\emph{knot vector}}, that is, a non-decreasing sequence of real numbers called knots $\Xi = \{ \xi_1,\xi_2,\ldots,\xi_{n+p+1} \}$, where $n$ is the number of basis functions. The knot vector describes the support and continuity of the resulting basis functions. In NURBS-based isogeometric analysis, we typically employ an \textbf{\emph{open knot vector}}, where the first and last knots are repeated $p+1$ times, thus ensuring that the basis interpolates the geometry and solution field at the boundaries in one dimension and at the corners in higher dimensions. We consider the maximally smooth case when each interior knot is unique, which yields a basis that is $C^{p-1}$-continuous. We also assume that the first and last knots in the knot vector are $0$ and $1$, respectively, without loss of generality. The parametric domain is then $\hat{\Omega} = (0,1)$ in the one-dimensional setting. The multivariate, tensor-product B-spline basis is obtained through a product of one-dimensional basis functions. In particular, \begin{equation*} \hat{N}_{{\bf i},{\bf p}}(\bm{\xi}) = \prod_{j=1}^{d_p} \hat{N}_{i_j,p_j}(\xi^j) \end{equation*} for multi-indices ${\bf i} = (i_1,i_2,...,i_{d_p})$ and ${\bf p} = (p_1,p_2,...,p_{d_p})$ representing basis function number and polynomial degree, respectively. Here, $d_p$ refers to the \textbf{\emph{parametric dimension}} while $d_s$ later refers to the \textbf{\emph{spatial dimension}}. Note that $d_s \ge d_p$. For the Kirchhoff-Love shell, $d_p = 2$ and $d_s = 3$. A NURBS function is a projective transformation of a B-spline function in one higher spatial dimension. Given a set of B-spline basis functions and \textbf{\emph{NURBS weights}}, $w_{\bf i} \in \mathbb R^+$, we define the corresponding set of NURBS basis functions via \begin{equation*} \hat{R}_{{\bf i},{\bf p}}(\bm{\xi}) = \frac{w_{\bf i} \hat{N}_{{\bf i},{\bf p}}(\bm{\xi})}{w(\bm{\xi})}, \hspace{20pt} \text{where} \hspace{20pt} w(\bm{\xi}) = \sum_{\bf i} w_{\bf i} \hat{N}_{{\bf i},{\bf p}}(\bm{\xi}). \end{equation*} Here we have adopted the multi-index notation used for the multivariate B-splines in this definition. We construct the \textbf{\emph{control mesh}} in $d_s$-dimensions that, together with the complete set of NURBS basis functions, defines a $d_s$-dimensional geometry $\Omega \subset \mathbb R^{d_s}$. This serves as our \textbf{\emph{physical domain}}.
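For completeness, the recursion in question is the Cox-de Boor formula
\begin{equation*}
\hat{N}_{i,0}(\xi) = \begin{cases} 1 & \xi_i \leq \xi < \xi_{i+1} \\ 0 & \text{otherwise} \end{cases},
\hspace{20pt}
\hat{N}_{i,p}(\xi) = \frac{\xi - \xi_i}{\xi_{i+p} - \xi_i} \hat{N}_{i,p-1}(\xi) + \frac{\xi_{i+p+1} - \xi}{\xi_{i+p+1} - \xi_{i+1}} \hat{N}_{i+1,p-1}(\xi),
\end{equation*}
with the convention that any term with a zero denominator is omitted. A minimal Python sketch of this recursion and of the rational basis built from it follows; it is provided for illustration only, with arbitrarily chosen weights, and is not an excerpt from our implementation.

\begin{verbatim}
import numpy as np

def bspline_basis(i, p, xi, Xi):
    # Cox-de Boor recursion; i is 0-based, Xi is the knot vector
    if p == 0:
        return 1.0 if Xi[i] <= xi < Xi[i + 1] else 0.0
    left = right = 0.0
    if Xi[i + p] > Xi[i]:
        left = (xi - Xi[i]) / (Xi[i + p] - Xi[i]) \
               * bspline_basis(i, p - 1, xi, Xi)
    if Xi[i + p + 1] > Xi[i + 1]:
        right = (Xi[i + p + 1] - xi) / (Xi[i + p + 1] - Xi[i + 1]) \
                * bspline_basis(i + 1, p - 1, xi, Xi)
    return left + right

def nurbs_basis(xi, Xi, p, w):
    # rational basis R_i = w_i N_i / sum_j w_j N_j
    n = len(Xi) - p - 1
    N = np.array([bspline_basis(i, p, xi, Xi) for i in range(n)])
    return w * N / np.dot(w, N)

Xi = np.array([0, 0, 0, 0.5, 1, 1, 1], dtype=float)  # open knot vector
w = np.array([1.0, 0.8, 0.9, 1.0])                   # arbitrary weights
print(nurbs_basis(0.25, Xi, p=2, w=w))               # sums to one
\end{verbatim}

Here the open knot vector yields an interpolatory basis at $\xi = 0$ and $\xi = 1$, and the single interior knot at $\xi = 0.5$ gives the maximally smooth, $C^{1}$-continuous quadratic case described above.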
More specifically, given a set of NURBS control points ${\bf P}_{\bf i}$ and weights $w_{\bf i}$, the parameterization of the physical domain $\fullTens{x} \colon \parametric{\Omega} \rightarrow \Omega$ is given by \begin{equation*} \fullTens{x}(\bm{\xi}) = \sum_{\bf i} \fullTens{P}_{\bf i} \hat{R}_{{\bf i}}(\bm{\xi}) \end{equation*} \noindent for all $\bm{\xi} \in \parametric{\Omega}$, where $\parametric{\Omega} = (0,1)^{d_p}$. Note that we dropped the subscript ${\bf p}$ for notational ease, as we do henceforth. Since the vector-valued PDE considered herein is cast over a spatial variable, we require an appropriate space of basis functions defined in the physical space. To this end, we leverage the isoparametric concept through our geometric parameterization. Namely, we use the \textbf{\emph{push-forward}} operator describing how the physical variable $\fullTens{x}$ is related to the parametric variable $\bm{\xi}$ in order to define NURBS basis functions in physical space as \begin{equation*} R_{{\bf i}}({\bf{x}}(\bm{\xi})) = \hat{R}_{{\bf i}}(\bm{\xi}). \end{equation*} \noindent We then describe test and trial functions in terms of NURBS basis functions in physical space. For the Kirchhoff-Love shell, we set \begin{equation*} \mathcal{V}^{S}_h := \left\{ {\bf{v}} : \Omega \rightarrow \mathbb{R}^3 : {\bf{v}}({\bf{x}}) = \sum_{\bf i} {\bf{v}}_{\bf i} R_{{\bf i}}({\bf{x}}) \right\}, \end{equation*} \noindent where the coefficients ${\bf{v}}_{\bf i} \in \mathbb{R}^3$ are commonly referred to as \textbf{\emph{control variables}}. For a comprehensive discussion of NURBS, their properties, and their implementation, see \cite{Piegl2012}, and for a deeper discussion of NURBS-based isogeometric analysis and various applications, see \cite{Hughes2005,Cottrell2009}. It should be noted that complex geometries of arbitrary topology may be represented using so-called multi-patch NURBS mappings \cite[Chapter~2]{Cottrell2009} or alternative parameterization techniques such as subdivision surfaces \cite{Cirak2000} and T-splines \cite{Bazilevs2010}. Now that the discretization we employ has been presented, we take note of a subtle but important detail. As discussed in Remark~\ref{remark:eig_KLS}, suitable trace constants for an isogeometric shell discretization may be attained by solving generalized eigenproblems. In the asymptotic range, it is well known that these trace constants are independent of the mesh size $h$ for quasi-uniform isogeometric discretizations \cite{evans2013explicit}. In practice, it is usually sufficient to compute trace constants for a coarse isogeometric discretization and then employ them for finer isogeometric discretizations. However, this property does not hold in general for all discretizations since it relies on the existence of discrete trace inequalities with mesh-independent constants. \subsection{Sobolev Spaces on Manifolds} To establish \textit{a priori} error estimates for NURBS-based Kirchhoff-Love shell discretizations, we first need to extend the concept of a Sobolev space from the Euclidean setting to the more general manifold setting.
To this end, let $\Omega \subset \mathbb{R}^3$ be a smooth two-dimensional immersed manifold with Lipschitz-continuous boundary $\Gamma = \partial \Omega$, and assume that $\Omega$ is represented in terms of a smooth bijective mapping $\undef{\fullTens{x}}: \parametric{\Omega} \rightarrow \Omega$, where $\parametric{\Omega} \subset \mathbb{R}^2$ is an open domain with Lipschitz-continuous boundary $\hat{\Gamma} = \partial \parametric{\Omega}$. In this subsection, we use the word ``smooth'' to mean infinitely differentiable. We can define a number of differential geometric objects on the manifold as discussed in \ref{sec:Appendix_Diff_Geo}, \ref{sec:Appendix_Cont_Mech}, and \ref{sec:Appendix_Components} and, in particular, we can define the surface gradient of a smooth scalar-valued function $v: \Omega \rightarrow \mathbb{R}$ as \begin{equation*} \surfVec{\nabla} v = \frac{\partial v}{\partial \xi^{\alpha}} {\bf a}^{\alpha}, \end{equation*} where $\xi^{\alpha}$ is the $\alpha^{\text{th}}$ in-plane convective coordinate and ${\bf a}^{\alpha}$ is the $\alpha^{\text{th}}$ contravariant tangent vector. Similarly, we can define the surface gradient of a smooth order-$r$ tensor-valued function ${\bf A} = A_{m_1 \ldots m_r} {\bf a}^{m_1} \otimes \ldots \otimes {\bf a}^{m_r}$ as \begin{equation*} \surfVec{\nabla} {\bf A} = \frac{\partial {\bf A}}{\partial \xi^{\alpha}} \otimes {\bf a}^{\alpha}. \end{equation*} Thus, the surface gradient of a smooth order-$r$ tensor-valued function is a smooth order-$(r+1)$ tensor-valued function. Higher-order surface derivatives are defined recursively (e.g., $\surfVec{\nabla}^2 {\bf A} = \surfVec{\nabla} \left( \surfVec{\nabla} {\bf A} \right)$), and it is easily seen that the $k^{\text{th}}$ surface gradient of a smooth order-$r$ tensor-valued function is a smooth order-$(r+k)$ tensor-valued function. Consequently, we may write the $k^{\text{th}}$ surface gradient of a smooth order-$r$ tensor-valued function as \begin{equation*} \surfVec{\nabla}^k {\bf A} = \left( \surfVec{\nabla}^k {\bf A} \right)_{m_1 \ldots m_{r+k}} {\bf a}^{m_1} \otimes \ldots \otimes {\bf a}^{m_{r+k}}, \end{equation*} and we define the magnitude of the $k^{\text{th}}$ surface gradient as \begin{equation*} | \surfVec{\nabla}^k {\bf A} |^2 = a^{m_1 n_1} \ldots a^{m_{r+k} n_{r+k}} \left( \surfVec{\nabla}^k {\bf A} \right)_{m_1 \ldots m_{r+k}} \left( \surfVec{\nabla}^k {\bf A} \right)_{n_1 \ldots n_{r+k}}, \end{equation*} where $a^{ij} = {\bf a}^i \cdot {\bf a}^j$ are the contravariant metric coefficients. By convention, we define $\surfVec{\nabla}^0 {\bf A} = {\bf A}$. It should be noted that we can define the surface divergence of a smooth order-$r$ tensor-valued function similarly, namely, \begin{equation*} \surfVec{\nabla} \cdot {\bf A} = \frac{\partial {\bf A}}{\partial \xi^{\alpha}} \cdot {\bf a}^{\alpha}, \end{equation*} and the surface divergence of a smooth order-$r$ tensor-valued function is a smooth order-$(r-1)$ tensor-valued function. The above definitions of surface gradient and surface divergence generalize the definitions used in Section \ref{sec:KL_Shell}. Now, let $C^{\infty}(\Omega)$ denote the space of smooth scalar-valued functions over the manifold.
Also, for $s$ a non-negative integer, let \begin{equation*} C^{\infty}_s(\Omega) := \left\{ v \in C^{\infty}(\Omega) : \left\| v \right\|^2_{H^s(\Omega)} < \infty \right\}, \end{equation*} where \begin{equation*} \left\| v \right\|^2_{H^s(\Omega)} := \sum_{k = 0}^{s} \ell^{2k-2} \int_{\Omega} | \surfVec{\nabla}^k v |^2 \ d \Omega \end{equation*} and $\ell = \text{diam}(\Omega)$. We then define the Sobolev space $H^s(\Omega)$ of scalar-valued functions as the completion of $C^{\infty}_s(\Omega)$ with respect to $\left\| \cdot \right\|_{H^s(\Omega)}$, and the Sobolev spaces for tensor-valued functions analogously. The Sobolev space $H^0(\Omega)$ coincides with $L^2(\Omega)$, the space of square-integrable scalar-valued functions equipped with the norm \begin{equation*} \left\| v \right\|^2_{L^2(\Omega)} := \ell^{-2} \int_{\Omega} v^2 d\Omega. \end{equation*} Note that all of the above Sobolev norms have the same units, simplifying the following analysis. The Sobolev spaces presented above also coincide with those employed in Section \ref{sec:KL_Shell}. Let $L^2(\Gamma)$ denote the space of square-integrable functions over the boundary of the manifold, equipped with the norm \begin{equation*} \left\| v \right\|^2_{L^2(\Gamma)} := \ell^{-1} \int_{\Gamma} v^2 d\Gamma. \end{equation*} As in the Euclidean setting (see, e.g., \cite{Adams2003}), we define a linear and bounded trace operator $\text{Tr} : H^1(\Omega) \rightarrow L^2(\Gamma)$ such that $\text{Tr}(v) = v|_{\Gamma}$ for smooth scalar-valued functions $v \in H^1(\Omega)$. For non-negative integers $s$, define $H^{s+1/2}(\Gamma) = \text{Tr}(H^{s+1}(\Omega))$ and \begin{equation*} \left\| w \right\|^2_{H^{s+1/2}(\Gamma)} := \inf_{\substack{v \in H^{s+1}(\Omega) \\ \text{Tr}(v) = w}} \left\| v \right\|^2_{H^{s+1}(\Omega)}. \end{equation*} These fractional Sobolev spaces on the manifold boundary coincide with those employed in Section \ref{sec:KL_Shell}. As a final remark, note that we can also define Sobolev spaces on non-smooth manifolds. In particular, if the geometric mapping $\undef{\fullTens{x}}: \parametric{\Omega} \rightarrow \Omega$ is a $C^{s-1}$-continuous NURBS mapping, then we can define the space $H^s(\Omega)$ on the manifold similarly to that presented here. We can also define Sobolev spaces on manifolds that cannot be described in terms of a single parametric mapping. This requires the use of charts, atlases, and transition maps. For more information, see \cite{Schick2001}. \subsection{Interpolation Estimates for NURBS-Based Kirchhoff-Love Shell Discretizations} We are now in a position to state interpolation estimates for NURBS-based Kirchhoff-Love shell discretizations.
Following the work of \cite{Bazilevs2006}, we can construct a quasi-interpolation operator $\mathcal{I}^{S}_h: \left(L^2(\Omega)\right)^3 \rightarrow \mathcal{V}^{S}_h$ such that, for each set of integers $0 \leq k < l \leq p + 1$ and for all ${\bf v} \in \left(H^l(\Omega)\right)^3$, \begin{equation*} \left\| {\bf v} - \mathcal{I}^{S}_h {\bf v} \right\|_{\left(H^k(\Omega)\right)^3} \leq C_{\text{interp}} \left( \frac{h}{\ell} \right)^{l-k} \left\| {\bf v} \right\|_{\left(H^l(\Omega)\right)^3}, \end{equation*} where $h = \max_{K \in \mathcal{K}} h_K$ is the mesh size and $C_{\text{interp}}$ is a dimensionless constant independent of the mesh-to-domain-size ratio $h/\ell$ but dependent on the integers $k$ and $l$, the polynomial degree $p$, the normalized geometric mapping $\left({\bf x}(\bm{\xi}) - {\bf x}(\bm{0})\right)/\ell$, and the parametric mesh regularity. The quasi-interpolation operator is defined by first constructing a locally $L^2$-stable quasi-interpolation operator $\hat{\mathcal{I}}^{S}_h$ over the parametric domain using locally supported dual basis functions \cite[Chapter~12]{Schumaker2007} and then setting $\mathcal{I}^{S}_h = \hat{\mathcal{I}}^{S}_h \circ {\bf x}^{-1}$. Similar interpolation estimates hold over individual elements of the computational mesh, as in \cite[Theorem~3.1]{Bazilevs2006}. \subsection{\textit{A Priori} Error Estimate in the Energy Norm for NURBS-Based Kirchhoff-Love Shell Discretizations} Armed with interpolation estimates, we are able to prove the following result for NURBS-based Kirchhoff-Love shell discretizations. \begin{theorem}[\textit{A Priori} Error Estimate in the Energy Norm for the Kirchhoff-Love Shell] If $p \geq 2$, then for any ${\bf u} \in \left(H^{p+1}(\Omega)\right)^3$, we have the estimate $$\vvvertiii{{\bf u} - {\bf u}_h}^2_{S} \leq C_{\text{bound}} | \mathbb{C} | \ell \left( \left( \frac{\zeta}{\ell} \right) \left( \frac{h}{\ell} \right)^{2p} + \left( \frac{\zeta}{\ell} \right)^3 \left( \frac{h}{\ell} \right)^{2p-2} \right) \| {\bf u} \|^2_{\left(H^{p+1}(\Omega)\right)^3},$$ where $C_{\text{bound}}$ is a dimensionless constant independent of the mesh-to-domain-size ratio $h/\ell$, the thickness-to-domain-size ratio $\zeta/\ell$, and the normalized elasticity tensor $\mathbb{C}/|\mathbb{C}|$, but dependent on polynomial degree $p$, the normalized geometric mapping $\left({\bf x}(\bm{\xi}) - {\bf x}(\bm{0})\right)/\ell$, the trace constants $C^S_{\textup{tr},1}$, $C^S_{\textup{tr},2}$, $C^S_{\textup{tr},3}$, $C^S_{\textup{tr},4}$, and $C^S_{\textup{tr},5}$, the penalty constants $C^S_{\textup{pen},1}$, $C^S_{\textup{pen},2}$, $C^S_{\textup{pen},3}$, and $C^S_{\textup{pen},4}$, and the parametric mesh regularity. \end{theorem} \begin{proof} From Theorem~\ref{thm:KLS_Error}, we know that $$\vvvertiii{{\bf u} - {\bf u}_h}_{S} \leq \left( 1+ \frac{2}{1-\frac{1}{\gamma}} \right) \min_{{\bf v}_h \in \mathcal{V}^{S}_h} \vvvertiii{{\bf u} - {\bf v}_h}_{S},$$ so it holds that \begin{equation} \vvvertiii{{\bf u} - {\bf u}_h}^2_{S} \leq \left( 1+ \frac{2}{1-\frac{1}{\gamma}} \right)^2 \vvvertiii{{\bf u} - \mathcal{I}^{S}_h {\bf u}}^2_{S}. 
\label{eq:Theorem_7_First} \end{equation} We now expand as follows: \begin{align*} \vvvertiii{{\bf u} - \mathcal{I}^{S}_h {\bf u}}^2_{S} &= \int_{\Omega} \surfTens{A}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \colon \surfTens{\alpha}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \ d \Omega + \int_{\Omega} \surfTens{B}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \colon \surfTens{\beta}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \ d \Omega\\ &\phantom{=} + \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}^3}{\cTrace{,1}^{S} | \mathbb{C} | \zeta^3} \left|T_3({\bf u} - \mathcal{I}^{S}_h {\bf u})\right|^2 \ d \Gamma + \sum_{C \in \cornerSet{D}} \frac{h_C^2}{\cTrace{,2}^{S} | \mathbb{C} | \zeta^3} \llbracket B_{nt}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \rrbracket^2 \Big|_C\\ &\phantom{=} + \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} \frac{h_{E_2}}{\cTrace{,3}^{S} | \mathbb{C} | \zeta^3} \left|B_{nn}({\bf u} - \mathcal{I}^{S}_h {\bf u})\right|^2 \ d \Gamma + \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}}{\cTrace{,4}^{S} | \mathbb{C} | \zeta} \left| \surfVec{T}^{(A)}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \right|^2 \ d \Gamma\\ &\phantom{=} + \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}}{\cTrace{,5}^{S} | \mathbb{C} | \zeta^3} \left| \surfVec{T}^{(B)}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \right|^2 \ d \Gamma + \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,1}^{S} |\mathbb{C}| \zeta^3}{h^3_{E_1}} \left| u_3 - \mathcal{I}^{S}_h u_3 \right|^2 \ d \Gamma\\ &\phantom{=} + \sum_{C \in \cornerSet{D}} \frac{\cPen{,2}^{S} |\mathbb{C}| \zeta^3}{h^2_{C}} \left| u_3 - \mathcal{I}^{S}_h u_3 \right|^2 \Big|_{C} + \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} \frac{\cPen{,3}^{S} |\mathbb{C}| \zeta^3}{h_{E_2}} \left| \theta_n({\bf u} - \mathcal{I}^{S}_h {\bf u}) \right|^2 \ d \Gamma\\ &\phantom{=} + \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,4}^{S} | \mathbb{C} | \zeta}{h_{E_1}} \left| \surfVec{u} - \mathcal{I}^{S}_h \surfVec{u} \right|^2 \ d \Gamma, \end{align*} where we used the abuse of notation $\mathcal{I}^{S}_h u_3 = \mathcal{I}^{S}_h {\bf u} \cdot {\bf a}_3$ and $\mathcal{I}^{S}_h \surfVec{u} = \left( \mathcal{I}^{S}_h {\bf u} \cdot {\bf a}_{\alpha} \right) {\bf a}^{\alpha}$. A quick calculation reveals that \begin{equation*} \int_{\Omega} \surfTens{A}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \colon \surfTens{\alpha}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \ d \Omega \leq | \mathbb{C} | \ell \left( \frac{\zeta}{\ell} \right) \| {\bf u} - \mathcal{I}^{S}_h {\bf u} \|^2_{\left(H^1(\Omega)\right)^3}, \end{equation*} so, by our interpolation estimates, \begin{equation} \int_{\Omega} \surfTens{A}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \colon \surfTens{\alpha}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \ d \Omega \leq C_{1} | \mathbb{C} | \ell \left( \frac{\zeta}{\ell} \right) \left( \frac{h}{\ell} \right)^{2p} \| {\bf u} \|^2_{\left(H^{p+1}(\Omega)\right)^3}, \end{equation} where $C_{1}$ is a dimensionless constant only dependent on polynomial degree, the normalized geometric mapping, and the parametric mesh regularity.
A similar calculation reveals that \begin{equation*} \int_{\Omega} \surfTens{B}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \colon \surfTens{\beta}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \ d \Omega \leq | \mathbb{C} | \ell \left( \frac{\zeta}{\ell} \right)^3 \| {\bf u} - \mathcal{I}^{S}_h {\bf u} \|^2_{\left(H^2(\Omega)\right)^3}, \end{equation*} so, by our interpolation estimates, \begin{equation} \int_{\Omega} \surfTens{B}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \colon \surfTens{\beta}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \ d \Omega \leq C_{2} | \mathbb{C} | \ell \left( \frac{\zeta}{\ell} \right)^3 \left( \frac{h}{\ell} \right)^{2p-2} \| {\bf u} \|^2_{\left(H^{p+1}(\Omega)\right)^3}, \end{equation} where $C_{2}$ is also a dimensionless constant only dependent on polynomial degree, the normalized geometric mapping, and the parametric mesh regularity. The other terms require more finesse and patience to bound. However, by appealing to the continuous trace inequality and local versions of our interpolation estimates (see, e.g., the proof of \cite[Theorem~6.2]{Evans2013DivFree}), it can be shown that \begin{equation} \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}^3}{\cTrace{,1}^{S} | \mathbb{C} | \zeta^3} \left|T_3({\bf u} - \mathcal{I}^{S}_h {\bf u})\right|^2 \ d \Gamma \leq C_{{\bf u}} C_{3} \left( \frac{\zeta}{\ell} \right)^3 \left( \frac{h}{\ell} \right)^{2p-2} \end{equation} \begin{equation} \sum_{C \in \cornerSet{D}} \frac{h_C^2}{\cTrace{,2}^{S} | \mathbb{C} | \zeta^3} \llbracket B_{nt}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \rrbracket^2 \Big|_C \leq C_{{\bf u}} C_{4} \left( \frac{\zeta}{\ell} \right)^3 \left( \frac{h}{\ell} \right)^{2p-2} \end{equation} \begin{equation} \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} \frac{h_{E_2}}{\cTrace{,3}^{S} | \mathbb{C} | \zeta^3} \left|B_{nn}({\bf u} - \mathcal{I}^{S}_h {\bf u})\right|^2 \ d \Gamma \leq C_{{\bf u}} C_{5} \left( \frac{\zeta}{\ell} \right)^3 \left( \frac{h}{\ell} \right)^{2p-2} \end{equation} \begin{equation} \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}}{\cTrace{,4}^{S} | \mathbb{C} | \zeta} \left| \surfVec{T}^{(A)}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \right|^2 \ d \Gamma \leq C_{{\bf u}} C_{6} \left( \frac{\zeta}{\ell} \right) \left( \frac{h}{\ell} \right)^{2p} \end{equation} \begin{equation} \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{h_{E_1}}{\cTrace{,5}^{S} | \mathbb{C} | \zeta^3} \left| \surfVec{T}^{(B)}({\bf u} - \mathcal{I}^{S}_h {\bf u}) \right|^2 \ d \Gamma \leq C_{{\bf u}} C_{7} \left( \frac{\zeta}{\ell} \right)^3 \left( \frac{h}{\ell} \right)^{2p-2} \end{equation} \begin{equation} \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,1}^{S} |\mathbb{C}| \zeta^3}{h^3_{E_1}} \left| u_3 - \mathcal{I}^{S}_h u_3 \right|^2 \ d \Gamma \leq C_{{\bf u}} C_{8} \left( \frac{\zeta}{\ell} \right)^3 \left( \frac{h}{\ell} \right)^{2p-2} \end{equation} \begin{equation} \sum_{C \in \cornerSet{D}} \frac{\cPen{,2}^{S} |\mathbb{C}| \zeta^3}{h^2_{C}} \left| u_3 - \mathcal{I}^{S}_h u_3 \right|^2 \Big|_{C} \leq C_{{\bf u}} C_{9} \left( \frac{\zeta}{\ell} \right)^3 \left( \frac{h}{\ell} \right)^{2p-2} \end{equation} \begin{equation} \sum_{E_2 \in \mathcal{E}_{D_2}} \int_{E_2} \frac{\cPen{,3}^{S} |\mathbb{C}| \zeta^3}{h_{E_2}} \left| \theta_n({\bf u} - \mathcal{I}^{S}_h {\bf u}) \right|^2 \ d \Gamma \leq C_{{\bf u}} C_{10} \left( \frac{\zeta}{\ell} \right)^3 \left( \frac{h}{\ell}
\right)^{2p-2} \end{equation} \begin{equation} \sum_{E_1 \in \mathcal{E}_{D_1}} \int_{E_1} \frac{\cPen{,4}^{S} | \mathbb{C} | \zeta}{h_{E_1}} \left| \surfVec{u} - \mathcal{I}^{S}_h \surfVec{u} \right|^2 \ d \Gamma \leq C_{{\bf u}} C_{11} \left( \frac{\zeta}{\ell} \right) \left( \frac{h}{\ell} \right)^{2p}, \label{eq:Theorem_7_Last} \end{equation} where $C_{\bf u} = | \mathbb{C} | \ell \| {\bf u} \|^2_{\left(H^{p+1}(\Omega)\right)^3}$ and $C_{3}$ through $C_{11}$ are dimensionless constants only dependent on polynomial degree, the normalized geometric mapping, the parametric mesh regularity, the trace constants $C^S_{\text{tr},1}$, $C^S_{\text{tr},2}$, $C^S_{\text{tr},3}$, $C^S_{\text{tr},4}$, and $C^S_{\text{tr},5}$, and the penalty constants $C^S_{\text{pen},1}$, $C^S_{\text{pen},2}$, $C^S_{\text{pen},3}$, and $C^S_{\text{pen},4}$. Collecting \eqref{eq:Theorem_7_First}-\eqref{eq:Theorem_7_Last}, we obtain \begin{equation*} \begin{aligned} \vvvertiii{{\bf u} - {\bf u}_h}^2_{S} &\leq C_{\bf u} \left( 1 + \frac{2}{1-\frac{1}{\gamma}} \right)^2 \Bigg( C_{1} + C_{6} + C_{11} \Bigg) \left( \frac{\zeta}{\ell} \right) \left( \frac{h}{\ell} \right)^{2p} \\ &\phantom{=} + C_{\bf u} \left( 1+ \frac{2}{1-\frac{1}{\gamma}} \right)^2 \Bigg( C_{2} + C_{3} + C_{4} + C_{5} + C_{7} + C_{8} + C_{9} + C_{10} \Bigg) \left( \frac{\zeta}{\ell} \right)^3 \left( \frac{h}{\ell} \right)^{2p-2}. \end{aligned} \end{equation*} Since the coercivity constant $\gamma$ depends only on the trace constants $C^S_{\text{tr},1}$, $C^S_{\text{tr},2}$, $C^S_{\text{tr},3}$, $C^S_{\text{tr},4}$, and $C^S_{\text{tr},5}$ and the penalty constants $C^S_{\text{pen},1}$, $C^S_{\text{pen},2}$, $C^S_{\text{pen},3}$, and $C^S_{\text{pen},4}$, the desired result follows with \begin{equation*} C_{\text{bound}} = \left( 1+ \frac{2}{1-\frac{1}{\gamma}} \right)^2 \max\left\{ C_{1} + C_{6} + C_{11}, C_{2} + C_{3} + C_{4} + C_{5} + C_{7} + C_{8} + C_{9} + C_{10} \right\}. \end{equation*} This completes the proof. \end{proof} \noindent An immediate consequence of the above theorem is that the membrane strain satisfies the error bound $$\int_{\Omega} \left( \left( \surfTens{\alpha}({\bf u}) - \surfTens{\alpha}({\bf u}_h) \right) : \frac{\mathbb{C}}{|\mathbb{C}|} : \left( \surfTens{\alpha}({\bf u}) - \surfTens{\alpha}({\bf u}_h) \right) \right) d\Omega \leq C_{\text{bound}} \left( \left( \frac{\zeta}{\ell} \right)^2 \left( \frac{h}{\ell} \right)^{2p-2} + \left( \frac{h}{\ell} \right)^{2p} \right) \| {\bf u} \|^2_{\left(H^{p+1}(\Omega)\right)^3}$$ and the bending strain satisfies the error bound $$\frac{\ell^2}{12} \int_{\Omega} \left( \left( \surfTens{\beta}({\bf u}) - \surfTens{\beta}({\bf u}_h) \right) : \frac{\mathbb{C}}{|\mathbb{C}|} : \left( \surfTens{\beta}({\bf u}) - \surfTens{\beta}({\bf u}_h) \right) \right) d\Omega \leq C_{\text{bound}} \left( \left( \frac{h}{\ell} \right)^{2p-2} + \left( \frac{\zeta}{\ell} \right)^{-2} \left( \frac{h}{\ell} \right)^{2p} \right) \| {\bf u} \|^2_{\left(H^{p+1}(\Omega)\right)^3}.$$ Thus, when the thickness-to-domain-size ratio $\zeta/\ell$ is fixed, both the membrane strain and the bending strain converge as the mesh-to-domain-size ratio $h/\ell$ tends to zero. Alternatively, when $h/\ell$ is fixed, the error bound for the membrane strain remains finite but the error bound for the bending strain tends to infinity as $\zeta/\ell$ tends to zero.
This is a consequence of \textbf{\emph{membrane locking}}, which affects virtually all Kirchhoff-Love shell discretizations relying on a primal (i.e., displacement-only) formulation. There are many different approaches to alleviate membrane locking, including the use of mixed methods wherein membrane strain is introduced as an additional variable \cite{Bathe1986}, but these approaches are not discussed further here since the focus is on weak enforcement of boundary conditions. \subsection{\textit{A Priori} Error Estimates in Lower-Order Norms for NURBS-Based Kirchhoff-Love Shell Discretizations} Using the well-known Aubin-Nitsche trick \cite[Chapter~4]{Strang1973}, we can also prove \textit{a priori} error estimates in lower-order norms for NURBS-based Kirchhoff-Love shell discretizations. The proof of this result is omitted for brevity. \begin{theorem}[\textit{A Priori} Error Estimate in Lower-Order Norms for the Kirchhoff-Love Shell] If $p \geq 2$, then for any ${\bf u} \in \left(H^{p+1}(\Omega)\right)^3$, we have the estimates $$\| {\bf u} - {\bf u}_h \|^2_{\left(H^1(\Omega)\right)^3} \leq C_{\text{bound},1} \left( \frac{h}{\ell} \right)^{2p} \| {\bf u} \|^2_{\left(H^{p+1}(\Omega)\right)^3}$$ and $$\| {\bf u} - {\bf u}_h \|^2_{\left(L^2(\Omega)\right)^3} \leq C_{\text{bound},2} \left( \frac{h}{\ell} \right)^{\min\left\{2p+2,4p-4\right\}} \| {\bf u} \|^2_{\left(H^{p+1}(\Omega)\right)^3},$$ where $C_{\text{bound,1}}$ and $C_{\text{bound,2}}$ are dimensionless constants independent of the mesh-to-domain-size ratio $h/\ell$, but dependent on the thickness-to-domain-size ratio $\zeta/\ell$, the normalized elasticity tensor $\mathbb{C}/|\mathbb{C}|$, the polynomial degree $p$, the normalized geometric mapping $\left({\bf x}(\bm{\xi}) - {\bf x}(\bm{0})\right)/\ell$, the trace constants $C^S_{\textup{tr},1}$, $C^S_{\textup{tr},2}$, $C^S_{\textup{tr},3}$, $C^S_{\textup{tr},4}$, and $C^S_{\textup{tr},5}$, the penalty constants $C^S_{\textup{pen},1}$, $C^S_{\textup{pen},2}$, $C^S_{\textup{pen},3}$, and $C^S_{\textup{pen},4}$, and the parametric mesh regularity. \end{theorem} \noindent Note that it is impossible to make the bounding constants $C_{\text{bound,1}}$ and $C_{\text{bound,2}}$ appearing in the above theorem independent of the thickness-to-domain-size ratio $\zeta/\ell$ since membrane locking occurs in the zero thickness limit. Also, although we are able to employ a discretization composed of quadratic splines, we only expect to see optimal convergence rates in the energy and $H^1$-equivalent norms for such a discretization, since the $L^2$-norm error cannot converge at a rate faster than second order for quadratic discretizations of fourth-order partial differential equations \cite[Chapter~2]{Strang1973}. \section{Numerical Results} \label{sec:num_results} In this section, we demonstrate the robustness and effectiveness of our proposed methodology through numerical experiments. The robustness is shown by the ability of our framework to accommodate a wide variety of geometric configurations with complex boundary conditions, while the effectiveness is demonstrated by the discretization obtaining optimal convergence rates in both the associated energy- and $L^2$-norms. \begin{figure}[ht!]
\centering \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{problem_1_geo.pdf} \caption*{Problem 1} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{problem_3_geo.pdf} \caption*{Problem 3} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{problem_5_geo.pdf} \caption*{Problem 5} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{problem_7_geo.pdf} \caption*{Problem 7} \end{subfigure} \\ \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{problem_2_geo.pdf} \caption*{Problem 2} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{problem_4_geo.pdf} \caption*{Problem 4} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{problem_6_geo.pdf} \caption*{Problem 6} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{problem_8_geo.pdf} \caption*{Problem 8} \end{subfigure} \caption{ The eight problems that comprise the linear shell obstacle course. From left to right, the first column contains flat geometries, the second column contains parabolic geometries, the third column contains hyperbolic geometries, and the fourth column contains elliptic geometries. Boundaries with prescribed displacement and bending moment (e.g., simply supported boundaries) are denoted by \protect\raisebox{0pt}{\includegraphics{2DSS.pdf}}/\protect\raisebox{0pt}{\includegraphics{3DSS.pdf}}, boundaries with prescribed displacement and normal rotation (e.g., clamped boundaries) are denoted by \protect\raisebox{0pt}{\includegraphics{CLAMPED.pdf}}, boundaries with prescribed ersatz traction and normal rotation (e.g., symmetric boundaries) are denoted by \protect\raisebox{0pt}{\includegraphics{SYM.pdf}}, and boundaries with prescribed ersatz traction and bending moment (e.g., free boundaries) are denoted by \protect\raisebox{0pt}{\includegraphics{FREE.pdf}}. } \label{fig:shell_obstacle_course} \end{figure} Performing numerical validation for shells is particularly difficult for several reasons. There are few, if any, analytic solutions available for assessing discretization accuracy in Sobolev-equivalent norms. For this reason, many have resorted to measuring performance by pointwise measures, such as the displacement at the location of point-load application or where the point of maximal displacement is likely to occur. The so-called ``shell obstacle course'' found in much of the related literature is perhaps the most common suite of problems where converged pointwise values are used to indicate validity \cite{Scordelis1964,Belytschko1985}. Unfortunately, these values only agree up to a few digits of precision and are therefore not reliable for a rigorous assessment of convergence. Furthermore, as we have no theoretical error estimates in terms of pointwise quantities, this approach does not suffice for our numerical validation. Separately, we also observe that for fine meshes discretized with high-order elements, roundoff errors due to ill-conditioning of the resulting linear system often dominate the solution, as is evident in the forthcoming numerical results. This presents difficulty in using high-resolution solutions as a benchmark for convergence. 
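One standard remedy, employed below, is residual-based iterative refinement: a single factorization of the system matrix is reused to solve for corrections driven by the current residual, recovering digits lost to ill-conditioning. A minimal dense-algebra sketch of such a loop follows; it is an illustration only, assuming a dense LU factorization via SciPy, and is not an excerpt from our solver pipeline.

\begin{verbatim}
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refine(A, f, n_iter=3):
    lu_piv = lu_factor(A)            # factor once
    u = lu_solve(lu_piv, f)          # initial solve
    for _ in range(n_iter):
        r = f - A @ u                # residual (ideally in extended precision)
        u += lu_solve(lu_piv, r)     # correction reuses the factorization
    return u
\end{verbatim}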
To combat roundoff errors in the asymptotic regime before this phenomenon dominates the solution, we apply three iterations of residual-based iterative refinement in all of our forthcoming problems \cite{Wilkinson1948,Wilkinson1963}. To demonstrate that we truly obtain optimal convergence rates, we instead resort to ``manufactured'' forcing functions that, when applied to the geometries we consider, yield known displacement fields. For simple and flat domains, this task is relatively straightforward and amounts to applying the differential operator $\mathcal{L}$ to a desired solution field ${\bf u}$ to obtain the corresponding $\applied{\textbf{\textup{f}}}$. Manufacturing forcing functions for shell problems posed over curved manifolds is conceptually no different, but in practice it is a much more involved task and care must be taken at every step. To facilitate this process, we carefully implemented all the steps in Mathematica, which allows many of the operations to be done symbolically. We have found one other instance where such a process has been performed \cite{gfrerer2018code}; however, our linear shell obstacle course is comprehensive in that it encompasses all possible boundary condition configurations. Moreover, we have provided the forcing functions and their corresponding displacement, strain, and stress fields from our linear shell obstacle course for the research community\footnote{https://github.com/wdas/shell-obstacle-course}. In order to make our testing as exhaustive as possible, we devised a new linear shell obstacle course, which covers flat, parabolic, hyperbolic, and elliptic geometries subject to simply supported, clamped, free, and symmetric boundary conditions (see Figure~\ref{fig:shell_obstacle_course}). We fix the shell thickness to be $\zeta = 0.1~\text{m}$ and set the material parameters $E = 10~\text{MPa}$ and $\nu = 0.3$ in our linear constitutive model \eqref{eqn:constitutive} for all of the problems we consider. Note that in the forthcoming subsections, all displacement fields presented are in meters. Our methodology is free of membrane locking for all of the problems we consider because the thickness-to-domain-size ratio, $\zeta/\ell$, is always $0.1$. This enables us to numerically examine asymptotic rates of convergence. For all the experiments, we employ uniform, tensor-product meshes in the parametric domain. Since the geometric mappings are non-degenerate, the corresponding physical mesh is composed of curvilinear quadrilateral elements. Recall Remark~\ref{remark:shellBCs} wherein it is mentioned that the Kirchhoff-Love shell accommodates four common types of boundary conditions: clamped, simply supported, symmetric, and free. In terms of Dirichlet boundary conditions, a simply supported shell is one such that the boundary displacement is zero while the normal rotation is unconstrained. A shell with symmetric boundary conditions is one such that the boundary displacement is unconstrained while normal rotation is zero. Finally, a shell with free boundary conditions is one such that both the boundary displacement and the normal rotation are unconstrained. From energetic principles, this implies that the quantities that are energetically conjugate to those that are unconstrained must vanish. For simply supported structures, this is the bending moment; for those with symmetric boundaries, this is the ersatz traction; and for those with free boundaries, this is both.
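To make the flat-domain case concrete, the following SymPy sketch manufactures a forcing function for the Kirchhoff plate bending equation $D \Delta^2 u_3 = f$ with bending stiffness $D = E \zeta^3 / (12(1-\nu^2))$, a deliberately simplified stand-in for the full shell operator $\mathcal{L}$ used only for illustration:

\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
E, nu, zeta = 10e6, sp.Rational(3, 10), sp.Rational(1, 10)
D = E * zeta**3 / (12 * (1 - nu**2))        # plate bending stiffness

u3 = sp.sin(sp.pi * x) * sp.sin(sp.pi * y)  # desired transverse field
f = sp.simplify(D * (sp.diff(u3, x, 4)      # biharmonic operator
                     + 2 * sp.diff(u3, x, 2, y, 2)
                     + sp.diff(u3, y, 4)))
print(f)  # 4*pi**4*D*sin(pi*x)*sin(pi*y), with D evaluated
\end{verbatim}

The curved-manifold case proceeds in the same spirit but involves the covariant differentiation machinery of \ref{sec:Appendix_Diff_Geo}, which is why we perform those steps symbolically in Mathematica.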
Since it is nearly impossible to manufacture a solution for which these energetically conjugate quantities vanish exactly, our linear shell obstacle course simply emulates this behavior by prescribing nonhomogeneous boundary conditions in lieu of those that should be unconstrained. However, for readability, we still refer to these boundary condition types by their classical names throughout this section even though it is understood that they are in fact emulated. The presence of four covariant differentiation operators in the underlying strong formulation of the Kirchhoff-Love shell yields complex forcing functions that are not only highly nonlinear but also often numerically unstable to evaluate in double precision. For this reason, it is recommended that these entities be evaluated in extended precision and truncated to the operating precision only after all operations comprising the function have been completed. For our particular results, we compute forcing function data as well as strain and stress tensor data with 100 digits of accuracy, truncate the data to double precision, and save the data in external files that are later read by our isogeometric analysis routines. This ensures that the function values evaluated at the quadrature points are accurate enough for our formation and assembly routines, as well as for post-processing. Furthermore, to handle the nonlinearities of these functions, we employ a $25 \times 25$-point quadrature rule to avoid under-integration. Alternatively, one could use an adaptive quadrature scheme to overcome this issue without suffering from the curse of dimensionality for tensor product meshes, since these nonlinearities are most prevalent near the boundaries and at ``corner'' points. For the benefit of the community, we provide a Mathematica notebook containing the problem data for our shell obstacle course that allows the results presented in this section to be reconstructed. In particular, the notebook includes geometric parameterizations, displacement fields, the strain and stress tensors for bending and membrane action, and the forcing function. For convenience, the control meshes and NURBS weights associated with the geometries we consider in our numerical results are also tabulated in \ref{sec:Geo_Param}. In the following, each manufactured displacement field is denoted by a superscript number in parentheses that indicates the problem number. These numbers correspond to the tabulated geometry data in \ref{sec:Geo_Param} as well as in the supplemental notebook. \subsection{Flat Geometry} We begin our presentation of numerical results with the examples having flat geometric configurations. In this case, the in-plane and out-of-plane phenomena are completely decoupled due to the lack of curvature. This property is useful for determining that Nitsche's method is implemented correctly for in-plane and out-of-plane behaviors separately. Regardless of this decoupling, we still consider flat plate-membrane systems subject to both in-plane and out-of-plane displacement fields. The first problem set we consider comprises (i) a NURBS-mapped, annular domain and (ii) an astroid domain as shown in Figure~\ref{fig:shell_obstacle_course}. Note that the astroid domain is not truly an astroid by its mathematical definition, but rather closely resembles one. The annular domain is modeled through a quarter-annulus with symmetric boundary conditions employed on the straight edges.
This domain is subject to a linear, radial displacement in-plane and an exponential transverse displacement field. Moreover, it accommodates a clamped boundary on the inner radius and a free boundary on the outer radius. More specifically, the displacement field for Problem 1 over the annular domain is characterized by \begin{equation*} {\bf u}^{(1)}\left( \xi^1, \xi^2 \right) := \left( \frac{\xi^1}{\|\undef{\fullTens{\FFF}}_1\|} \right) \undef{\fullTens{\FFF}}_1 + \left( e^{\xi^1} - 1 \right)\xi^1 \undef{\fullTens{\FFF}}_3. \end{equation*} By comparison, the astroid domain is loaded such that the resulting in-plane displacement field is a vortex with no displacement on the domain boundary and a sinusoidal transverse displacement field. This choice of displacement field effectively emulates a plate with two simply supported edges on opposite ends and clamped edges on the remaining boundaries. The displacement field for Problem 2 over the astroid domain is given by the following set of Cartesian displacement modes: \begin{equation*} {\bf u}^{(2)}\left( \xi^1, \xi^2 \right) := \left( \begin{array}{c} u_x\\ u_y\\ u_z \end{array} \right) = \left( \begin{array}{c} \left( \xi^1 - 1 \right)^2 \left( \xi^1 \right)^2 \left( \frac{1}{2} - \xi^2 \right) (1 - \xi^2) \xi^2 \vspace{1pt}\\ \left( \xi^2 - 1 \right)^2 \left( \xi^2 \right)^2 \left( \frac{1}{2} - \xi^1 \right) (1 - \xi^1) \xi^1 \vspace{1pt}\\ \left( 1 - \xi^1 \right) \xi^1 \sin(\pi \xi^1) \sin(\pi \xi^2) \end{array} \right). \end{equation*} \begin{figure}[t!] \includegraphics{flat_results.pdf} \caption{The convergence behavior of the annular problem is shown in the $L^2$-norm (top left) and energy norm (top right). The convergence results for the astroid problem are shown in the $L^2$-norm (bottom left) and the energy norm (bottom right). Optimal convergence rates are observed with their theoretical counterparts shown as dashed lines with hollow, identical markers. The magnitude of the displacement field is plotted over the geometry for plots pertaining to the $L^2$-norm, while the total internal energy density is plotted over the geometry for plots pertaining to the energy norm.} \label{fig:flat_results} \end{figure} As is clearly demonstrated in Figure~\ref{fig:flat_results}, optimal convergence rates are obtained both in the $L^2$-norm for $p>2$ and in the energy norm for all polynomial degrees of discretization considered. Note that the convergence rate in the $L^2$-norm is sub-optimal for $p=2$, while the convergence rate in the energy norm is optimal. This phenomenon is expected and will be observed for all problems considered in the linear shell obstacle course. For an elaboration, refer to \cite[Chapter~2]{Strang1973}. For large $p$ and small $h$, we observe the aforementioned roundoff divergence due to matrix ill-conditioning. \subsection{Parabolic Geometry} The next problem class that we consider is shells over a parabolic geometry, namely, a NURBS-parameterized cylinder. In this instance, we encounter a coupling between in-plane and out-of-plane behaviors due to the curvature of the shell body. First, we consider a NURBS-mapped quarter-cylinder domain and, next, we model a full cylindrical shell by employing symmetric boundary conditions across the edges of the quarter-cylinder. In the first configuration, we apply a forcing function such that the resulting displacement field is a quartic-by-quadratic polynomial. This choice of displacement field emulates clamped and simply supported boundary conditions.
Moreover, the displacement field for Problem 3 over the quarter-cylinder is given by the following: \begin{equation*} {\bf u}^{(3)}\left( \xi^1, \xi^2 \right) := - \left( \xi^1 - 1\right)^2 \left( \xi^1 \right)^2 \xi^2 \left(\xi^2 - 1 \right) \undef{\fullTens{\FFF}}_3. \end{equation*} The second problem configuration applies a forcing function such that the resulting displacement field is a sinusoid in the radial direction. This displacement field has free boundary conditions along the axis of the cylinder. The displacement field for Problem 4 over the cylindrical domain is given by the following: \begin{equation*} {\bf u}^{(4)}\left( \xi^1, \xi^2 \right) := \frac{1}{2} \cos \left( \pi \xi^1 \right) \undef{\fullTens{\FFF}}_3. \end{equation*} \begin{figure}[t!] \includegraphics{par_results.pdf} \caption{The convergence behavior of the quarter-cylinder from configuration 1 is shown in the $L^2$-norm (top left) and energy norm (top right). The convergence results for the cylinder in configuration 2 are shown in the $L^2$-norm (bottom left) and in the energy norm (bottom right). Optimal convergence rates are observed with their theoretical counterparts shown as dashed lines with hollow, identical markers. The magnitude of the displacement field is plotted over the geometry for plots pertaining to the $L^2$-norm, while the total internal energy density is plotted over the geometry for plots pertaining to the energy norm.} \label{fig:par_results} \end{figure} Clearly, the results depicted in Figure~\ref{fig:par_results} demonstrate that optimal convergence rates are obtained in the $L^2$-norm for $p>2$ and the energy norm for all polynomial degrees of discretization. Once again, the convergence rate in the $L^2$-norm is sub-optimal for $p=2$, while the convergence rate in the energy norm is optimal. Note that the displacement field for Problem 3 is in the span of biquartic polynomial basis functions. Moreover, the geometric mapping is a rational quadratic, as the NURBS weighting function is a quadratic polynomial. Therefore, we obtain machine precision for any mesh size with a sixth-order discretization in both $L^2$- and energy norms until the ill-conditioning roundoff divergence begins. In the case of the energy norm, we never truly obtain floating-point machine precision. This is also due to the ill-conditioning of the linear system and the computation of the strain energy. \subsection{Hyperbolic Geometry} The next class of geometries considered pertains to hyperbolic configurations. Once again, geometric curvatures couple the in-plane and out-of-plane effects; however, contrary to the previous two scenarios, this geometry is doubly curved and, hence, has a nonzero Gaussian curvature. Note that in this instance, the hyperbolic paraboloid is only an approximation in the sense that it is not a true NURBS domain but rather a B-spline approximation, i.e., $w(\bm{\xi}) \equiv 1$. This choice is made for the sake of simplicity in solution field manufacturing and convergence analysis; it does not alter the hyperbolic classification of the geometry. The resulting forcing function, as well as the stress and strain tensors, for the NURBS-mapped hyperbolic paraboloid are drastically more complex than their polynomial counterparts. The first problem configuration we consider is a full hyperbolic paraboloid that is modeled through the use of symmetric boundary conditions.
The top and bottom of the hyperbolic paraboloid have simply supported boundary conditions and the shell is subject to a loading such that the resulting geometry is a B-spline approximation of a cylinder. The displacement field for Problem 5 over the hyperbolic paraboloid domain is given by \begin{equation*} {\bf u}^{(5)}\left( \xi^1, \xi^2 \right) := \left( \begin{array}{c} u_x\\ u_y\\ u_z \end{array} \right) = \left( \begin{array}{c} \sqrt{2} \left( \left( \xi^1 \right)^2 - 1 \right) \left( \xi^2 -1 \right) \xi^2 \vspace{1pt}\\ \sqrt{2} \left( \xi^1 - 2 \right) \xi^1 \left( \xi^2 -1 \right) \xi^2 \vspace{1pt}\\ 0 \end{array} \right). \end{equation*} In the second configuration, we instead consider a quarter of this hyperbolic paraboloid. In this scenario, one edge is clamped while the other edges are free and the entire system is subject to a forcing such that the resulting displacement field is a sinusoid. In particular, the displacement field for Problem 6 over the quarter-hyperbolic paraboloid domain is given by \begin{equation*} {\bf u}^{(6)}\left( \xi^1, \xi^2 \right) := \left( \begin{array}{c} u_x\\ u_y\\ u_z \end{array} \right) = \left( \begin{array}{c} \xi^2 \sin \left( \frac{\pi}{2} \xi^2 \right) \vspace{1pt}\\ \xi^2 \sin \left( \frac{\pi}{2} \xi^2 \right) \vspace{1pt}\\ 0 \end{array} \right). \end{equation*} \begin{figure}[t!] \includegraphics{hyp_results.pdf} \caption{The convergence behavior of the hyperbolic paraboloid problem in configuration 1 is shown in the $L^2$-norm (top left) and energy norm (top right). The convergence results for the quarter-hyperbolic paraboloid in configuration 2 are shown in the $L^2$-norm (bottom left) and the energy norm (bottom right). Optimal convergence rates are observed with their theoretical counterparts shown as dashed lines with hollow, identical markers. The magnitude of the displacement field is plotted over the geometry for plots pertaining to the $L^2$-norm, while the total internal energy density is plotted over the geometry for plots pertaining to the energy norm.} \label{fig:hyp_results} \end{figure} \begin{figure}[t!] \includegraphics{ell_results.pdf} \caption{The convergence behavior of the hemispherical shell in configuration 1 is shown in the $L^2$-norm (top left) and in the energy norm (top right). The convergence results for the hemispherical shell in configuration 2 are shown in the $L^2$-norm (bottom left) and the energy norm (bottom right). Optimal convergence rates are observed with their theoretical counterparts shown as dashed lines with hollow, identical markers. The magnitude of the displacement field is plotted over the geometry for plots pertaining to the $L^2$-norm, while the total internal energy density is plotted over the geometry for plots pertaining to the energy norm.} \label{fig:ell_results} \end{figure} Figure~\ref{fig:hyp_results} demonstrates that our discretization yields optimal convergence behavior in this case both in the $L^2$-norm for $p>2$ and in the energy norm for all polynomial degrees considered. We once again observe sub-optimal convergence rates in the $L^2$-norm for $p=2$, while the energy norm is unaffected. Note that the displacement field for the first configuration is in the span of all polynomial degrees considered. This is because the B-spline approximations for both the hyperbolic paraboloid and the cylinder are quadratic, and consequently so is their difference. Therefore, we obtain machine precision for all degrees of discretization in this instance.
Once again, matrix ill-conditioning presents itself in the form of roundoff divergence. The effects of this ill-conditioning are also present in the preasymptotic region where true floating-point machine precision is not obtained, as was the case for the $p=6$ discretization of Problem 3. In the second configuration, we obtain the expected convergence behavior. \subsection{Elliptic Geometry} The final class of geometries considered here is elliptic configurations in the form of a hemispherical shell. Much like the hyperbolic case, these geometries also have nonzero Gaussian curvature, so, as before, we only approximate the hemisphere by letting $w(\bm{\xi}) \equiv 1$. The first problem configuration considered is a hemispherical shell subject to an internal pressure resulting in a radial sinusoidal displacement field. This problem is subject to symmetric boundary conditions along the edges of the hemispherical section as well as simply supported boundary conditions on the top and bottom of the shell. In particular, the displacement field for Problem 7 over the hemispherical shell domain is given by \begin{equation*} {\bf u}^{(7)}\left( \xi^1, \xi^2 \right) := - \sin \left( \pi \xi^1 \right) \undef{\fullTens{\FFF}}_3. \end{equation*} The second configuration is the same hemispherical shell employing symmetric boundary conditions along the edges to emulate a full hemisphere, but with the top edge clamped and the bottom edge free in this scenario. The shell is subject to a loading such that the resulting displacement field is exponential and oriented downward. The displacement field for Problem 8 over the hemispherical shell domain is given by \begin{equation*} {\bf u}^{(8)}\left( \xi^1, \xi^2 \right) := \left( \begin{array}{c} u_x\\ u_y\\ u_z \end{array} \right) = \left( \begin{array}{c} 0 \vspace{1pt}\\ 0 \vspace{1pt}\\ \left( \xi^1 - 1 \right)\left( e - e^{\xi^1} \right) \end{array} \right). \end{equation*} The convergence analysis shown in Figure~\ref{fig:ell_results} demonstrates that optimal convergence rates are once again obtained in the $L^2$-norm for $p>2$ and in the energy norm for all polynomial degrees discretized over the elliptic geometries. The convergence rate in the $L^2$-norm for $p=2$ is inhibited as discussed previously. \begin{figure}[t!] \centering \begin{subfigure}[t]{0.99\textwidth} \centering \includegraphics{Disp_errs.pdf} \caption{Relative Displacement Errors} \end{subfigure} \\ \begin{subfigure}[b]{0.99\textwidth} \centering \includegraphics{Energy_errs.pdf} \caption{Relative Shell Energy Errors} \end{subfigure} \caption{(a) The relative displacement errors and (b) the relative shell energy errors visualized over the undeformed geometries for our new shell obstacle course for a $p=4$, $16 \times 16$-element mesh. Each column represents a geometric class of problems where, from left to right, we have flat, parabolic, hyperbolic, and elliptic geometries. From top to bottom, in the first column are the annular domain and the astroid, in the second column are the quarter cylinder and the full cylinder, in the third column are the inflated hyperbolic paraboloid and the hyperbolic paraboloid diving board, and in the fourth column are the inflated hemispherical shell and the stretched hemisphere.} \label{fig:Disp_and_energy_Errs} \end{figure} \subsection{Displacement and Energy Errors} Another benefit of our new shell obstacle course and its manufactured solutions is the capability to visualize the pointwise displacement and energy density errors throughout the geometric domain.
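For instance, a pointwise error field of this sort may be rendered directly over the parametric domain; the following matplotlib sketch uses simple stand-in fields rather than our actual data:

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

xi1, xi2 = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
u = np.sin(np.pi * xi1) * np.sin(np.pi * xi2)         # stand-in exact field
u_h = u + 1e-4 * np.sin(8 * np.pi * xi1) \
            * np.sin(8 * np.pi * xi2)                 # stand-in discrete field
plt.pcolormesh(xi1, xi2, np.abs(u - u_h), shading='auto')
plt.colorbar(label='pointwise displacement error')
plt.xlabel('xi1'); plt.ylabel('xi2')
plt.show()
\end{verbatim}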
Such visualization is exceptionally useful for understanding various discretization methods and how their errors accrue. To illustrate this, the displacement errors and energy errors are plotted in Figure~\ref{fig:Disp_and_energy_Errs}. As is shown in these figures, the error is quite oscillatory throughout the domain, but the amplitude of the oscillations is bounded on the order of the discretization error. \subsection{Comparison to Variationally Inconsistent Nitsche-Based Formulation} To convey the importance of variational consistency, we have included an additional set of numerical experiments in this subsection. In these experiments, we compare our formulation and discretization to one using the incorrect ersatz forces described in Remark~\ref{rem:IncorrectErsatz}, namely those presented in \cite[p.155]{Ciarlet2005} and \cite[p.156]{Koiter1973foundations}. The resulting Nitsche-based formulation is identical in form to the formulations proposed in \cite{guo2015weak,guo2015nitsche} up to corner forces. The results of this experiment are shown in Figure~\ref{fig:bad_results}. To reiterate the differences between these results and those in previous subsections, we employ the incorrect bending component of the ersatz force $\surfVec{\textup{T}}^{(B)}({\bf w}) = - 2 \surfTens{b} \cdot \surfTens{B}({\bf w}) \cdot \surfVec{n}$ instead of our derived force $\surfVec{\textup{T}}^{(B)}({\bf w}) = - \surfTens{b} \cdot \left( \surfTens{B}({\bf w}) \cdot \surfVec{n} + \surfVec{t} B_{nt}({\bf w}) \right)$. Asymptotic convergence rates of roughly $0.5$ and $1.5$ are observed in the energy norm and the standard $L^2$-norm, respectively, regardless of polynomial degree. The deteriorated convergence rates are due to the inconsistency of the underlying formulation, which renders it no more effective than a classical penalty method. In fact, the convergence rates presented in Figure~\ref{fig:bad_results} agree with theoretically expected convergence rates from such a penalty formulation \cite[Thm. 5.2]{graser2019discretization}. Note that the reference lines and slope parameters are computed using only the tail of the data, where the arrested rates begin, to highlight this observation. \begin{figure}[t!] \includegraphics{bad_results.pdf} \caption{The convergence behavior of two selected problems from our linear shell obstacle course using the variationally inconsistent Nitsche-based formulation. The results for Problem 3 are shown in the $L^2$-norm (top left) and energy norm (top right) and those for Problem 5 are shown in the $L^2$-norm (bottom left) and energy norm (bottom right). The results from our discretization are shown transparently in the background for reference. The convergence behavior of the incorrect discretization is shown by the lines with filled markers. The dashed lines of matching color are a linear fit of the tail-end of the data.} \label{fig:bad_results} \end{figure} The variationally inconsistent formulation results in arrested convergence rates after a certain critical mesh size where the boundary error dominates the total error. This is especially clear for the Problem 3 results in Figure~\ref{fig:bad_results} where optimal convergence rates are obtained initially until they are eventually inhibited. This observation is also readily seen in \cite[Fig.7]{guo2015weak} where initial, optimal convergence rates begin to taper off at the last available data point.
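A tail-end slope fit of this kind takes only a few lines; for instance (with hypothetical mesh sizes and errors, not values read from Figure~\ref{fig:bad_results}):

\begin{verbatim}
import numpy as np

h = np.array([1/4, 1/8, 1/16, 1/32, 1/64])        # mesh sizes
err = np.array([2.1e-2, 7.6e-3, 2.7e-3,
                9.4e-4, 3.3e-4])                  # hypothetical L2 errors

tail = slice(-3, None)                            # asymptotic tail only
slope, _ = np.polyfit(np.log(h[tail]), np.log(err[tail]), 1)
print(f"observed rate ~ {slope:.2f}")             # ~1.5 for this data
\end{verbatim}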
The Problem 5 results in Figure~\ref{fig:bad_results} clearly illustrate the underlying formulation is inconsistent because the manufactured solution lies in the span of each discretization and yet the error is not at machine precision. This is also the case for the $p=6$ discretization for Problem 3. \section{Conclusion} \label{sec:conclusion} In this paper, we have presented a new Nitsche-based formulation for the linear Kirchhoff-Love shell that is provably stable and optimally convergent for general sets of admissible boundary conditions. To arrive at our formulation, we first presented a systematic framework for constructing Nitsche-based formulations for variational constrained minimization problems. We proved that this framework yields a well-posed and convergent Nitsche-based formulation provided that a generalized Green's identity and generalized trace and Cauchy-Schwarz inequalities are available. We then applied this framework to the linear Kirchhoff-Love shell and, for the particular case of NURBS-based isogeometric analysis, we proved that the resulting Nitsche-based formulation yields optimal convergence rates in both the shell energy norm and the standard $L^2$-norm. To arrive at this formulation, we derived the Euler-Lagrange equations for general sets of admissible boundary conditions, and we discovered that the equations typically presented in the literature are incorrect. To verify our Nitsche-based formulation, we constructed a linear shell obstacle course encompassing flat, parabolic, hyperbolic, and elliptic geometric configurations subject to clamped, simply supported, symmetric, and free boundary conditions. For all examples, we used NURBS to discretize the governing equations, and we demonstrated that optimal convergence rates are obtained in both the shell energy norm and the standard $L^2$-norm for polynomial degrees $p = 2$ through $p = 6$. We also demonstrated that a variationally inconsistent Nitsche-based formulation based on the incorrect Euler-Lagrange equations typically presented in the literature yields sub-optimal convergence rates of 0.5 and 1.5 in the shell energy norm and standard $L^2$-norm, respectively. As discussed in Section~\ref{sec:num_results}, it is necessary to manufacture forcing functions that yield known displacement fields in order to confirm optimal convergence rates. This process is extremely non-trivial due to the inherent complexity of the PDE governing the Kirchhoff-Love shell. Historically, shell discretizations have been verified through ad hoc and unreliable means such as pointwise measures of convergence to values that are not backed by theory. Although these methods may show that a given discretization ultimately approaches an agreed-upon value, they provide no notion of the rate at which it converges, nor the rigor associated with error estimates in standard Sobolev norms. To enable future researchers to rigorously validate their results, we have released the eight problems in our extended shell obstacle course in a supplemental notebook file. To the best of our knowledge, there does not otherwise exist a comprehensive suite of validation problems for shells that (i) encompasses all possible geometric classifications, (ii) considers all admissible boundary condition configurations, and (iii) serves as a tool for confirming optimal convergence behaviors. We therefore believe our new linear shell obstacle course stands as a valuable contribution to the research community on its own.
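To give a flavor of how the released problems can be sampled, here is a minimal sketch (ours, not an excerpt of the supplemental notebook) that evaluates the two hemispherical-shell displacement fields of Problems 7 and 8 on a parametric grid; the grid resolution is an arbitrary choice, and the director $\undef{\fullTens{\FFF}}_3$ of Problem 7 would come from the NURBS geometry in an actual shell code.
\begin{verbatim}
import numpy as np

# parametric grid over the unit square (xi1, xi2)
xi1, xi2 = np.meshgrid(np.linspace(0, 1, 17), np.linspace(0, 1, 17),
                       indexing="ij")

# Problem 7: sinusoidal field along the undeformed director a3(xi1, xi2);
# only the scalar coefficient is computed here
u7_coeff = -np.sin(np.pi * xi1)

# Problem 8: purely vertical exponential field, vanishing along xi1 = 1
u8_z = (xi1 - 1.0) * (np.e - np.exp(xi1))
print(u7_coeff.min(), u8_z.min())
\end{verbatim}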
Since our framework for constructing Nitsche-based formulations is general and applicable to more complex problems, we plan to extend our methodology to Kirchhoff-Love shells with both geometric and material nonlinearities. To this end, we plan to extend our new shell obstacle course to this setting as well to serve as another validation tool for the shell community. We also plan to explore alternative discretization strategies and, in particular, Catmull-Clark subdivision spline discretizations with extraordinary vertices. Finally, we plan to extend our methodology to the weak enforcement of displacement and normal rotation along patch interfaces for non-conforming multi-patch NURBS geometries and along trimming curves for trimmed NURBS geometries.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In \cite{Geelen}, Geelen, Gerards and Whittle make the following conjectures. \begin{conjecture}\label{Jim-1} For any polynomial $p(\cdot)$ there is a frame matroid $M$ such that for any set $\mathcal{A}$ of subsets of $E(M)$ with $|\mathcal{A}|\leq p(|E(M)|)$ there is a non-frame matroid $M'$ such that $E(M')=E(M)$ and $r_{M'}(A)=r_M(A)$ for each $A\in\mathcal{A}$. \end{conjecture} \begin{conjecture}\label{Jim-2} For any polynomial $p(\cdot)$ there is a lifted-graphic matroid $M$ such that for any set $\mathcal{A}$ of subsets of $E(M)$ with $|\mathcal{A}|\leq p(|E(M)|)$ there is a non-lifted-graphic matroid $M'$ such that $E(M')=E(M)$ and $r_{M'}(A)=r_M(A)$ for each $A\in\mathcal{A}$. \end{conjecture} Put another way, the conjectures assert that, for matroids given by rank oracles, there does not exist a polynomial-time algorithm that determines whether a matroid is a frame matroid or a lifted-graphic matroid. In this paper we give proofs of both of these conjectures. In fact we prove slightly stronger results, in that we resolve the above conjectures within the class of quasi-graphic matroids. We devote the remainder of the introduction to clarifying the notions considered above. We begin by recalling material from \cite{Geelen}. Let $G$ be a graph and let $M$ be a matroid. For a vertex $v$ of $G$ we let loops$_G(v)$ denote the set of loop-edges of $G$ at the vertex $v$. We say that $G$ is a {\em framework} for $M$ if \begin{enumerate} \item $E(G)=E(M)$, \item $r(E(H))\leq |V(H)|$ for each component $H$ of $G$, and \item for each vertex $v$ of $G$ we have $\hbox{\rm cl}_M(E(G-v))\subseteq E(G-v)\cup {\rm loops}_G(v)$. \end{enumerate} A matroid is {\em quasi-graphic} if it has a framework. A matroid $M$ is a {\em lifted-graphic} matroid if there is a matroid $M'$ and an element $e\in E(M')$ such that $M'\backslash e=M$ and $M'/e$ is graphic. A matroid $M$ is {\em framed} if it has a basis $V$ such that, for each element $e\in E(M)$, there is a subset $W$ of $V$ with at most two elements such that $e\in\hbox{\rm cl}_M(W)$. A {\em frame matroid} is a restriction of a framed matroid. It is proved in \cite{Geelen} that frame matroids and lifted-graphic matroids are quasi-graphic. It is also proved in \cite{Geelen} that if $M$ is a representable, 3-connected, quasi-graphic matroid, then $M$ is either a frame matroid or a lifted-graphic matroid. However, for non-representable matroids this is very far from the case. Frame matroids and lifted-graphic matroids were introduced by Zaslavsky \cite{Zaslavsky-II} from the perspective of biased graphs \cite{Zaslavsky-I}. We will need that perspective for this paper so we recall material from \cite{Zaslavsky-I,Zaslavsky-II} now. A {\em theta graph} is a graph that consists of a pair of vertices joined by three internally disjoint paths. A connected 2-regular graph is a {\em cycle}. A collection ${\mathcal B}$ of cycles of a graph $G$ satisfies the {\em theta property} if no theta subgraph of $G$ contains exactly two members of $\mathcal B$. A \emph{biased graph} consists of a pair $(G, \mathcal{B})$, where $G$ is a graph and $\mathcal{B}$ is a collection of cycles of $G$ satisfying the theta property. If $(G,{\mathcal B})$ is a biased graph, then the members of ${\mathcal B}$ are called {\em balanced cycles}; the other cycles of $G$ are called {\em unbalanced}. For a set $A$ of edges of a graph $G$, let $G[A]$ be the subgraph with edge set $A$ and vertex set consisting of all vertices incident with an edge in $A$.
Zaslavsky \cite{Zaslavsky-II} defines two matroids associated with a biased graph $(G,\mathcal B)$. In the first, denoted ${\rm LM}(G,\mathcal B)$, a subset $I$ of $E(G)$ is independent if and only if $G[I]$ contains no balanced cycle and at most one cycle. In the second, denoted ${\rm FM}(G,\mathcal B)$, a subset $I$ of $E(G)$ is independent if and only if $G[I]$ contains no balanced cycle and every component of $G[I]$ contains at most one cycle. The next theorem follows from work of Zaslavsky \cite{Zaslavsky-II}. \begin{theorem} \label{zas} Let $M$ be a matroid. \begin{itemize} \item[(i)] $M$ is a lifted-graphic matroid if and only if there exists a biased graph $(G,\mathcal B)$ such that $M={\rm LM}(G,\mathcal B)$. \item[(ii)] $M$ is a frame matroid if and only if there exists a biased graph $(G,\mathcal B)$ such that $M={\rm FM}(G,\mathcal B)$. \end{itemize} \end{theorem} Finally we note that, if $M={\rm LM}(G,\mathcal B)$ or $M={\rm FM}(G,\mathcal B)$ for some biased graph $(G,\mathcal B)$, then $G$ is a framework for $M$. In \cite{Geelen} it is also conjectured that, unlike lifted-graphic and frame matroids, the property of being a quasi-graphic matroid can be recognised with a polynomial number of rank evaluations. Given the results of this paper it is clear that this conjecture is the more natural extension of a theorem of Seymour \cite{Seymour} where he proves that graphic matroids can be recognised with a polynomial number of rank evaluations. \section{Relaxations and tightenings} Recall that a {\em circuit-hyperplane} of a matroid $M$ is a set $C$ that is both a circuit and a hyperplane. It is well known, see for example \cite[Proposition~1.5.14]{Oxley}, that if $C$ is a circuit-hyperplane of $M$, then $\mathcal B(M)\cup \{C\}$ is the set of bases of a matroid $M'$. In this case we say that $M'$ is obtained from $M$ by {\em relaxing} the circuit-hyperplane $C$. Relaxation will be an important operation for us, but we will also need the reverse operation, which is no doubt well understood, but does not seem to appear in the literature. Let $B$ be a basis of a matroid $M$. If the closure of each proper subset of $B$ is the subset itself, then we say that $B$ is {\sl free}. Observe that, if $B$ is a free basis, then for each $e\in B$ and $f\in E(M)-B$ the set $B-e+f$ is a basis of $M$. \begin{lemma}\label{intensify} Let $B$ be a free basis of a matroid $M$. Then $\mathcal{B}(M)-\{B\}$ is the set of bases of a matroid. \end{lemma} \begin{proof} Let $B_1,B_2\in\mathcal{B}(M)-\{B\}$ and $e\in B_1-B_2$. Then there is $f\in B_2-B_1$ satisfying $B_1-\{e\}+\{f\}\in\mathcal{B}(M)$. Assume that $B=B_1-\{e\}+\{f\}$. Then $\hbox{\rm cl}_M(B_1-\{e\})=B_1-\{e\}$. Since $B_2-\{f\}\neq B_1-\{e\}$ as $B_2\neq B$, there is an element $f'\in B_2-(B_1\cup \{f\})$. Moreover, since $\hbox{\rm cl}_M(B_1-\{e\})=B_1-\{e\}$, we have $B_1-\{e\}+\{f'\}\in\mathcal{B}(M)-\{B\}$. \end{proof} We say that the matroid $(E(M), \mathcal{B}(M)-\{B\})$ given in Lemma \ref{intensify} is obtained from $M$ by {\sl tightening the free basis} $B$. Evidently, tightening is the reverse operation of relaxation. The following results are obvious and will be used in the next section without reference. \begin{lemma} Let $M'$ be a matroid obtained from a matroid $M$ by tightening a free basis $B$. Then $r_{M'}(X)=r_M(X)$ for each $B\neq X\subseteq E(M)$. \end{lemma} \begin{lemma} Let $M'$ be a matroid obtained from a matroid $M$ by relaxing a circuit-hyperplane $C$. Then $r_{M'}(X)=r_M(X)$ for each $C\neq X\subseteq E(M)$.
\end{lemma} \section{Proof of the main theorems} In this section we give proofs of the following theorems, which, as observed earlier, are slight strengthenings of Conjectures~\ref{Jim-1} and \ref{Jim-2}. \begin{theorem} \label{1} For any polynomial $p(\cdot)$ there is a frame matroid $M$ such that for any set $\mathcal{A}$ of subsets of $E(M)$ with $|\mathcal{A}|\leq p(|E(M)|)$ there is a quasi-graphic non-frame matroid $M'$ such that $E(M')=E(M)$ and $r_{M'}(A)=r_M(A)$ for each $A\in\mathcal{A}$. \end{theorem} \begin{theorem} \label{2} For any polynomial $p(\cdot)$ there is a lifted-graphic matroid $M$ such that for any set $\mathcal{A}$ of subsets of $E(M)$ with $|\mathcal{A}|\leq p(|E(M)|)$ there is a quasi-graphic non-lifted-graphic matroid $M'$ such that $E(M')=E(M)$ and $r_{M'}(A)=r_M(A)$ for each $A\in\mathcal{A}$. \end{theorem} Let $G$ be a graph. Recall that a cycle $C$ is {\em chordless} if there is no edge in $E(G)-C$ that joins vertices of $C$. Two cycles are {\em disjoint} if they do not share a common vertex. We say that the pair $C$, $C'$ of cycles is a {\em covering pair} if every vertex of $G$ is in either $C$ or $C'$. Our interest will focus on covering pairs of disjoint chordless cycles. For the remainder of this paper we focus on biased graphs all of whose cycles are unbalanced, that is, biased graphs of the form $(G,\emptyset)$. We omit the elementary proofs of the next two lemmas. \begin{lemma} \label{easy1} Let $G$ be a graph and let $\{C,C'\}$ be a covering pair of disjoint chordless cycles of $G$. Then the following hold. \begin{itemize} \item[(i)] $C\cup C'$ is a circuit-hyperplane of the lift matroid ${\rm LM}(G,\emptyset)$. \item[(ii)] $C\cup C'$ is a free basis of the frame matroid ${\rm FM}(G,\emptyset)$. \end{itemize} \end{lemma} \begin{lemma} \label{easy2} Let $G$ be a graph and let $\{C,C'\}$ be a covering pair of disjoint chordless cycles of $G$. Then the following hold. \begin{itemize} \item[(i)] $G$ is a framework for the matroid obtained by relaxing the circuit-hyperplane $C\cup C'$ of ${\rm LM}(G,\emptyset)$. \item[(ii)] $G$ is a framework for the matroid obtained by tightening the free basis $C\cup C'$ of ${\rm FM}(G,\emptyset)$. \end{itemize} \end{lemma} Next we define the family of graphs that we will use to prove Theorems~\ref{1} and \ref{2}. Let $n\geq4$ be an even number. Let $G_n$ be the graph with $V(G_n)=\{u_1,u_2,\ldots,u_n, v_1, v_2,\ldots, v_n\}$ and $E(G_n)=E(C_1\cup C_2\cup C_3\cup C_4)$, where \[\begin{aligned} C_1&=u_1u_2\ldots u_nu_1,\\ C_2&=v_1v_2\ldots v_nv_1,\\ C_3&=u_1v_1u_3v_3u_5\ldots v_{n-3}u_{n-1}v_{n-1}u_1,\\ C_4&=u_2v_2u_4v_4u_6\ldots v_{n-2}u_{n}v_{n}u_2 \end{aligned}\] are cycles of $G_n$. See Figure 1. For each integer $1\leq i\leq n$, set $e_i=u_iv_i$ and $f_{i,i+2}=v_iu_{i+2}$, where the subscripts are modulo $n$. \begin{figure}[htbp] \begin{center} \includegraphics[page=1,height=4.5cm]{4-cycle.pdf} \caption{The graph $G_8$.} \end{center} \end{figure} We say that $(e_{i+1}, f_{i, i+2})$ is a {\sl crossing pair}. Note that the two edges contained in a crossing pair are disjoint and each edge in $C_3\cup C_4$ belongs to exactly one crossing pair. Let $X$ be the set whose elements are the crossing pairs of $G_n$. Let $X'$ be the set of all subsets of $X$ with even size. Since $|X|=n$, we have $|X'|=2^{n-1}$. Let $$S=\{(e_{i_1+1}, f_{i_1, i_1+2}), (e_{i_2+1}, f_{i_2, i_2+2}),\ldots, (e_{i_{2k}+1}, f_{i_{2k}, i_{2k}+2})\}$$ be an element in $X'$ with $1\leq i_1<i_2<\ldots<i_{2k}\leq n$.
There is a unique pair of disjoint cycles $C_S^1$ and $C_S^2$ such that the set of edges in crossing pairs in $S$ is equal to $E(C_3\cup C_4)\cap E(C_S^1\cup C_S^2)$ and such that \[\begin{aligned} \{e_{i_1+1}, f_{i_2,i_2+2}, e_{i_3+1}, \ldots, f_{i_{2k}, i_{2k}+2}\}\subset C_S^1, \\ \{f_{i_1,i_1+2}, e_{i_2+1}, f_{i_3,i_3+2}, \ldots, e_{i_{2k}+1}\}\subset C_S^2. \end{aligned}\] See Figure 2. \begin{figure}[htbp] \begin{center} \includegraphics[page=2,height=4.5cm]{4-cycle.pdf} \caption{The disjoint cycles $C_S^1$ and $C_S^2$.} \end{center} \end{figure} For example, when $S$ contains all crossing pairs, $C_S^1=C_3, C_S^2=C_4$. When $S=\emptyset$, we have $C_S^1=C_1, C_S^2=C_2$. It follows from routine inspection that, for each $S\in X'$, the pair $\{C_S^1,C_S^2\}$ is a covering pair of disjoint chordless cycles of $G_n$, and that $C_S^1$ and $C_S^2$ both have length $n$. Let $\mathcal Z$ be the set of edge sets of all covering pairs of disjoint chordless cycles obtained from $X'$ in the above way. Then $|\mathcal Z|=2^{n-1}$. By Lemma~\ref{easy1} each member of $\mathcal Z$ is a circuit-hyperplane of ${\rm LM}(G_n,\emptyset)$ and a free basis of ${\rm FM}(G_n,\emptyset)$. To distinguish ${\rm LM}(G_n,\emptyset)$ from the matroid obtained by relaxing a member $Z$ of $\mathcal Z$ we need to check the rank of $Z$, and likewise to distinguish ${\rm FM}(G_n,\emptyset)$ from the matroid obtained by tightening $Z$ we need to check the rank of $Z$. To distinguish ${\rm LM}(G_n,\emptyset)$ or ${\rm FM}(G_n,\emptyset)$ from all such matroids we need to check the rank of $2^{n-1}$ subsets. Evidently $2^{n-1}$ outgrows any polynomial in $|E(G_n)|=4n$. Say $Z\in \mathcal Z$. Let $M_L={\rm LM}(G_n,\emptyset)$ and let $M_F={\rm FM}(G_n,\emptyset)$. Let $M_L^Z$ and $M_F^Z$ be the matroids obtained from $M_L$ and $M_F$ by respectively relaxing and tightening $Z$. To complete the proof of Theorems~\ref{1} and \ref{2} it suffices to show that $M_L^Z$ is not a lifted-graphic matroid and $M_F^Z$ is not a frame matroid. We now turn attention to this task. Each cycle of $G_n$ is independent in $M_L^Z$ and $M_F^Z$. The next lemma follows from this observation and inspection of the graph $G_n$. \begin{lemma}\label{add g and h} \label{structure} Let $g$ and $h$ be distinct edges in $E(G_n)-Z$. \begin{itemize} \item[(i)] If $g$ and $h$ are adjacent in $G_n$, there is a partition $(P_1,P_2,P_3)$ of $Z$ with $|P_1|=2$ and $|P_1\cup P_2|=|P_3|=n$ such that $(Z-P_i)\cup\{g,h\}$ is a circuit of $M_L^Z$ and $M_F^Z$ for each $1\leq i\leq 3$. \item[(ii)] If $g$ and $h$ are not adjacent in $G_n$, then there is a partition $(P_1,P_2,P_3,P_4)$ of $Z$ with $|P_1\cup P_2|=|P_3\cup P_4|=n$ such that $(Z-P_i)\cup\{g,h\}$ is a circuit of $M_L^Z$ and $M_F^Z$ for each $1\leq i\leq 4$; \item[(iii)] apart from the circuits in (i) and (ii), there is no other circuit $C$ of $M_L^Z$ or $M_F^Z$ satisfying $\{g,h\}\subseteq C\subseteq Z\cup\{g,h\}$. \end{itemize} \end{lemma} Since $G_n[E(G_n)-Z]$ is a 2-regular graph, by Lemma~\ref{structure}, we have \begin{lemma}\label{gh-2} There are exactly $2n$ pairs of edges $g, h$ in $E(G_n)-Z$ such that there is a partition $(P_1,P_2,P_3)$ of $Z$ with $|P_1|=2$ and $|P_1\cup P_2|=|P_3|=n$ such that $(Z-P_i)\cup\{g,h\}$ is a circuit of $M_L^Z$ and $M_F^Z$ for each $1\leq i\leq 3$. \end{lemma} \begin{lemma}\label{G'} Let $G'$ be a framework for $M_F^Z$ or $M_L^Z$. Then $G'$ is a $4$-regular graph with $2n$ vertices and without loops. \end{lemma} \begin{proof} Evidently, $|V(G')|=2n$.
Since each cocircuit in $M_F^Z$ or $M_L^Z$ has at least four elements and $|E(G')|=4n$, the graph $G'$ is a $4$-regular graph without loops. \end{proof} Let $C$ be an even cycle of a graph $H$, and $e=uv$ be an edge in $E(H)-E(C)$ with $u,v\in V(C)$. If the two paths in $C$ joining $u,v$ have the same length, then $e$ is a {\sl bisector} of $C$. \begin{lemma}\label{G'|F-F} Let $(G',\mathcal{B}')$ be a biased graph satisfying $M_F^Z={\rm FM}(G',\mathcal{B}')$. Then $G'[Z]$ is not a cycle. \end{lemma} \begin{proof} Assume to the contrary that $Z$ is a cycle of $G'$. Since $|Z|=2n$, we have $V(G')=V(G'[Z])$. Since $Z\in\mathcal{C}(M_F^Z)$, we have $Z\in\mathcal{B}'$. Let $g,h\in E(G_n)-Z$. Since $Z$ is a circuit-hyperplane of $M_F^Z$ and $G'[Z]$ is a cycle in $\mathcal{B}'$, no cycle other than $Z$ in $G'[Z\cup\{g\}]$ or $G'[Z\cup\{h\}]$ is in $\mathcal{B}'$. When $g,h$ are adjacent in $G'$ (whether or not $g$ and $h$ are adjacent in $G_n$), by Lemma \ref{add g and h} the unique cycle containing $\{g,h\}$ in $G'[Z\cup\{g,h\}]$ is not in $\mathcal{B}'$; hence, there is a partition $(P_1,P_2,P_3)$ of $Z$ such that $(Z-P_i)\cup\{g,h\}$ is a circuit of $M_F^Z$ for each $1\leq i\leq 3$. Moreover, since $G'[E(G_n)-Z]$ is a 2-regular graph with exactly $2n$ vertices, by Lemma \ref{gh-2}, we have that (a) when $g,h$ are not adjacent in $G'$ there is a partition $(P_1,P_2,P_3,P_4)$ of $Z$ with $|P_1\cup P_2|=|P_3\cup P_4|=n$ and such that $(Z-P_i)\cup\{g,h\}$ is a circuit of $M_F^Z$ for each $1\leq i\leq 4$; and (b) for each $v\in V(G')$, assuming that $\{e,e'\}$ is the set of edges adjacent with $v$ but not in $Z$, the unique cycle $C$ containing $\{e,e'\}$ in $G'[Z\cup\{e,e'\}]$ has exactly four edges and either $e$ or $e'$ is a bisector of the cycle $Z$. Assume that $e'$ is a bisector of $Z$ and $u$ is the unique vertex in $C$ not adjacent with $e$ or $e'$. Let $f$ be the bisector of $Z$ adjacent with $u$. By the arbitrary choice of $v$ and (b), such an $f$ exists. Since $\hbox{\rm si}(G'[Z\cup\{e,f\}])$ is a 4-edge cycle and $f$ is a bisector of $Z$, (a) cannot hold, a contradiction. \end{proof} Similarly (in fact, in a simpler way), we can prove \begin{lemma}\label{G'|F-L} Let $(G',\mathcal{B}')$ be a biased graph satisfying $M_L^Z={\rm LM}(G',\mathcal{B}')$. Then $G'[Z]$ is not a cycle. \end{lemma} \begin{lemma}\label{non-frame} $M_F^Z$ is a non-frame matroid. \end{lemma} \begin{proof} Assume to the contrary that there is a biased graph $(G',\mathcal{B}')$ such that $M_F^Z={\rm FM}(G',\mathcal{B}')$. Since $Z\in\mathcal{C}(M_F^Z)$, by Lemma \ref{G'|F-F} the graph $G'[Z]$ is a theta-graph or a handcuff. Hence, $|V(G'[Z])|=2n-1$ as $|Z|=2n$. Moreover, since $Z$ is a circuit-hyperplane of $M_F^Z$, we have $g\notin\hbox{\rm cl}_{M_F^Z}(Z)$ for each $g\in E(G_n)-Z$; so all edges in $E(G_n)-Z$ are adjacent with the unique vertex in $V(G')-V(G'[Z])$. Hence, $G'$ is not 4-regular as $|E(G_n)-Z|=2n\geq8$, a contradiction to Lemma \ref{G'}. \end{proof} \begin{lemma}\label{non-lifted} $M_L^Z$ is a non-lifted-graphic matroid. \end{lemma} \begin{proof} Assume to the contrary that there is a biased graph $(G',\mathcal{B}')$ satisfying $M_L^Z={\rm LM}(G',\mathcal{B}')$. Since $Z$ is a basis of $M_L^Z$ and $G'[Z]$ is not a cycle by Lemma \ref{G'|F-L}, the graph $G'[Z]$ has degree-1 vertices and all edges in $E(G')-Z$ must be adjacent with all degree-1 vertices in $G'[Z]$, so $G'$ is not 4-regular, a contradiction to Lemma \ref{G'}.
\end{proof} Theorems~\ref{1} and \ref{2} now follow from the fact that $G_n$ has exponentially many covering pairs of disjoint chordless cycles, the fact that the matroids obtained by relaxing the circuit-hyperplane of ${\rm LM}(G_n,\emptyset)$ or tightening the free basis of ${\rm FM}(G_n,\emptyset)$ associated with any one of these covering pairs are quasi-graphic, and Lemmas~\ref{non-lifted} and \ref{non-frame}, which show that these matroids are respectively not lifted-graphic and not frame matroids.
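To make the construction above concrete, the following short sketch (ours, purely illustrative, using the NetworkX library; all function names are our own) builds $G_n$ and brute-forces the covering pairs of disjoint chordless cycles for $n=4$; by the construction one expects at least $2^{n-1}=8$ such pairs.
\begin{verbatim}
import networkx as nx
from itertools import combinations

def build_Gn(n):
    # G_n with 0-based indices; index i plays the role of i+1 in the text
    G = nx.Graph()
    u = [f"u{i}" for i in range(n)]
    v = [f"v{i}" for i in range(n)]
    G.add_edges_from((u[i], u[(i + 1) % n]) for i in range(n))   # C_1
    G.add_edges_from((v[i], v[(i + 1) % n]) for i in range(n))   # C_2
    for i in range(n):                                           # C_3 and C_4
        G.add_edge(u[i], v[i])                                   # e_{i+1}
        G.add_edge(v[i], u[(i + 2) % n])                         # f_{i+1,i+3}
    return G

def is_chordless_cycle(G, nodes):
    # the induced subgraph is a cycle iff it is connected and 2-regular;
    # a chord would raise some degree above 2
    H = G.subgraph(nodes)
    return nx.is_connected(H) and all(d == 2 for _, d in H.degree())

def covering_pairs(G):
    nodes = sorted(G.nodes)
    pairs = set()
    for k in range(3, len(nodes) - 2):
        for A in combinations(nodes, k):
            B = [x for x in nodes if x not in set(A)]
            if is_chordless_cycle(G, A) and is_chordless_cycle(G, B):
                pairs.add(frozenset([frozenset(A), frozenset(B)]))
    return pairs

G = build_Gn(4)                  # 2n = 8 vertices, 4n = 16 edges, 4-regular
print(len(covering_pairs(G)))    # at least 2^{n-1} = 8
\end{verbatim}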
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Collective decision-making in large-scale decentralized multi-robot systems is required to coordinate and organize the system~\cite{raoufi2021speed, ebert2020bayes, brambilla2013swarm, valentini2016collective}. For example, a robot swarm needs to collectively agree on a common direction in flocking or on a task allocation~\cite{raoufi2019self}. While task allocation is an example of a discrete consensus problem similar to best-of-$n$ problems (collectively choosing from a finite and countable set), the flocking example is a continuous consensus achievement problem~\cite{valentini2017best}. Large portions of the collective decision-making literature in swarm robotics are focused on discrete problems, such as the popular collective perception benchmark scenario~\cite{valentini2016collective}. Here we focus on a continuous consensus achievement problem~\cite{olfati2007consensus, ding2022consensus} in the form of a decentralized estimation scenario~\cite{leonard2022collective}. In our previous work we studied the effect of diverse information on the accuracy of collective estimation, which gives rise to the exploration-exploitation trade-off~\cite{raoufi2021speed}. To achieve diverse-enough information, the swarm needs to expand and sample from a larger area, which leads to a dispersal collective behavior. Among the proposed distributed methods in the literature on dispersion, some use information that is either costly or not available for all swarm platforms~\cite{ugur2007dispersion}. However, an approximate estimation of distance proved to be sufficient to achieve such a goal. The performance of the greedy gradient descent algorithm for dispersion was predicted to be challenging, especially with a large number of robots ($N>10$)~\cite{bayert2019robotic}. Thus, to overcome this, we propose a threshold-based random walk algorithm that proves to be efficient {\color{newChanges}enough} for larger swarms ($N=40$). \par In addition, we require a form of exploitation of the collective decision as the robots need to react to their collective decisions and aggregate at areas that are determined by their consensus. This comes with a design challenge. Should the robots separate a consensus finding phase from an exploitation phase? Either they synchronize and determine an end of the collective decision-making process or they asynchronously switch to exploitation and try to keep finding a consensus on the go. Here we propose a solution choosing the asynchronous option. Consequently, we face another challenge. As the robots initiate their exploitation process, they try to move towards the designated area while continuing to communicate with neighbors. They form a dynamic network topology while following the collective decision-making protocol. We know that the network topology influences the decision-making process~{\color{newChanges}\cite{srivastava2014collective, lobel2016preferences, becker2017network, mateo2019optimal, kwa2023effect}} and hence the emerging process is self-referential (the network influences the consensus, and the consensus influences the spatial displacement). In that regard, there is a huge body of literature studying this effect from a network point of view. An example of such a phenomenon is homophily in social networks~\cite{khanam2022homophily, holme2006nonequilibrium}. However, studying the co-evolution of network and opinion dynamics in swarm robotics has been overlooked.
In this paper, we show how a swarm of real robots disperses in an unbounded environment and then aggregates at the points it agreed on. \par \newcommand\figFourWidth{1.9} \begin{figure*} \centering \subcaptionbox{Initial Distribution}{\includegraphics[height=\figFourWidth in]{Figures/snapshots/snapShot_at_39.png}}% \hfill \subcaptionbox{Dispersed}{\includegraphics[height=\figFourWidth in]{Figures/snapshots/snapShot_at_7400.png}}% \hfill \subcaptionbox{Final Consensus}{\includegraphics[height=\figFourWidth in]{Figures/snapshots/snapShot_at_23800.png}}% \hfill \subcaptionbox{Kilobot with light conductor}{\includegraphics[height= \figFourWidth in]{Figures/kilobot_conductor_new_cropped.jpeg}}% \caption{a-c) Snapshots from the top-view camera with detected real Kilobots (green circles) in a radial (cone-shaped) light distribution; the red lines show the possible link between two robots within the communication range. d)~The light conductor (transparent plastic) added to the Kilobots to solve the issue of shadows that are cast from robot bodies on their light sensor.} \label{fig:snapShots_pl_ATP_err_XprXpt} \end{figure*} \section{Method} Following our previous work~\cite{raoufi2021speed}, we study the co-evolution of network structure and collective estimation for a swarm of $N$ real robots. The value to estimate is a continuous, spatially distributed scalar feature of the environment. In our experiments, this will be realized by a spatially varying light intensity field. The swarm's goal is to estimate a global property of the distributed feature and approach it in the physical space. Our focus is on estimation and localization of the environmental field's mean value {\color{newChanges}(see Fig.~\ref{fig:snapShots_pl_ATP_err_XprXpt})}. \par We define two phases: exploration and exploitation. Having separate phases for exploration and exploitation has been shown to be more efficient than mixed phases~\cite{reina2015design}. During initial exploration (see Sec.~\ref{subsec:exploration}), we program the swarm to expand. The aim is for the individual robots to collect diverse estimates of the environmental feature. The robots are supposed to cover as much area as possible while keeping the network largely connected. {\color{newChanges} The communication range and the swarm size determine how much the swarm can expand without being disconnected. We define the end of exploration as the moment when the collective achieves a maximal area coverage while still maintaining connectivity.} During the subsequent exploitation phase (see Sec.~\ref{subsect:exploitation}), robots communicate to achieve a consensus on the mean value, and at the same time, try to move toward the spots in the environment where the measured intensity is closer to the consensus. We showed previously that by combining these components a contour-capturing behavior emerges~\cite{raoufi2021speed}. A~possible application is to contain pollution or localize the position of a resource in the environment~\cite{zahugi2012design, kaviri2019coverage, amjadi2019cooperative, haghighat2022approach}. \par \newcommand\figTwoWidth{1.6} \begin{figure}[!b]% \centering \includegraphics[width=0.7\linewidth]{Figures/ATP_errors/Exploration_Exploitation_ATP_err_formula_compact_task_2_new.png} \caption{The relation between accuracy (trueness and precision) errors in collective contour capturing for the example of a radial distribution.
The red and blue circles show the ground truth and collective mean contours respectively, and the crosses are the positions of robots in physical space. The initial trueness error (top left circle) is reduced during the exploration phase, whereas the precision error increases (bottom right). During exploitation, the precision error decreases~\cite{raoufi2021speed}, and robots capture the contour (bottom left).} \label{fig:ATP_err__XprXpt} \end{figure} We minimized the requirements with respect to the robotic platform to enable the implementation of the algorithm even on minimal robots, here specifically the Kilobot platform~\cite{rubenstein2012kilobot}. Although some algorithmic details are specific to our implementation on Kilobots, our model is generally applicable regardless of the swarm robotic platform. The requirements are: \textbf{a)} fully distributed algorithm; no central control, \textbf{b)} only local environmental information available, \textbf{c)} communication only to local neighbors, within a limited communication range, \textbf{d)} no prior information about either the environment or the neighbors, and \textbf{e)} unbounded arena. \par \renewcommand\figTwoWidth{1.3} \begin{figure*}[!t] \centering \subcaptionbox{}{\includegraphics[height=\figTwoWidth in]{Figures/dispersion/coverage_cm2_colBlind.png}}% \hfill \subcaptionbox{}{\includegraphics[height=\figTwoWidth in]{Figures/dispersion/meanDeg_colBlind.png}}% \hfill \subcaptionbox{}{\includegraphics[height=\figTwoWidth in]{Figures/dispersion/giantComp_withRandomWalk_colBlind.png}}% \hfill \subcaptionbox{}{\includegraphics[height=\figTwoWidth in]{Figures/dispersion/mDeg_vs_CovArea_colBlind.png}}% % \caption{Real-world experiments of the dispersion algorithm while preserving the network connectivity for 5 repetitions and swarm sizes of \{10,20,30,40\}: a)~covered area by the swarm, b)~mean degree of the communication network, c)~number of nodes in the giant component of the network versus time. The orange line shows the result for a diffusion algorithm without preserving the connectivity of the network for $N=40$ robots. d)~Trade-off between network connectivity and coverage area. The transparent lines show individual experiments, and the solid lines are the mean values for the corresponding swarm sizes. The mean value is truncated as soon as the shortest experiment finishes.} \label{fig:Res_Dispersion_ArCov_mDeg_gntComp} \end{figure*} \subsection{Exploration} \label{subsec:exploration} {\color{newChanges}With exploration, the variation and diversity of information available to the swarm increases.} During the exploration phase, no information is aggregated. As we demonstrated before~\cite{raoufi2021speed}, the exploration phase reduces the trueness error (systematic bias). In principle, any dispersion behavior may achieve the goal {\color{newChanges}in an unbounded environment}. However, due to the limited connectivity of the distributed robots, a \emph{pure} random dispersion may disconnect robots from their neighbors and fragment the network. Blind random motion in an unbounded environment is dangerous as robots might get lost and never find their way back to the swarm~{\color{newChanges}\cite{hornischer2021cimax}}. \par As an alternative, we suggest a random walk while preserving the connectivity of the network. A~robot needs to know the approximate distance to its neighbors. We will show that even with noisy distance estimations the method is able to {\color{newChanges}keep the swarm largely connected}.
With Kilobots, the estimation of the distance is calculated by considering the strength of the received infra-red (IR) signal~\cite{rubenstein2012kilobot}. Hence, the algorithm we implemented on the robots makes the random walk conditional on the distance to the nearest neighbor. {\color{newChanges}Once the minimum distance to local neighbors goes below the threshold, the robot stops and waits for its local neighbors to finish their random walk, then it switches to exploitation. Violations of the desired distance take the robot back to the dispersion phase.} By the end of this phase, the collective has the potential to make a less biased (or bias-free) estimation. Then, the swarm exploits the information distributed throughout the collective to increase the precision. See Fig.~\ref{fig:ATP_err__XprXpt} for an illustration of how exploration and exploitation can modulate the trueness and precision components of the total accuracy error. \subsection{Exploitation} \label{subsect:exploitation} Exploitation operates not only in the information domain, but also in the real physical space. By exploiting the information contained in the swarm, the collective estimation converges to the mean value in the information domain. The exploitation in the physical space results in individual robots converging towards the mean contours of the environmental field. Here, we introduce two mechanisms, one for each of the domains: local averaging and consensus-based phototaxis. \subsubsection{Local averaging} The first part of exploitation is used to {\color{newChanges}reach} consensus in the information domain, which is achieved by local communication of robots. The results of interactions in this phase facilitate the wisdom of crowds effect~\cite{simons2004many, surowiecki2005wisdom}, by enabling the agents to average their imperfect estimates of environmental cues~\cite{hills2015exploration, becker2017network}. The updating rule comes from the local averaging of the DeGroot model~\cite{degroot1974reaching}, and we modified it by adding a memory term~\cite{raoufi2021speed}. The ultimate updating rule is formulated as: \begin{align} & \hat{z}_{i}^{t+1} = \alpha \hat{z}_{i}^{t} + \frac{1-\alpha}{1 + N_i} s_{i}^{t} + \frac{1-\alpha}{1 + N_i} \sum\limits_{j \in \boldsymbol{N_i}} {\hat{z}_{j}^{t}}\ . \label{Eq:consensus} \end{align} Here, each robot updates its estimation ($\hat{z}_{i}^{t+1}$) based on what it measures ($s_{i}^{t}$) and the average of its {\color{newChanges}$N_i$} neighbors' estimates, with a weighting factor $\alpha$. Robots repeat these updates for a fixed number of iterations $t_\text{comm.}=100$. The output of this phase is the consensus value (although all robots might not have exactly the same opinion about the consensus). Robots use this value as input for the next {\color{newChanges}phase}. \par The updating equation (Eq.~\ref{Eq:consensus}) can be reformulated from a network point of view~\cite{olfati2007consensus, golub2010naive}. This converts the model to a linear system whose transition matrix is the normalized weighted adjacency matrix of the network, whose states are the agents' estimates, and whose inputs are the measurements. Assuming the general system without input, the result of such local averaging, given that the network of communication is fully connected, is the mean value of the information available within the collective~\cite{golub2010naive}. Later, we briefly discuss how the connectivity of the network (mean node degree, in particular) changes the dynamics of this system.
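A minimal NumPy sketch of the update rule in Eq.~\ref{Eq:consensus} is given below (ours, purely illustrative; the line graph, the value $\alpha=0.5$, and the Gaussian measurements are placeholder choices, not the parameters used on the Kilobots).
\begin{verbatim}
import numpy as np

def consensus_step(z, s, A, alpha):
    # Eq. (1): memory term plus the weighted average of the robot's own
    # measurement and its neighbors' current estimates
    deg = A.sum(axis=1)                      # N_i, number of neighbors
    return alpha * z + (1 - alpha) * (s + A @ z) / (1 + deg)

A = np.zeros((5, 5))                         # 5 robots on a line graph
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1
rng = np.random.default_rng(0)
s = rng.normal(0.0, 1.0, 5)                  # fixed local light measurements
z = s.copy()                                 # initial estimates
for _ in range(100):                         # t_comm = 100 iterations
    z = consensus_step(z, s, A, alpha=0.5)
print(z.round(3), s.mean().round(3))         # estimates cluster near the mean
\end{verbatim}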
\subsubsection{Consensus-based Phototaxis (CBPT)} We implement a sample-based pseudo gradient descent for the motion of the robots that realizes homophily on networks. Homophily is the tendency to interact more with like-minded agents in a social group~\cite{khanam2022homophily}. We require a collective motion that moves robots sharing similar opinions closer to each other and thus establishes links~\cite{raoufi2021speed}. As a pseudo gradient descent method, we choose the bio-inspired phototaxis method. By CBPT the robots are guided to areas where light measurements match the consensus value. \section{Metrics and Setup} \subsection{Covered Area} We measure the area that is covered by the swarm. We consider a disk centered at each robot's position with radius $r_\text{cover}$. For Kilobots, we choose $r_\text{cover} = 3r_\text{rob} = 5\ \text{cm}$ which is roughly half its communication range {\color{newChanges}($r_\text{rob}$~is the robot radius)}. We calculate the collective coverage as the union (counting overlapping regions only once) of the areas~$A_{\text{cover},i}^{(x_i,y_i)}$ with $\|A_{\text{cover},i}^{(x_i,y_i)}\|=\pi r_\text{cover}^2$ covered by each robot~$i$ {\color{newChanges}located} at~$(x_i,y_i)$: \begin{equation} A_\text{cover} = \bigcup\limits_{i=1}^{\text{N}}A_{\text{cover},i}^{(x_i,y_i)} \;. \end{equation} \subsection{Network Properties} The inter-agent communication network plays a critical role for the whole scenario. It is challenging to determine the existence of actual robot-robot communication links forming the network, as these links are noisy and difficult to extract from the robot swarm during an experiment. For simplicity we assume that if the distance between two robots is less than the average communication range, then there is a link. The communication range is assumed to be $r_\text{comm}=10\ \text{cm}$~\cite{pinciroli2018simulating}. The links are estimated based on robot positions and distances obtained from tracking via a top-view camera. False positives and negatives for links between robots are possible as this is only an estimation. \par We record the connectivity of the network using two metrics: mean node degree and giant component size. Although the communication network of Kilobots is not necessarily undirected (signal strength is not always symmetric), we assume an undirected network for simplicity. The in- and out-degrees of all nodes are then equal, as are the \emph{mean} in- and out-degrees. As a second network metric we use the giant component size, that is, the number of nodes in the largest connected component of the network. This way we quantify how many robots have disconnected from the main cluster (implemented with the NetworkX Library~\cite{hagberg2008exploring}). \par \subsection{Accuracy Metrics} Collective estimation (accuracy) error is decomposed into trueness and precision errors, which corresponds to the bias-variance decomposition of the total error. {\color{newChanges}We showed that the generality and case-independence of these metrics enable their usage in various conditions (see~\cite{raoufi2021speed} for details).} We assume as ground truth for estimation the mean value of the light intensity across the environment ${z}_\text{gt}=\bar{z}_\text{env}$.
By defining the individual estimation for robot~$i$ as $\hat{z}_i$ and {\color{newChanges} the collective estimation as $\hat{z}_\text{col}=\sum_{i=1}^{N}\hat{z}_i / {N}$}, we obtain for the trueness, precision, and accuracy errors: \begin{align} E_\text{T} =& (\hat{z}_\text{col} - {z}_\text{gt})^2 \ , E_\text{P} = \frac{1}{\text{N}} \sum\limits_{i=1}^\text{N}(\hat{z}_i - \hat{z}_\text{col})^2 \ , \\ E_\text{A} =& \frac{1}{\text{N}} \sum\limits_{i=1}^\text{N}(\hat{z}_i - {z}_\text{gt})^2 = E_\text{T} + E_\text{P} \ . \end{align} As we have no direct access to a robot's current estimation, we use its position as an indicator of its estimation. For each environmental distribution, there is a mapping between the camera-detected Cartesian robot positions and the coordinate of interest. For example, in the radial distribution of Fig.~\ref{fig:snapShots_pl_ATP_err_XprXpt}, the mapping $m(x_i,y_i)$ is: \begin{equation} \hat{z}_i = r_i = m(x_i,y_i) = \sqrt{(x_i-x_c)^2 + (y_i-y_c)^2}, \end{equation} where $(x_c,y_c)$ is the distribution's center, and $(x_i,y_i)$ is the detected robot's position in the captured frame. \\ \subsection{Experimental Setup} In our experiments we use Kilobot robot swarms~\cite{rubenstein2012kilobot} of up to 40 robots, on a {\color{newChanges}$90\times90\ \text{cm}^2$ arena of a $1.5\times2.5\ \text{m}^2$ white-board}. For tracking we use a downward-facing camera and the Hough circle transformation from the OpenCV Library~\cite{opencv_library}. {\color{newChanges} Unless otherwise mentioned, we used the same parameters as~\cite{raoufi2021speed}.} \newcommand\figTwoHeight{0.8} \begin{figure}[b] \centering \subcaptionbox{}{\includegraphics[height=\figTwoHeight in]{Figures/consensus/E_p__vs__Time_50_blue_Large.png}}% \hfill \subcaptionbox{}{\includegraphics[height=\figTwoHeight in]{Figures/consensus/log_E_p__vs__MDeg__25_50_100_Large.png}}% \hfill \subcaptionbox{}{\includegraphics[height=\figTwoHeight in]{Figures/consensus/t2ss_vs_mDeg__and_2ndEigVal__25_50_100_Large.png}}% \caption{The simulation of the consensus model on static networks with different connectivity. a) Time evolution of the precision error for different networks (darker lines indicate lower connectivity). b) Steady-state precision error (last time step) versus mean node degree, c)~time to achieve a steady state (solid lines) and second largest eigenvalue of the adjacency matrix (dotted lines) versus mean node degree. The results are the average of 1000 {\color{newChanges}independent Monte Carlo simulations.}} \label{fig:consensus} \end{figure} \section{Results and Discussion} \newcommand{\figThreeHeight}{1.15} \begin{figure*}[ht] \centering \subcaptionbox{}{\includegraphics[height=\figThreeHeight in]{Figures/contourCapturing/violins_radial_px_and_cm_16boxs.png}}% \hfill \subcaptionbox{}{\includegraphics[height=\figThreeHeight in]{Figures/contourCapturing/Error_vs_Time_wide_colBlind_rescaled.png}}% \hfill \subcaptionbox{}{\includegraphics[height=\figThreeHeight in]{Figures/contourCapturing/AreaCov_n_mDeg_vs_Time_wide_colBlind.png}} \caption{Real-world experiments of the full scenario in an environment with radial distribution for $N=40$ Kilobots. a) The distribution of robots of a single experiment in the {\color{newChanges}polar coordinate system}, b) the accuracy errors, c) the coverage area and mean node degree of the network over time.
The transparent lines show the result of 8 independent real robot experiments, and the solid lines are the average over different experiments.} \label{fig:Res_contCapturing} \end{figure*} We study each component of our scenario (dispersion, consensus, CBPT) as a stand-alone swarm task. Later, we combine these components to form a complex scenario. \subsection{Dispersion} \label{sec:dispersion} The aim of dispersion is to increase the covered area. We measure how much area is covered by the robots (Fig.~\ref{fig:Res_Dispersion_ArCov_mDeg_gntComp}-a). To indicate the dynamic network structure, we measure the mean degree of the network (Fig.~\ref{fig:Res_Dispersion_ArCov_mDeg_gntComp}-b). The results in Fig.~\ref{fig:Res_Dispersion_ArCov_mDeg_gntComp} indicate that initially the collective starts from a dense distribution with low coverage area and high connectivity in the network. Due to dispersion, the collective expands and covers a larger area while the mean degree decreases. This increase in the covered area can lead to a lower trueness error in the collective estimation. The network gets sparser (reduced node degrees) {\color{newChanges} while the giant component size of the network does not change significantly, suggesting that the network connectivity is largely preserved.} Later we show how reduced connectivity results in a lower speed of convergence during the decision-making process. Both the covered area and the mean degree converge to steady-state values. Once the robots stop moving, we finish the experiment. \par In Fig.~\ref{fig:Res_Dispersion_ArCov_mDeg_gntComp}-c, we show the size of the giant component. The algorithm keeps the majority of the swarm connected while a few robots disconnect from the swarm. In our analysis we found that often two (or more) robots stick to each other and, while measuring strong signals from each other, continue moving. They detach from the swarm, although they are members of a small cluster. As a control experiment, we tested a random walk diffusion algorithm that does not try to preserve connectivity (solid orange line in Fig.~\ref{fig:Res_Dispersion_ArCov_mDeg_gntComp}-c). Almost half of the swarm disconnects within three minutes. In comparison, our algorithm preserves connectivity well. \subsection{Consensus} Consensus occurs only in the information domain, which makes it difficult to measure in a real robot experiment. However, we simulated the consensus algorithm on a static network in order to show how the precision error changes over time (Fig.~\ref{fig:consensus}-a) and how its dynamics change with changing network properties, namely the mean degree. We studied spatial networks with $N\in\{25,50,100\}$ nodes and different connectivity to investigate the role of the mean degree. To do so, we distributed $N$ agents uniformly in an environment and drew a deterministic network with a specific communication range. Then, we varied the communication range (as a ratio to the environment size) to achieve networks with various mean degrees. As agents share and update their estimation about the mean value of the distribution, they converge to the consensus estimation, and thus the precision error decreases (Fig.~\ref{fig:consensus}-a)--this is the well-known speed-vs-accuracy trade-off happening over the course of decision-making. \par In Fig.~\ref{fig:consensus}-b, we show how the mean node degree of the network influences the accuracy (precision) of the steady-state collective estimation. A higher mean degree leads to a lower precision error.
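This effect can be reproduced in a few lines (a sketch of ours, purely illustrative; the 500 iterations, the seed, and the use of the input-free version of Eq.~\ref{Eq:consensus} are our placeholder choices), using a random geometric graph as the static spatial network:
\begin{verbatim}
import numpy as np
import networkx as nx

def final_precision_error(n, radius, t_max=500, alpha=0.5, seed=0):
    # DeGroot averaging without input on a random geometric graph;
    # returns the mean node degree and the final precision error E_P
    G = nx.random_geometric_graph(n, radius, seed=seed)
    A = nx.to_numpy_array(G)
    deg = A.sum(axis=1)
    z = np.random.default_rng(seed).normal(0.0, 1.0, n)
    for _ in range(t_max):
        z = alpha * z + (1 - alpha) * (z + A @ z) / (1 + deg)
    return deg.mean(), np.mean((z - z.mean()) ** 2)

for r in (0.10, 0.20, 0.40):   # communication range relative to a unit square
    mdeg, E_P = final_precision_error(50, r)
    print(f"range {r:.2f}: mean degree {mdeg:4.1f}, E_P {E_P:.2e}")
\end{verbatim}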
With respect to the speed of consensus, we measured the time to reach a steady state using a threshold ($\delta=10^{-4}$) and recorded the first passage time of the precision error. The peaks in Fig.~\ref{fig:consensus}-c show the slowest convergence time at a specific mean degree. The convergence time drops significantly for lower and higher degrees. A~low or zero mean degree means there are few or no links in the network. Convergence is fast without information flow but not accurate. As known from graph theory, the network becomes fully connected once the mean degree exceeds a critical value. This is where the second largest eigenvalue of the network adjacency matrix {\color{newChanges}becomes less than} one. \begin{figure*}[!ht] \centering \subcaptionbox{}{\includegraphics[height=0.95in]{Figures/contourCapturing/control/Error_vs_Time_wide_sw20.png}}% \hfill \subcaptionbox{}{\includegraphics[height=0.95in]{Figures/contourCapturing/control/Error_vs_Time_wide_sw80.png}}% \hfill \subcaptionbox{}{\includegraphics[height=0.95in]{Figures/contourCapturing/control/Error_vs_Time_wide_sw170.png}}% \hfill \subcaptionbox{}{\includegraphics[height=0.95in]{Figures/contourCapturing/control/nRemained_E_A_Bbox_withCollective.png}} \caption{Real-world control experiments with $N=40$ Kilobots. The accuracy errors over time for the control experiments show the performance of the individualistic method. Each plot is the average of 5 real robot experiments for different switching times, a) $t_\text{sw} = 20$, b) $t_\text{sw} = 80$, c) $t_\text{sw} = 170$. d) Box plot of the number of robots that remained within the area of interest at the last snapshot (black boxes) and the final accuracy error {\color{newChanges}(red boxes)}, comparing the collective scenario and the control experiment (individual) with different switching times.} \label{fig:Res_contCapturing_controlExp} \end{figure*} \subsection{Contour Capturing} Next, we present our results for the scenario of contour capturing with a swarm of $N=40$ Kilobots. The objective is to gather the robots at the contour of the mean light intensity. First, we give the results of our fully distributed collective method. Second, we define a control experiment without robot-robot communication as a baseline for comparison. \subsubsection{Collective Scenario--radial distribution} Here we present our main result in real-world experiments with Kilobots for the whole scenario by assembling the above components: dispersion while keeping the network connected, local averaging to achieve consensus, and homophily by CBPT to approach the consensus value. For a radial light distribution, Fig.~\ref{fig:Res_contCapturing}-a shows the radial distribution of robot positions during the experiment. Initially, the robots are distributed rather densely close to the center ($r_i \rightarrow 0$). During the dispersion the distribution becomes more uniform by spreading to larger radii. Then the local consensus finding with minimal movement starts while the spatial distribution of robots remains largely unchanged ($200<t<400$). In a third phase robots approach the mean contour line by CBPT and the distribution contracts around 160 pixels {\color{newChanges}($\approx 25\text{cm}$)}. \par The temporal evolution of the trueness, precision and accuracy errors is illustrated in Fig.~\ref{fig:Res_contCapturing}-b. The trueness error quickly drops to a small value by the end of the dispersion phase ($t\approx200$~s).
However, the variation is still large although the mean value of the radial distribution is close to the ground truth. Thus, in contrast to the accurate mean value of the collective, each robot's estimation is not yet accurate. This is because the robots did not aggregate any information during dispersion. But now that the collective is less biased and the network is connected, the robots exploit the information available within the entire collective. This is implemented via the local averaging of the consensus method (see Eq.~\ref{Eq:consensus}). At time~$t\approx400$~s, the swarm arrives at a consensus in the information domain, but robot positions are still off the mean contour line. During the CBPT phase, robots approach the mean value in space and the precision error is reduced. We observe both a low precision error and a low accuracy error. These results confirm our previous work in simulations~\cite{raoufi2021speed}. \par The mean degree and area coverage of the swarm evolve in an anti-correlated manner. During dispersion, the swarm spreads out to cover more area and the spatial distribution gets sparser, hence reducing the mean node degree. But the process inverts during exploitation as robots get closer to each other and increase the network connectivity. The covered area decreases because robots form a denser distribution around the contour line and the overlap area increases. \subsubsection{Control experiment--no communication} As a control experiment, the robots do contour capturing without collaboration between robots or exchange of any information. During exploration, each robot walks randomly while updating and aggregating its mean value estimation. Robots {\color{newChanges}iteratively} average over measured samples. The random walk is pure diffusion, unaffected by other robots (in contrast to Sec.~\ref{sec:dispersion}). It stops after a predetermined number of samples ($t_\text{sw}$). Then robots switch to exploitation and follow the CBPT algorithm to approach the estimated mean light spot. We used three different switching times: $t_\text{sw} = \{20, 80, 170\}$. \par As seen in Fig.~\ref{fig:Res_contCapturing_controlExp}-a, a too short exploration ($t_\text{sw} = 20$) reduces the trueness error (red line) only insufficiently, whereas the precision error (blue line) remains as high as the initial value due to insufficient spatial dispersal of the robots. In Fig.~\ref{fig:Res_contCapturing_controlExp}-b, a sufficiently long exploration ($t_\text{sw} = 80$) reduces the trueness error and manages the temporarily high precision error ($t\approx100$~s). Fig.~\ref{fig:Res_contCapturing_controlExp}-c indicates a too long exploration phase resulting in a larger precision error. In our previous work~\cite{raoufi2021speed}, we already showed that (in a bounded environment) switching too late can cause the precision error to remain high (for a limited time budget). \\ The \emph{unbounded} environment is challenging as the swarm tends to lose more and more robots (lost connectivity) with increased exploration time (Fig.~\ref{fig:Res_contCapturing_controlExp}-d). In addition to the known speed-vs-accuracy trade-off, we find this new trade-off in unbounded environments. With uncontrolled diffusion, one pays for accuracy not only in speed, but also in the number of robots that get lost.
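For completeness, here is a minimal sketch (ours, with hypothetical detections) of how the error curves in the preceding figures can be computed from tracked positions using the metrics defined above: positions are mapped to estimates with the radial mapping $m(x,y)$ and then decomposed into trueness and precision errors.
\begin{verbatim}
import numpy as np

def accuracy_errors(z_hat, z_gt):
    # trueness / precision / accuracy; E_A = E_T + E_P holds exactly
    z_col = z_hat.mean()
    E_T = (z_col - z_gt) ** 2
    E_P = np.mean((z_hat - z_col) ** 2)
    return E_T, E_P, E_T + E_P

def radial_estimate(x, y, xc=0.0, yc=0.0):   # m(x, y) for the radial field
    return np.hypot(x - xc, y - yc)

rng = np.random.default_rng(1)
x, y = rng.normal(0, 30, 40), rng.normal(0, 30, 40)  # 40 hypothetical robots
E_T, E_P, E_A = accuracy_errors(radial_estimate(x, y), z_gt=25.0)
print(f"E_T={E_T:.1f}  E_P={E_P:.1f}  E_A={E_A:.1f}")
\end{verbatim}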
\subsubsection{Collective Scenario--V-shape ramp distribution} In the model simulations presented in~\cite{raoufi2021speed}, we showed that the algorithm is able to capture the mean contour line for different environmental distributions, including uni- and multi-modal ones. In this part, we tested another distribution that is of an inverted V-shape, with a peak on its diagonal as in Fig.~\ref{fig:Res_contCapturing_rotRamp}-a. The evolution of the distribution of robots over time (Fig.~\ref{fig:Res_contCapturing_rotRamp}-b) demonstrates how the swarm expands uniformly up until the exploitation phase. Then, the robots branch into two clusters: one on the top left and the other on the bottom right of the diagonal. The accuracy errors of Fig.~\ref{fig:Res_contCapturing_rotRamp}-(c) have the same qualitative trends as in Fig.~\ref{fig:Res_contCapturing} for the radial distribution. However, the remaining precision error at the end of the experiments indicates that the problem here is more difficult to solve. We note that here the precision error represents the dominant contribution to the total error. \renewcommand{\figTwoHeight}{1.15} \begin{figure}[hb] \centering \subcaptionbox{}{\includegraphics[height=\figTwoHeight in]{Figures/contourCapturing/RotatedRamp/snapShot_at_39500.png}}% \hfill \subcaptionbox{}{\includegraphics[height=\figTwoHeight in]{Figures/contourCapturing/RotatedRamp/Error_vs_Time_wide_RotRamp.png}} \caption{Real-world results for the scenario with diagonal distribution: a)~representative example of the final robot distribution showing the position of two major clusters on each side of the ramp. b)~Three error types over time (6~repetitions).} \label{fig:Res_contCapturing_rotRamp} \end{figure} \section{Conclusion} Starting from our previous work on the speed-accuracy trade-off in collective estimation~\cite{raoufi2021speed}, we have successfully implemented a real robot swarm (Kilobots) to capture a contour in a continuous environmental field in an unbounded arena. Our dispersion method {\color{newChanges}largely} preserves the connectivity of the swarm and minimizes losing robots {\color{newChanges} during exploration}. As another component, we introduced a sample-based optimization method inspired by phototaxis that makes the Kilobots approach the desired contour. We added a light conductor to the robot (minimizing shadows on the sensor) to improve light measurements. This seems to be a novel implementation of a gradient ascent for Kilobots with various potential applications. {\color{newChanges} The code we used in this paper is available on GitHub \cite{MRaoufi_Github}.} \par Previously we showed that besides the {\color{newChanges} speed-vs-accuracy} trade-off there are also exploration-vs-exploitation trade-offs~\cite{raoufi2021speed} that are generally non-trivial to resolve. With our new dispersion method, an optimal switching time to finish exploration is not required anymore. The swarm automatically ends dispersion at the best coverage achievable under the connectivity constraint. Here we discussed another trade-off induced by dynamic network topologies. During exploration, the temporarily low mean degree slows down collective decision-making. But the swarm expansion improves the accuracy of the estimation. \par In future work, we plan to study contour-capturing scenarios in dynamic environments. We also plan to analyze scalability and test different light distributions.
\section*{Acknowledgment} We thank Marshall Lutz Mykietyshyn and Noran Abdelsalam for their contribution to real robot experiments. \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} A unitary fusion category can be seen as a generalization of a finite group $G$, which is assumed to be neither commutative nor co-commutative. In particular, the easiest examples are the category $\Rep(G)$ of unitary representations of a finite group $G$ and the category $\Hilb_G$ of $G$-graded finite dimensional Hilbert spaces. Note that $G$ is co-commutative in the sense that $\Rep(G)$ is commutative, while $G$ is in general non-commutative. A factor is a von Neumann algebra with trivial center and a rather boring object. On the other hand a subfactor, an inclusion $N\subset M$ of a factor $N$ into another, often turns out to be a really interesting object. For example, the subfactor $N=M^G\subset M$ obtained by taking the fixed points with respect to a free action of a finite group $G$ gives $\bim M\mathcal{F} M\cong\Hilb_G$ and $\bim N \mathcal{F} N\cong \Rep(G)$. In general, a finite depth subfactor $N\subset M$ gives two unitary fusion categories $\bim N\mathcal{F} N$ and $\bim M\mathcal{F} M$ which are (higher) Morita equivalent. Conversely, given a unitary fusion category $\mathcal{F}$, there is a subfactor $N\subset M$, such that $\bim N\mathcal{F} N \cong \bim M \mathcal{F} M \cong \mathcal{F}$. An important invariant \cite{Jo1983} is the index $[M:N]$ of a subfactor, which by Jones' index theorem takes values in: $$ [M:N]\in \left\{ 4\cos^2\left(\frac{\pi}{m}\right) : m=3,4,\ldots\right\}\cup [4,\infty]. $$ Another invariant is a pair of graphs, the principal and dual principal graphs, which are bipartite graphs. For index $[M:N]<4$ they are given by $A$-$D_{2n}$-$E_{6,8}$\@\xspace Dynkin diagrams, where the index is related to the Coxeter number $m$ of the graph by $[M:N]=4\cos^2(\pi/m)$. A unitary braided fusion category is a unitary fusion category with a braiding. A braiding is a natural family of unitaries $\varepsilon(\rho,\sigma)\in\Hom(\rho\otimes \sigma,\sigma \otimes \rho)$. Braided categories give a representation of the $n$-strand braid groups $B_n=\langle e_1,\ldots,e_{n-1} : e_{i+1}e_ie_{i+1}=e_ie_{i+1}e_i,~e_ie_j=e_je_i \text{ if }|i-j|\geq2\rangle$ on $\Hom(\rho^{\otimes n},\rho^{\otimes n})$. If $\varepsilon(\rho,\sigma)\varepsilon(\sigma,\rho)=1_{\sigma\otimes\rho}$ for all objects $\sigma,\rho$, it is called a symmetric fusion category. In this case the representations of the braid groups are actually representations of the symmetric groups. On the other hand, in a unitary modular tensor category (UMTC) the braiding is non-degenerate, in the sense that if $\varepsilon(\rho,\sigma)\varepsilon(\sigma,\rho)=1_{\sigma\otimes\rho}$ for all $\rho$, then $\sigma$ is a direct sum of copies of the trivial object. Simple examples of UMTCs $\mathcal{C}$ are the ones where every irreducible object is invertible (has dimension 1). Then the fusion rules form an abelian group $A$ and $\mathcal{C}$ is characterized by a non-degenerate quadratic form on $A$. The Drinfeld center of a UFC $\mathcal{F}$, or the quantum double of a finite depth subfactor $N\subset M$, which equals the Drinfeld center $Z(\mathcal{F})$ of either of its fusion categories $\mathcal{F}\in\{\bim N\mathcal{F} N, \bim M \mathcal{F} M\}$, is a unitary modular tensor category \cite{Mg2003II}. A coordinate version of modular tensor categories was invented by Moore and Seiberg \cite{MoSe1990} to axiomatize (the topological behaviour of) conformal field theories.
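As a quick numerical illustration of Jones' index theorem and the Coxeter-number correspondence quoted above (a short computation of ours; we take the standard Coxeter numbers $m=12$ for $E_6$ and $m=30$ for $E_8$):
\begin{verbatim}
import numpy as np

# Jones indices 4*cos(pi/m)^2 below 4, labeled by the Coxeter number m
for m in (3, 4, 5, 6, 12, 30):
    print(f"m = {m:2d}:  [M:N] = {4 * np.cos(np.pi / m) ** 2:.6f}")
# 1, 2, 2.618..., 3, 3.732..., 3.956...; the values accumulate at 4
\end{verbatim}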
Braided tensor categories also appeared in algebraic quantum field theory \cite{FrReSc1989}, and UMTCs and their structure were analyzed by Rehren in \cite{Re1989}. There are two axiomatizations for chiral CFT: vertex operator algebras (VOAs) and conformal nets, and in both approaches the representation theory gives, under certain sufficient conditions, a (unitary) modular tensor category. The natural question arises whether all modular tensor categories arise as representation categories of chiral CFTs. A subquestion is whether the quantum doubles of subfactors, or equivalently the Drinfeld centers of unitary fusion categories, arise in this way. We want to discuss such a question in the framework of conformal nets, which is naturally related to the study of subfactors. More precisely, if $\mathcal{A}$ is a completely rational conformal net, then the category of Doplicher--Haag--Roberts representations $\Rep(\mathcal{A})$ is a unitary modular tensor category (UMTC) by \cite{KaLoMg2001}. We vaguely conjecture that the following is true. \begin{conjecture} \label{conj:1} Let $\mathcal{C}$ be a unitary modular tensor category (UMTC), then there is a completely rational conformal net $\mathcal{A}$ with $\Rep(\mathcal{A})\cong\mathcal{C}$. \end{conjecture} An analogous statement in higher dimensional algebraic quantum field theory (see \cite{Ha}) is known to be true. Namely, it is shown that under natural assumptions for every net $\mathcal{A}$ there is a compact (metrizable) group $G$ with a central involutive element $k\in G$, such that the category of Doplicher--Haag--Roberts representations $\DHR(\mathcal{A})$ is the category of unitary representations of $G$, which is $\mathbb{Z}_2$-graded by $k$. Every such pair $\{G,k\}$ can be realized using free field theory \cite{DoPi2002}. Conjecture \ref{conj:1} would imply the following weaker conjecture. \begin{conjecture} \label{conj:2} Let $\mathcal{F}$ be a unitary fusion category (UFC), then there is a completely rational conformal net $\mathcal{A}$ with $\Rep(\mathcal{A})\cong Z(\mathcal{F})$, where $Z(\mathcal{F})$ is the Drinfeld center. Equivalently, let $N\subset M$ be a finite depth subfactor, then there is a completely rational conformal net $\mathcal{A}$ with $\Rep(\mathcal{A})\cong D(N\subset M)$, where $D(N\subset M)$ denotes the quantum double of $N\subset M$. \end{conjecture} \begin{rmk} The net $\mathcal{A}$ in Conjecture \ref{conj:1} or \ref{conj:2} would be far from unique. Namely, let $\mathcal{B}$ be a holomorphic net, \ie a net whose representation category is trivial, $\Rep(\mathcal{B})\cong \Hilb$; then $\Rep(\mathcal{A}\otimes \mathcal{B})\cong\Rep(\mathcal{A})$. \end{rmk} So far, no technique is established which produces a conformal field theory from a subfactor or a fusion category, though see \cite{Jo2014} for a recent approach. But subfactors up to index 5 are classified, and we can try to exhaust (part of) the classification list by constructing a CFT model for every subfactor in the list. If we have a UMTC $\mathcal{C}$ we can replace the braiding by its opposite braiding $\varepsilon^-(\rho,\sigma):=\varepsilon(\sigma,\rho)^\ast$, which gives (in general) a new UMTC denoted $\rev{\mathcal{C}}$. \begin{conjecture} \label{conj:3} Let $\mathcal{A}$ be a completely rational conformal net. Then there exists a completely rational conformal net $\tilde \mathcal{A}$, such that $\Rep(\tilde \mathcal{A})\cong \rev{\Rep(\mathcal{A})}$. \end{conjecture} Here the positivity of energy is crucial.
One can easily construct $\tilde \mathcal{A}$ with ``negative energy'' having this property. Note that Conjecture \ref{conj:3} would imply that Conjecture \ref{conj:2} holds for all $\mathcal{F}=\Rep(\mathcal{A})$ which are representation categories of a conformal net $\mathcal{A}$. Indeed, $\mathcal{C}=\Rep(\mathcal{A})$ is a UMTC and thus $Z(\mathcal{C})\cong \mathcal{C} \boxtimes \rev{\mathcal{C}} \cong\Rep(\mathcal{A}\otimes\tilde \mathcal{A})$. There are more exotic subfactors for which a realization by conformal field theory in any sense is not known. The first is the Haagerup subfactor \cite{Ha1994}. Its quantum double is considered to be exotic in \cite{HoRoWa2008}. In the same article also the quantum double of the $E_6$ subfactor is considered exotic. The authors admit that they did not consider simple current extensions. We show that the double of $E_6$ indeed just arises as a $\mathbb{Z}_2$--simple current extension of $\mathop{\mathsf{SU}}(2)_{10}\times \mathop{\mathsf{Spin}}(11)_1$ and thus is far from exotic. We also note that the even part of the $E_6$ subfactor is a pivotal fusion category of rank 3 and, by the classification of rank 3 pivotal fusion categories \cite{Os2013}, the lowest rank example of a pivotal fusion category which is not braided. Conjecture \ref{conj:2} would give a positive answer to the question: \begin{question} \label{quest:AllSubfactors} Does every finite depth subfactor come from conformal field theory (\cf \cite{Jo2014})? \end{question} Namely, for every completely rational conformal net $\mathcal{A}$, Kawahigashi, Longo, Rehren and the author have recently shown that certain subfactors related to $\Rep(\mathcal{A})$ classify the phase boundaries of a full conformal field theory on Minkowski space based on the chiral theory $\mathcal{A}$. \begin{prop}[see Proposition \ref{prop:PhaseBoundaries}] Let $N\subset M$ be a subfactor and $\mathcal{A}$ a completely rational conformal net with $\Rep(\mathcal{A})\cong D(N\subset M)$. Then there is a phase boundary related to the subfactor $N\subset M$. \end{prop} So, in this sense Conjecture \ref{conj:2} would really give a positive answer to Question \ref{quest:AllSubfactors}. The main goal of this article is to confirm Conjecture \ref{conj:2} for the simple case $[M:N]<4$. \begin{prop}[see Corollary \ref{cor:Doubles}] Every quantum double $D(N\subset M)$ of a subfactor $N\subset M$ with $[M:N] < 4$ is realized by a completely rational conformal net $\mathcal{A}_{N\subset M}$, \ie $\Rep(\mathcal{A}_{N\subset M})\cong D(N\subset M)$. \end{prop} We note that the next possible index is realized by the Haagerup subfactor mentioned above with index $$[M:N]=\frac{5+\sqrt{13}}{2} \approx 4.303$$ and there is strong indication in \cite{EvGa2011} that there is a conformal net realizing its double. We hope that our techniques here give new ideas to construct this example. This article is organized as follows. In Section \ref{sec:QD} we give some preliminaries about braided subfactors and quantum doubles, and in Section \ref{sec:CN} we give some preliminaries about conformal nets on the circle and introduce some examples which we need later. We give some characterization and structural results for conformal nets whose representation category is a quantum double. In Section \ref{sec:Realization} we give some results about conformal nets having the opposite braiding of a given net. We give examples of nets having the opposite braiding of $\mathop{\mathsf{SU}}(2)_k$.
We give a general criterion for when a subfactor arising by $\alpha$-induction from an inclusion of conformal nets yields a conformal net realizing its quantum double. We use these techniques for the realization of quantum doubles for all indices less than 4 and for some sporadic examples with index between $4$ and $5$. In Section \ref{sec:VOA}, by using the categorical nature of our result, we show how to relate it to vertex operator algebras. In particular, there is also a realization of quantum doubles of subfactors with index less than 4 by vertex operator algebras. \subsection*{Acknowledgements} I would like to thank Zhengwei Liu for some useful comments and Luca Giorgetti and Yasuyuki Kawahigashi for remarks on an early version of this manuscript. Ideas for this work were obtained at the Workshop ID: 1513 ``Subfactors and Conformal Field Theory'' at the Mathematische Forschungsinstitut Oberwolfach, and the author would like to thank the organizers and the MFO. \section{Quantum Doubles} \label{sec:QD} We are using here the language of endomorphisms of type III factors (see \cite{BiKaLoRe2014-2}), but the same can be understood in terms of bimodules of type II or type III factors, or in terms of unitary fusion categories. We note that it follows from \cite{HaYa2000} (more indirectly also from \cite{Po1993,Po1994-2} and in certain cases \cite{Oc1988}) that any abstract unitary fusion category $\mathcal{F}$ can be realized as $\mathcal{F}\subset \End(M)$ with $M$ the hyperfinite type III${}_1$ factor. By Popa's theorem \cite{Po1993} such a realization is unique, namely if $\tilde\mathcal{F}\subset \End(N)$ is another realization then there is an isomorphism $N\to M$ implementing the equivalence between the two fusion categories (\cf \cite[Proof of Corollary 35]{KaLoMg2001}). Given an inclusion $N\subset M$ of hyperfinite type III${}_1$ factors $M,N$ with finite minimal index $[M:N] < \infty$ \cite{Jo1983,Ko1986} we denote by $\iota\colon N\to M$ the inclusion map. We often write $\iota(N)\subset M$ to have a uniform notation if we consider endomorphisms $\rho$ of $M$ and inclusions $\rho(M)\subset M$. By the finite index assumption, there is a conjugate morphism $\bar \iota\colon M \to N$, such that $\id_M\prec \iota\circ\bar \iota$ and $\id_N\prec \bar\iota\circ\iota$. Then $(\bar\iota\circ\iota)^{n}$ and $(\iota\circ\bar\iota)^{n}$, $n\in \mathbb{N}$, generate full $C^\ast$\@\xspace-tensor categories (\cite{LoRo1997}) $\bim[N\subset M]N\mathcal{F} N \subset \End_0(N)$ and $\bim[N\subset M]M\mathcal{F} M \subset \End_0(M)$, respectively, and we say that $\iota(N)\subset M$ has finite depth if and only if $|\Irr(\bim[N\subset M]N\mathcal{F} N )|< \infty$, or equivalently $|\Irr(\bim[N\subset M]M\mathcal{F} M )|< \infty$. Similarly, one defines full replete subcategories $\bim[N\subset M]N\mathcal{F} M=\langle(\bar\iota\circ\iota)^n\circ\bar\iota\rangle\subset \Mor_0(M,N)$ and $\bim[N\subset M]M\mathcal{F} N=\langle \iota\circ(\bar\iota\circ\iota)^n\rangle\subset \Mor_0(N,M)$. The (strict) 2-category $\mathcal{F}^{N\subset M}$ with two 0-objects $\{N,M\}$ and the hom-categories given by $\bim[N\subset M]N\mathcal{F} N$, $\bim[N\subset M]N\mathcal{F} M$, $\bim[N\subset M]M\mathcal{F} N$ and $\bim[N\subset M]M\mathcal{F} M$ is called the \textbf{standard invariant} of $N\subset M$. The finite depth condition corresponds to rationality in conformal field theory.
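As a standard illustration, consider the fixed point subfactor $N=M^G\subset M$ from the introduction: it has depth $2$, $\bim[N\subset M]N\mathcal{F} N\cong\Rep(G)$, and the index equals the sum of the squares of the dimensions of the irreducible objects of this category, $$ [M:N]=|G|=\sum_{\pi\in\Irr(\Rep(G))}(\dim \pi)^2\,. $$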
Given a fusion category $\bim N\mathcal{F} N\subset \End(N)$ and a subfactor $N\subset M$ \textbf{related} to $\bim N\mathcal{F} N$, \ie $\bar \iota\circ\iota\in \bim N\mathcal{F} N$ (then $\bim[N\subset M] N{\mathcal{F}}{N}\subset \bim N\mathcal{F} N$), the \textbf{dual category} $\bim M \mathcal{F} M\subset \End_0(M)$ is the fusion category generated by $\beta\prec \iota \circ\rho \circ\bar\iota$ with $\rho \in \bim N{\mathcal{F}} N$. The categories $\bim N\mathcal{F} N$ and $\bim M\mathcal{F} M$ are Morita equivalent in the sense of \cite{Mg2003}; the Morita equivalence is given by tensoring with $\iota$ and $\bar \iota$. We start with a unitary \textbf{modular tensor category (UMTC)} $\bim N\mathcal{C} N\subset \End_0(N)$, where the unitary braiding in $\Hom(\rho\circ\sigma,\sigma\circ\rho)$ is denoted by $\varepsilon^+(\rho,\sigma)$ or simply $\varepsilon(\rho,\sigma)$, and the reversed braiding by $\varepsilon^-(\rho,\sigma)=\varepsilon(\sigma,\rho)^\ast$. Let us fix $\iota(N)\subset M$ related to $\bim N\mathcal{C} N$. This gives $\theta=\bar\iota \circ \iota$ the structure of an algebra object in $\bim N\mathcal{C} N$, more precisely a Q-system $\Theta=(\theta,w,x)$. There is a notion of commutativity, namely let $x\in\Hom(\theta,\theta\circ\theta)$ be the co-multiplication; then the Q-system is called \textbf{commutative} if and only if $\varepsilon(\theta,\theta)x=x$. Let us fix a subfactor $\iota(N)\subset M$ related to $\bim N\mathcal{C} N$. Then $\alpha$-induction maps from $\mathcal{C}=\bim N\mathcal{C} N$ to the dual category $\mathcal{D} =\bim M\mathcal{C} M$ and is given by: \begin{align*} \bim N \mathcal{C} N &\longrightarrow \bim M \mathcal{C} M\subset \End_0(M)\\ \lambda & \longmapsto \alpha ^\pm_\lambda :=\bar\iota^{-1}\circ \Ad (\varepsilon^\pm(\lambda,\theta)) \circ \lambda\circ \bar \iota \in \End(M) \,. \end{align*} We denote by $\mathcal{D}_\pm \equiv\bim[\pm]M{\mathcal{C}}M= \langle \alpha_\rho^\pm:\rho\in\bim N\mathcal{C} N\rangle$ the UFC generated by $\alpha^\pm$-induction, respectively, and by $\mathcal{D}_0\equiv\bim[0]M{\mathcal{C}}M=\mathcal{D}_+\cap\mathcal{D}_-$ the \textbf{ambichiral} category. Let $\mathcal{F}$ be a unitary fusion category. We can assume that it is (essentially uniquely) realized as $\mathcal{F}\cong\bim N \mathcal{F} N\subset \End_0(N)$ with $N$ a hyperfinite type III${}_1$ factor. Let $\iota_\mathrm{LR}(B)\subset A:=N\otimes N^\mathrm{op}$ be the Longo--Rehren inclusion with $\bim A\mathcal{C} A \cong \bim N \mathcal{F} N \boxtimes \bim[\mathrm{op}] N \mathcal{F} N$ and $\bim B \mathcal{C} B$ the category generated by $\langle (\bar\iota_\mathrm{LR}\circ \beta\circ\iota_\mathrm{LR})^n : \beta \in \bim A\mathcal{C} A \rangle \subset \End(B)$. Then Izumi showed that $\bim B \mathcal{C} B\cong Z(\mathcal{F})$, where $Z(\mathcal{F})$ denotes the unitary Drinfeld center \cite[Section 6]{Mg2003II} of $\mathcal{F}$, which is a UMTC by \cite{Mg2003II}. The Q-system $\Theta_\mathrm{LR}=(\theta_\mathrm{LR},w_\mathrm{LR},x_\mathrm{LR})$ with $\theta_\mathrm{LR}=\bar \iota_\mathrm{LR}\circ\iota_\mathrm{LR}$ is commutative and $d \theta_\mathrm{LR} = \Dim(\mathcal{F})$, where $\Dim(\mathcal{F}) = \sum_{\rho\in \Irr(\mathcal{F})} d\rho^2$ is the \textbf{global dimension}.
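For example, in the pointed category $\Hilb_G$ every simple object has dimension $1$, and in a category with Ising fusion rules the simple objects have dimensions $1,\sqrt{2},1$ (\cf the nets $\mathcal{A}_{\mathop{\mathsf{Spin}}(2n+1),1}$ below), so that in these two cases $$ \Dim(\Hilb_G)=\sum_{g\in G}1^2=|G|\,,\qquad \Dim = 1^2+\sqrt{2}^2+1^2 = 4\,. $$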
If we start with a finite depth subfactor $\iota(N)\subset M$, then $Z(\bim[N\subset M]N\mathcal{F} N)\cong Z(\bim[N\subset M]M\mathcal{F} M)$ (see proof of Proposition \ref{prop:sandwich} below) and we can talk about the \textbf{quantum double of} $\iota(N)\subset M$, denoted by $D(N\subset M)$. \begin{example} The quantum double $D(N\subset M)$ has been calculated in \cite[Section 4]{EvKa1998} for $A_n$ subfactors and in \cite[Examples 5.1,5.2]{BcEvKa2001} for $E_6$ and $E_8$ subfactors. The quantum double of $E_6$ has also been computed using the tube algebra and half-braidings in \cite{Iz2001II}. \end{example} The quantum double is related to Ocneanu's asymptotic inclusion \cite{Oc1988}, Popa's symmetric enveloping algebra \cite{Po1994} and the Longo--Rehren subfactor \cite{LoRe1995}, see also \cite{Ma2000,Iz2000}. Izumi showed \cite{Iz2000} that there is a Galois correspondence, namely there is a one-to-one correspondence between intermediate subfactors $B\subset Q \subset A$ and subcategories $\mathcal{G} \subset \mathcal{F}$. The following (3) was observed in \cite[Theorem 12]{Oc2001} for $\mathcal{C}$ an $\mathop{\mathsf{SU}}(2)_k$ category and is partially contained in \cite[Corollary 3.10, 4.8]{BcEvKa2001}. \begin{prop} \label{prop:sandwich} Let $\mathcal{C} \subset \End(N)$ be a UMTC, and $\iota(N)\subset M$ a subfactor with commutative Q-system $\Theta\in \mathcal{C}$. Denote by $\mathcal{D}=\langle \beta \prec \iota \rho\bar \iota : \rho\in\mathcal{C}\rangle\subset \End(M)$ the dual category. Then \begin{enumerate} \item $Z(\mathcal{C})\cong Z(\mathcal{D} ) \cong \mathcal{C} \boxtimes \rev{\mathcal{C}}$, \item $Z(\mathcal{D}_0) \cong \mathcal{D}_0\boxtimes \rev{\mathcal{D}_0}$, \item $Z(\mathcal{D}_+) \cong \mathcal{C}\boxtimes \rev{\mathcal{D}_0}$, \item $Z(\mathcal{D}_-) \cong \rev{\mathcal{C}}\boxtimes \mathcal{D}_0$. \end{enumerate} \end{prop} \begin{proof} For (1), it follows by \cite{Sc2001} together with \cite{Mg2003II} that $Z(\mathcal{C})\cong Z(\mathcal{D})$, because $\mathcal{C}$ and $\mathcal{D}$ are Morita equivalent, and again by \cite{Mg2003II} $Z(\mathcal{C})\cong \mathcal{C}\boxtimes \rev{\mathcal{C}}$. It was shown, \eg in \cite[Theorem 4.2]{BcEvKa2000}, that $\mathcal{D}_0$ is modular, thus statement (2) follows from (1). $\mathcal{D}_+$ is equivalent with $\mathcal{C}_\Theta$ (\cf \cite[Remark 5.6]{BiKaLo2014}) and by \cite[Corollary 3.30]{DaMgNiOs2013}, see also \cite[Remark 4.3]{DrGeNiOs2010}, we have $Z(\mathcal{C}_\Theta)\cong \mathcal{C}\boxtimes \rev{\mathcal{C}^0_\Theta}$, which is braided equivalent with $\mathcal{C}\boxtimes\rev{\mathcal{D}_0}$, thus (3). Finally, (4) follows by applying (3) to $\rev{\mathcal{C}}$. \end{proof} \section{Conformal Nets} \label{sec:CN} By a conformal net $\mathcal{A}$, we mean a local M\"obius covariant net on the circle. It associates with every proper interval $I\subset S^1\subset \mathbb{C}$ on the circle a von Neumann algebra $\mathcal{A}(I)\subset \B(\mathcal{H}_\mathcal{A})$ on a fixed Hilbert space $\mathcal{H}=\mathcal{H}_\mathcal{A}$, such that the following properties hold: \begin{enumerate}[{\bf A.}] \item \textbf{Isotony.} $I_1\subset I_2$ implies $\mathcal{A}(I_1)\subset \mathcal{A}(I_2)$. \item \textbf{Locality.} $I_1 \cap I_2 = \emptyset$ implies $[\mathcal{A}(I_1),\mathcal{A}(I_2)]=\{0\}$. \item \textbf{Möbius covariance.} There is a unitary representation $U$ of $\mathsf{M\ddot ob}$ on $\mathcal{H}$ such that $ U(g)\mathcal{A}(I)U(g)^\ast = \mathcal{A}(gI)$.
\item \textbf{Positivity of energy.} $U$ is a positive energy representation, \ie the generator $L_0$ (conformal Hamiltonian) of the rotation subgroup $U(z\mapsto \mathrm{e}^{\mathrm{i} \theta}z)=\mathrm{e}^{\mathrm{i} \theta L_0}$ has positive spectrum. \item \textbf{Vacuum.} There is a (up to phase) unique rotation invariant unit vector $\Omega \in \mathcal{H}$ which is cyclic for the von Neumann algebra $\mathcal{A}:=\bigvee_{I\in\mathcal{I}} \mathcal{A}(I)$. \end{enumerate} A local Möbius covariant net $\mathcal{A}$ on $S^1$ is called \textbf{completely rational} if it \begin{enumerate}[{\bf A.}] \setcounter{enumi}{5} \item fulfills the \textbf{split property}, \ie for $I_0,I\in \mathcal{I}$ with $\overline{I_0}\subset I$ the inclusion $\mathcal{A}(I_0) \subset \mathcal{A}(I)$ is a split inclusion, namely there exists an intermediate type I factor $M$, such that $\mathcal{A}(I_0) \subset M \subset \mathcal{A}(I)$. \item is \textbf{strongly additive}, \ie for $I_1,I_2 \in \mathcal{I}$ two adjacent intervals obtained by removing a single point from an interval $I\in\mathcal{I}$ the equality $\mathcal{A}(I_1) \vee \mathcal{A}(I_2) =\mathcal{A}(I)$ holds. \item for $I_1,I_3 \in \mathcal{I}$ two intervals with disjoint closure and $I_2,I_4\in\mathcal{I}$ the two components of $(I_1\cup I_3)'$, the \textbf{$\mu$-index} of $\mathcal{A}$ \begin{equation*} \mu(\mathcal{A}):= [(\mathcal{A}(I_2) \vee \mathcal{A}(I_4))': \mathcal{A}(I_1)\vee \mathcal{A}(I_3) ] \end{equation*} (which does not depend on the intervals $I_i$) is finite. \end{enumerate} A \textbf{representation} $\pi$ of $\mathcal{A}$ is a family of representations $\pi=\{\pi_I\colon\mathcal{A}(I)\to \B(\mathcal{H}_\pi)\}_{I\in\mathcal{I}}$ on a common Hilbert space $\mathcal{H}_\pi$ which are compatible, \ie $\pi_J\restriction \mathcal{A}(I) =\pi_I$ for $I\subset J$. Every non-degenerate representation $\pi$ with $\mathcal{H}_\pi$ separable turns out to be equivalent, for every choice of an interval $I_0\in\mathcal{I}$, to a representation $\rho$ on $\mathcal{H}$, such that $\rho_J=\id_{\mathcal{A}(J)}$ for $J\cap I_0=\emptyset$. Then Haag duality implies that $\rho_{I}$ is an endomorphism of $\mathcal{A}(I)$ for every $I \in \mathcal{I}$ with $I\supset I_0$. Thus we can realize the representation category of $\mathcal{A}$ inside the $C^\ast$\@\xspace-tensor category of endomorphisms $\End_0(N)$ of a type III factor $N=\mathcal{A}(I)$, and the embedding turns out to be full and replete. We denote this category by $\Rep^I(\mathcal{A})$. In particular, this gives the representations of $\mathcal{A}$ the structure of a tensor category \cite{DoHaRo1971}. It has a natural \textbf{braiding}, which is completely fixed by asking that if $\rho$ is localized in $I_1$ and $\sigma$ in $I_2$, where $I_1$ is left of $I_2$ inside $I$, then $\varepsilon(\rho,\sigma)=1$ \cite{FrReSc1989}. The \textbf{statistical dimension} of $\rho\in\Rep^I(\mathcal{A})$ is given by $d\rho=[N:\rho(N)]^{\frac12}$. Let $\mathcal{A}$ be a completely rational conformal net; then by \cite{KaLoMg2001} $\Rep^I(\mathcal{A})$ is a UMTC and $\mu_\mathcal{A}=\Dim(\Rep^I(\mathcal{A}))$.
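To illustrate the last equality: for the $\mathop{\mathsf{SU}}(2)$ loop group net at level $2$ introduced below, the three irreducible sectors have statistical dimensions $1,\sqrt{2},1$, so that $$ \mu_\mathcal{A} = 1^2+\sqrt{2}^2+1^2 = 4\,. $$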
We write $\mathcal{A}\subset \mathcal{B}$ or $\mathcal{B}\supset \mathcal{A}$ if there is a representation $\pi=\{\pi_I\colon\mathcal{A}(I)\to\mathcal{B}(I)\subset\B(\mathcal{H}_\mathcal{B})\}$ of $\mathcal{A}$ on $\mathcal{H}_\mathcal{B}$ and an isometry $V\colon \mathcal{H}_\mathcal{A}\to \mathcal{H}_\mathcal{B}$ with $V\Omega_\mathcal{A}=\Omega_\mathcal{B}$ and $VU_\mathcal{A}(g)=U_\mathcal{B}(g)V$. We further ask that $Va=\pi_I(a)V$ for $I\in\mathcal{I}$, $a\in\mathcal{A}(I)$. Define $p$ to be the projection onto $\mathcal{H}_{\mathcal{A}_0}=\overline{\pi_I(\mathcal{A}(I))\Omega}$. Then $pV$ is a unitary equivalence between the nets $\mathcal{A}$ on $\mathcal{H}_\mathcal{A}$ and $\mathcal{A}_0$ defined by $\mathcal{A}_0(I)=\pi_I(\mathcal{A}(I))p$ on $\mathcal{H}_{\mathcal{A}_0}$. \begin{defi} \label{def:coset} Let $\mathcal{A}\subset \mathcal{B}$ be an inclusion of conformal nets. Then we define the \textbf{coset net} $\mathcal{A}^\mathrm{c}(I)=\mathcal{B}(I)\cap \mathcal{A}'$. Note that $\mathcal{A}^\mathrm{c}\subset \mathcal{B}$. We call $\mathcal{A}\subset \mathcal{B}$ \textbf{normal} if $\mathcal{A}^\mathrm{cc}=\mathcal{A}$. We call $\mathcal{A}\subset \mathcal{B}$ \textbf{co-finite} if $[\mathcal{B}(I):\mathcal{A}(I)\otimes\mathcal{A}^\mathrm{c}(I)]<\infty$. \end{defi} For every co-finite extension $\mathcal{A}\subset \mathcal{B}$ the following holds: $\mathcal{B}$ is completely rational iff $\mathcal{A}$ and $\mathcal{A}^\mathrm{c}$ are completely rational \cite{Lo2003}. \subsection{On conformal nets realizing quantum doubles/Drinfeld centers} In this section we give some structural results about conformal nets whose representation category is a quantum double. If we talk about a subfactor $N\subset M$, we are just interested in finite depth subfactors which are hyperfinite of type II$_{1}$ or III$_{1}$. In this case the standard invariant is a complete invariant \cite{Po1993}. We might also replace a subfactor by its standard invariant. We write $N\subset M \approx N_1\subset M_1$ if both have equivalent standard invariants. \begin{defi} A \textbf{holomorphic net} $\mathcal{A}$ is a completely rational conformal net with trivial representation category $\Rep(\mathcal{A})\cong\Hilb$, or equivalently \cite{KaLoMg2001} with $\mu_\mathcal{A}=1$. \end{defi} \begin{prop}[{\cf \cite[Corollary 3.5]{Mg2010}, \cite[Theorem 2.4]{Ka2015}}] \label{prop:Holomorphic} Let $\mathcal{A}$ be a completely rational conformal net. The following are equivalent: \begin{enumerate} \item There is a holomorphic local irreducible extension $\mathcal{B}\supset \mathcal{A}$. \item $\Rep(\mathcal{A}) \cong Z(\mathcal{F})$ for some unitary fusion category $\mathcal{F}$. \item $\Rep(\mathcal{A}) \cong D(N\subset M)$ for some finite depth subfactor $N\subset M$. \end{enumerate} \end{prop} \begin{proof} Given $N\subset M$ take $\mathcal{F}:=\bim N\mathcal{F} N$. Conversely, we may assume that $\mathcal{F}$ is a full subcategory of $\End(M)$ and we can take $N=\rho(M)\subset M$, where $\rho=\bigoplus_{\rho_i\in\Irr(\mathcal{F})} \rho_i$. Thus (2) and (3) are equivalent. If (2) is true, the dual Q-system of the Longo--Rehren inclusion associated with $\mathcal{F}$ gives a commutative Q-system $\Theta=(\theta,w,x)$ in $\Rep^I(\mathcal{A})$ with $d\theta=\sqrt{\mu_\mathcal{A}}$; the corresponding extension $\mathcal{B}\supset \mathcal{A}$ has $\mu_\mathcal{B}=1$. Conversely, if (1) holds, let $\Theta=(\theta,w,x)$ in $\Rep^I(\mathcal{A})$ be the Q-system characterizing $\mathcal{B}\supset \mathcal{A}$.
The Q-system $\Theta$ is commutative with $d\theta=\sqrt{\Dim \Rep \mathcal{A}}$, thus a Lagrangian Q-system, which forces $\Rep(\mathcal{A})\cong Z(\mathcal{F})$ for some fusion category $\mathcal{F}$. Indeed, for $N:=\mathcal{A}(I)\subset \mathcal{B}(I):=M$ and $\bim N\mathcal{C} N=\Rep^I(\mathcal{A})$, using Proposition \ref{prop:sandwich} (3) we get $$Z(\mathcal{F})=Z(\bim[+]M\mathcal{C} M)\cong \bim N\mathcal{C} N\boxtimes \rev{\bim[0] M\mathcal{C} M}\cong \Rep^I(\mathcal{A}) \boxtimes \Rep^I(\mathcal{B}) \cong \Rep^I(\mathcal{A})\,$$ using \cite[Proposition 6.4]{BiKaLo2014} in the second last step. \end{proof} \begin{rmk} One might see $\mathcal{A}\subset \mathcal{B}$ as a generalization of an orbifold by a finite group. Namely, if $\mathcal{F}$ is pointed with fusion rules given by the finite group $G$, then for the extension $\mathcal{B}\supset \mathcal{A}$ associated with $\mathcal{F}$ from Proposition \ref{prop:Holomorphic} the net $\mathcal{A}$ is indeed the $G$-orbifold of $\mathcal{B}$, \ie $\mathcal{A}=\mathcal{B}^G$, \cf \cite{Mg2005}. \end{rmk} Let $N\subset M$ be a finite index and finite depth subfactor. Conjecture \ref{conj:2} is equivalent to the existence of a conformal net $\mathcal{A}$ with $\Rep(\mathcal{A}) \cong D(N\subset M)$ for every such $N\subset M$. Conversely, in the following proposition we show that if such a net $\mathcal{A}$ exists, there are two extensions $\mathcal{B}_{N}$ and $\mathcal{B}_{M}$, such that $\mathcal{B}_{N}(I)\subset\mathcal{B}_{M}(I)\approx N\subset M$. But any morphism $\beta \colon \mathcal{B}_{N}(I) \to \mathcal{B}_{M}(I)$ related to $\Rep^I(\mathcal{A})$, \ie $\bar\iota_{\mathcal{B}_{M}(I)} \circ \beta\circ \iota_{\mathcal{B}_{N}(I)}\in \Rep^I(\mathcal{A})$, prescribes a defect line or phase boundary \cite{BiKaLoRe2014} between the full conformal field theories $\mathcal{B}_\mathrm{L}=\mathcal{B}_{N}\otimes \mathcal{B}_{N} \supset \mathcal{A}\otimes \mathcal{A}$ and $\mathcal{B}_\mathrm{R}\supset \mathcal{A}\otimes \mathcal{A}$ on 2D Minkowski space, which is invisible if restricted to $\mathcal{A}\otimes \mathcal{A}$, also called $\mathcal{A}$--topological. Here the net $\mathcal{B}_\mathrm{R}$ comes from the $\alpha$-induction construction \cite{Re2000} of $\mathcal{A}\subset \mathcal{B}_M$, which coincides with the full center construction \cite{BiKaLo2014}. Thus the subfactor $\mathcal{B}_{N}(I)\subset \mathcal{B}_{M}(I)\approx N\subset M$ is related to a phase boundary in conformal field theory. \begin{prop} \label{prop:PhaseBoundaries} Let $\mathcal{A}$ be a completely rational net with $\Rep(\mathcal{A})\cong D(N\subset M)$. Then there exist a local extension $\mathcal{B}_{N} \supset \mathcal{A}$ with $\Rep(\mathcal{B}_{N})\cong \Hilb$ and a (non-local) extension $\mathcal{B}_{M}\supset \mathcal{B}_{N}\supset \mathcal{A}$ with $\mathcal{B}_N(I)\subset \mathcal{B}_M(I) \approx N\subset M$. Thus the inclusion $\iota \colon \mathcal{B}_{N}(I) \to \mathcal{B}_{M}(I)$ is related to $\Rep^I(\mathcal{A})$ and prescribes a phase boundary in the sense of \cite{BiKaLoRe2014}. \end{prop} \begin{proof} The dual Q-system of the Longo--Rehren inclusion associated with $\bim[N\subset M]N\mathcal{F} N$ gives a commutative Q-system in $D(N\subset M)\cong Z(\bim[N\subset M]N\mathcal{F} N)\cong Z(\bim[N\subset M]M\mathcal{F} M)$, which we use to define the local extension $\mathcal{B}_N\supset\mathcal{A}$.
Let $A=\mathcal{A}(I)$, $B_N= \mathcal{B}_N (I)$, then with $\bim A\mathcal{C} A\cong D(N\subset M)$ we have $\bim {B_N}\mathcal{C}{B_N} \cong\bim[N\subset M] N\mathcal{F} N\boxtimes(\bim[N\subset M] N\mathcal{F} N)^\mathrm{op}$. Finally, the Q-system $\Theta_{N\subset M}\boxtimes \id =(\theta_{N \subset M}\boxtimes \id,w_{N\subset M}\boxtimes 1_{\id}, x_{N\subset M} \boxtimes 1_{\id})$ gives an extension $B_M\supset B_N$ which gives a non-local extension $\mathcal{B}_M\supset \mathcal{A}$, where $\Theta_{N\subset M}$ is the Q-system in $\bim[N\subset M]N\mathcal{F} N$ of the subfactor $N\subset M$. Because $B_N\subset B_M$ and $N\subset M$ have by construction equivalent Q-systems, they have the same standard invariant. \end{proof} \subsection{Some conformal nets} \begin{example} We denote by $\mathcal{A}_{\mathop{\mathsf{SU}}(2),k}$ or simply by $\mathcal{A}_k$ the $\mathop{\mathsf{SU}}(2)$ loop group net at level $k$ \cite{Wa}, which is completely rational \cite{Xu2000} and thus gives a UMTC $\Rep(\mathcal{A}_k)$. The simple objects are $\{\rho_0,\ldots,\rho_k\}$ with fusion rules $$ [\rho_i]\times [\rho_j]=\bigoplus_{\substack{\ell=|i-j|\\i+j+\ell \text{ even}\\i+j+\ell\leq 2k}}^{i+j}[\rho_\ell]. $$ The dimensions $d\rho_i$ and twists $\omega_{\rho_i}$ are given by \begin{align*} d_i&=d\rho_i =[i+1]_q:=\frac{\sin \frac{(i+1)\pi}{k+2}}{\sin \frac{\pi}{k+2}}, & \omega_i&=\omega_{\rho_i}=\exp\left(2\pi\mathrm{i}\frac{i(i+2)}{4(k+2)}\right),& q&=\exp\left(\frac{\mathrm{i}\pi}{k+2}\right) \end{align*} and the central charge $c_k$ and global dimension $D_k$ by $$ c_k=\frac{3k}{k+2}, \quad D_k=\sum_{i=0}^{k} d_i^2 =\frac{k+2}{2\sin^2 \left(\frac{\pi}{k+2}\right)}\,. $$ \end{example} We recall the classification of $\mathop{\mathsf{SU}}(2)_k$ conformal nets \cite{KaLo2004}, \cite{BcEv1998}. \begin{prop} \label{prop:ADE} Local irreducible extensions $\mathcal{B}\supset \mathcal{A}_{k}$, \ie local nets $\mathcal{B}$ containing $\mathcal{A}_k$ as a subnet, such that $\mathcal{A}_k(I)'\cap \mathcal{B}(I)=\mathbb{C}$, are in one-to-one correspondence with $A$-$D_{2n}$-$E_{6,8}$\@\xspace Dynkin diagrams of Coxeter number $k+2$. The $E_{6,8}$ Dynkin diagrams correspond to the conformal inclusions $\mathcal{A}_{10}\subset \mathcal{A}_{\mathop{\mathsf{Spin}}(5),1}$ and $\mathcal{A}_{28}\subset \mathcal{A}_{\mathop{\mathsf{G}_2},1}$, respectively. The subfactor $\alpha^\pm_{\rho_1}(\mathcal{B}(I))\subset \mathcal{B}(I)$ has as principal graph the corresponding Dynkin diagram. \end{prop} \begin{example} The loop group net of $\mathop{\mathsf{Spin}}(2n+1)$ at level 1, $\mathcal{A}_{\mathop{\mathsf{Spin}}(2n+1),1}$ \cite[Theorem 3.1]{Bc1996} and \cite[Lemma 3.1]{Xu2009}, has global dimension $D=4$ and has the Ising fusion rules, \ie the same fusion rules as the net $\mathcal{A}_{\mathop{\mathsf{SU}}(2),2}=\mathcal{A}_{\mathop{\mathsf{Spin}}(3),1}$. We denote the (choice of) simple objects by $\{\rho_0,\rho_1,\rho_2\}$. The category is determined by the fusion rules and twists \cite[Proposition 8.2.6]{FrKe1993}, which are: $$ \omega_{\rho_1}=\exp\left({\frac{2\pi \mathrm{i}(2n+1)}{16}}\right),\quad \omega_{\rho_2}=-1\,. $$ \end{example} \begin{example} We get a net $\mathcal{A}_{\mathop{\mathsf{G}_2},1}$ associated with $(\mathop{\mathsf{G}_2})_1$ as an extension of $\mathcal{A}_{28}$. The category of representations is the Fibonacci or golden category with fusion rules $[\tau]\times[\tau]=[\id]+[\tau]$.
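For illustration, this single fusion rule already determines the dimension and the global dimension: $d\tau$ satisfies $$ d\tau^2 = 1 + d\tau\,, \quad\text{hence}\quad d\tau = \frac{1+\sqrt{5}}{2}\,,\qquad \Dim = 1+d\tau^2 = \frac{5+\sqrt{5}}{2}\,. $$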
There is a conformal inclusion $\mathcal{A}_{\mathop{\mathsf{SU}}(3),2}\otimes \mathcal{A}_{\mathop{\mathsf{SU}}(3),1}\subset \mathcal{A}_{\mathop{\mathsf{F}_4},1}$, thus $\mathcal{A}_{\mathop{\mathsf{F}_4},1}$ is completely rational. There is also $\mathcal{A}_{\mathop{\mathsf{F}_4},1}\otimes \mathcal{A}_{\mathop{\mathsf{G}_2},1}\subset \mathcal{A}_{\mathop{\mathsf{E}}_8,1}$, in particular $\Rep(\mathcal{A}_{\mathop{\mathsf{G}_2},1})\cong \rev{\Rep(\mathcal{A}_{\mathop{\mathsf{F}_4},1})}$, which is an application of Proposition \ref{prop:Mirror}. \end{example} \begin{example} For central charge $c$ with values $$ c_m=1-\frac6{(m+1)(m+2)}, \quad (m=2,3,\ldots)\,, $$ the \textbf{Virasoro net} $\Vir_{c_m}$ is given by the coset net of the inclusion $\mathcal{A}_{m}\subset \mathcal{A}_{m-1}\otimes\mathcal{A}_1$ \cite{KaLo2004}; in other words, we have the conformal inclusion $$ \Vir_{c_m}\otimes \mathcal{A}_{m}\subset \mathcal{A}_{m-1}\otimes\mathcal{A}_1\,, $$ and $\Vir_{c_m}$ is completely rational, see \cite{KaLo2004}. \end{example} \section{Realization of some Quantum Doubles by Conformal Nets} \label{sec:Realization} \subsection{Realization of the opposite braiding} \begin{prop} \label{prop:MirrorExtension} Let $\mathcal{A},\tilde \mathcal{A}$ be completely rational conformal nets with $\Rep(\tilde \mathcal{A})\cong \rev{\Rep(\mathcal{A})}$ and let $\mathcal{B}\supset\mathcal{A}$ be an irreducible local extension (which is automatically completely rational). Then there is an irreducible local extension $\tilde \mathcal{B}\supset \tilde \mathcal{A}$ with $\Rep(\tilde\mathcal{B})\cong \rev{\Rep(\mathcal{B})}$. \end{prop} \begin{proof} Using the equivalence $\Rep(\tilde \mathcal{A})\cong \rev{\Rep(\mathcal{A})}$, the commutative Q-system $\Theta \in\Rep(\mathcal{A})$ gives a commutative Q-system $\tilde \Theta\in \Rep(\tilde \mathcal{A})$, which defines an extension $\tilde \mathcal{B} \supset \tilde \mathcal{A}$ with the desired properties. \end{proof} \begin{rmk} This is a trivial instance of mirror extensions \cite{Xu2007}: namely, take the Longo--Rehren extension $\mathcal{B}_{\mathrm{LR}}\supset \mathcal{A}\otimes \tilde \mathcal{A}$ \cite{LoRe1995}, which gives $\Rep(\mathcal{B}_{\mathrm{LR}})\cong \Hilb$. Then $\mathcal{A} \subset \mathcal{B}_{\mathrm{LR}}$ is normal and co-finite, $\tilde \mathcal{A}$ is its coset, and $\tilde \mathcal{B}\supset \tilde \mathcal{A}$ is the mirror extension of $\mathcal{A}\subset \mathcal{B}$. Using \cite[Proposition 6.4]{BiKaLo2014}, $\Rep(\tilde\mathcal{B})$ is equivalent as a UFC with $\Rep(\mathcal{B})$ and has the opposite braiding. \end{rmk} \begin{prop} \label{prop:Mirror} Let $\mathcal{B}$ be a holomorphic net, let $\mathcal{A}\subset \mathcal{B}$ be co-finite and normal, and let $\mathcal{A}^\mathrm{c}$ be the coset net of the inclusion $\mathcal{A}\subset \mathcal{B}$. Then the nets $\mathcal{A}$ and $\mathcal{A}^\mathrm{c}$ are completely rational with $\Rep(\mathcal{A}^\mathrm{c})\cong \rev{\Rep(\mathcal{A})}$. \end{prop} \begin{proof} $\mathcal{A}$ and $\mathcal{A}^\mathrm{c}$ are completely rational by assumption (using \cite{Lo2003}, see above). The Q-system $\Theta=(\theta,w,x)$ giving the extension $\mathcal{A}\otimes \mathcal{A}^\mathrm{c} \subset \mathcal{B}$ is of the form $$ [\theta]=\bigoplus_{\substack{\mu\in\Irr(\Rep(\mathcal{A}))\\\nu\in\Irr(\Rep(\mathcal{A}^\mathrm{c}))}} Z_{\mu,\nu} [\mu\otimes \nu]\,. $$ By normality of $\mathcal{A},\mathcal{A}^{\mathrm{c}}\subset \mathcal{B}$ we have $Z_{\mu,\id}=\delta_{\id,\mu}$ and $Z_{\id,\nu}=\delta_{\id,\nu}$.
Then it follows that there is a braided equivalence $\phi\colon \mathcal{C}\to \rev{\mathcal{D}}$, for some full and replete subcategories $\mathcal{C}\subset \Rep(\mathcal{A})$ and $\mathcal{D}\subset \Rep(\mathcal{A}^\mathrm{c})$, such that $\Theta$ is the Longo--Rehren extension twisted by $\phi$, see \cite[Definition 4.1]{BiKaLo2014} for the definition. On the one hand $(d\theta)^2=\Dim\Rep(\mathcal{A})\cdot\Dim\Rep(\mathcal{A}^\mathrm{c})$, because $\mathcal{B}$ is holomorphic. On the other hand $d\theta=\Dim\mathcal{C}=\Dim \mathcal{D}$. Together, because all dimensions are positive, this implies $\mathcal{C}=\Rep(\mathcal{A})$ and $\mathcal{D}= \Rep(\mathcal{A}^\mathrm{c})$. \end{proof} Let $\mathcal{A}_k=\mathcal{A}_{\mathop{\mathsf{SU}}(2),k}$ and let $\mathcal{B}_k$ be the coset net of $$ \mathcal{A}_{k} \subset \mathcal{A}_{\mathop{\mathsf{SU}}(2),1}^{\otimes k}=\mathcal{A}_{1}^{\otimes k}\,, $$ which is normal by \cite[Lemma 4.2 (1)]{Xu2007}. By induction, it follows that we have conformal inclusions: $$ \mathcal{A}_k \otimes \Vir_{c_2}\otimes \cdots \otimes \Vir_{c_k}\subset \mathcal{A}_k \otimes \mathcal{B}_k \subset \mathcal{A}_{\mathop{\mathsf{SU}}(2),1}^{\otimes k}\,, $$ thus $\mathcal{B}_k$ is completely rational by \cite{Lo2003}. Using the conformal inclusion $\mathcal{A}_{\mathop{\mathsf{E}}_7,1}\otimes \mathcal{A}_1\subset \mathcal{A}_{\mathop{\mathsf{E}}_8,1}$, in which all nets are conformal nets associated with even lattices (\cf \cite{Bi2012}), and which is a Longo--Rehren extension and thus normal, we get the conformal inclusions: $$ \mathcal{A}_{k}\otimes \mathcal{B}_k \otimes \mathcal{A}_{\mathop{\mathsf{E}}_7,1}^{\otimes k} \subset \mathcal{A}_{1}^{\otimes k}\otimes \mathcal{A}_{\mathop{\mathsf{E}}_7,1}^{\otimes k} \subset \mathcal{A}_{\mathop{\mathsf{E}}_8,1}^{\otimes k}\,. $$ Now we take $\tilde \mathcal{A}_{k}$ to be the coset of the normal inclusion \cite[Lemma 4.2 (1)]{Xu2007} $\mathcal{A}_{k}\subset \mathcal{A}_{\mathop{\mathsf{E}}_8,1}^{\otimes k}$. This is completely rational, because it is an intermediate net of completely rational nets: $$\mathcal{A}_{k}\otimes \mathcal{B}_k \otimes \mathcal{A}_{\mathop{\mathsf{E}}_7,1}^{\otimes k} \subset \mathcal{A}_k\otimes \tilde \mathcal{A}_k \subset \mathcal{A}_{\mathop{\mathsf{E}}_8,1}^{\otimes k} \,.$$ Thus using Proposition \ref{prop:Mirror} we have proven: \begin{prop} \label{prop:Akmirror} The coset net $\tilde \mathcal{A}_k$ of the inclusion $\mathcal{A}_k\subset \mathcal{A}^{\otimes k}_{\mathop{\mathsf{E}}_8,1}$ above is completely rational with $\Rep(\tilde \mathcal{A}_k)\cong \rev{\Rep(\mathcal{A}_k)}$. \end{prop} \begin{example} We note that $\tilde \mathcal{A}_1=\mathcal{A}_{\mathop{\mathsf{E}}_7,1}$ and that $\Vir_{c_k}\otimes \tilde \mathcal{A}_1\otimes \tilde \mathcal{A}_{k-1}\otimes \mathcal{A}_k\subset \mathcal{A}_{\mathop{\mathsf{E}}_8,1}^{\otimes k}$. We get the intermediate inclusion: $$ \Vir_{c_k}\otimes \tilde \mathcal{A}_1\otimes \tilde \mathcal{A}_{k-1}\otimes \mathcal{A}_k\subset \tilde\mathcal{A}_k\otimes \mathcal{A}_k \subset \mathcal{A}_{\mathop{\mathsf{E}}_8,1}^{\otimes k} $$ Thus also $\Vir_{c_k}\otimes\tilde \mathcal{A}_{k-1}\otimes\tilde\mathcal{A}_1\subset\tilde \mathcal{A}_k$, and $\Vir_{c_k}$ can be obtained back from the coset of $\tilde\mathcal{A}_{k-1}\subset \tilde\mathcal{A}_1\otimes \tilde\mathcal{A}_k$. We also get that $\Vir_{c_m}\subset \mathcal{A}_{\mathop{\mathsf{E}}_8,1}^{\otimes m}$ is normal and co-finite, and thus its coset $\tilde \Vir_{c_m}=\Vir_{c_m}^\mathrm{c}$ realizes the opposite braiding of $\Vir_{c_m}$.
Further, $\Vir_{c_m}\otimes\tilde\Vir_{c_m}$ realizes, using Proposition \ref{prop:sandwich}(1), the Drinfeld center $Z(\Rep^I(\Vir_{c_m}))$. \end{example} \subsection{Realization of quantum doubles} The next proposition shows that if a subfactor $N\subset M$ arises from $\alpha$-induction of a local irreducible extension $\mathcal{A}\subset \mathcal{B}$ and we have a net $\tilde \mathcal{A}$ realizing the opposite braiding of $\mathcal{A}$, then there is a net $\mathcal{B}_{N\subset M}$ with $\Rep(\mathcal{B}_{N\subset M})\cong D(N\subset M)$. \begin{prop} \label{prop:GaloisDouble} Let $N\subset M$ be an irreducible subfactor. Assume there exists a completely rational conformal net $\mathcal{A}$ and an irreducible local extension $\mathcal{B}\supset \mathcal{A}$, such that $N\subset M$ arises by $\alpha^\pm$-induction, \ie there is a $\rho\in \bim{\mathcal{A}(I)}{\mathcal{C}}{\mathcal{A}(I)}=\Rep^I(\mathcal{A})$ and a $[\beta]\prec[\alpha_\rho^{\pm}]$, such that $\beta(\mathcal{B}(I))\subset \mathcal{B}(I) \approx N\subset M$. Further, assume there exists $\tilde \mathcal{A}$, a completely rational conformal net with $\Rep(\tilde \mathcal{A})\cong \rev{\Rep(\mathcal{A})}$. Then \begin{enumerate} \item There exists a completely rational conformal net $\mathcal{B}_{N\subset M}$ realizing the quantum double $D(N\subset M)$, \ie $\Rep(\mathcal{B}_{N\subset M})\cong D(N\subset M)$. \item It can be given as a local irreducible extension: \begin{itemize} \item $\mathcal{B}_{N\subset M}\supset \mathcal{A}\otimes \tilde\mathcal{B}$, in the $\alpha^+$ case, or \item $\mathcal{B}_{N\subset M}\supset \tilde \mathcal{A} \otimes \mathcal{B}$ in the $\alpha^-$ case. \end{itemize} \item In the case that $[\beta],[\bar\beta]$ (tensor) generate $\bim[\pm]{\mathcal{B}(I)}\mathcal{C}{\mathcal{B}(I)}$, but $[\bar\beta\circ\beta]$ does not, the extension in (2) is a $\mathbb{Z}_2$--simple current extension. \item In the case that $[\bar\beta\circ\beta]$ (tensor) generates $\bim[\pm]{\mathcal{B}(I)}\mathcal{C}{\mathcal{B}(I)}$, then $\mathcal{B}_{N\subset M}$ equals $\mathcal{A}\otimes \tilde \mathcal{B}$ or $\tilde \mathcal{A} \otimes \mathcal{B}$, respectively. \end{enumerate} \end{prop} \begin{proof} By Proposition \ref{prop:sandwich} we have $\Rep(\mathcal{A}\otimes \tilde\mathcal{B}) \cong Z(\bim[+]{\mathcal{B}(I)}\mathcal{C}{\mathcal{B}(I)})$ and $\Rep(\tilde \mathcal{A}\otimes \mathcal{B}) \cong Z(\bim[-]{\mathcal{B}(I)}\mathcal{C}{\mathcal{B}(I)})$. Let $\bim[\beta]{\mathcal{B}(I)}\mathcal{C}{\mathcal{B}(I)}\subset \bim[\pm]{\mathcal{B}(I)}\mathcal{C}{\mathcal{B}(I)}$ be the subcategory (tensor) generated by $\bar\beta\circ\beta$; then $D(N\subset M)\cong Z( \bim[\beta]{\mathcal{B}(I)}\mathcal{C}{\mathcal{B}(I)})$ by assumption. Further, there is a holomorphic net $\mathcal{B}_\mathrm{holo} \supset \mathcal{A}\otimes \tilde\mathcal{B}$ or $\mathcal{B}_\mathrm{holo} \supset \tilde \mathcal{A}\otimes \mathcal{B}$, respectively, which is given by the Longo--Rehren inclusion, and by the Galois correspondence there is an intermediate net $\mathcal{B}_{N\subset M}$ with $\Rep(\mathcal{B}_{N\subset M})\cong Z(\bim[\beta]{\mathcal{B}(I)}\mathcal{C}{\mathcal{B}(I)}) \cong D(N\subset M)$.
In the case of (3) we have $2\Dim \bim[\beta]{\mathcal{B}(I)}\mathcal{C}{\mathcal{B}(I)} = \Dim \bim[\pm]{\mathcal{B}(I)}\mathcal{C}{\mathcal{B}(I)} $ and $\mathcal{B}_\mathrm{holo} \supset \mathcal{A}\otimes \tilde\mathcal{B}$ or $\mathcal{B}_\mathrm{holo} \supset \tilde \mathcal{A}\otimes \mathcal{B}$, respectively, has index two, thus it is a $\mathbb{Z}_2$--simple current extension. In the case of (4) we have $\bim[\beta]{\mathcal{B}(I)}\mathcal{C}{\mathcal{B}(I)} =\bim[\pm]{\mathcal{B}(I)}\mathcal{C}{\mathcal{B}(I)}$, respectively, and the extension is trivial. \end{proof} For subfactors with index $<4$ it is well-known that they arise via $\alpha$-induction from the $\mathop{\mathsf{SU}}(2)_k$ loop group models $\mathcal{A}_k$, see Proposition \ref{prop:ADE}. Together with $\tilde \mathcal{A}_k$ from Proposition \ref{prop:Akmirror} we thus get: \begin{cor} \label{cor:Doubles} For every subfactor $N\subset M$ with $[M:N]<4$, \ie for every standard invariant labeled by $G\in\{A_n,D_{2n},E_{6,8},\bar E_{6,8}\}$, there is a conformal net $\mathcal{A}_{N\subset M}$ with $\Rep(\mathcal{A}_{N\subset M})=D(N\subset M)$. The realizations can be given as follows: \begin{description} \item[$A_{k+1}$] $(\mathcal{A}_{k}\otimes \tilde \mathcal{A}_k) \rtimes_{\rho_{k,k}} \mathbb{Z}_2$, the simple current extension with respect to the automorphism $\rho_{k,k}=\rho_k\otimes\tilde\rho_k$. \item[$D_{2n}$] $\mathcal{B}_{D_{2n}}\otimes \tilde\mathcal{B}_{D_{2n}}$, where $\mathcal{B}_{D_{2n}}$ and $\tilde \mathcal{B}_{D_{2n}}$ are the $\mathbb{Z}_2$--simple current extensions of $\mathcal{A}_{4n-4}$ and $\tilde \mathcal{A}_{4n-4}$ by $\rho_{4n-4}$ and $\tilde \rho_{4n-4}$, respectively. \item[$E_6$] $(\mathcal{A}_{10}\otimes \mathcal{A}_{\mathop{\mathsf{Spin}}(11),1}) \rtimes_{[\rho_{10,2}]} \mathbb{Z}_2$, where we can replace $\mathcal{A}_{\mathop{\mathsf{Spin}}(11),1}$ by $\tilde\mathcal{B}\supset\tilde \mathcal{A}_{10}$, the extension obtained from Proposition \ref{prop:MirrorExtension} applied to $\mathcal{A}_{10}\subset \mathcal{A}_{\mathop{\mathsf{Spin}}(5),1}$. \item[$\bar E_6$] $(\tilde \mathcal{A}_{10}\otimes \mathcal{A}_{\mathop{\mathsf{Spin}}(5),1}) \rtimes_{[\rho_{10,2}]} \mathbb{Z}_2$. \item[$E_8$] $(\mathcal{A}_{28}\otimes \mathcal{A}_{\mathop{\mathsf{F}_4},1}) \rtimes_{[\rho_{28,0}]} \mathbb{Z}_2$, \ie it is given by $\mathcal{B}_{D_{16}}\otimes \mathcal{A}_{\mathop{\mathsf{F}_4},1}$. We can replace $\mathcal{A}_{\mathop{\mathsf{F}_4},1}$ by $\tilde\mathcal{B}\supset\tilde \mathcal{A}_{28}$, the extension obtained from Proposition \ref{prop:MirrorExtension} applied to $\mathcal{A}_{28}\subset \mathcal{A}_{\mathop{\mathsf{G}_2},1}$. \item[$\bar E_8$] $(\tilde \mathcal{A}_{28}\otimes \mathcal{A}_{\mathop{\mathsf{G}_2},1}) \rtimes_{[\rho_{28,0}]} \mathbb{Z}_2$, \ie it is given by $\tilde \mathcal{B}_{D_{16}}\otimes \mathcal{A}_{\mathop{\mathsf{G}_2},1}$. \end{description} \end{cor} \begin{proof} All subfactors arise as $\alpha^{\pm}_{\rho_1}(\mathcal{B}_G(I))\subset \mathcal{B}_G(I)$, where $\mathcal{B}_G\supset \mathcal{A}_k$ is the extension in Proposition \ref{prop:ADE}. Further, $[\alpha_{\rho_1}^\pm]$ and $[\bar\alpha_{\rho_1}^\pm]$ (tensor) generate $\bim[\pm]{\mathcal{B}(I)}{\mathcal{C}}{\mathcal{B}(I)}$, while $[\bar\alpha^\pm_{\rho_1}\circ\alpha_{\rho_1}^\pm]$ does not. Thus in each case we are in the situation of case (3) of Proposition \ref{prop:GaloisDouble}, and in each case there is just one possible $\mathbb{Z}_2$--simple current extension.
\end{proof} \begin{rmk} Our method also applies to some subfactors with index between $4$ and $5$: \begin{itemize} \item The GHJ subfactor \cite{GoHaJo1989} with index $3+\sqrt{3}$ arises as the subfactor $\mathcal{A}_{10}(I)\subset \mathcal{A}_{\mathop{\mathsf{Spin}}(5),1}(I)$, see \cite[Section 2.2]{BcEvKa1999}. Thus its even part coincides with the even part of $\Rep(\mathcal{A}_{10})$, \ie with the even part of the $A_{11}$ subfactor. Thus its quantum double is the same as that of the $A_{11}$ subfactor and is therefore also realized by $(\mathcal{A}_{10}\otimes\tilde \mathcal{A}_{10})\rtimes_{\rho_{10,10}} \mathbb{Z}_2$. \item The 2221 subfactor with index $(5+\sqrt{21})/2$ arises from the conformal inclusion $\mathcal{A}_{\mathop{\mathsf{G}_2},3}\subset \mathcal{A}_{\mathop{\mathsf{E}}_6,1}$ by $\alpha$-induction \cite{XuUnpublished}, see also \cite[Appendix]{CaMoSn2011}. The subfactor was also constructed by Izumi in \cite{Iz2000}. Note that $\Rep(\mathcal{A}_{\mathop{\mathsf{SU}}(3),1})\cong\rev{\Rep( \mathcal{A}_{\mathop{\mathsf{E}}_6,1})}$, thus by Proposition \ref{prop:GaloisDouble} (4) the net $\mathcal{A}_{\mathop{\mathsf{G}_2},3}\otimes \mathcal{A}_{\mathop{\mathsf{SU}}(3),1}$ realizes its quantum double. A similar observation was made by Ostrik \cite[Remark A.4.3]{CaMoSn2011}. The complex conjugate should be realized by $\tilde \mathcal{A}_{\mathop{\mathsf{G}_2},3}\otimes \mathcal{A}_{\mathop{\mathsf{E}}_6,1}$, but we do not know how to realize the net $\tilde \mathcal{A}_{\mathop{\mathsf{G}_2},3}$. \end{itemize} \end{rmk} \subsection{Modular invariants} All our examples in Corollary \ref{cor:Doubles} are $\mathbb{Z}_2$--simple current extensions. We recall that for an extension $\mathcal{A}\subset \mathcal{B}$, with $N=\mathcal{A}(I)\subset \mathcal{B}(I)=M$ and $\bim N\mathcal{C} N=\Rep^I(\mathcal{A})$, the matrix $Z=(Z_{\mu,\nu})_{\mu,\nu\in\Irr(\bim N\mathcal{C} N)}$ with $Z_{\mu,\nu}= \dim\Hom(\alpha^+_\mu,\alpha^-_\nu)$ is a modular invariant \cite{BcEvKa1999}, \ie it commutes with the $S$ and $T$ matrices associated with $\bim N\mathcal{C} N$. The modular invariant of a commutative $\mathbb{Z}_2$--simple current extension with $[\theta]=[\rho_0]\oplus[\rho_g]$ is given by (\cf (3.59) in \cite{FuRuSc2004} for the general formula) $$ Z_{i,j}=\frac12\left(1+\frac{\omega_{gi}}{\omega_i}\right)\left(\delta_{i,j}+\delta_{gi,j}\right)\,, $$ where $gi$ is the action of $g$ on $i$, \ie $[\rho_{gi}]=[\rho_g]\times[\rho_i]$. We conveniently write the modular invariant in character form as: $$ Z=\sum_{\mu,\nu} Z_{\mu,\nu}\chi_\mu\bar\chi_\nu\,. $$ We include the modular invariants, from which one can derive the fusion rules of the representation category. We note that, although it is not necessary since it follows from the above ``abstract non-sense'', one can directly check that, for example, the representation category of the net $(\mathcal{A}_{10}\otimes \mathcal{A}_{\mathop{\mathsf{Spin}}(11),1}) \rtimes_{[\rho_{10,2}]} \mathbb{Z}_2$ has the fusion rules of the $E_6$ double as in \cite{Iz2001II,HoRoWa2008}. Some of this calculation is contained in \cite{BcEvKa2001}.
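To make the formula concrete: for $\mathcal{A}_k$ and the simple current $g=\rho_k$ (so that $gi=k-i$), the twists given in Section \ref{sec:CN} yield $$ \frac{\omega_{\rho_{k-i}}}{\omega_{\rho_i}} = \exp\left(2\pi\mathrm{i}\,\frac{(k-i)(k-i+2)-i(i+2)}{4(k+2)}\right) = \mathrm{e}^{\pi\mathrm{i}k/2}(-1)^i\,, $$ and the current $\rho_k$ has trivial twist $\omega_{\rho_k}=\mathrm{e}^{\pi\mathrm{i}k/2}=1$ precisely for $k\equiv 0 \pmod 4$, as for $k=4n-4$ in the $D_{2n}$-case. In this case $Z_{i,j}=\frac12\left(1+(-1)^{i}\right)\left(\delta_{i,j}+\delta_{k-i,j}\right)$, which reproduces the modular invariants in the examples below.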
\begin{example}[$A_{k+1}$-case] For the inclusion $\mathcal{A}_k\otimes \tilde \mathcal{A}_k\subset \mathcal{A}_{N\subset M}= (\mathcal{A}_k\otimes \tilde \mathcal{A}_k)\rtimes \mathbb{Z}_2$ the modular invariant is given by: $$ Z_{\rho_{i_1,j_1},\rho_{i_2,j_2}} =\frac12\left(1+(-1)^{i_1-j_1}\right)\left(\delta_{i_1,i_2}\delta_{j_1,j_2} +\delta_{i_1,k-i_2}\delta_{j_1,k-j_2}\right) $$ and thus $$ Z=\frac12\sum_{\substack{i,j=0 \\ i+j=\mathrm{even}}}^k |\chi_{\rho_{i,j}}+\chi_{\rho_{k-i,k-j}}|^2\,. $$ \end{example} \begin{example}[$D_{2n}$-case] Let $k=4n-4$. Then there is a simple current extension $\mathcal{B}_k=\mathcal{A}_{k}\rtimes_{\rho_{k}} \mathbb{Z}_2$ of $\mathcal{A}_k$ corresponding to the Dynkin diagram $D_{2n}$ in Proposition \ref{prop:ADE} with modular invariant: $$ Z_{D_{2n}}=\frac12\sum_{\ell=0}^{\frac k2} |\chi_{2\ell}+\chi_{k-2\ell}|^2 \,. $$ The same is true for $\tilde \mathcal{B}_k=\tilde \mathcal{A}_{k}\rtimes_{\rho_{k}} \mathbb{Z}_2$. The net $\mathcal{A}_{N\subset M}$ for $D_{2n}$ is just $\mathcal{B}_k\otimes \tilde\mathcal{B}_k$, which is a $\mathbb{Z}_2$ extension of $$\mathcal{A}_k\otimes \tilde \mathcal{B}_k \subset \mathcal{B}_k\otimes \tilde \mathcal{B}_k \supset \mathcal{B}_k\otimes \tilde \mathcal{A}_k\,.$$ So the modular invariant for the $\mathbb{Z}_2$-simple current extension is $Z_{D_{2n}}\otimes I_{n+1}$, where $I_m$ is the $m\times m$ identity matrix. \end{example} \begin{example}[$E_{6}$-cases] The modular invariant for $\mathcal{A}_{\mathop{\mathsf{SU}}(2),{10}}\otimes\mathcal{A}_{\mathop{\mathsf{Spin}}(11),1}\subset \mathcal{A}_{N\subset M}$ for $E_6$ is given by: $$Z=X+Y +2 |\chi_{5,1}|^2\,,$$ with \begin{align*} X&= |\chi_{0,0}+\chi_{10,2}|^2 + |\chi_{0,2}+\chi_{10,0}|^2 + |\chi_{2,0}+\chi_{8,2}|^2 + |\chi_{2,2}+\chi_{8,0}|^2 + |\chi_{4,0}+\chi_{6,2}|^2 + |\chi_{4,2}+\chi_{6,0}|^2 \\ Y&= |\chi_{1,1}+\chi_{9,1}|^2 + |\chi_{3,1}+\chi_{7,1}|^2 \,. \end{align*} One can read off the number of irreducible sectors: $|\bim N\Delta N|=33$, $|\bim N\Delta M|=|\bim M{\Delta^\pm} M|=18$, $|\bim M\Delta M|=36$ and $|\bim M{\Delta^0} M|=10$. The category $\bim N\mathcal{C} N$ has $A_{11}\times A_{3}$ fusion rules, see Figure \ref{fig:A11A3}, and the $\mathbb{Z}_2$--simple current extension is an ``orbifold'' giving the fusion rules of $\bim[\pm] M\mathcal{C} M$, see Figure \ref{fig:E6DualFusionGraph}.
\begin{figure} $$ \tikzmath[1]{ \foreach \k in {0,...,10}{ \foreach \l in {0,1,2}{ \node at (\k,\l) {$\bullet$}; \node at (\k,\l) [below right] {$\scriptstyle{\rho_{\k,\l}}$}; } } \foreach \k in {0,...,10}{ \draw[dashed] (\k,0)--(\k,2); } \foreach \l in {0,1,2}{ \draw (0,\l)--(10,\l); } } $$ \caption{Fusion rules of $\mathop{\mathsf{SU}}(2)_{10}\times \mathop{\mathsf{Spin}}(11)_1$} \label{fig:A11A3} \end{figure} \begin{figure} \label{fig:E6SimplecurrentPrincipalGraph} $$ \underbrace{ \tikzmath[1]{ \node at (0,0) {$\bullet$}; \node at (1,0) {$\bullet$}; \node at (.5,1) {$\bullet$}; \node at (0,2) {$\bullet$}; \node at (1,2) {$\bullet$}; \node at (0,0) [below] {$\scriptstyle{\rho_{0,0}}$}; \node at (1,0) [below] {$\scriptstyle{\rho_{10,2}}$}; \draw (0,0)--(.5,1); \draw (1,0)--(.5,1); \draw (0,2)--(.5,1); \draw (1,2)--(.5,1); } \cdots \tikzmath[1]{ \node at (0,0) {$\bullet$}; \node at (1,0) {$\bullet$}; \node at (.5,1) {$\bullet$}; \node at (0,2) {$\bullet$}; \node at (1,2) {$\bullet$}; \node at (0,0) [below] {$\scriptstyle{\rho_{10,0}}$}; \node at (1,0) [below] {$\scriptstyle{\rho_{0,2}}$}; \draw (0,0)--(.5,1); \draw (1,0)--(.5,1); \draw (0,2)--(.5,1); \draw (1,2)--(.5,1); } }_6 \underbrace{ \tikzmath[1]{ \node at (0,0) {$\bullet$}; \node at (1,0) {$\bullet$}; \node at (.5,1) {$\bullet$}; \node at (0,2) {$\bullet$}; \node at (1,2) {$\bullet$}; \node at (0,0) [below] {$\scriptstyle{\rho_{1,1}}$}; \node at (1,0) [below] {$\scriptstyle{\rho_{9,1}}$}; \draw (0,0)--(.5,1); \draw (1,0)--(.5,1); \draw (0,2)--(.5,1); \draw (1,2)--(.5,1); } \tikzmath[1]{ \node at (0,0) {$\bullet$}; \node at (1,0) {$\bullet$}; \node at (.5,1) {$\bullet$}; \node at (0,2) {$\bullet$}; \node at (1,2) {$\bullet$}; \node at (0,0) [below] {$\scriptstyle{\rho_{3,1}}$}; \node at (1,0) [below] {$\scriptstyle{\rho_{7,1}}$}; \draw (0,0)--(.5,1); \draw (1,0)--(.5,1); \draw (0,2)--(.5,1); \draw (1,2)--(.5,1); } }_{2} \tikzmath[1]{ \node at (.5,0) {$\bullet$}; \node at (0,1) {$\bullet$}; \node at (1,1) {$\bullet$}; \node at (-.25,2) {$\bullet$}; \node at (.25,2) {$\bullet$}; \node at (.75,2) {$\bullet$}; \node at (1.25,2) {$\bullet$}; \node at (.5,0) [below] {$\scriptstyle{\rho_{5,1}}$}; \draw (0.5,0)--(0,1); \draw (.5,0)--(1,1); \draw (0,1)--(-.25,2); \draw (0,1)--(.25,2); \draw (1,1)--(.75,2); \draw (1,1)--(1.25,2); } \underbrace{ \tikzmath[1]{ \node at (0,0) {$\bullet$}; \node at (1,0) {$\bullet$}; \node at (.5,1) {$\bullet$}; \node at (0,2) {$\bullet$}; \node at (1,2) {$\bullet$}; \node at (0,0) [below] {$\scriptstyle{\rho_{1,0}}$}; \node at (1,0) [below] {$\scriptstyle{\rho_{9,2}}$}; \draw (0,0)--(.5,1); \draw (1,0)--(.5,1); \draw (0,2)--(.5,1); \draw (1,2)--(.5,1); } \cdots \tikzmath[1]{ \node at (0,0) {$\bullet$}; \node at (1,0) {$\bullet$}; \node at (.5,1) {$\bullet$}; \node at (0,2) {$\bullet$}; \node at (1,2) {$\bullet$}; \node at (0,0) [below] {$\scriptstyle{\rho_{4,1}}$}; \node at (1,0) [below] {$\scriptstyle{\rho_{6,1}}$}; \draw (0,0)--(.5,1); \draw (1,0)--(.5,1); \draw (0,2)--(.5,1); \draw (1,2)--(.5,1); } }_{8} $$ \caption{(Dual) principal graph for $\mathcal{A}_{\mathop{\mathsf{SU}}(2),{10}}\otimes\mathcal{A}_{\mathop{\mathsf{Spin}}(11),1}\subset \mathcal{A}_{N\subset M}$} \end{figure} \begin{figure} $$ \tikzmath[1]{ \foreach \k in {0,...,4}{ \foreach \l in {0,1,2}{ \pgfmathparse{Mod(\k+\l,2)==0?1:0} \ifnum\pgfmathresult>0 \draw (\k,\l) circle (.15); \node at (\k,\l) {$\bullet$}; \else \node at (\k,\l) {$\bullet$}; \fi \node at (\k,\l) [below right] {$\scriptstyle{\alpha_{\k,\l}}$}; } } \foreach \k in {0,...,4}{ 
\draw[dashed] (\k,0)--(\k,2); } \foreach \l in {0,1,2}{ \draw (0,\l)--(4,\l); } \node at (4.5,.5) {$\bullet$}; \node at (4.5,.5) [below right] {$\scriptstyle{\beta_{5,1}}$}; \node at (4.5,1.5) {$\bullet$}; \node at (4.5,1.5) [below right] {$\scriptstyle{\bar\beta_{5,1}}$}; \draw (4.5,.5)--(4,1)--(4.5,1.5); \node at (6.5,1) {$\bullet$}; \node at (6.5,1) [below right] {$\scriptstyle{\alpha_{5,0}}$}; \draw[dashed] (4.5,.5)--(6.5,1)--(4.5,1.5); \draw (4,0)--(6.5,1)--(4,2); \draw (4.5,.5) circle (.15); \draw (4.5,1.5) circle (.15); } $$ \caption{The fusion graph of $\bim[+]M\mathcal{C} M$ for $\mathcal{A}_{\mathop{\mathsf{SU}}(2),{10}}\otimes\mathcal{A}_{\mathop{\mathsf{Spin}}(11),1}\subset \mathcal{A}_{N\subset M}$ for $E_6$} \label{fig:E6DualFusionGraph} \end{figure} \end{example} \begin{example}[$E_{8}$-cases] Note that the net $\mathcal{B}_{N\subset M}$ for the $E_8$ subfactor can be realized as $\mathcal{B}_{D_{16}}\otimes \mathcal{A}_{\mathop{\mathsf{F}_4},1}$, where we can replace $\mathcal{A}_{\mathop{\mathsf{F}_4},1}$ by $\tilde\mathcal{B}\supset\tilde \mathcal{A}_{28}$, the extension from Proposition \ref{prop:MirrorExtension} applied to $\mathcal{B}=\mathcal{A}_{\mathop{\mathsf{G}_2},1}\supset \mathcal{A}_{28}$. The modular invariant of the inclusion $\mathcal{A}_{28}\otimes \mathcal{A}_{\mathop{\mathsf{F}_4},1} \subset \mathcal{B}_{D_{16}}\otimes \mathcal{A}_{\mathop{\mathsf{F}_4},1}$ is $Z_{D_{16}}\otimes I_{2}$. \end{example} \section{Categorical Picture and Vertex Operator Algebras} \label{sec:VOA} Local irreducible extensions $\mathcal{B}\supset \mathcal{A}$ of completely rational nets are characterized by commutative Q-systems $\Theta\in \Rep(\mathcal{A})$ \cite{LoRe1995}, and the representation theory is given by the ambichiral sectors $\bim[0]M\mathcal{C} M$. The Q-system is a commutative (Frobenius) algebra in the braided tensor category $\Rep(\mathcal{A})$. Because $\Theta$ is commutative, the right-modules $\Mod(\Theta)=\mathcal{C}_\Theta$, see \cite{KiOs2002}, themselves form a tensor category. This category is equivalent with $\bim[+] M\mathcal{C} M$. Interchanging the braiding, there is another tensor product under which $\Mod(\Theta)$ is equivalent with $\bim[-] M\mathcal{C} M$. The ambichiral sectors $\bim[0]M\mathcal{C} M$ are braided equivalent with the category of local or dyslexic modules $\Mod_0(\Theta)$, see \cite{BiKaLo2014}. The same categorical structure arises for extensions of vertex operator algebras \cite{KiOs2002, HuKiLe2014}. It follows: \begin{prop} \label{prop:VOAext} Let $\mathcal{A}$ be a completely rational conformal net and $V$ a vertex operator algebra, such that the category $\mathcal{C}_V$ has a natural vertex tensor category structure (\cf \cite{HuKiLe2014}) and is braided equivalent to $\Rep(\mathcal{A})$. Then for every local irreducible extension $\mathcal{B}\supset\mathcal{A}$ there exists a vertex operator algebra $V_\mathcal{B}\supset V$, whose category of modules is braided equivalent to $\Rep(\mathcal{B})$. \end{prop} Using this proposition we can transport our result to vertex operator algebras. By \cite[Proposition 8.2.6]{FrKe1993}, ribbon categories with $\mathop{\mathsf{SU}}(2)_k$ fusion rules are determined by their twists, which are given by the exponentials of the conformal weights using \cite{GuLo1996}. The fusion rules calculated in \cite{Wa} coincide with those of the corresponding affine Kac--Moody VOA. Thus we can conclude that the modular tensor categories are equivalent.
For a VOA corresponding to the net $\tilde{\mathcal{A}}_k$, \ie a VOA which has the opposite braiding of $\mathop{\mathsf{SU}}(2)_k$, we could in principle apply Proposition \ref{prop:VOAext}, but we do not know that the categories for the Virasoro minimal models are equivalent for VOAs and conformal nets. But we can argue as follows. Let $V_k=V_{\mathop{\mathsf{SU}}(2)_k}$ be the vertex operator algebra of the affine Kac--Moody algebra $\hat{\mathfrak{sl}}_2$ at level $k$. As in Proposition \ref{prop:Akmirror} we get an inclusion into $V_{E_8}^{\otimes k}$, where $V_{E_8}$ is the vertex operator algebra associated with the even lattice $E_8$, which coincides by the Frenkel--Kac construction with the affine Kac--Moody algebra of the Lie algebra $E_8$ at level 1. Let $\tilde V_k$ be the coset of the inclusion $V_k\subset V_{E_8}^{\otimes k}$. Then $V_{E_8}^{\otimes k}$ decomposes as $$ \bigoplus_{i,l} Z_{il}\, M_i\otimes \tilde M_l \,, $$ where $M_i$ are modules of $V_{k}$ and $\tilde M_l$ are modules of the coset VOA $\tilde V_k$. We have $Z_{i0}=\delta_{i,0}$ and $Z_{0l}=\delta_{l,0}$. We call such an inclusion $V_k\subset V_{E_8}^{\otimes k}$ normal. By the same argument as in Proposition \ref{prop:VOAext}, the analogue of Proposition \ref{prop:Mirror} holds with the same proof, and $\tilde V_k$ has as representation category $\mathop{\mathsf{SU}}(2)_k$ with the opposite braiding. Then Corollary \ref{cor:Doubles} together with Proposition \ref{prop:VOAext} gives: \begin{prop} There is a unitary rational VOA $\tilde V_k$ which has the opposite braiding of $\mathop{\mathsf{SU}}(2)_k$. For every subfactor $N\subset M$ with $[M:N]<4$ there is a unitary rational VOA $V_{N\subset M}$, whose category of modules is equivalent to the quantum double $D(N\subset M)$ of the subfactor $N\subset M$, \ie the Drinfeld center of the fusion category of the even part of $N\subset M$. \end{prop} \begin{rmk} For the construction of $\tilde V_k$ and $V_{N\subset M}$ we could also use directly the correspondence between conformal nets and vertex operator algebras in \cite{CaKaLoWi2015}. We would still have to use the categorical arguments to show that the corresponding representation categories are equivalent. It would be nice to have a result that states that the representation categories of $V$ and $\mathcal{A}_V$ are the same. \end{rmk} \begin{example} Let $V$ be the vertex operator algebra obtained by the $\mathbb{Z}_2$-simple current extension of $\hat{\mathfrak{sl}}_{2,10}\otimes \hat{\mathfrak{so}}_{11,1}$. Then the category of modules of $V$ is equivalent to $Z(\frac12E_6)$, the quantum double of the $E_6$ subfactor. \end{example} \section{Conclusions and Outlook} We gave some structural results for completely rational conformal nets whose representation category is a quantum double (Drinfeld center of a unitary fusion category). We showed that the quantum doubles of subfactors with index less than 4, or equivalently the Drinfeld centers of their even part fusion categories, are realized as representation theories in chiral conformal field theory, either as conformal nets of von Neumann algebras or as VOAs. The most interesting case is the realization of the quantum double of $E_6$ (or $\bar E_6$) as a $\mathbb{Z}_2$-simple current extension of $\mathop{\mathsf{SU}}(2)_{10}\times \mathop{\mathsf{Spin}}(11)_1$. In particular, in \cite{HoRoWa2008} it was shown that the quantum double of $E_6$ is universal for topological quantum computing. On the other hand, it was proposed in the same article that it might be exotic. Our construction shows that it is indeed not exotic.
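As a quick consistency check of this identification, note that the extension sits at central charge $$ c=\frac{3k}{k+2}\Big|_{k=10}+\frac{11}{2}=\frac{5}{2}+\frac{11}{2}=8\,, $$ using $c\big(\mathop{\mathsf{SU}}(2)_k\big)=3k/(k+2)$ and $c\big(\mathop{\mathsf{Spin}}(n)_1\big)=n/2$; this is the central charge of the $E_8$ level 1 theory that underlies the mirror extension construction above.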
This example was the main motivation of the article, because no direct realization in conformal field theory or quantum groups is contained in the literature. Further, the even part of $E_6$ is the smallest non-trivial fusion category \cite{Os2013} in the sense that it is neither braided nor coming from groups; for braided fusion categories and for groups the Drinfeld centers are easy to describe. Despite the fact that the even part of $E_6$ has no braiding, the realization as a CFT is still very easy. We conjecture that the double of $E_6$ is also related to Chern--Simons theory with the non-simply connected gauge group $(\mathop{\mathsf{SU}}(2)\times \mathop{\mathsf{Spin}}(11)) /\mathbb{Z}_2$. It is also related to the $\mathop{\mathsf{SU}}(2)_{10}\times \mathop{\mathsf{Spin}}(11)_1$ quantum group as a kind of quantum subgroup. Indeed, the $\mathbb{Z}_2$-simple current extension corresponds to a quantum subgroup in the sense of Ocneanu \cite{Oc2001}. It would be interesting to find realizations of the doubles of exotic subfactors, such as the Haagerup subfactor, using methods similar to the ones employed here. \def\cprime{$'$} \begin{bibdiv} \begin{biblist} \bib{Bc1996}{techreport}{ author={Böckenhauer, Jens}, title={{An Algebraic Formulation of Level One Wess-Zumino-Witten Models}}, date={1996}, volume={8}, number={DESY 95-138}, url={http://arxiv.org/abs/hep-th/9507047}, } \bib{BcEv1998}{article}{ author={Böckenhauer, J.}, author={Evans, D.~E.}, title={{Modular invariants, graphs and {$\alpha$}-induction for nets of subfactors. {I}}}, date={1998}, ISSN={0010-3616}, journal={Comm. Math. Phys.}, volume={197}, number={2}, pages={361–386}, eprint={arXiv:hep-th/9801171}, url={http://dx.doi.org/10.1007/s002200050455}, review={\MR{1652746 (2000c:46121)}}, } \bib{BcEvKa2000}{article}{ author={Böckenhauer, Jens}, author={Evans, David~E.}, author={Kawahigashi, Yasuyuki}, title={{Chiral structure of modular invariants for subfactors}}, date={2000}, ISSN={0010-3616}, journal={Comm. Math. Phys.}, volume={210}, number={3}, pages={733–784}, url={http://dx.doi.org/10.1007/s002200050798}, review={\MR{1777347 (2001k:46097)}}, } \bib{BcEvKa2001}{article}{ author={Böckenhauer, Jens}, author={Evans, David~E.}, author={Kawahigashi, Yasuyuki}, title={Longo-{R}ehren subfactors arising from {$\alpha$}-induction}, date={2001}, ISSN={0034-5318}, journal={Publ. Res. Inst. Math. Sci.}, volume={37}, number={1}, pages={1\ndash 35}, url={http://projecteuclid.org/euclid.prims/1145476688}, review={\MR{1815993 (2002d:46053)}}, } \bib{BcEvKa1999}{article}{ author={Böckenhauer, Jens}, author={Evans, David~E.}, author={Kawahigashi, Yasuyuki}, title={{On {$\alpha$}-induction, chiral generators and modular invariants for subfactors}}, date={1999}, ISSN={0010-3616}, journal={Comm. Math. Phys.}, volume={208}, number={2}, pages={429–487}, url={http://dx.doi.org/10.1007/s002200050765}, review={\MR{1729094 (2001c:81180)}}, } \bib{Bi2012}{article}{ author={Bischoff, Marcel}, title={{Models in Boundary Quantum Field Theory Associated with Lattices and Loop Group Models}}, date={2012}, ISSN={0010-3616}, journal={Comm. Math.
Phys.}, pages={1–32}, eprint={arXiv:1108.4889v1 [math-ph]}, url={http://dx.doi.org/10.1007/s00220-012-1511-2}, note={10.1007/s00220-012-1511-2}, } \bib{BiKaLo2014}{misc}{ author={Bischoff, Marcel}, author={Kawahigashi, Yasuyuki}, author={Longo, Roberto}, title={{Characterization of 2D rational local conformal nets and its boundary conditions: the maximal case}}, date={2014}, } \bib{BiKaLoRe2014}{article}{ author={Bischoff, Marcel}, author={Kawahigashi, Yasuyuki}, author={Longo, Roberto}, author={Rehren, Karl-Henning}, title={{Phase boundaries in algebraic conformal QFT}}, date={2014-05}, eprint={arxiv:1405.7863v1 [math-ph]}, url={http://arxiv.org/abs/1405.7863v1}, } \bib{BiKaLoRe2014-2}{book}{ author={Bischoff, Marcel}, author={Kawahigashi, Yasuyuki}, author={Longo, Roberto}, author={Rehren, Karl-Henning}, title={Tensor categories and endomorphisms of von neumann algebras: with applications to quantum field theory}, series={SpringerBriefs in Mathematical Physics}, publisher={Springer}, date={2015}, volume={3}, url={http://arxiv.org/abs/1407.4793}, } \bib{CaKaLoWi2015}{article}{ author={Carpi, Sebastiano}, author={Kawahigashi, Yasuyuki}, author={Longo, Roberto}, author={Weiner, Mih{\'a}ly}, title={From vertex operator algebras to conformal nets and back}, date={2015}, journal={arXiv preprint arXiv:1503.01260}, } \bib{CaMoSn2011}{article}{ author={Calegari, Frank}, author={Morrison, Scott}, author={Snyder, Noah}, title={Cyclotomic integers, fusion categories, and subfactors}, date={2011}, ISSN={0010-3616}, journal={Comm. Math. Phys.}, volume={303}, number={3}, pages={845\ndash 896}, url={http://dx.doi.org/10.1007/s00220-010-1136-2}, review={\MR{2786219 (2012e:18013)}}, } \bib{DrGeNiOs2010}{article}{ author={Drinfeld, Vladimir}, author={Gelaki, Shlomo}, author={Nikshych, Dmitri}, author={Ostrik, Victor}, title={On braided fusion categories. {I}}, date={2010}, ISSN={1022-1824}, journal={Selecta Math. (N.S.)}, volume={16}, number={1}, pages={1\ndash 119}, url={http://dx.doi.org/10.1007/s00029-010-0017-z}, review={\MR{2609644 (2011e:18015)}}, } \bib{DoHaRo1971}{article}{ author={Doplicher, Sergio}, author={Haag, Rudolf}, author={Roberts, John~E.}, title={Local observables and particle statistics. {I}}, date={1971}, ISSN={0010-3616}, journal={Comm. Math. Phys.}, volume={23}, pages={199\ndash 230}, review={\MR{0297259 (45 \#6316)}}, } \bib{DaMgNiOs2013}{article}{ author={Davydov, Alexei}, author={Müger, Michael}, author={Nikshych, Dmitri}, author={Ostrik, Victor}, title={{The {W}itt group of non-degenerate braided fusion categories}}, date={2013}, ISSN={0075-4102}, journal={J. Reine Angew. Math.}, volume={677}, pages={135–177}, review={\MR{3039775}}, } \bib{DoPi2002}{article}{ author={Doplicher, Sergio}, author={Piacitelli, Gherardo}, title={Any compact group is a gauge group}, date={2002}, ISSN={0129-055X}, journal={Rev. Math. Phys.}, volume={14}, number={7-8}, pages={873\ndash 885}, url={http://dx.doi.org/10.1142/S0129055X02001430}, note={Dedicated to Professor Huzihiro Araki on the occasion of his 70th birthday}, review={\MR{1932669 (2003g:81118)}}, } \bib{EvGa2011}{article}{ author={Evans, David~E.}, author={Gannon, Terry}, title={{The exoticness and realisability of twisted {H}aagerup-{I}zumi modular data}}, date={2011}, ISSN={0010-3616}, journal={Comm. Math. 
Phys.}, volume={307}, number={2}, pages={463–512}, url={http://dx.doi.org/10.1007/s00220-011-1329-3}, review={\MR{2837122 (2012m:17040)}}, } \bib{EvKa1998}{book}{ author={Evans, David~E.}, author={Kawahigashi, Yasuyuki}, title={{Quantum symmetries on operator algebras}}, series={{Oxford Mathematical Monographs}}, publisher={The Clarendon Press Oxford University Press}, address={New York}, date={1998}, ISBN={0-19-851175-2}, note={Oxford Science Publications}, review={\MR{1642584 (99m:46148)}}, } \bib{FrKe1993}{book}{ author={Fr{\"o}hlich, J{\"u}rg}, author={Kerler, Thomas}, title={Quantum groups, quantum categories and quantum field theory}, series={Lecture Notes in Mathematics}, publisher={Springer-Verlag, Berlin}, date={1993}, volume={1542}, ISBN={3-540-56623-6}, review={\MR{1239440 (95f:81042)}}, } \bib{FuRuSc2004}{article}{ author={Fuchs, J{\"u}rgen}, author={Runkel, Ingo}, author={Schweigert, Christoph}, title={T{FT} construction of {RCFT} correlators. {III}. {S}imple currents}, date={2004}, ISSN={0550-3213}, journal={Nuclear Phys. B}, volume={694}, number={3}, pages={277\ndash 353}, url={http://dx.doi.org/10.1016/j.nuclphysb.2004.05.014}, review={\MR{2076134 (2005e:81209)}}, } \bib{FrReSc1989}{article}{ author={Fredenhagen, K.}, author={Rehren, K.-H.}, author={Schroer, B.}, title={{Superselection sectors with braid group statistics and exchange algebras. {I}.\ {G}eneral theory}}, date={1989}, ISSN={0010-3616}, journal={Comm. Math. Phys.}, volume={125}, number={2}, pages={201–226}, url={http://projecteuclid.org/getRecord?id=euclid.cmp/1104179464}, review={\MR{1016869 (91c:81047)}}, } \bib{GoHaJo1989}{book}{ author={Goodman, Frederick~M.}, author={de~la Harpe, Pierre}, author={Jones, Vaughan F.~R.}, title={{Coxeter graphs and towers of algebras}}, series={{Mathematical Sciences Research Institute Publications}}, publisher={Springer-Verlag}, address={New York}, date={1989}, volume={14}, ISBN={0-387-96979-9}, url={http://dx.doi.org/10.1007/978-1-4613-9641-3}, review={\MR{999799 (91c:46082)}}, } \bib{GuLo1996}{article}{ author={Guido, Daniele}, author={Longo, Roberto}, title={The conformal spin and statistics theorem}, date={1996}, ISSN={0010-3616}, journal={Comm. Math. Phys.}, volume={181}, number={1}, pages={11\ndash 35}, url={http://projecteuclid.org/euclid.cmp/1104287623}, review={\MR{1410566 (98c:81121)}}, } \bib{Ha1994}{incollection}{ author={Haagerup, Uffe}, title={{Principal graphs of subfactors in the index range {$4<[M:N]<3+\sqrt2$}}}, date={1994}, booktitle={{Subfactors ({K}yuzeso, 1993)}}, publisher={World Sci. Publ., River Edge, NJ}, pages={1–38}, review={\MR{1317352 (96d:46081)}}, } \bib{Ha}{book}{ author={Haag, Rudolf}, title={{Local quantum physics}}, publisher={Springer Berlin}, date={1996}, } \bib{HuKiLe2014}{misc}{ author={Huang, Yi-Zhi}, author={Kirillov, Alexander~Jr.}, author={Lepowsky, James}, title={Braided tensor categories and extensions of vertex operator algebras}, date={2014}, url={http://arxiv.org/abs/1406.3420}, } \bib{HoRoWa2008}{article}{ author={Hong, Seung-Moon}, author={Rowell, Eric}, author={Wang, Zhenghan}, title={{On exotic modular tensor categories}}, date={2008}, ISSN={0219-1997}, journal={Commun. Contemp. Math.}, volume={10}, number={suppl. 
1}, pages={1049–1074}, url={http://dx.doi.org/10.1142/S0219199708003162}, review={\MR{2468378 (2009j:18005)}}, } \bib{HaYa2000}{article}{ author={Hayashi, Tomohiro}, author={Yamagami, Shigeru}, title={Amenable tensor categories and their realizations as {AFD} bimodules}, date={2000}, ISSN={0022-1236}, journal={J. Funct. Anal.}, volume={172}, number={1}, pages={19\ndash 75}, url={http://dx.doi.org/10.1006/jfan.1999.3521}, review={\MR{1749868 (2001d:46092)}}, } \bib{Iz2000}{article}{ author={Izumi, Masaki}, title={{The Structure of Sectors Associated with Longo–Rehren Inclusions\\I. General Theory}}, date={2000}, ISSN={0010-3616}, journal={Comm. Math. Phys.}, volume={213}, pages={127–179}, url={http://dx.doi.org/10.1007/s002200000234}, } \bib{Iz2001II}{article}{ author={Izumi, Masaki}, title={The structure of sectors associated with {L}ongo-{R}ehren inclusions. {II}. {E}xamples}, date={2001}, ISSN={0129-055X}, journal={Rev. Math. Phys.}, volume={13}, number={5}, pages={603\ndash 674}, url={http://dx.doi.org/10.1142/S0129055X01000818}, review={\MR{1832764 (2002k:46161)}}, } \bib{Jo2014}{article}{ author={Jones, Vaughan~F.R.}, title={Some unitary representations of {T}hompson's groups {$F$} and {$T$}}, date={2014}, journal={arXiv preprint arXiv:1412.7740}, } \bib{Jo1983}{article}{ author={Jones, V. F.~R.}, title={{Index for subfactors}}, date={1983}, ISSN={0020-9910}, journal={Invent. Math.}, volume={72}, number={1}, pages={1–25}, url={http://dx.doi.org/10.1007/BF01389127}, review={\MR{696688 (84d:46097)}}, } \bib{Ka2015}{article}{ author={Kawahigashi, Yasuyuki}, title={A remark on gapped domain walls between topological phases}, date={2015}, journal={arXiv preprint arXiv:1504.01088}, } \bib{KaLo2004}{article}{ author={Kawahigashi, Y.}, author={Longo, Roberto}, title={{Classification of local conformal nets. Case {$c < 1$}.}}, date={2004}, ISSN={0003-486X}, journal={Ann. Math.}, volume={160}, number={2}, pages={493–522}, } \bib{KaLoMg2001}{article}{ author={Kawahigashi, Y.}, author={Longo, Roberto}, author={Müger, Michael}, title={{Multi-Interval Subfactors and Modularity of Representations in Conformal Field Theory}}, date={2001}, journal={Comm. Math. Phys.}, volume={219}, pages={631–669}, eprint={arXiv:math/9903104}, } \bib{KiOs2002}{article}{ author={Kirillov, Jr.~Alexander}, author={Ostrik, Viktor}, title={{On a {$q$}-analogue of the {M}c{K}ay correspondence and the {ADE} classification of {$\germ {sl}\_2$} conformal field theories}}, date={2002}, ISSN={0001-8708}, journal={Adv. Math.}, volume={171}, number={2}, pages={183–227}, url={http://dx.doi.org/10.1006/aima.2002.2072}, review={\MR{1936496 (2003j:17019)}}, } \bib{Ko1986}{article}{ author={Kosaki, Hideki}, title={{Extension of {J}ones' theory on index to arbitrary factors}}, date={1986}, ISSN={0022-1236}, journal={J. Funct. Anal.}, volume={66}, number={1}, pages={123–140}, url={http://dx.doi.org/10.1016/0022-1236(86)90085-6}, review={\MR{829381 (87g:46093)}}, } \bib{Lo2003}{article}{ author={Longo, Roberto}, title={{Conformal Subnets and Intermediate Subfactors}}, date={2003}, ISSN={0010-3616}, journal={Comm. Math. Phys.}, volume={237}, pages={7–30}, eprint={arXiv:math/0102196v2 [math.OA]}, url={http://dx.doi.org/10.1007/s00220-003-0814-8}, } \bib{LoRe1995}{article}{ author={Longo, Roberto}, author={Rehren, Karl-Henning}, title={{Nets of Subfactors}}, date={1995}, journal={Rev. Math. 
Phys.}, volume={7}, pages={567–597}, eprint={arXiv:hep-th/9411077}, } \bib{LoRo1997}{article}{ author={Longo, R.}, author={Roberts, J.~E.}, title={{A theory of dimension}}, date={1997}, ISSN={0920-3036}, journal={K-Theory}, volume={11}, number={2}, pages={103–159}, eprint={arXiv:funct-an/9604008v1}, url={http://dx.doi.org/10.1023/A:1007714415067}, review={\MR{1444286 (98i:46065)}}, } \bib{Mg2003}{article}{ author={Müger, Michael}, title={{From subfactors to categories and topology. {I}. {F}robenius algebras in and {M}orita equivalence of tensor categories}}, date={2003}, ISSN={0022-4049}, journal={J. Pure Appl. Algebra}, volume={180}, number={1-2}, pages={81–157}, url={http://dx.doi.org/10.1016/S0022-4049(02)00247-5}, review={\MR{1966524 (2004f:18013)}}, } \bib{Mg2003II}{article}{ author={Müger, Michael}, title={{From subfactors to categories and topology. {II}. {T}he quantum double of tensor categories and subfactors}}, date={2003}, ISSN={0022-4049}, journal={J. Pure Appl. Algebra}, volume={180}, number={1-2}, pages={159–219}, url={http://dx.doi.org/10.1016/S0022-4049(02)00248-7}, review={\MR{1966525 (2004f:18014)}}, } \bib{Mg2005}{article}{ author={Müger, Michael}, title={{Conformal Orbifold Theories and Braided Crossed G-Categories}}, date={2005}, ISSN={0010-3616}, journal={Comm. Math. Phys.}, volume={260}, pages={727–762}, url={http://dx.doi.org/10.1007/s00220-005-1291-z}, } \bib{Mg2010}{inproceedings}{ author={Müger, Michael}, title={{On superselection theory of quantum fields in low dimensions}}, date={2010}, booktitle={{X{VI}th {I}nternational {C}ongress on {M}athematical {P}hysics}}, publisher={World Sci. Publ., Hackensack, NJ}, pages={496–503}, url={http://dx.doi.org/10.1142/9789814304634_0041}, review={\MR{2730815 (2012i:81165)}}, } \bib{Ma2000}{article}{ author={Masuda, Toshihiko}, title={Generalization of {L}ongo-{R}ehren construction to subfactors of infinite depth and amenability of fusion algebras}, date={2000}, ISSN={0022-1236}, journal={J. Funct. Anal.}, volume={171}, number={1}, pages={53\ndash 77}, url={http://dx.doi.org/10.1006/jfan.1999.3523}, review={\MR{1742858 (2001f:46093)}}, } \bib{MoSe1990}{incollection}{ author={Moore, Gregory}, author={Seiberg, Nathan}, title={{Lectures on {RCFT}}}, date={1990}, booktitle={{Superstrings '89 ({T}rieste, 1989)}}, publisher={World Sci. Publ., River Edge, NJ}, pages={1–129}, review={\MR{1159969 (93m:81133a)}}, } \bib{Oc2001}{incollection}{ author={Ocneanu, Adrian}, title={Operator algebras, topology and subgroups of quantum symmetry---construction of subgroups of quantum groups}, date={2001}, booktitle={Taniguchi {C}onference on {M}athematics {N}ara '98}, series={Adv. Stud. Pure Math.}, volume={31}, publisher={Math. Soc. Japan, Tokyo}, pages={235\ndash 263}, review={\MR{1865095 (2002j:57059)}}, } \bib{Oc1988}{incollection}{ author={Ocneanu, Adrian}, title={Quantized groups, string algebras and {G}alois theory for algebras}, date={1988}, booktitle={Operator algebras and applications, {V}ol.\ 2}, series={London Math. Soc. Lecture Note Ser.}, volume={136}, publisher={Cambridge Univ. 
Press, Cambridge}, pages={119\ndash 172}, review={\MR{996454 (91k:46068)}}, } \bib{Os2013}{article}{ author={Ostrik, Victor}, title={Pivotal fusion categories of rank 3 (with an appendix written jointly with dmitri nikshych)}, date={2013}, journal={arXiv preprint arXiv:1309.4822}, } \bib{Po1994-2}{article}{ author={Popa, Sorin}, title={Classification of amenable subfactors of type {II}}, date={1994}, ISSN={0001-5962}, journal={Acta Math.}, volume={172}, number={2}, pages={163\ndash 255}, url={http://dx.doi.org/10.1007/BF02392646}, review={\MR{1278111 (95f:46105)}}, } \bib{Po1994}{article}{ author={Popa, Sorin}, title={Symmetric enveloping algebras, amenability and {AFD} properties for subfactors}, date={1994}, ISSN={1073-2780}, journal={Math. Res. Lett.}, volume={1}, number={4}, pages={409\ndash 425}, url={http://dx.doi.org/10.4310/MRL.1994.v1.n4.a2}, review={\MR{1302385 (95i:46095)}}, } \bib{Po1993}{book}{ author={Popa, Sorin}, title={Classification of subfactors and their endomorphisms}, series={CBMS Regional Conference Series in Mathematics}, publisher={Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI}, date={1995}, volume={86}, ISBN={0-8218-0321-2}, review={\MR{1339767 (96d:46085)}}, } \bib{Re2000}{article}{ author={Rehren, K.-H.}, title={{Canonical tensor product subfactors}}, date={2000}, ISSN={0010-3616}, journal={Comm. Math. Phys.}, volume={211}, number={2}, pages={395–406}, url={http://dx.doi.org/10.1007/s002200050818}, review={\MR{1754521 (2001d:46093)}}, } \bib{Re1989}{incollection}{ author={Rehren, Karl-Henning}, title={Braid group statistics and their superselection rules}, date={1990}, booktitle={The algebraic theory of superselection sectors ({P}alermo, 1989)}, publisher={World Sci. Publ., River Edge, NJ}, pages={333\ndash 355}, review={\MR{1147467}}, } \bib{Sc2001}{article}{ author={Schauenburg, Peter}, title={The monoidal center construction and bimodules}, date={2001}, ISSN={0022-4049}, journal={J. Pure Appl. Algebra}, volume={158}, number={2-3}, pages={325\ndash 346}, url={http://dx.doi.org/10.1016/S0022-4049(00)00040-2}, review={\MR{1822847 (2002f:18013)}}, } \bib{Wa}{article}{ author={Wassermann, Antony}, title={{Operator algebras and conformal field theory III. Fusion of positive energy representations of LSU(N) using bounded operators}}, date={1998}, journal={Invent. Math.}, volume={133}, number={3}, pages={467–538}, eprint={arXiv:math/9806031v1 [math.OA]}, } \bib{Xu2000}{article}{ author={Xu, Feng}, title={{Jones-{W}assermann subfactors for disconnected intervals}}, date={2000}, ISSN={0219-1997}, journal={Commun. Contemp. Math.}, volume={2}, number={3}, pages={307–347}, eprint={arXiv:q-alg/9704003}, url={http://dx.doi.org/10.1142/S0219199700000153}, review={\MR{1776984 (2001f:46094)}}, } \bib{XuUnpublished}{misc}{ author={Xu, Feng}, title={Unpublished note}, date={2001}, note={As cited in appendix [CMS11]}, } \bib{Xu2007}{article}{ author={Xu, Feng}, title={{Mirror extensions of local nets}}, date={2007}, ISSN={0010-3616}, journal={Comm. Math. Phys.}, volume={270}, number={3}, pages={835–847}, url={http://dx.doi.org/10.1007/s00220-006-0184-0}, review={\MR{2276468 (2008f:81148)}}, } \bib{Xu2009}{article}{ author={Xu, Feng}, title={{On Affine Orbifold Nets Associated with Outer Automorphisms}}, date={2009}, ISSN={0010-3616}, journal={Comm. Math. 
Phys.}, volume={291}, pages={845–861}, eprint={arXiv:1002.2710v1 [math.OA]}, url={http://dx.doi.org/10.1007/s00220-009-0763-y}, } \end{biblist} \end{bibdiv} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{\label{sec:int}INTRODUCTION} The study of the thermodynamic as well as microscopic properties of Fermi-liquid systems has a long history,\cite{landau57,landau59,pines,gfg} but the interest in nonanalytic corrections to the Fermi-liquid behavior is more recent. The existence of well-defined quasiparticles at the Fermi surface is the basis for the phenomenological description due to Landau\cite{landau57} and justifies the fact that a system of interacting fermions is similar in many ways to the Fermi gas. The Landau theory of the Fermi liquid is a fundamental paradigm which has been successful in describing properties of ${}^3$He, metals, and two-dimensional electronic systems. In particular, the leading temperature dependence of the specific heat or the spin susceptibility (i.e., $C_s$ linear in $T$ and $\chi_s$ approaching a constant) is confirmed both experimentally and in microscopic calculations. However, deviations from the ideal Fermi gas behavior exist in the subleading terms. For example, while the low-temperature dependence of $C_s/T$ for a Fermi gas is a regular expansion in $T^2$, a correction to $C_s/T$ of the form $T^2 \ln T$ was found in three dimensions.\cite{pethick73} These nonanalytic features are enhanced in two dimensions and, in fact, a correction linear in $T$ is found.\cite{coffey93,PhysRevB.55.9452,PhysRevB.68.155113} These effects were observed in ${}^3$He, both in the three-\cite{greywall83} and two-dimensional case.\cite{casey03} The nonanalytic corrections manifest themselves not only in the temperature dependence. For the special case of the spin susceptibility, it is of particular interest to determine also its dependence on the wave vector $Q$. The deviation $\delta\chi_s$ from the $T=Q=0$ value parallels the temperature dependence of the specific heat discussed above: from a second-order calculation in the electron interaction, corrections proportional to $Q^2\ln Q$ and $Q$ were obtained in three and two dimensions, respectively.\cite{PhysRevB.55.9452,hirashima98,PhysRevB.68.155113} On the other hand, the dependence on $T$ was found to be $\delta\chi_s \sim T^2$ in three dimensions\cite{PhysRevB.16.1933,PhysRevB.55.9452} (without any logarithmic factor) and $\delta\chi_s \sim T$ in two dimensions.\cite{hirashima98,JETPLett.58.709,PhysRevLett.86.5337,PhysRevB.64.054414,PhysRevB.68.155113} We cite here the final results in the two-dimensional case (on which we focus in this paper), valid to second order in the interaction potential $V(q)$, \begin{equation}\label{eq:deltachi_2} \delta\chi_s^{(2)}(T,Q)=2K(T,Q)V^{2}(2 k_F), \end{equation} where \begin{equation}\label{eq:KT} K(T,0)=\frac{m^3}{16\pi^3}\frac{k_B T}{E_{F}} \end{equation} and \begin{equation}\label{eq:KQ} K(0,Q)=\frac{m^3}{48\pi^4}\frac{v_F Q}{E_{F}}. \end{equation} Here $m$ is the effective mass, $k_F$ is the Fermi wave vector, $E_F=k_F^2/2m$, and we use $\hbar=1$ throughout the paper. Our purpose is to extend this perturbative result to higher order by taking into account the Cooper channel renormalization of the scattering amplitudes.
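A one-line comparison of Eqs.~(\ref{eq:KT}) and (\ref{eq:KQ}) fixes the relative weight of the two scales, \begin{equation} \frac{K(0,Q)}{K(T,0)}=\frac{v_F Q}{3\pi k_B T}, \end{equation} so temperature and momentum enter the lowest order correction through the correspondence $k_B T\leftrightarrow v_F Q/3\pi$; this is worth keeping in mind when translating between the two limits in what follows.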
The extension to higher order of the second-order results has mostly focused on the temperature dependence, both for the specific heat\cite{chubukov05a,chubukov05b,chubukov06,chubukov07,aleiner06} and the spin susceptibility.\cite{chubukov05a,PhysRevB.74.205122,ProcNatlAcadSci.103.15765,schwiete06} Recently the spin susceptibility has been measured in a silicon inversion layer as a function of temperature.\cite{PhysRevB.67.205407} A strong dependence on $T$ is observed, seemingly incompatible with a $T^2$ Fermi-liquid correction, and the measurements also reveal that the (positive) value of the spin susceptibility is \emph{decreasing} with temperature, in disagreement with the lowest order result cited above. This discrepancy has stimulated further theoretical investigations in the nonperturbative regime. Possible mechanisms that lead to a negative slope were proposed in Refs.~\onlinecite{PhysRevB.74.205122} and \onlinecite{ProcNatlAcadSci.103.15765} if strong renormalization effects in the Cooper channel become important. These can drastically change the picture given by the lowest order perturbation theory, allowing for a nonmonotonic behavior and, in particular, a negative slope at small temperatures. The mechanism we consider here to modify the linear $Q$ dependence is very much related to Ref.~\onlinecite{PhysRevB.74.205122}. There it is found that, at $Q=0$ and finite temperature, $V^2(2k_F)$ in Eq.~(\ref{eq:deltachi_2}) is substituted by $|\Gamma(\pi)|^{2}$, where \begin{equation}\label{eq:Gamma_theta_def} \Gamma(\theta)\equiv\sum_{n}\Gamma_{n}e^{in\theta} \end{equation} is the scattering amplitude in the Cooper channel with $\theta$ being the scattering angle ($\theta=\pi$ corresponds to the backscattering process). An additional temperature dependence arises from the renormalization of the Fourier amplitudes \begin{equation}\label{eq:GammaT} \Gamma_{n}(k_{B}T) = \frac{V_{n}}{1-\frac{m V_{n}}{2\pi}\ln\frac{k_B T}{W}}, \end{equation} where $W\sim E_{F}$ is a large energy scale and $V_n$ are the Fourier amplitudes of the interaction potential for scattering in the vicinity of the Fermi surface \begin{equation}\label{eq:Vdef} V(2k_{F}\sin{\theta/2}) = \sum_{n}V_{n}e^{in\theta}. \end{equation} A negative slope of $\delta\chi_s$ is possible for sufficiently small $T$ if one of the amplitudes $V_n$ is negative.\cite{PhysRevB.74.205122,PhysRevLett.15.524,PhysRevB.48.1097} For $\frac{m V_{n}}{2\pi}\ln\frac{k_B T_{KL}}{W}=1$, the denominator in Eq.~(\ref{eq:GammaT}) vanishes and the amplitude $\Gamma_{n}$ diverges, which corresponds to the Kohn-Luttinger (KL) instability.\cite{PhysRevLett.15.524} At $T\gtrsim T_{KL}$ the derivative of the spin susceptibility is negative due to the singularity in $\Gamma_{n}(k_{B}T)$ and becomes positive far away from $T_{KL}$. At $T=0$ an analogous effect occurs for the momentum dependence. Indeed, it is widely expected that the functional form of the spin susceptibility in terms of $k_B T$ or $v_F Q$ is similar. As in the case of a finite temperature, the lowest order expression gains an additional nontrivial dependence on $Q$ due to the renormalization of the backscattering amplitude $V^2(2k_F)$. We obtain \begin{equation}\label{eq:MainRes} \delta\chi_s(Q)=2K(0,Q)|\Gamma(\pi)|^2, \end{equation} where $\Gamma(\pi)$ is given by Eq.~(\ref{eq:Gamma_theta_def}) and \begin{equation}\label{eq:GammaDef} \Gamma_n(v_{F}Q)=\frac{V_n}{1-\frac{m V_n}{2\pi}\ln\frac{v_F Q}{W}}.
\end{equation} This result is obtained from the renormalization of the interaction in the Cooper channel, while other possible effects are neglected. Moreover, at each perturbative order, only the leading term in the limit of small $Q$ is kept. Therefore, corrections to Eq.~(\ref{eq:MainRes}) exist which, for example, would modify the proportionality of $\delta\chi_s$ to $|\Gamma(\pi)|^2$ (see Ref.~\onlinecite{PhysRevB.74.205122}). However, in the region $v_F Q \gtrsim k_B T_{KL}$, close to the divergence of the amplitude $\Gamma_n(v_{F}Q)$ associated with the most negative $V_n$, Eq.~(\ref{eq:MainRes}) is expected to give the most important contribution to the spin susceptibility. The result of Eqs.~(\ref{eq:MainRes}) and (\ref{eq:GammaDef}) could perhaps have been easily anticipated and, in fact, it was suggested already in Ref.~\onlinecite{PhysRevB.77.045108}. The question of the functional dependence of the spin susceptibility on momentum is crucial in light of the ongoing studies of nuclear spin ferromagnetism,\cite{PhysRevLett.98.156401,PhysRevB.77.045108,PhysRevB.67.144520} as the stability of the ferromagnetic phase is governed by the electron spin susceptibility. In this context, Eqs.~(\ref{eq:MainRes}) and (\ref{eq:GammaDef}) were motivated by a renormalization-group argument. We provide here a complete derivation, based on the standard diagrammatic approach. The paper is organized as follows: in Sec.~\ref{sec:ppp} we discuss the origin of the Cooper instability and derive expressions for a general ladder diagram, which is an essential ingredient for the higher order corrections to the spin susceptibility. In Sec.~\ref{sec:2od} we give a short overview of the lowest order results to understand the origin of the nonanalytic corrections. Based on the results of Sec.~\ref{sec:ppp}, we provide an alternative derivation of one of the contributions, which can be easily generalized to higher order. Sec.~\ref{sec:hod} contains the main finding of this paper: the Cooper renormalization of the nonanalytic correction to the spin susceptibility is obtained there. We find an efficient approach to calculate higher order diagrams based on the second-order result. In Sec.~\ref{sec:RG} the diagrammatic calculation is discussed in relation to the renormalization-group argument of Ref.~\onlinecite{PhysRevB.77.045108}. Sec.~\ref{sec:con} contains our concluding remarks. More technical details have been moved to the Appendixes~\ref{app:lad}--\ref{app:ppsmallQ}. \section{\label{sec:ppp}PARTICLE-PARTICLE PROPAGATOR} In this section we consider a generic particle-particle propagator, which includes $n$ interaction lines, as depicted in Fig.~\ref{fig:2}. The incoming and outgoing frequencies and momenta are $k_\mu, p_\mu$ and $k'_\mu, p'_\mu$, respectively, using the relativistic notation $k_\mu=(\omega_{k},\mathbf{k})$. This particle-particle propagator represents an essential part of the diagrams considered in this paper and corresponds to the following expression: \begin{align}\label{eq:Pidef} \notag \Pi&^{(n)}(p_{\mu},p_{\mu}',k_{\mu}) = (-1)^{n-1}\int\frac{d^{3} q_{1} \dots d^{3}q_{n-1}}{(2\pi)^{3n-3}} V(|{\bf q}_1|) \\ &\times \prod_{i=1}^{n-1}G(k_{\mu}-q_{i,\mu})G(p_{\mu}+q_{i,\mu})V(|\mathbf{q}_{i+1}-\mathbf{q}_{i}|), \end{align} where $\mathbf{q}_{n}\equiv\mathbf{p}'-\mathbf{p}$. The frequencies are along the imaginary axis, i.e., $G(k_\mu)=G(\omega_{k},\mathbf{k})=(i\omega_{k}-\xi_{\mathbf{k}})^{-1}$, where $\xi_{\mathbf{k}}=k^2/2m-E_F$ with $k=|{\bf k}|$.
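As a simple check of conventions, the $n=1$ case of Eq.~(\ref{eq:Pidef}) contains no internal integration and reduces to the bare vertex, \begin{equation} \Pi^{(1)}(p_{\mu},p_{\mu}',k_{\mu})=V(|\mathbf{p}'-\mathbf{p}|)=\sum_{n}V_{n}e^{in\theta}, \end{equation} where the last equality uses Eq.~(\ref{eq:Vdef}) for momenta on the Fermi surface and $\theta=\angle(\mathbf{p}',\mathbf{p})$. The ladders considered below are generated by iterating this vertex with pairs of Green's functions.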
\begin{figure} \includegraphics[width=.4\textwidth]{fig2.eps} \caption{\label{fig:2}The building block (on the left) of any ladder diagram (on the right). Of special interest is the limit of correlated momenta $\mathbf{p}=-\mathbf{k}$, leading to the Cooper instability.} \end{figure} In particular, we are interested in the case when the sum of incoming frequencies and momenta is small, i.e., $P_\mu\equiv p_\mu+k_\mu \approx0$. Under this assumption we obtain the following useful result, for which we provide details of the derivation in Appendix~\ref{app:lad}: \begin{align}\label{eq:PiNres} \notag \Pi^{(n)}(P_\mu,\theta) = {\sum_{M_{1}\dots M_{n-1}}}^{\hspace{-10pt}\prime}& ~\Pi_{M_{1}}(P_\mu)\dots\Pi_{M_{n-1}}(P_\mu)\\ &\times\tilde{V}_{M_{1}\dots M_{n-1}}^{n}(\theta_P, \theta), \end{align} where the sum is restricted to $M_i=0,\pm2,\pm4,\ldots$. The angle of ${\bf P}={\bf p}+{\bf k}$ is measured from the direction of the incoming momentum ${\bf p}$, i.e., $\theta_{P}\equiv\angle(\mathbf{P},\mathbf{p})$, while $\theta\equiv\angle(\mathbf{p}',\mathbf{p})$. In the above formula, \begin{equation}\label{eq:Pi0} \Pi_{0}(P_\mu) = \frac{m}{2\pi} \ln{\frac{|\Omega_{P}|+\sqrt{\Omega_{P}^{2}+v_{F}^{2}P^{2}}}{W}}\qquad \end{equation} and ($M$ even) \begin{equation}\label{eq:PiM} \Pi_{M\neq0}(P_\mu) = -\frac{m}{2\pi}\frac{(-1)^{|M|/2}}{|M|} \Big(\frac{1-\sin\phi}{\cos\phi}\Big)^{|M|}, \end{equation} with $W \sim E_F$ a high-energy cutoff and $\phi\equiv\arctan\frac{|\Omega_{P}|}{v_{F}P}$. Notice that $\Pi_M(P_\mu)$ carries no angular ($\theta_P$, $\theta$) dependence; the latter is determined entirely by the following quantity: \begin{align}\label{eq:VMNdef} \notag \tilde{V}_{M_{1}\dots M_{n-1}}^{n}&(\theta_P,\theta) \equiv \sum_{m,m'}V_{m}V_{m-M_{1}}\dots V_{m-M_{1}-\ldots-M_{n-1}}\\ &\times e^{i m'\theta_{P}- i \, m \theta} \, \delta_{M_{1}+M_{2}+\ldots+M_{n-1},m'} \end{align} defined in terms of the amplitudes $V_n$. Equation~(\ref{eq:Vdef}) can be used to approximate the interaction potential in Eq.~(\ref{eq:Pidef}) since the relevant contribution originates from the region of external ($p \approx p'\approx k \approx k' \approx k_{F}$) and internal momenta ($|\mathbf{p}+\mathbf{q}_i|\approx|\mathbf{k}-\mathbf{q}_i|\approx k_{F}$) close to the Fermi surface. Furthermore, the direction of ${\bf P}$ can be equivalently measured from ${\bf k}$ without affecting the result since $\theta_{P}=\angle(\mathbf{P},\mathbf{k})+\pi$ and $e^{im'\pi}=1$ ($m'$ is even). Notice also that the leading contribution to Eq.~(\ref{eq:PiNres}), in the limit of small $\Omega_P$ and $P$, is determined by the standard logarithmic singularity of $\Pi_{0}(P_\mu)$. However, it will become apparent that this leading contribution is not sufficient to obtain the correct result for the desired (linear-in-$Q$) corrections to the response function. The remaining terms, $\Pi_{M}(P_\mu)$, are important because of their nonanalytic form due to the dependence on the ratio $\frac{|\Omega_{P}|}{v_{F}P}$.
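A useful limiting case of Eq.~(\ref{eq:PiNres}) is obtained by retaining only the term with all $M_i=0$: the restricted sum in Eq.~(\ref{eq:VMNdef}) then collapses to a single harmonic index, and \begin{equation} \Pi^{(n)}(P_\mu,\theta)=\big[\Pi_{0}(P_\mu)\big]^{n-1}\sum_{m}V_{m}^{n}\,e^{-im\theta}+\ldots, \end{equation} which, upon $\Pi_{0}\simeq\frac{m}{2\pi}\ln\frac{v_{F}Q}{W}$, is the geometric (ladder) series that, summed over $n$ together with the bare vertex $\Pi^{(1)}$, resums to the amplitudes of Eq.~(\ref{eq:GammaDef}). The $M\neq0$ terms are subleading in the logarithm, but it is precisely their nonanalytic dependence on $|\Omega_{P}|/v_{F}P$ that will generate the linear-in-$Q$ correction.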
\section{\label{sec:2od} SECOND-ORDER CALCULATION} The lowest order nonanalytic correction to the spin susceptibility has been calculated in Ref.~\onlinecite{PhysRevB.68.155113} as a sum of four distinct contributions from the diagrams in Fig.~\ref{fig:1}, \begin{align} \delta\chi^{(2)}_{1}(Q)&=K(0,Q)[V^{2}(2k_F)+V^{2}(0)]\label{eq:chi21},\\ \delta\chi^{(2)}_{3}(Q)&=K(0,Q)[V^{2}(2k_F)-V^{2}(0)]\label{eq:chi23},\\ \delta\chi^{(2)}_{4}(Q)&=K(0,Q)V(0)V(2 k_F)\label{eq:chi24}, \end{align} and $\delta\chi^{(2)}_{2}=-\delta\chi^{(2)}_{4}$, so that the final result reads \begin{equation}\label{eq:2nd_order_final} \delta\chi_s^{(2)}(Q)=2K(0,Q)V^{2}(2k_F). \end{equation} \begin{figure} \includegraphics[width=.4\textwidth]{fig1.eps} \caption{\label{fig:1}The nonvanishing second-order diagrams contributing to the nonanalytic behavior of the electron spin susceptibility.} \end{figure} We refer to Ref.~\onlinecite{PhysRevB.68.155113} for a thorough discussion of these lowest order results, but we find it useful to reproduce here the result for $\delta\chi_1^{(2)}$. In fact, Eq.~(\ref{eq:chi21}) has been obtained in Ref.~\onlinecite{PhysRevB.68.155113} as a sum of two nonanalytic contributions from the particle-hole bubble at small ($q=0$) and large ($q=2k_{F}$) momentum transfer. These two contributions, proportional to $V^{2}(0)$ and $V^{2}(2k_F)$, respectively, can be directly seen in Eq.~(\ref{eq:chi21}). However, it is more natural for our purposes to obtain the same result in the particle-particle channel by making use of the propagator discussed in Sec.~\ref{sec:ppp}. This approach is more cumbersome but produces these two contributions at the same time. Furthermore, once the origin of the lowest order nonanalytic correction is understood in the particle-particle channel, higher order results are most easily obtained. \begin{figure} \includegraphics[width=.2\textwidth]{fig0.eps} \caption{\label{fig:0} Labeling of the $\delta\chi_1^{(2)}$ diagram, as in Eq.~(\ref{eq:chi1-2def}).} \end{figure} We start with the analytic expression of $\delta\chi_{1}^{(2)}(Q)$ (see Fig.~\ref{fig:0}) in terms of $\Pi^{(2)}$, the $n=2$ case of Eq.~(\ref{eq:PiNres}): \begin{align}\label{eq:chi1-2def} \notag\delta\chi_{1}^{(2)}(Q) =& -8\int\frac{d^{3}k}{(2\pi)^{3}} \int\frac{d^{3}P}{(2\pi)^{3}}G^{2}(k_{\mu}) G(k_{\mu}+Q_{\mu})\\ &\times G(P_{\mu}-k_{\mu})\Pi^{(2)}(P_{\mu},0). \end{align} It is convenient to define the angle of ${\bf k}$ as $\theta_k\equiv\angle(\mathbf{k},\mathbf{Q})$, and $\theta_P\equiv\angle(\mathbf{P},\mathbf{k})$. We first perform the integration in $d^3 k$, as explained in Appendix~\ref{app:chi1-2}, to obtain \begin{align}\label{eq:2ndorder_dimensional} \notag &\delta\chi_{1}^{(2)} = -\frac{m}{\pi^{4}v_{F}^{2}Q^{2}}\int_{0}^{\infty}P{ d}P \int_{0}^{\infty}{ d}\Omega_{P}\int_{0}^{2\pi} \Pi^{(2)}(P_{\mu},0)\\ &\times \left(1-\frac{\sqrt{(\Omega_{P}+iv_{F}P\cos\theta_P)^{2}+(v_{F}Q)^{2}}} {\Omega_{P}+iv_{F}P\cos\theta_P}\right) { d}\theta_P . \end{align} Following the method of Ref.~\onlinecite{PhysRevB.68.155113}, we rescale the integration variables: $\Omega_{P}=Rv_{F}Q\sin\phi$, $P=RQ\cos\phi$, and $d\Omega_{P}dP=Rv_{F}Q^{2}dRd\phi$. This gives \begin{align}\label{eq:chi1-2rescaled} \notag & \delta\chi_{1}^{(2)} = -\frac{m Q}{\pi^{4}v_{F}}\int_{0}^{\infty}R^{2}{ d}R \int_{0}^{\pi/2}{ d}\phi\int_{0}^{2\pi} \Pi^{(2)}(R,\phi,\theta_P,0) \\ & \times \cos\phi \left(1-\frac{\sqrt{R^{2}(\sin\phi+i\cos\phi\cos\theta_P)^{2}+1}} {R(\sin\phi+i\cos\phi\cos\theta_P)}\right) { d}\theta_P ,
\end{align} where, from Eqs.~(\ref{eq:PiNres}) and (\ref{eq:VMNdef}), \begin{align}\label{eq:ppprop} \notag \Pi^{(2)}(R,& \phi,\theta_P, \theta)= {\sum_{M}}^{\prime}\tilde{V}_{M}^{2}(\theta_P,\theta)\Pi_{M}(R,\phi)\\ = & {\sum_M}^\prime \Pi_M(R,\phi){\sum_{m}} V_{m}V_{m-M} e^{iM\theta_P-i m \theta} \end{align} with the primed sum restricted to even values of $M$. Now we can see clearly that the linear dependence on $Q$ in Eq.~(\ref{eq:chi1-2rescaled}) can only be modified by the presence of $\Pi^{(2)}$ in the integrand because of \begin{equation}\label{eq:Pi0Rphi} \Pi_{0}(R,\phi)=\frac{m}{2\pi}\ln\frac{v_{F}Q}{W}+\frac{m}{2\pi}\ln R(1+\sin\phi). \end{equation} The first logarithmic term diverges at small $Q$ but does not contribute to the final result since it does not depend on $\theta_P$ and $\phi$. In fact, if we keep only the $\frac{m}{2\pi}\ln\frac{v_{F}Q}{W}$ contribution, after the change of variable $r=R(\sin\phi+i\cos\phi\cos\theta_P)$ in Eq.~(\ref{eq:chi1-2rescaled}), we obtain the angular integral $\int_0^{2\pi}{ d}\theta_P\int_0^{\pi/2}\cos\phi (\sin\phi+i\cos\phi\cos\theta_P)^{-3} \, { d} \phi=0$ [cf. Eq.~(\ref{eq:intphitheta}) for $M=0$]. Details of the calculation are provided in Appendix~\ref{app:chi1_rederived}. Therefore, only the second term of Eq.~(\ref{eq:Pi0Rphi}) is relevant. The integral in Eq.~(\ref{eq:chi1-2rescaled}) becomes independent of $Q$ and gives only a numerical prefactor. The final result is given by Eq.~(\ref{eq:chi21}), in agreement with Ref.~\onlinecite{PhysRevB.68.155113}. In a similar way, the remaining diagrams of Fig.~\ref{fig:1} can be calculated. \section{\label{sec:hod}HIGHER ORDER DIAGRAMS} In this section we aim to find the renormalization of the four diagrams depicted in Fig.~\ref{fig:1} due to higher order contributions in the particle-particle channel. It is well known that the scattering of two electrons with opposite momenta, in the presence of the Fermi sea, leads to the emergence of a logarithmic singularity.\cite{PhysRevB.71.045338,Mahan} Furthermore, in two dimensions there are just two processes that contribute to $\delta\chi_{i}^{(2)}(Q)$, namely, forward- (small momentum transfer, $q=0$) and back-scattering (large momentum transfer, $q=2k_{F}$). This results in the renormalization of the scattering amplitudes appearing in the second-order results (see Sec.~\ref{sec:int}). A direct calculation of the particle-particle propagators, depicted in Fig.~\ref{fig:2}, shows that for $n+1$ interaction lines, the divergence always appears as the $n$th power of a logarithm. At each order of the perturbative expansion, we only consider the single diagram which contributes to the nonanalytic correction with the leading logarithmic singularity. This requirement restricts the freedom of adding interaction lines in an unfettered manner to the existing second-order diagrams: in order to produce the most divergent logarithmic term, all interaction lines have to build up at most one ladder for $\delta\chi_{1}$, $\delta\chi_{2}$, and $\delta\chi_{4}$, or two ladders for $\delta\chi_{3}$. The subset of diagrams generated in this way is not sufficient to obtain the general momentum dependence of the spin susceptibility. However, if one of the harmonics $V_{n}$ is negative, these diagrams are the only relevant ones in the vicinity of the Kohn-Luttinger instability, $v_{F}Q\gtrsim k_{B}T_{KL}$. Furthermore, at each order $n$ in the interaction, it suffices to keep the leading contribution in $Q$ of the individual diagrams.
This turns out to be of order $Q\ln^{n-2}Q$ because the term proportional to $\ln^{n-1}Q$ is suppressed by an additional factor $Q^2$. Other perturbative terms, e.g., in the particle-hole channel,\cite{ProcNatlAcadSci.103.15765} can be safely neglected as they result in logarithmic factors of lower order. In the following we discuss explicitly how to insert a ladder diagram into the pre-existing second-order diagrams and outline the calculation that has to be carried out. \subsection{\label{sec:d124}Diagrams 1, 2, and 4} \begin{figure} \includegraphics[width=.4\textwidth]{fig3.eps} \caption{\label{fig:3}The series of diagrams contributing to $\delta\chi_{1}(Q)$.} \end{figure} \begin{figure} \includegraphics[width=.4\textwidth]{fig4.eps} \caption{\label{fig:4}An example of a diagram contributing to $\delta\chi_{2}(Q)$. The maximally crossed diagram (left) is topologically equivalent to its untwisted counterpart (right) in which the particle-particle ladder appears explicitly.} \end{figure} \begin{figure} \includegraphics[width=.4\textwidth]{fig6.eps} \caption{\label{fig:6}A maximally crossed diagram (left) and its untwisted equivalent (right) contributing to $\delta\chi_{4}(Q)$.} \end{figure} These three diagrams can all be expressed to lowest order in terms of a single particle-particle propagator $\Pi^{(2)}$, which at higher order is substituted by $\Pi^{(n)}$. For the first term we have \begin{align}\label{eq:chi1-ndef} \notag \delta\chi_{1}^{(n)}(Q) =& -8\int\frac{{ d}^{3}k}{(2\pi)^{3}} \int\frac{{ d}^{3}P}{(2\pi)^{3}}G^{2}(k_{\mu})\\ &\times G(k_{\mu}+Q_{\mu})G(P_{\mu}-k_{\mu})\Pi^{(n)}(P_{\mu},0), \end{align} where the $n=2$ case was calculated in Sec.~\ref{sec:2od}. The corresponding diagrams are, in this case, easily identified and shown in Fig.~\ref{fig:3}. It is slightly more complicated to renormalize $\delta\chi_{2}^{(2)}$ and $\delta\chi_{4}^{(2)}$. It requires one to realize that the diagrams depicted in Fig.~\ref{fig:4} are topologically equivalent; i.e., the maximally crossed diagram on the left is equivalent to the untwisted ladder diagram on the right. A similar analysis shows how to lodge the ladder diagram into $\delta\chi_{4}^{(2)}$, as illustrated in Fig.~\ref{fig:6}. The corresponding analytic expressions are: \begin{align}\label{eq:chi2-ndef} \notag \delta\chi_{2}^{(n)}(Q) & = 4\int\frac{{ d}^{3}k}{(2\pi)^{3}} \int\frac{{ d}^{3}P}{(2\pi)^{3}}G^{2}(k_{\mu})\\ &\times G(k_{\mu}+Q_{\mu}) G(P_{\mu}-k_{\mu})\Pi^{(n)}(P_{\mu},\pi), \end{align} \begin{align}\label{eq:chi4-ndef} \notag & \delta \chi_{4}^{(n)}(Q) = 2\int\frac{{ d}^{3}k}{(2\pi)^{3}} \int\frac{{ d}^{3}P}{(2\pi)^{3}}G(k_{\mu})G(k_{\mu}+Q_{\mu})\\ &\times G(P_{\mu}-k_{\mu})G(P_\mu-k_{\mu}-Q_{\mu})\Pi^{(n)}(P_{\mu},\pi). \end{align} We now show that the final results can be simply obtained to leading order in $Q$ based on the second-order calculation. In fact, we can perform the integration in ${ d}^3 k$ and the rescaling of variables as before. For $\delta\chi_{1}$ we have \begin{align} \notag &\delta\chi_{1}^{(n)} = -\frac{m Q}{\pi^{4}v_{F}}\int_{0}^{\infty}R^{2}{ d}R \int_{0}^{\pi}{d}\theta_P\int_{0}^{\pi/2} \Pi^{(n)}(R,\phi,\theta_P,0)\\ &\times\bigg(1-\frac{\sqrt{R^{2}(\sin\phi+i\cos\phi\cos\theta_P)^{2}+1}} {R(\sin\phi+i\cos\phi\cos\theta_P)}\bigg)\cos\phi ~ d\phi . \end{align} In the above formula, the $Q$ dependence in the integrand is only due to $\Pi^{(n)}$. It is clear that a similar situation occurs for the second and fourth diagrams.
The $Q$ dependence of the rescaled Eq.~(\ref{eq:PiNres}) is determined (as in the second order) by the factors $\Pi_0(R,\phi)$. The first term appearing in $\Pi_0(R,\phi)$, see Eq.~(\ref{eq:Pi0Rphi}), is large in the small $Q$ limit we are interested in. Therefore, we can expand $\Pi^{(n)}$ in powers of $\frac{m}{2\pi}\ln\frac{v_{F}Q}{W}$ and retain at each perturbative order $n$ only the most divergent nonvanishing contribution. The detailed procedure is explained in Appendix~\ref{app:ppsmallQ}. It is found that the largest contribution from $\Pi^{(n)}$ is of order $(\frac{m}{2\pi}\ln\frac{v_{F}Q}{W})^{n-1}$. However, as in the case of the second-order diagram discussed in Sec.~\ref{sec:2od}, this leading term has an analytic dependence on $P_\mu$ (in fact, it is a constant), and gives a vanishing contribution to the linear-in-$Q$ correction to the spin susceptibility. Therefore, the $(\frac{m}{2\pi}\ln\frac{v_{F}Q}{W})^{n-2}$ contribution is relevant here. A particularly useful expression is obtained upon summation of $\Pi^{(n)}$ to infinite order. In fact, for each diagram, the sum of the respective series involves the particle-particle propagator only. Therefore, $\delta\chi_{1}$, $\delta\chi_{2}$, and $\delta\chi_{4}$ are given by Eqs.~(\ref{eq:chi1-ndef})--(\ref{eq:chi4-ndef}) if $\Pi^{(n)}$ is substituted by \begin{equation} \Pi^{(\infty)}(P_{\mu},\theta)=\sum_{n=2}^\infty \Pi^{(n)}(P_{\mu},\theta). \end{equation} The relevant contribution of $\Pi^{(\infty)}(P_{\mu},\theta)$, in the rescaled variables, is derived in Appendix~\ref{app:ppsmallQ}. The final result is \begin{align}\label{eq:finalPinSum} \notag & \Pi^{(\infty)} (R,\phi,\theta_P,\theta)=\sum_{n=2}^\infty \Pi^{(n)} (R,\phi,\theta_P,\theta)\\ = & {\sum_M}^\prime \Pi_M(R,\phi) {\sum_{m}} \Gamma_{m}\Gamma_{m-M} e^{iM\theta_P-im\theta}+\ldots, \end{align} which should be compared directly to Eq.~(\ref{eq:ppprop}). The only difference is the replacement of $V_n$ with the renormalized amplitudes $\Gamma_n$, which depend on $Q$ as in Eq.~(\ref{eq:GammaDef}). Hence, it is clear that the final results follow immediately from Eqs.~(\ref{eq:chi21})--(\ref{eq:chi24}): \begin{align} \delta\chi_{1}(Q)&=K(0,Q)[\Gamma^{2}(0)+\Gamma^{2}(\pi)]\label{eq:chi1},\\ \delta\chi_{4}(Q)&=K(0,Q)\Gamma(0)\Gamma(\pi)\label{eq:chi2and4}, \end{align} and $\delta\chi_{2}(Q)=-\delta\chi_{4}(Q)$. We have used notation (\ref{eq:Gamma_theta_def}) while $K(0,Q)$ is defined in Eq.~(\ref{eq:KQ}). This explicitly proves what was anticipated in Sec.~\ref{sec:int} (and in Ref.~\onlinecite{PhysRevB.77.045108}), i.e., that the renormalization affects only the scattering amplitude. The bare interaction potential is substituted by the dressed one, which incorporates the effect of other electrons on the scattering pair. \subsection{\label{sec:d3}Diagram 3} \begin{figure} \includegraphics[width=.4\textwidth]{fig5.eps} \caption{\label{fig:5}The series of diagrams contributing to $\delta\chi_{3}(Q)$. At the top, the second- and third-order diagrams. Two equivalent third-order diagrams arise from the addition of a parallel interaction line to either the upper or the lower part of the second-order diagram. At the bottom, three fourth-order diagrams.} \end{figure} The last diagram $\delta\chi_{3}^{(2)}$ differs from those already discussed in the sense that it allows for the separate renormalization of either the upper or lower interaction line.
This results in the appearance of two equivalent third-order diagrams and three fourth-order diagrams (of which two are equal), and so forth. These lowest order diagrams are shown in Fig.~\ref{fig:5}. Accordingly, we define the quantities $\delta\chi_{3}^{(i,j)}$, where ladders of order $i$ and $j$ are inserted in place of the original interaction lines. In particular, $\delta\chi_{3}^{(n)}=\sum_{i,j}\delta\chi_{3}^{(i,j)}\delta_{n,i+j}$ and \begin{equation} \delta\chi_{3}(Q)=\sum_{i,j=1}^{\infty}\delta\chi_{3}^{(i,j)}(Q). \end{equation} The second difference stems from the fact that a finite nonanalytic correction is obtained from the leading terms in the particle-particle ladders of order $(\frac{m}{2\pi}\ln\frac{v_{F}Q}{W})^{i-1}$ and $(\frac{m}{2\pi}\ln\frac{v_{F}Q}{W})^{j-1}$, respectively. In fact, extracting this leading term from Eq.~(\ref{eq:PiNres}) we obtain \begin{equation} \Pi^{(j)}(P_\mu,\theta)=\sum_n V_n^j e^{-i n\theta} \Big(\frac{m}{2\pi}\ln\frac{v_{F}Q}{W}\Big)^{j-1}+\ldots~, \end{equation} and by performing the sum over $j$ we get \begin{equation} \sum_{j=1}^{\infty}\Pi^{(j)}(P_\mu,\theta)=\Gamma(\theta)+\ldots~. \end{equation} A similar argument can be repeated for the $i$th order interaction ladder. Therefore, the bare potential is replaced by the renormalized expression (\ref{eq:Gamma_theta_def}) and the final result, \begin{equation} \delta\chi_{3}(Q)=K(0,Q)[\Gamma^{2}(\pi)-\Gamma^{2}(0)], \end{equation} is immediately obtained from Eq.~(\ref{eq:chi23}). \subsection{Renormalized nonanalytic correction} Combining the results of Secs.~\ref{sec:d124} and \ref{sec:d3}, it is clear that the final result has the same form as Eq.~(\ref{eq:2nd_order_final}) if $V(2k_F)$ is substituted by $\Gamma(\pi)$. The explicit expression reads \begin{equation}\label{eq:final_result_explicit} \delta\chi_s(Q)=\frac{m^3}{24\pi^4}\frac{v_F Q}{E_F}\left[\sum_n \frac{V_n (-1)^n}{1-\frac{m V_n}{2\pi}\ln\frac{v_F Q}{W}}\right]^2. \end{equation} \section{\label{sec:RG} Relation to the RG treatment} As discussed, our calculation was partially motivated by the renormalization group (RG) argument of Ref.~\onlinecite{PhysRevB.77.045108}. In this section, we further substantiate this argument. Starting from Eqs.~(\ref{eq:ppprop}) and (\ref{eq:Pi0Rphi}), one can calculate the second-order correction to the bare vertex $\Pi^{(1)}=\sum_n V_n e^{i n \theta }$ given by \begin{equation}\label{eq:rg_second_order} \Pi^{(2)}(P_\mu, \theta)=\frac{m}{2\pi}\ln \frac{v_FQ}{W} \sum_n V_n^2 e^{i n \theta} + \ldots~, \end{equation} where we explicitly extracted the dependence on the upper cutoff $W$. From Eq.~(\ref{eq:rg_second_order}), we can immediately derive the following RG equations for the scale-dependent couplings $\Gamma_n(\Lambda=v_FQ)$: \begin{equation}\label{eq:RG} \frac{ {\rm d} \Gamma_n}{{\rm d}\ln \frac{\Lambda}{W}}=\frac{m}{2\pi} \Gamma_n^2, \end{equation} as in Ref.~\onlinecite{PhysRevB.77.045108}. This leads to the standard Cooper channel renormalization. A direct derivation of these scaling equations can be found in Ref.~\onlinecite{shankar_review}. At this lowest order, we obtain an infinite number of independent flow equations, one for each angular momentum $n$. The integration of these scaling equations directly leads to Eq.~(\ref{eq:GammaDef}). These flow equations tell us that the couplings $\Gamma_n$ are marginally relevant in the infrared limit when the bare $\Gamma_n$ are negative and marginally irrelevant otherwise.
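The integration is elementary: writing $l=\ln\frac{\Lambda}{W}$, Eq.~(\ref{eq:RG}) gives $\frac{d}{dl}\Gamma_{n}^{-1}=-\frac{m}{2\pi}$, and with the initial condition $\Gamma_{n}(\Lambda=W)=V_{n}$ one finds \begin{equation} \frac{1}{\Gamma_{n}(\Lambda)}=\frac{1}{V_{n}}-\frac{m}{2\pi}\ln\frac{\Lambda}{W}, \end{equation} which is Eq.~(\ref{eq:GammaDef}) upon setting $\Lambda=v_{F}Q$.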
Notice that at zero temperature, the running flow parameter $\Lambda$ is replaced by the momentum $v_FQ$ in the Cooper channel. The idea of the RG is to replace, in the perturbative calculation of a momentum-dependent quantity, the bare couplings $\Gamma_n$ by their renormalized values. By doing so, we directly resum an infinite class of (ladder) diagrams. Let us apply this reasoning now to the susceptibility diagrams and note that the first nonzero contribution to the linear-in-$Q$ behavior of $\chi_s(Q)$ appears at second order in $\Gamma_n$. For the particular example of $\delta\chi_3$, the renormalization procedure has to be carried out independently for the two interaction lines, as illustrated by the series of diagrams in Fig.~\ref{fig:5}. For a given order of the interaction ladder in the bottom (top) part of the diagram, one can perform the Cooper channel resummation of the top (bottom) interaction ladders to infinite order, as described in Sec.~\ref{sec:d3} or by using the RG equations. The fact that the renormalized amplitudes $\Gamma_n$ appear in the final results for the remaining diagrams $\delta\chi_{1,2,4}$ is also clear from the RG argument, after insertion of particle-particle ladders as in Figs.~\ref{fig:3}--\ref{fig:6}. Finally, we note that the same series of diagrams that renormalizes the nonanalytic second-order contributions $\delta\chi^{(2)}_{1,2,4}$ also contributes to the renormalization of the first-order diagrams displayed in Fig.~\ref{fig:FirstOrder} (notice that the first one actually vanishes because of charge neutrality). As is clear from the explicit calculation in Sec.~\ref{sec:hod}, the highest logarithmic powers, i.e., $\propto (\ln v_FQ/W)^{n-1}$ at order $n$, renormalize $V_m$ to $\Gamma_m$ in the final expressions for Fig.~\ref{fig:FirstOrder}. These first-order diagrams have an analytic $Q$ dependence, at most of order $Q^2$. Therefore, in agreement with the discussion in Sec.~\ref{sec:d124}, the largest powers of the logarithms are not important for the linear dependence in $Q$ and, in fact, they were already neglected to second order.\cite{PhysRevB.68.155113} \begin{figure} \includegraphics[width=.4\textwidth]{fig7.eps} \caption{\label{fig:FirstOrder} First-order diagrams contributing to the spin susceptibility. These are renormalized by the leading logarithmic terms of the higher order diagrams (see Figs.~\ref{fig:3}--\ref{fig:6}). However, they do not produce a nonanalytic correction and can be neglected in the limit of small $Q$.} \end{figure} \section{\label{sec:con}Conclusions} In this paper we discussed the renormalization effects in the Cooper channel on the momentum-dependent spin susceptibility. The main result of the paper is given by Eq.~(\ref{eq:final_result_explicit}) and shows that each harmonic gets renormalized independently. The derivation of the higher order corrections to the spin susceptibility was based on the second-order result, which we revisited through an independent direct calculation in the particle-particle channel. Taking the angular dependence of the scattering potential explicitly into account, we verified that the main contribution indeed enters through forward- and back-scattering processes. At higher order, we found a simple and efficient way of resumming all the diagrams which contribute to the Cooper renormalization. We identified the leading nonvanishing logarithm in each ladder and used this result in the second-order correction. This method saves a lot of effort and, in fact, makes the calculation possible.
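To get a quantitative feeling for Eq.~(\ref{eq:final_result_explicit}), a minimal numerical sketch is given below (in Python). The model harmonics $V_n$ are purely illustrative assumptions, with one negative amplitude to trigger the KL physics, and only $n\geq0$ is kept for brevity; we use units $m=k_F=v_F=W=1$, so that $E_F=1/2$.
\begin{verbatim}
import numpy as np

# Illustrative Fourier amplitudes V_n of Eq. (Vdef); not fitted to any material.
V = {0: 0.30, 1: 0.10, 2: -0.50}
EF = 0.5  # E_F = k_F^2/2m in units m = k_F = 1

def gamma_n(n, Q):
    """Cooper-renormalized amplitude Gamma_n(v_F Q), Eq. (GammaDef), m = W = 1."""
    return V[n] / (1.0 - V[n] / (2.0 * np.pi) * np.log(Q))

def delta_chi(Q):
    """Nonanalytic correction delta chi_s(Q), Eq. (final_result_explicit)."""
    gamma_pi = sum(gamma_n(n, Q) * (-1) ** n for n in V)  # Gamma(pi)
    return (Q / EF) * gamma_pi ** 2 / (24.0 * np.pi ** 4)

# The amplitude with V_2 < 0 diverges at the KL scale, (V_2/2pi) ln Q_KL = 1:
print("v_F Q_KL / W =", np.exp(2.0 * np.pi / V[2]))
for Q in [1e-8, 1e-7, 1e-6, 1e-5, 1e-4]:
    print(f"v_F Q / W = {Q:.0e}:  delta_chi_s = {delta_chi(Q):+.3e}")
\end{verbatim}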
It was argued elsewhere that these renormalization effects might underpin the nonmonotonic behavior of the electron spin susceptibility if the higher negative harmonics override the initially leading positive Fourier components. This would result in a negative slope of the spin susceptibility at small momenta or temperatures.\cite{PhysRevB.74.205122,PhysRevB.77.045108} Other effects neglected here, such as subleading logarithmic terms and nonperturbative contributions beyond the Cooper channel renormalization,\cite{ProcNatlAcadSci.103.15765} become relevant far away from the Kohn-Luttinger instability condition, but a systematic treatment in this regime is outside the scope of this work. Our results could also be extended to include material-related issues such as disorder and spin-orbit coupling, which are possibly relevant in actual samples. We also notice that the final expression (\ref{eq:final_result_explicit}) parallels the temperature dependence discussed in Ref.~\onlinecite{PhysRevB.74.205122}, suggesting that the temperature and momentum dependences are qualitatively similar in two dimensions. This was already observed in the second-order calculation, in which a linear dependence on both $Q$ and $T$ is obtained. In our work we find that this correspondence continues to hold in the nonperturbative regime if the Cooper channel contributions are included. This conclusion is nontrivial and, in fact, does not hold in the three-dimensional case. The last remark, together with the experimental observation of Ref.~\onlinecite{PhysRevB.67.205407}, supports the recent prediction that ferromagnetic ordering of nuclear spins embedded in a two-dimensional electron gas is possible.\cite{PhysRevLett.98.156401,PhysRevB.77.045108} The ferromagnetic phase would be stabilized by the long-range Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction, as determined by the nonanalytic corrections discussed here. \begin{acknowledgments} We thank M.~Borhani and D.~Maslov for their insightful comments. We also acknowledge discussions with B.~Braunecker, O.~Chalaev, J. C. Egues, D. Hirashima, C. P\'epin, and G. Schwiete. This work was supported by the Swiss NSF and the NCCR Nanoscience Basel. \end{acknowledgments}
\section*{Methods} ${\bf Classical~and~DFT~computer~simulations.}$~Molecular dynamics $(N, P, T)$ simulations were performed with the LAMMPS code.~\cite{lammps} The pressure and temperature in the system were kept fluctuating around their set-point values by using thermostatting and barostatting techniques in which some dynamic variables are coupled to the particle velocities and simulation box dimensions. Large simulation boxes containing $6,144$ atoms were used, and periodic boundary conditions were applied along the three Cartesian directions. Newton's equations of motion were integrated using the customary Verlet algorithm with a time-step length of $10^{-3}$~ps. A particle-particle particle-mesh $k$-space solver was used to compute long-range van der Waals and Coulomb interactions and forces beyond a cut-off distance of $12$~\AA~at each time step. First-principles DFT calculations were performed with the VASP code,~\cite{vasp} following the generalized gradient approximation to the exchange-correlation energy due to Perdew and co-workers~\cite{pbe96}. The ``projector augmented wave'' method was used to represent the ionic cores~\cite{bloch94}, and Ca's $2s$-$3s$-$3p$-$4s$, Pb's $5d$-$6s$-$6p$ and F's $2s$-$2p$ electronic states were treated as valence. Wave functions were represented in a plane-wave basis truncated at $500$~eV. By using these parameters and dense ${\bf k}$-point grids for Brillouin zone integration, the resulting enthalpies were converged to within $1$~meV per formula unit. In the geometry relaxations, a tolerance of $0.01$~eV$\cdot$\AA$^{-1}$ was imposed on the atomic forces. Further details of our classical and \emph{ab initio} molecular dynamics simulations can be found in the Supplementary Information.
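For orientation, a minimal input sketch in the spirit of the settings just described is given below, driven through the LAMMPS Python wrapper. The pair style, data file name, set points and damping constants are illustrative assumptions (the actual force field is detailed in the Supplementary Information); only the $12$~\AA~cutoff, the PPPM solver, the $10^{-3}$~ps time step, the periodic boundaries and the $(N,P,T)$ ensemble follow the text.
\begin{verbatim}
# Minimal (N,P,T) sketch via the LAMMPS Python wrapper; illustrative only.
from lammps import lammps

lmp = lammps()
for cmd in [
    "units metal",                      # time in ps: timestep 0.001 = 1e-3 ps
    "atom_style charge",                # point charges for the Coulomb part
    "boundary p p p",                   # periodic along the three directions
    "pair_style born/coul/long 12.0",   # assumed pair style, 12 A cutoff
    "read_data system.data",            # hypothetical 6,144-atom box + coeffs
    "kspace_style pppm 1.0e-4",         # particle-particle particle-mesh solver
    "timestep 0.001",
    # Thermostat/barostat: dynamic variables coupled to velocities and box.
    "fix 1 all npt temp 300.0 300.0 0.1 iso 1.0 1.0 1.0",
    "run 1000000",
]:
    lmp.command(cmd)
\end{verbatim}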
\section{Introduction} This paper is a sequel to~\cite{G} and familiarity with~\cite{G} would help the reader. We keep the introduction brief; still, we recall all the definitions which are needed. A flow $\varphi^t\colon M \to M$ is {\it partially hyperbolic} if the tangent bundle $TM$ splits into $D\varphi^t$-invariant continuous subbundles $TM= E^{s}\oplus E^c \oplus E^{u}$ such that \begin{equation} \label{def_ph_flow} \|D\varphi^t(v^s) \| <\lambda^t< \|D\varphi^t (v^c)\| < \mu^t< \|D\varphi^t(v^u) \|,\; t\ge 1, \end{equation} for some Riemannian metric $\|\cdot\|$, some $\lambda<1<\mu$ and all unit vectors $v^s\in E^s$, $v^c\in E^c$ and $v^u\in E^u$. It is then clear that the generating vector field $\dot\varphi$ lies in the center subbundle $E^c$: indeed, $D\varphi^t(\dot\varphi)=\dot\varphi\circ\varphi^t$ is bounded in norm uniformly in $t\in\field{R}$, which by~(\ref{def_ph_flow}) rules out any component in $E^s$ or $E^u$. An invariant submanifold $N\subset M$ is called an {\it Anosov submanifold} for $\varphi^t$ if $TN=E^s\oplus \dot \varphi \oplus E^u$. Note that then the flow $\varphi^t_N$ given by the restriction $\varphi^t|_N$ is an Anosov flow. Further, the flow $\varphi^t\colon M\to M$ is called {\it locally fiberwise} at $N$ if a neighborhood of $ N$ can be smoothly identified with $\mathbb D^k\times N$, where $\mathbb D^k=\{x\in\mathbb R^k: \|x\|<1\}$, in such a way that the restriction $\varphi^t|_{\mathbb D^k\times N}$ has the product form \begin{equation} \label{def_a} \varphi^t(x,y)=(a^t(x), \varphi^t_{ N}(y)), \end{equation} where $a^t$ is a linear hyperbolic saddle flow. \begin{remark} Note that the locally fiberwise assumption in this paper is weaker than the one in~\cite{G}, as we no longer require $E^s\oplus E^u$ to be tangent to the $N$-fibers in the neighborhood $\mathbb D^k\times N$. This weakening is crucial for the examples which we consider here. Also note that the locally fiberwise assumption implies that the normal bundle to $N$ is trivial. This assumption is not crucial for our argument, but it simplifies notation and calculations considerably. \end{remark} Now we can blow up $M$ along $\{0\}\times N$ by replacing each point of $\{0\}\times N$ with the projective space of lines which pass through this point perpendicularly to $N$. The blown-up manifold $\hat M$ comes with a canonical {\it blow-down map} $\pi\colon\hat M\to M$ which collapses each projective space to its base point. The preimage $\pi^{-1}(\{0\}\times N)\simeq \field{R} P^{k-1}\times N$ is called {\it the exceptional set.} In the smooth category, $\hat M$ is the result of replacing $\mathbb D^k\times N$ with $(\mathbb D^k\#\field{R} P^k)\times N$. We will write $\tilde\mathbb D^k$ for $(\mathbb D^k\#\field{R} P^k)$. If the flow $\varphi^t\colon M\to M$ is locally fiberwise at $N$ then it induces a flow $\hat\varphi^t\colon \hat M\to\hat M$ such that the diagram \begin{equation} \label{diag} \xymatrix{ \hat M\ar_\pi[d]\ar^{\hat \varphi^t}[r] & \hat M\ar_\pi[d]\\ M\ar^{\varphi^t}[r] & M } \end{equation} commutes. The induced flow $\hat\varphi^t\colon \hat M\to\hat M$ may or may not be partially hyperbolic. \begin{theoremM} \label{thm_main} Let $\varphi^t\colon M\to M$ be a partially hyperbolic flow with $C^1$ invariant splitting $E^s\oplus E^c\oplus E^u$ and let $N\subset M$ be an invariant Anosov submanifold of $M$. Assume that the dynamics is locally fiberwise in a neighborhood of $N$. Let $\hat\varphi^t\colon \hat M\to\hat M$ be the induced flow on $\hat M$. Then there exists a partially hyperbolic flow $\tilde\varphi^t\colon\hat M\to \hat M$ which coincides with $\hat\varphi^t$ outside of a neighborhood of the exceptional set. \end{theoremM} The Main Theorem builds on the earlier work~\cite{G}.
However, strictly speaking, it is not a generalization of the results in~\cite{G}. Indeed, in~\cite{G} the author showed that the blown-up flow $\hat\varphi^t$ is itself partially hyperbolic under more restrictive assumptions, most importantly the {\it domination assumption}, which ensures that the dynamics along the Anosov submanifold is sufficiently fast compared to the center. In this paper we have fully disposed of the domination assumption and, most interestingly, the Main Theorem applies to examples in which $\hat\varphi^t$ is not partially hyperbolic. The proof of the Main Theorem relies on some tools developed in~\cite{G}, but also develops different technology for controlling the returns. The key basic ingredient of the proof is the {\it slow-down} construction in the neighborhood of the Anosov submanifold, which provides a remedy for the absence of domination. Consequently, unlike the results of~\cite{G}, the construction here can only be used for flows and not for diffeomorphisms. The benefit of the slow-down construction is that we can also produce volume preserving examples, which was impossible with the techniques of~\cite{G}. We proceed to describe an application of our theorem in the setting of geodesic flows on compact complex hyperbolic manifolds. Let $M$ be a compact complex hyperbolic manifold of complex dimension $n$ (real dimension $2n$). One can realize $M$ as a quotient space of the complex hyperbolic space $\H_\field{C}^n$ by an action of a cocompact lattice in the group of biholomorphic isometries, $\Gamma\subset SU(n,1)$. Assume that there exists a compact totally geodesic complex curve $N\subset M$. Then, up to conjugating the lattice $\Gamma$, the embedding $N\subset M$ is induced by the first coordinate embedding $\H_\field{C}^1\subset \H_\field{C}^n$. Now consider the geodesic flow on the unit tangent bundle, $\phi^t\colon T^1M\to T^1M$. We view $\phi^t$ as a partially hyperbolic flow with $\dim E^s=\dim E^u=1$. Because $N$ is totally geodesic, $\phi^t$ restricts to $T^1N$. We blow up $T^1N\subset T^1M$. It is easy to see that the induced flow $\hat\phi^t\colon\widehat{T^1M}\to \widehat{T^1M}$ is not partially hyperbolic, because it has periodic orbits with dominated splittings of different dimension signatures. Further, we can check (see Section~\ref{section_example}) that all the other assumptions of the Main Theorem are satisfied as well. Hence we obtain the following corollary. \begin{corollary} \label{cor_main} Let $M$ be a compact complex hyperbolic manifold and let $N\subset M$ be a totally geodesic complex curve. Then the blow up $\widehat{T^1M}$ of $T^1M$ along $T^1N$ supports a partially hyperbolic flow $\tilde\varphi^t\colon\widehat{T^1M}\to \widehat{T^1M}$. Moreover, the flow $\tilde\varphi^t$ can be chosen to be an arbitrarily $C^\infty$-small perturbation of $\hat\phi^t$. \end{corollary} Note that if $\phi^t$ preserves a smooth volume $m$ then $\hat\phi^t$ preserves a smooth measure $\pi^*(m)$. However, the density of $\pi^*(m)$ vanishes on the exceptional set. Nevertheless, following the idea of Katok and Lewis~\cite{KL}, we adapt our Main Theorem to the conservative setting. \begin{add} \label{add} Let $N\subset M$ and $\phi^t\colon M\to M$ be as in the Main Theorem. Assume that $\phi^t$ preserves a smooth volume $m$ which has product form in the neighborhood $\mathbb D^k\times N$; that is, $m|_{\mathbb D^k\times N}=vol\otimes vol_N$, where $vol$ is the standard Euclidean volume on $\mathbb D^k$ and $vol_N$ is a smooth $\phi^t|_N$-invariant volume on $N$.
Then there exists a partially hyperbolic flow on $\hat M$ which preserves a smooth non-degenerate volume. \end{add} The following is a non-trivial corollary. \begin{corollary} \label{cor_main2} Let $M$ be a compact complex hyperbolic manifold and let $N\subset M$ be a totally geodesic complex curve. Then the blow up $\widehat{T^1M}$ of $T^1M$ along $T^1N$ supports a volume preserving partially hyperbolic flow $\tilde\varphi^t\colon\widehat{T^1M}\to \widehat{T^1M}$. \end{corollary} Finally, we remark that, similarly to~\cite[Section 3]{G}, one can take multiple blow-ups as well as connected sums along Anosov submanifolds and produce partially hyperbolic flows on manifolds with even more complicated topology. \section{The proof of the Main Theorem} \label{section2} \subsection{Outline of the proof} The partially hyperbolic splitting $TM=E^s\oplus E^c\oplus E^u$ for $\phi^t\colon M\to M$ induces a splitting $T\hat M=\hat E^s\oplus\hat E^c\oplus \hat E^u$ which is invariant under $D\hat\phi^t\colon T\hat M\to T\hat M$. It can be checked in local coordinates that, because the partially hyperbolic splitting is $C^1$, the induced splitting $\hat E^s\oplus\hat E^c\oplus \hat E^u$ is continuous. Under an additional domination assumption on $\varphi^t$ at $N$ (and also a stronger locally fiberwise assumption) the latter splitting is partially hyperbolic; this situation was examined in~\cite{G}. However, in general, this splitting is not partially hyperbolic. To recover partial hyperbolicity we modify $\hat\phi^t$ in the neighborhood of the exceptional set. Recall that, by the locally fiberwise assumption, in the neighborhood of $N$ the generator of the flow is given by $$ \frac{\partial\phi^t}{\partial t}(x,y)=X(x)+Y(y), $$ where $X$ is the vector field on $\mathbb D^k$ which generates the hyperbolic saddle $a^t$ and $Y$ is the generator of $\varphi^t_N$. We consider a smooth bump function $\rho\colon\mathbb D^k\to \field{R}$ which is radially symmetric, that is, $\rho(x)=\bar\rho(\|x\|)$, where the smooth function $\bar \rho$ verifies \begin{enumerate} \item $\bar\rho(s)=\rho_0<1$ for $s\le\delta$; \item $\bar\rho$ is strictly increasing on $(\delta,2\delta)$ and $|\bar\rho'(s)|<1/\delta$ for $s\in(\delta,2\delta)$; \item $\bar\rho(s)=1$ for $s\ge 2\delta$. \end{enumerate} Here the constant $\rho_0$ only depends on the contraction and expansion rates of $D\phi^t$ along the invariant subbundles. The constant $\delta$ will need to be chosen sufficiently small. Given such a bump function $\rho$, we replace the flow $\phi^t|_{\mathbb D^k\times N}$ with a new flow $\phi^t_\rho$ whose generator is given by a {\it slow-down of the saddle} $X$: \begin{equation} \label{eq_slow_down} \frac{\partial\phi^t_\rho}{\partial t}(x,y)=\rho X(x)+Y(y). \end{equation} Because $\rho=1$ near the boundary of $\mathbb D^k$, the flow $\phi^t_\rho$ extends to the rest of $M$ as $\phi^t$, and then the blown-up flow $\hat\phi^t_\rho$ is the posited partially hyperbolic flow. Now we briefly outline the proof of partial hyperbolicity before proceeding with a more detailed argument. First note that on the $\delta$-neighborhood of $N$ the flow $\phi^t_\rho$ is a direct product of the slow saddle $a^{\rho_0t}$ and $\phi^t_N$.
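(This direct-product form is an elementary verification, which we include for the reader's convenience: on $\mathbb D^k_{<\delta}$ we have $\rho\equiv\rho_0$, and the chain rule
\begin{equation*}
\frac{{\rm d}}{{\rm d}t}\,a^{\rho_0t}(x)=\rho_0\,\frac{\partial a^{s}}{\partial s}\Big|_{s=\rho_0t}(x)=\rho_0X\big(a^{\rho_0t}(x)\big)
\end{equation*}
shows that the first component of~(\ref{eq_slow_down}) generates precisely the time-rescaled saddle $a^{\rho_0t}$, so that $\phi^t_\rho=(a^{\rho_0t},\phi^t_N)$ there.)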
Therefore, by choosing $\rho_0$ small enough, the domination condition of~\cite{G} holds on the $\delta$-neighborhood, and the estimates provided in~\cite{G} yield partial hyperbolicity of $\hat\phi^t_\rho$ with respect to the splitting $T\hat M=\hat E^s\oplus\hat E^c\oplus\hat E^u$ on the $\delta$-neighborhood of the exceptional set. Also, by construction, $\hat\phi^t_\rho$ coincides with $\hat\phi^t$ outside the $2\delta$-neighborhood of the exceptional set. The main technical difficulty is that the splitting $\hat E^s\oplus\hat E^c\oplus\hat E^u$ does not remain invariant as orbits cross the transition region ($\delta\le s\le2\delta$). However, one can still consider cones centered at these non-invariant distributions and verify the Cone Criterion for partial hyperbolicity. In what follows we will only establish {\it the splitting into unstable and center-stable subbundles. } Roughly speaking, this follows from the fact that the damage done to the cones in the transition region ($\delta\le s\le2\delta$) is controlled uniformly in $\delta$: by the second property of $\bar\rho$ the gradient of $\rho$ is of size at most $1/\delta$, which is compensated by the size, of order $\delta$, of the saddle vector field on the transition region, while orbits spend a uniformly bounded time there. Because all our constructions are time-symmetric, repeating the arguments also yields a splitting into center-unstable and stable subbundles and hence full partial hyperbolicity. \subsection{Cones near the exceptional set} We will need to introduce more notation in order to proceed with the precise description of the cones and the estimates. Denote by $\tilde \mathbb D^k_{<\delta}\times N$ the $\delta$-neighborhood of the exceptional set, that is, the preimage $$ \pi^{-1}(\{x\in\mathbb D^k:\|x\|<\delta \}\times N). $$ Denote by $TN= E^s_N\oplus E^c_N\oplus E^u_N$ the Anosov splitting of the restriction $\phi^t_N$ ({\it i.e.,} $E^c_N$ is spanned by the generator of $\phi^t_N$) and by $( E^s_N\oplus E^c_N\oplus E^u_N)\oplus H$ the product splitting on $\tilde \mathbb D^k_{<\delta}\times N$. Given a small number $\omega>0$, define the cones on $\tilde \mathbb D^k_{<\delta}\times N$ \begin{equation} \label{eq_cones} \begin{split} \mathcal{C}^u_\omega(x,y)=\{v\in T_{(x,y)}(\tilde \mathbb D^k_{<\delta}\times N): \measuredangle(v, E^u_N)<\omega\}\\ \mathcal{C}^{cs}_\omega(x,y)=\{v\in T_{(x,y)}(\tilde \mathbb D^k_{<\delta}\times N): \measuredangle(v, E^s_N\oplus E^c_N\oplus H)<\omega\} \end{split} \end{equation} \begin{remark} The splitting $E^s_N\oplus (E^c_N\oplus H)\oplus E^u_N$ coincides with the splitting $\hat E^s\oplus\hat E^c\oplus \hat E^u$ on the exceptional set only. \end{remark} Recall that $\lambda<1<\mu$ are the constants from the definition of partial hyperbolicity~(\ref{def_ph_flow}). Also let $\lambda'\in(\lambda, 1]$ and $\mu'\in [1,\mu)$ be some constants for which we have \begin{equation*} c^{-1}(\lambda')^t\le \|Da^t (v)\|/\|v\| \le c(\mu')^t, \end{equation*} where $c>0$.\footnote{The constants $\mu'$ and $\lambda'$ can be chosen to be arbitrarily close to the ``outer'' and ``inner'' spectral radii of $a^t$ by choosing large $c>0$.} Here $a^t$ is the hyperbolic saddle given by the locally fiberwise structure~(\ref{def_a}) and $v\in T\mathbb D^k$. Now we pick a constant $\rho_0>0$, which enters the definition of the function $\rho$ in the previous subsection, such that we have the following inequality: \begin{equation} \label{eq_domination} \left(\frac{\lambda'}{\mu'}\right)^{\rho_0}>\max(\lambda, \mu^{-1}). \end{equation} This is the {\it domination condition}~\cite[(2.3)]{G} on the flow $\phi_\rho^t$.
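Explicitly, because $\lambda'/\mu'<1$ and $\max(\lambda,\mu^{-1})<1$, taking logarithms shows that~(\ref{eq_domination}) is equivalent to
\begin{equation*}
0<\rho_0<\frac{\ln\max(\lambda,\mu^{-1})}{\ln(\lambda'/\mu')}
\end{equation*}
(both logarithms are negative, so the right-hand side is positive and such a $\rho_0$ always exists); we record this elementary reformulation for the reader's convenience.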
This condition yields the required estimates on the cones on $\tilde \mathbb D^k_{<\delta}\times N$ for the blown-up flow. (In this paper we focus on the case when $\rho_0<1$, because if the domination condition holds with $\rho_0=1$, then our Main Theorem was already established in~\cite{G}.) Precisely, we have the following lemma. \begin{lemma} \label{lemma1} There exist $\omega>0$, $c>0$, $\kappa>1$ and $\delta_0>0$ such that for all $\delta<\delta_0$ there exists a Riemannian metric $\|\cdot\|_\delta$ on $\hat M$, which coincides with the metric $\|\cdot\|$ coming from $M$ outside the $\delta$-neighborhood of the exceptional set, such that the cone fields $\mathcal{C}^u_\omega$ and $\mathcal{C}^{cs}_\omega$ defined above are eventually (forward and backward) invariant under $D\phi^t_\rho$ and verify the following hyperbolic properties: \begin{enumerate} \item for all finite orbit segments $\{\phi^s_\rho(x,y), 0\le s\le t\}$ which are entirely contained in the $\delta$-neighborhood of the exceptional set and for all $ v\in \mathcal{C}^u_\omega(x,y)$ $$ \|D\phi^t_\rho(v)\|_\delta>\mu^t\|v\|_\delta,\,\, t\ge 0; $$ \item for all finite orbit segments $\{\phi^s_\rho(x,y), 0\le s\le t\}$ which are entirely contained in the $\delta$-neighborhood of the exceptional set, for all $v\in \mathcal{C}^u_\omega(x,y)$ and for all $w\in \mathcal{C}^{cs}_\omega(x,y)$ with $D\phi^t_\rho w\in \mathcal{C}^{cs}_\omega(\phi^t_\rho(x,y))$ $$ \frac{\|D\phi^t_\rho(v)\|_\delta}{\|v\|_\delta}>c\kappa^t\frac{\|D\phi^t_\rho(w)\|_\delta}{\|w\|_\delta}, \,\, t\ge 0. $$ \end{enumerate} \end{lemma} The proof of this lemma is the basic technical ingredient of the prequel paper~\cite{G}. More precisely, the construction of the appropriate Riemannian metric $\|\cdot\|_\delta$ is given in Section~5.1 of~\cite{G}. (For this construction we need to assume that the Riemannian metric $\|\cdot\|$ from the definition of partial hyperbolicity~(\ref{def_ph_flow}) on $\mathbb D^k\times N$ is a direct sum of the canonical flat metric and a metric on $N$. It was explained in Section 5.3.2 of~\cite{G} that such an assumption can be made without loss of generality.) Then Lemma~5.1 of~\cite{G} gives partial hyperbolicity of the splitting $E^s_N\oplus (E^c_N\oplus H)\oplus E^u_N$. Finally, the fact that the estimates hold for the vectors in the cones (with a proper choice of $\omega$) is proved in Section~5.3.4 of~\cite{G}. \subsection{Control along the center in the transition domain} Consider the transition domain $A_\delta\times N$, where $A_\delta=\tilde \mathbb D^k_{<2\delta}\cap \tilde \mathbb D^k_{>\delta}$. Recall that the Riemannian metric $\|\cdot\|_\delta$ restricted to this domain is the direct sum of the flat metric $\|\cdot\|$ and a metric on $N$. Also recall that the flow $\phi^t_\rho$ is generated by $\rho(x) X(x)+Y(y)$, $(x,y)\in A_\delta\times N$. It follows that, even though $\rho$ is not constant, the splitting $E^s_N\oplus E^c_N\oplus E^u_N \oplus H$ stays invariant within this domain. Note that because of the nature of the dynamics of the hyperbolic saddle (invariance under rescaling) and because $\rho\ge\rho_0$ with $\rho_0$ independent of $\delta$, there exists a uniform upper bound on the time $T$ which an orbit can spend in $A_\delta\times N$: \begin{equation} \label{eq_time} T\le {C_1}, \end{equation} where $C_1$ is a constant which depends on $a^t$ and $\rho_0$, but does not depend on $\delta$ and $\rho$. (Indeed, the rescaling $x\mapsto x/\delta$ conjugates the crossing of $A_\delta$ to the crossing of $A_1$ by a generator of the same form, with the speed factor $\rho\in[\rho_0,1]$, so the crossing time admits a bound independent of $\delta$.)
We proceed to explain how to control the extra distortion which occurs along the ``horizontal'' distribution $H$. Hence we focus on the dynamics of the reparametrized saddle flow $a^t_\rho$ generated by $\rho X$. The extra distortion which occurs along $H$ is due to the $\rho$-driven shear; hence the gradient of $\rho$ will appear. We will perform all calculations in the canonical Euclidean coordinates on $A_\delta$. Let $v\in T_xA_\delta$, let $v^t=Da^t_\rho v$, and let $v_0^t$ stand for the (isometric) translate of $v^t$ such that $v$ and $v_0^t$ have the same foot-point. Then $$ v_0^t=v+t D(\rho X)v+h.o.t. $$ Differentiating with respect to $t$ yields $$ \frac{d\|v_0^t\|}{dt}\Big|_{t=0}=\frac{\langle D(\rho X)v,v\rangle}{\|v\|}. $$ Further, $$ \langle D(\rho X)v,v\rangle =\langle (\nabla\rho\otimes X)v,v\rangle+\rho\langle DXv,v\rangle\le (C_2\delta\|\nabla\rho\|+C_3)\|v\|^2, $$ where $C_2$ and $C_3\, (=\log\mu')$ are constants which depend only on $a^t$; here we used that $\rho\le 1$ and that $\|X(x)\|\le C_2\delta$ on $A_\delta$, because the field $X$ is linear and $\|x\|\le 2\delta$ there. We conclude that there exist a constant $C_4$ and a small $t_0$ such that for all $t<t_0$ and for all $v\in T_xA_\delta$, $x\in A_\delta$, we have $$ \frac{\|Da_\rho^tv\|}{\|v\|}\le 1+C_4t\big(\delta\max_{A_\delta} \|\nabla\rho\| +1\big). $$ Now, using this inequality, we obtain the following lemma. \begin{lemma} \label{lemma_transition} Assume that an orbit segment $\{a_\rho^s(x), \, 0\le s\le T\}$ is entirely contained in $A_\delta$. Then for all $v\in T_xA_\delta$, $x\in A_\delta$, $$ \frac{\|Da_\rho^Tv\|}{\|v\|}\le C_5,\,\,\,\mbox{and}\,\,\,\, \frac{\|Da_\rho^Tv\|}{\|v\|}\ge \frac{1}{C_5}, $$ where $C_5$ is a constant which does not depend on $\rho$ and $\delta$ as long as $\delta\le 1$. \end{lemma} \begin{proof} Pick a large $m$ such that $T/m<t_0$; then \begin{multline*} \frac{\|Da_\rho^Tv\|}{\|v\|}=\prod_{i=0}^{m-1} \frac{\|Da_\rho^{(i+1)T/m}v\|}{\|Da_\rho^{iT/m}v\|}\le \\ \left(1+C_4\frac Tm\big(\delta\max_{A_\delta} \|\nabla\rho\| +1\big)\right)^m\to \exp\Big(C_4T\big(\delta\max_{A_\delta} \|\nabla\rho\| +1\big)\Big),\quad m\to\infty. \end{multline*} Now recall that, by the second condition on the bump function $\rho$, we have $\|\nabla\rho(x)\|=|\bar\rho'(\|x\|)|<1/\delta$. Using this and the upper bound on $T$ given by~(\ref{eq_time}) we obtain $$ \frac{\|Da_\rho^Tv\|}{\|v\|}\le\exp\left(C_4{C_1}\left(\delta\cdot\frac 1\delta+1\right)\right)=e^{2C_4C_1}\le C_5. $$ The second inequality of the lemma is derived by applying the same argument to the inverse flow $a^{-t}_\rho$. \end{proof} \subsection{Cones away from the exceptional set} To define the cones on $M\backslash(\tilde\mathbb D^k_{<2\delta}\times N)$ we use the same $\omega$ given by Lemma~\ref{lemma1} and let \begin{equation*} \begin{split} \mathcal{C}^u_\omega(p)=\{v\in T_{p}(M\backslash(\tilde\mathbb D^k_{<2\delta}\times N)): \measuredangle(v,\hat E^u)<\omega\}\\ \mathcal{C}^{cs}_\omega(p)=\{v\in T_{p}(M\backslash(\tilde\mathbb D^k_{<2\delta}\times N)): \measuredangle(v,\hat E^c\oplus \hat E^s)<\omega\} \end{split} \end{equation*} Because $\phi^t_\rho=\phi^t$ and $\|\cdot\|_\delta=\|\cdot\|$ on $M\backslash(\tilde\mathbb D^k_{<2\delta}\times N)$, the invariance and hyperbolicity properties of these cones, for orbit segments which stay in $M\backslash(\tilde\mathbb D^k_{<2\delta}\times N)$, follow from the partial hyperbolicity of the flow $\phi^t$. \subsection{Proof of partial hyperbolicity} To obtain the partially hyperbolic splitting $E^u_\rho\oplus E^{cs}_\rho$ for $\phi^t_\rho$ we use the cone criterion applied to $\mathcal{C}^u_\omega$ and $\mathcal{C}^{cs}_\omega$.
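For the reader's convenience, we recall the criterion in the form used here (a standard formulation, stated informally): if the closed cones are eventually strictly invariant, $D\phi^{T_0}_\rho\overline{\mathcal{C}^u}\subset{\rm int}\,\mathcal{C}^u\cup\{0\}$ and $D\phi^{-T_0}_\rho\overline{\mathcal{C}^{cs}}\subset{\rm int}\,\mathcal{C}^{cs}\cup\{0\}$ for some $T_0>0$, and vectors inside $\mathcal{C}^u$ are uniformly exponentially expanded relative to vectors whose derivative orbit stays inside $\mathcal{C}^{cs}$, then the intersections along orbits,
\begin{equation*}
E^u_\rho(p)=\bigcap_{t\ge0}D\phi^{t}_\rho\big(\mathcal{C}^u(\phi^{-t}_\rho(p))\big),
\qquad
E^{cs}_\rho(p)=\bigcap_{t\ge0}D\phi^{-t}_\rho\big(\mathcal{C}^{cs}(\phi^{t}_\rho(p))\big),
\end{equation*}
are continuous $D\phi^t_\rho$-invariant subbundles which form the posited dominated splitting $T\hat M=E^u_\rho\oplus E^{cs}_\rho$.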
We recall that on $\tilde\mathbb D^k_{<\delta}\times N$ the cone families are centered at $E^u_N$ and $E^s_N\oplus E^c_N\oplus H$, while on $M\backslash (\tilde\mathbb D^k_{<2\delta}\times N)$ the cone families are centered at $\hat E^u$ and $\hat E^c\oplus \hat E^s$. Note also that our cone families are not defined in the transition domain $A_\delta\times N$. However, we do not need to extend the cones there, because orbits spend a uniformly bounded time in $A_\delta\times N$. By the preceding discussion, the cones are eventually invariant and possess the hyperbolic properties required by the Cone Criterion as long as the orbit stays disjoint from $A_\delta\times N$. Hence we are left to analyze the case when $\phi^s_\rho(p)\in A_\delta\times N$, $0<s<T$, with $p$ and $\phi^T_\rho(p)$ in the boundary of $A_\delta\times N$. For the sake of concreteness we focus on the case when $p\in \partial(\tilde\mathbb D^k_{<\delta}\times N)$ and $\phi^T_\rho(p)\in \partial(\tilde\mathbb D^k_{>2\delta}\times N)$. (The other two cases, $p\in \partial(\tilde\mathbb D^k_{>2\delta}\times N)$, $\phi^T_\rho(p)\in \partial(\tilde\mathbb D^k_{<\delta}\times N)$ and $p\in \partial(\tilde\mathbb D^k_{>2\delta}\times N)$, $\phi^T_\rho(p)\in \partial(\tilde\mathbb D^k_{>2\delta}\times N)$, can be treated completely analogously.) Recall that the cone aperture $\omega$ is a fixed number given by Lemma~\ref{lemma1} and is independent of $\delta$. Also recall that $\hat E^s$, $\hat E^c$ and $\hat E^u$ are continuous distributions\footnote{Here we rely on the $C^1$ smoothness assumption for the partially hyperbolic splitting of $\phi^t$ in an essential way.} which coincide with $E^s_N$, $E^c_N\oplus H$ and $E^u_N$, respectively, on the exceptional set. Hence for all sufficiently small $\delta$ we have $$ dist(E^s_N\oplus E^c_N\oplus H(q), \hat E^s\oplus \hat E^c(q))<\frac\omega{10} $$ and $$ dist(E^u_N(q), \hat E^u(q))<\frac\omega{10} $$ for all $q\in \tilde \mathbb D^k_{<3\delta}\times N$. Because, locally in the neighborhood of the exceptional set, the flow $\phi^t_\rho$ preserves both splittings $E^u_N\oplus(E^c_N\oplus H)\oplus E^s_N$ and $\hat E^s\oplus\hat E^c\oplus\hat E^u$, it follows that \begin{equation*} \begin{split} D\phi_\rho^T(E^u_N(p))\subset \mathcal{C}^u_\omega(\phi^T_\rho(p)),\\ D\phi^{-T}_\rho(\hat E^c\oplus \hat E^s(\phi^T_\rho(p)))\subset \mathcal{C}^{cs}_\omega(p). \end{split} \end{equation*} Combining this observation with the control provided by Lemma~\ref{lemma_transition}, one can easily verify the following statement. \begin{lemma} \label{lemma3} There exist constants $C_6>0$ and $C_7>0$ such that for all sufficiently small $\delta>0$ and for all points $\{p, \phi^T_\rho(p)\}\subset\partial(A_\delta\times N)$ we have \begin{equation*} \begin{split} D\phi_\rho^T(\mathcal{C}_\omega^u(p))\subset \mathcal{C}^u_{C_6\omega}(\phi^T_\rho(p)),\\ D\phi^{-T}_\rho(\mathcal{C}^{cs}_\omega (\phi^T_\rho(p)))\subset \mathcal{C}^{cs}_{C_6\omega}(p),\\ \|D\phi^T_\rho v\|_\delta\ge C_7\|v\|_\delta, \, v\in \mathcal{C}^u_\omega(p),\\ \|D\phi^{-T}_\rho v\|_\delta\ge C_7\|v\|_\delta,\, v\in \mathcal{C}^{cs}_\omega(\phi^T_\rho(p)). \end{split} \end{equation*} \end{lemma} Now note that by decreasing $\delta$ we can make the return time to the $2\delta$-neighborhood of the exceptional set, $\tilde \mathbb D^k_{<2\delta}\times N$, as large as we wish. This observation combined with Lemma~\ref{lemma3} implies that $\mathcal{C}^u_\omega$ is eventually forward invariant and $\mathcal{C}^{cs}_\omega$ is eventually backward invariant for all sufficiently small $\delta$.
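Schematically (a heuristic summary of the mechanism rather than an additional estimate): one crossing of the transition region opens the aperture at worst from $\omega$ to $C_6\omega$, while during the time $T_R$ subsequently spent outside $\tilde\mathbb D^k_{<2\delta}\times N$ the hyperbolicity of $\phi^t$ contracts the cones around the invariant distributions at some exponential rate $c>0$, so that
\begin{equation*}
C_6\,\omega\,e^{-cT_R}<\omega \qquad\text{as soon as}\qquad T_R>\frac{1}{c}\ln C_6,
\end{equation*}
and the latter is guaranteed for all sufficiently small $\delta$.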
Finally, the exponential expansion of vectors in $\mathcal{C}^u_\omega$ and the domination of $\mathcal{C}^u_\omega$ over $\mathcal{C}^{cs}_\omega$ can be checked by a standard argument: subdividing the orbit into segments and pasting together the estimates given by Lemmas~\ref{lemma1} and~\ref{lemma3}, as well as the hyperbolicity of the cone families outside $\tilde \mathbb D^k_{<2\delta}\times N$. This argument takes advantage of the long return time to $\tilde \mathbb D^k_{<2\delta}\times N$ one more time. We suppress the detailed estimates as they are very standard. \section{Volume preserving modification via the Katok-Lewis trick} We first formulate a standard lemma. \begin{lemma} \label{lemma} Let $M$ be a smooth manifold equipped with a smooth non-degenerate volume form $m$. Assume that a flow generated by a smooth vector field $X$ preserves $m$. Consider a smooth function $\rho\colon M\to \field{R}$, $\rho>0$. Then the flow generated by $\rho X$ preserves $m/\rho$. \end{lemma} \begin{proof} By Cartan's formula $$ 0=\mathcal{L}_Xm=\iota_Xdm+d\iota_Xm=d\iota_Xm, $$ and similarly $\mathcal{L}_X(m/\rho)=d\iota_X(m/\rho)$. We calculate \begin{multline*} \mathcal{L}_{\rho X}(m/\rho)=\rho\mathcal{L}_X(m/\rho)+d\rho\wedge\iota_X(m/\rho)=\rho d\iota_X(m/\rho)+\frac1\rho d\rho\wedge \iota_Xm=\\ \rho d(\frac1\rho\iota_Xm)+\frac1\rho d\rho\wedge \iota_Xm= \rho\left(-\frac{1}{\rho^2} d\rho\wedge\iota_Xm+\frac1\rho d\iota_Xm \right)+\frac1\rho d\rho\wedge \iota_Xm=d\iota_Xm=0. \end{multline*} \end{proof} The goal of this section is to prove Addendum~\ref{add}. Recall that we assume that $\phi^t\colon M\to M$ preserves a smooth volume $m$ and $m|_{\mathbb D^k\times N} =vol\otimes vol_N$. Recall that $\phi^t_\rho$ is a slow-down of $\phi^t$ along $\mathbb D^k$. By Lemma~\ref{lemma}, the flow $\phi^t_\rho$ also locally preserves the smooth volume $m_\rho|_{\mathbb D^k\times N} =\frac1\rho vol\otimes vol_N$. Note that $m_\rho=m$ near the boundary and hence extends to a smooth $\phi^t_\rho$-invariant volume on the whole of $M$, which we still denote by $m_\rho$. Because $\rho=\rho_0$ is constant on $\mathbb D^k_{<\delta}$, we see that $m_\rho$ still has a product form, $\frac{1}{\rho_0} vol\otimes vol_N$, on $\mathbb D^k_{<\delta}\times N$. \subsection{Replacing the smooth structure} If we equip $\mathbb D^k$ with the standard Euclidean coordinates $(x_1, x_2,\ldots , x_k)$ then \begin{equation} \label{eq_vol} vol=dx_1\wedge dx_2\wedge \ldots \wedge dx_k. \end{equation} By commutativity of~(\ref{diag}), $\hat\phi^t_\rho$ preserves $\pi^*m_\rho$, which is a smooth measure away from the exceptional set. Let us examine the form of $\pi^*m_\rho$ at the exceptional set. Because $\pi$ is a product, we only need to look at the pullback of $vol$ to $\tilde \mathbb D^k$ under $\tilde \mathbb D^k\to \mathbb D^k$. Recall that $$ \tilde\mathbb D^k=\{(x_1,x_2,\ldots x_k, \ell): (x_1,x_2,\ldots x_k)\in \ell\} $$ and that the standard smooth charts for $\tilde\mathbb D^k$ are given by extending the standard charts for the projective space $\field{R} P^{k-1}$. Namely, the $i$-th chart is given by \begin{multline} \label{eq_chart} \Psi_i(u_1, u_2, \ldots u_k)=\\ (u_1u_i, u_2u_i,\ldots u_{i-1}u_i, u_i, u_{i+1}u_i, \ldots u_ku_i, [u_1:\ldots :u_{i-1}: 1: u_{i+1}:\ldots :u_k]). \end{multline} We can calculate the pull-back of $vol$: $$ d(u_1u_i)\wedge d(u_2u_i)\wedge\ldots\wedge du_i\wedge \ldots \wedge d(u_ku_i)=u_i^{k-1}du_1\wedge du_2\wedge\ldots\wedge du_k. $$ Hence, when $k>1$, the pull-back vanishes on the projective space.
To remedy the situation we follow the idea of Katok-Lewis (which they used to construct non-standard higher rank volume preserving group actions). Namely, we replace the smooth structure on $\mathbb D^k$ by declaring that $$ \Phi\colon \vec u\mapsto \|\vec u\|^\alpha \vec u, \,\alpha<0, $$ is a smooth chart near the origin ({\it i.e.,} by changing the smooth atlas). With respect to this chart the Euclidean norm of a vector $\vec u$ is given by \begin{equation} \label{eq_norm} \|\vec u\|_\textup{new}=\|\vec u\|^{1+\alpha}. \end{equation} Accordingly, we change the smooth structure on $M$ by declaring that $\Phi\times id_N\colon\mathbb D^k\times N\to M$ is a smooth chart at $N$. Note that $M$ equipped with the new smooth atlas, which we denote by $M^{\textup{new}}$, is obviously diffeomorphic to the original $M$. However, it is easy to check that $a^t_\rho\colon\mathbb D^k\to \mathbb D^k$ and, hence, $\phi^t_\rho\colon M^{\textup{new}}\to M^{\textup{new}}$ fail to be smooth. Accordingly, we replace the charts~(\ref{eq_chart}) for $\tilde \mathbb D^k$ by composing $\Psi_i$ and $\Phi$, that is, \begin{multline*} \Psi_i^{\textup{new}}(u_1, u_2, \ldots u_k)=\\ \big(f_\alpha(u_1, \ldots, u_{i-1}, u_{i+1}, \ldots u_k)\|u_i\|^\alpha(u_1u_i, u_2u_i,\ldots u_i, \ldots u_ku_i), [u_1:\ldots :1:\ldots :u_k]\big), \end{multline*} where $$ f_\alpha(u_1, \ldots, u_{i-1}, u_{i+1}, \ldots u_k)=(u_1^2+u_2^2+\ldots +u_{i-1}^2+1+u_{i+1}^2+\ldots +u_k^2)^{\alpha/2}. $$ Because the new smooth structure amounts to a mere reparametrization in the radial direction, the projective dynamics remains exactly the same. A direct calculation in charts shows that $\hat a_\rho^t\colon\tilde \mathbb D^k\to \tilde \mathbb D^k$ is smooth with respect to the new smooth structure. Hence $\hat\phi^t_\rho\colon \hat M^{\textup{new}}\to \hat M^{\textup{new}}$ is also smooth. Further, by an appropriate choice of $\alpha$ we can now guarantee that $\pi^*m_\rho$ is a non-degenerate volume on $\hat M^{\textup{new}}$. We present the chart calculation which determines the ``right'' value of $\alpha$. In order to simplify notation we perform this calculation in the first chart $\Psi_1^{\textup{new}}$. We also abbreviate $f_\alpha(u_2, u_3, \ldots u_k)$ to simply $f_\alpha$. Note that $$ df_\alpha\wedge du_2\wedge du_3\wedge\ldots \wedge du_k=0. $$ This is very helpful for the calculation: \begin{align*} d & (f_\alpha \|u_1\|^\alpha u_1)\wedge d(f_\alpha\|u_1\|^\alpha u_1 u_2)\wedge\ldots \wedge d(f_\alpha\|u_1\|^\alpha u_1u_k)=\\ & d(f_\alpha\|u_1\|^\alpha u_1)\wedge (u_2 d(f_\alpha\|u_1\|^\alpha u_1) + f_\alpha\|u_1\|^\alpha u_1 du_2 ) \wedge \ldots \wedge (u_k d(f_\alpha\|u_1\|^\alpha u_1) + f_\alpha\|u_1\|^\alpha u_1 du_k )=\\ & (f_\alpha\|u_1\|^\alpha u_1)^{k-1}d(f_\alpha\|u_1\|^\alpha u_1)\wedge du_2\wedge\ldots \wedge du_k=\\ & (f_\alpha\|u_1\|^\alpha u_1)^{k-1}\big(f_\alpha d(\|u_1\|^\alpha u_1)\wedge du_2\wedge\ldots \wedge du_k +\|u_1\|^\alpha u_1 df_\alpha\wedge du_2\wedge du_3\wedge\ldots \wedge du_k\big)=\\ & (f_\alpha\|u_1\|^\alpha u_1)^{k-1}(\alpha+1)f_\alpha\|u_1\|^\alpha\, du_1\wedge du_2\wedge \ldots \wedge du_k=(\alpha+1)f_\alpha^k\|u_1\|^{k\alpha} u_1^{k-1}\, du_1\wedge du_2\wedge \ldots \wedge du_k. \end{align*} Notice that $f_\alpha$ is a smooth function. Hence the pull-back of $vol$ is smooth and non-degenerate on $\hat M^{\textup{new}}$ when $k\alpha+k-1=0$, {\it i.e.,} $$ \alpha=-\frac{k-1}{k}. $$ \begin{remark} \label{remark2} It is crucial for this construction that the initial volume on $\mathbb D^k$ given by~(\ref{eq_vol}) has constant density.
Indeed, if we allow for a non-trivial density $\beta(x_1,\ldots x_k)$ and begin with $\beta dx_1\wedge dx_2\wedge \ldots \wedge dx_k$ instead, then all computations go through in the same way. However, the expression for the density after the blow-up in the chart $\Psi_i^{\textup{new}}$ will have an additional factor $$ \beta(f_\alpha(u_1, \ldots, u_{i-1}, u_{i+1}, \ldots u_k)\|u_i\|^\alpha(u_1u_i, u_2u_i,\ldots u_i, \ldots u_ku_i)), $$ which is not $C^1$ at the exceptional set given by $u_i=0$ (unless the Taylor coefficients of $\beta$ up to order $k$ vanish). Hence we have a positive continuous density which is not $C^1$ on the exceptional set. This issue, in fact, gives us an additional difficulty to overcome in the proof of Corollary~\ref{cor_main2}. \end{remark} \subsection{Partial hyperbolicity in the volume preserving setting} We now have a volume preserving flow $\hat\phi^t_\rho\colon \hat M^{\textup{new}}\to \hat M^{\textup{new}}$. Here we explain that this flow is also partially hyperbolic provided that the constant $\rho_0$ (from the definition of $\rho$) is chosen to be sufficiently small. Namely, we amend the domination condition~(\ref{eq_domination}) as follows: \begin{equation} \label{eq_domination2} \left(\frac{\lambda'}{\mu'}\right)^{\rho_0}>\max(\lambda, \mu^{-1}),\,\,\, \lambda< (\lambda')^{{\rho_0}/{k}},\,\,\, (\mu')^{{\rho_0}/{k}}<\mu. \end{equation} Clearly these inequalities are verified for a sufficiently small $\rho_0$. The proof of partial hyperbolicity is the same as the one given in Section~\ref{section2}. The only difference which requires some commentary is Lemma~\ref{lemma1} for $\hat\phi^t_\rho\colon \hat M^{\textup{new}}\to \hat M^{\textup{new}}$ under the condition~(\ref{eq_domination2}). Recall that the proof of this lemma mostly rests on Lemma~5.1 of~\cite{G}, and the proof of Lemma~5.1 is the only place which requires some adjustments. We indicate how~(\ref{eq_domination2}) must be used in the proof of Lemma~5.1. Recall that on a small neighborhood of the projective space the dynamics of $\hat a^t_\rho$ is given by $$ \hat a^t_\rho(s,v)=(\hat{\hat a}^t_\rho(s),\bar a^t_s(v)),\,\,\, s\in\mathbb R P^{k-1}, v\in\mathbb R_+, $$ where $\hat{\hat a}^t_\rho\colon\field{R} P^{k-1}\to\field{R} P^{k-1}$ is the projectivization of $a^t_\rho$ (which coincides with the restriction of $\hat a^t_\rho$ to $\field{R} P^{k-1}$) and $\bar a^t_s$ is the cocycle over $\hat{\hat a}^t_\rho$ given by the action of $a^t_\rho$ on lines (see the proof of Lemma~5.3 in~\cite{G}).\footnote{ One difference which appears is that even though, with respect to the new smooth charts $\Psi^\textup{new}$, $a^t_\rho$ still sends lines to lines, the cocycle $\bar a_s^t$ is no longer linear. This, however, does not present any additional difficulty.} The estimate on $\hat{\hat a}^t_\rho$ (Claim~5.4 of~\cite{G}) remains exactly the same, as the alteration of the smooth structure did not change the projective dynamics. The place where~(\ref{eq_domination2}) is needed is inequality~(5.16) of~\cite{G} (the estimate on the cocycle $\bar a_s^t$). Indeed, given a small $\vec u$, according to~(\ref{eq_norm}), we have the local estimate $$ \|a^t_\rho(\vec u)\|_\textup{new}=\|a^t_\rho(\vec u)\|^{1+\alpha}\le\big(c(\mu')^{\rho_0 t}\|\vec u\|\big)^{1+\alpha}=c^{1/k}(\mu')^{\rho_0t/k}\|\vec u\|_\textup{new}, $$ and similarly $$ \|a^t_\rho(\vec u)\|_\textup{new}\ge c^{-1/k}(\lambda')^{\rho_0t/k}\|\vec u\|_\textup{new}. $$ This affects the last inequality in the proof of Lemma~5.3 of~\cite{G}.
Namely, we obtain an exponential upper bound in $$ \max\left(\left(\frac{\lambda'}{\mu'}\right)^{\rho_0}, (\mu')^{\rho_0/k}\right) $$ (and, analogously, a lower bound with $(\lambda')^{\rho_0/k}$). Hence, in order for the rest of the proof to work, we need to use~(\ref{eq_domination2}) instead of~(\ref{eq_domination}). \section{The example} \label{section_example} In this section we introduce geodesic flows on complex hyperbolic manifolds in detail and then prove Corollaries~\ref{cor_main} and~\ref{cor_main2}. \subsection{Complex hyperbolic manifolds} First recall that the 1-dimensional complex hyperbolic space can be identified with the 2-dimensional real hyperbolic space, with metric equal to one quarter of the standard Poincar\'e metric. The linear fractional transformations form the group of holomorphic isometries (to generate the full group of isometries one also needs an anti-holomorphic transformation), which can be identified with $PSU(1,1)=\pm Id\backslash SU(1,1)$. Because of the $\frac14$ multiple in the expression for the metric, the curvature is $-4$ and the contraction and expansion rates of the geodesic flow on the complex hyperbolic space are twice as large. It follows that the full stable and unstable horocycles of geodesic flows on higher dimensional complex hyperbolic manifolds contain one-dimensional ``fast'' horocycles which correspond to the complex lines in the tangent bundle. This yields a partially hyperbolic splitting which is different from the Anosov one and makes the geodesic flow on a complex hyperbolic manifold suitable for the blow-up surgery. We begin by summarizing some standard material on complex hyperbolic manifolds. We mostly follow the lucid exposition by D.B.A. Epstein~\cite{epstein}. Consider the following Hermitian quadratic forms on $\field{C}^{n+1}$ of signature $(n,1)$: \begin{equation*} \begin{split} & Q(x)=\sum_{i=1}^{n}z_i\bar z_i-z_{n+1}\bar z_{n+1},\\ & \hat Q(x)=\sum_{i=1}^{n-1}z_i\bar z_i+z_n\bar z_{n+1}+\bar z_n z_{n+1}. \end{split} \end{equation*} These forms have the associated matrices \begin{equation*} \begin{split} & J=diag(1,1,\ldots 1, -1),\\ & \hat J=\left(\begin{array}{cc}Id & 0\\ 0 & J_0\end{array}\right), \end{split} \end{equation*} respectively. Here $J_0= \left( \begin{smallmatrix} 0 & 1\\ 1 & 0 \end{smallmatrix} \right)$. Let $SU(n,1;Q)$ and $SU(n,1;\hat Q)$ be the groups of $(n+1)\times(n+1)$ complex matrices which have determinant 1 and preserve the corresponding form. These groups are conjugate in $GL(n+1)$ by $$ T=\left(\begin{array}{cc}Id & 0\\ 0 & T_0\end{array}\right), $$ where $T_0= \frac{1}{\sqrt{2}}\left( \begin{smallmatrix} 1 & 1\\ -1 & 1 \end{smallmatrix} \right)$. Recall that the complex hyperbolic $n$-space $\mathbb H^n_\field{C}$ can be defined as $$ \mathbb H^n_\field{C}=\{ [x]\in \field{C} P^n: Q(x)<0\}. $$ Clearly the action of $SU(n,1;Q)$ on $\field{C}^{n+1}$ induces an action on $\mathbb H^n_\field{C}$ and, in fact, $SU(n,1)$ coincides with the group of biholomorphic isometries of $\mathbb H^n_\field{C}$. If $\Gamma$ is a discrete cocompact subgroup of $SU(n,1)$ acting on the right, then the orbit space $$ M=\mathbb H^n_\field{C}/\Gamma $$ is a closed complex hyperbolic manifold. Moreover, every closed complex hyperbolic manifold arises in this way. \subsection{The geodesic flow as a homogeneous flow} We describe $M$ and its unit tangent bundle as homogeneous spaces.
The group $SU(n,1;Q)$ acts transitively on $\mathbb H^n_\field{C}$ and the stabilizer of $[(0,0,\ldots 0,1)]$ is $$ \left\{ \begin{pmatrix} A & 0\\ 0 & \overline{\det A} \end{pmatrix}: \, A\bar A^t=Id \right\}\simeq U(n). $$ The stabilizer of a tangent vector is the group $W(n-1)$ given by\footnote{Notice that, by mapping to the $(n-1)\times(n-1)$ upper diagonal block $A$, the group $W(n-1)$ is a double cover of $U(n-1)$. It is curious to notice that, unlike in the real case, $W(n-1)$ is not isomorphic to $U(n-1)$. However, using the fact that $\det\colon U(n)\to U(1)$ is a trivial principal fiber bundle, one can check that $W(n-1)$ is diffeomorphic to $U(n-1)$.} $$ W(n-1)= \left\{ \begin{pmatrix} A & 0 & 0\\ 0 & \bar \lambda & 0\\ 0 & 0 & \bar\lambda \end{pmatrix}: \, A\bar A^t=Id, \lambda^2={\det A} \right\}. $$ Hence we have $$ M=U(n)\backslash SU(n,1; Q)/\Gamma,\;\;\;\;\; T^1M=W(n-1)\backslash SU(n,1; Q)/\Gamma. $$ The same descriptions work using $SU(n,1;\hat Q)$ as the underlying Lie group, with the embeddings of $W(n-1)$ and $U(n)$ conjugated by $T$. Also note that $W(0)=\{\pm Id\}$ and we will write $PSU(1,1)$ instead of $W(0)\backslash SU(1,1)$. From now on it will be more convenient to use only the form $\hat Q$, and we abbreviate $SU(n,1;\hat Q)$ to $SU(n,1)$. Now recall the Lie algebras $$\u(n-1)=\{A\in M_{n-1}: \bar A^\intercal=-A\}$$ and \begin{equation} \label{eq_su} \mathfrak {su}(n,1)=\mathfrak {su}(n,1,\hat Q)=\{B\in M_{n+1}: Tr(B)=0,\;\; \bar B^\intercal \hat J+\hat JB=0 \}. \end{equation} If we write a traceless matrix $B$ in block form, then $B\in\mathfrak {su}(n,1)$ if and only if $$B=\left(\begin{array}{cc}A & v\\ -J_0\bar v^\intercal & D\end{array}\right),$$ where $A\in\u(n-1)$ and $D= \left( \begin{smallmatrix} a & ib \\ ic & -\bar a \end{smallmatrix} \right) $, $a\in\field{C}$, $b, c \in \field{R}$. The geodesic flow $d_t\colon T^1M\to T^1M$ is given by $W(n-1)g\Gamma\mapsto d_tW(n-1)g\Gamma=W(n-1)d_tg\Gamma$, where $$ d_t=\left(\begin{array}{cc}Id& 0\\ 0 & d_t^0\end{array}\right),\;\;\mbox{with}\;\;\;\;d_t^0=\left(\begin{array}{cc}e^t& 0\\ 0 & e^{-t}\end{array}\right). $$ The strong stable and strong unstable horocycle subgroups are $$ h^{s/u}_t=\left(\begin{array}{cc}Id& 0\\ 0 & h_t^{s0/u0}\end{array}\right),\;\;\mbox{with}\;\;\;h_t^{s0}=\left(\begin{array}{cc}1& it\\ 0 & 1\end{array}\right),\;\;\; h_t^{u0}=\left(\begin{array}{cc}1& 0\\ it & 1\end{array}\right). $$ We refer to~\cite{FK} for a more detailed exposition of the geodesic flow as a homogeneous flow. \subsection{Totally geodesic holomorphic curve} The complex hyperbolic space $\mathbb H^1_\field{C}$ can be identified with $\{z_1=z_2=\ldots=z_{n-1}=0\}\cap \mathbb H^{n}_\field{C}$. The group of holomorphic isometries $SU(1,1)$ of $\mathbb H^1_\field{C}$ embeds into $SU(n,1)$ as the lower diagonal block. Let $\Gamma$ be a cocompact lattice in $SU(n,1)$ and let $\Gamma_1=SU(1,1)\cap\Gamma$. We assume that $\Gamma_1$ is a cocompact subgroup of $SU(1,1)$. Hence the embedding $\mathbb H^1_\field{C}\subset \mathbb H^n_\field{C}$ yields the embeddings \begin{equation*} \begin{split} & N=U(1)\backslash SU(1,1)/\Gamma_1\subset U(n)\backslash SU(n,1)/\Gamma=M,\;\;\mbox{and}\\ & T^1N=PSU(1,1)/\Gamma_1\subset W(n-1)\backslash SU(n,1)/\Gamma=T^1M, \end{split} \end{equation*} where $N$ is a totally geodesic one-dimensional complex curve.
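As an elementary consistency check of the formulas in the two preceding subsections (included for the reader's convenience), one verifies directly that the $2\times2$ blocks $d^0_t$ and $h^{s0}_t$ preserve the form associated with $J_0$ and hence generate one-parameter subgroups of $SU(1,1)$:
\begin{equation*}
\overline{d^{0}_t}^{\,\intercal}J_0\,d^{0}_t
=\begin{pmatrix}e^{t}&0\\0&e^{-t}\end{pmatrix}
\begin{pmatrix}0&1\\1&0\end{pmatrix}
\begin{pmatrix}e^{t}&0\\0&e^{-t}\end{pmatrix}=J_0,
\qquad
\overline{h^{s0}_t}^{\,\intercal}\big(J_0\,h^{s0}_t\big)
=\begin{pmatrix}1&0\\-it&1\end{pmatrix}
\begin{pmatrix}0&1\\1&it\end{pmatrix}=J_0,
\end{equation*}
where in the second computation we first evaluated $J_0h^{s0}_t=\left(\begin{smallmatrix}0&1\\1&it\end{smallmatrix}\right)$; both matrices clearly have determinant $1$. The computation for $h^{u0}_t$ is completely analogous.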
\subsection{Parametrization of the neighborhood and the geodesic flow} We introduce a parametrization of a neighborhood $\mathcal{U}$ of $PSU(1,1)$ in $W(n-1)\backslash SU(n,1)$. This parametrization will be constructed to be $\Gamma_1$-equivariant and, hence, will descend to a parametrization of a neighborhood of $T^1N$ in $T^1M$. Pick a small $\epsilon_0>0$ and take the following as a transversal to the Lie algebras of $SU(1,1)$ and $W(n-1)$. Using the block form from~(\ref{eq_su}), let $$ \mathbb D_{\epsilon_0}=\left\{\left(\begin{array}{cc}0& v\\ -J_0\bar v^\intercal & 0\end{array}\right)\in \mathfrak {su}(n,1),\;\;\; \mbox{where}\;\; \|v\|<\epsilon_0\right\}. $$ This is a $(4n-4)$-dimensional transversal spanned by the weak stable and unstable horocycles. Let $\Sigma=\Sigma_{\epsilon_0}=\exp (\mathbb D_{\epsilon_0})$. Now we define a parametrization $ p\colon \Sigma\times PSU(1,1) \to W(n-1)\backslash SU(n,1) $ of a neighborhood $\mathcal{U}=\mathcal{U}_{\epsilon_0}$ of $PSU(1,1)$ in $W(n-1)\backslash SU(n,1)$ as follows: \begin{equation} \label{eq_p} p(\sigma, u) = W(n-1)\sigma u. \end{equation} To verify that this is a well-defined parametrization for a sufficiently small $\epsilon_0$, it is sufficient to check that the map $P\colon W(n-1)\times\Sigma\times PSU(1,1)\to SU(n,1)$ given by $P(w,\sigma, u)=w\sigma u$ is a diffeomorphism onto its image, and that the image contains a neighborhood of $W(n-1)\times PSU(1,1)\subset SU(n,1)$. To do this we consider a metric $d$ on $SU(n,1)$ which is invariant under the right action of $PSU(1,1)$ and the left action of $W(n-1)$. One can obtain such a metric by starting with a right invariant Riemannian metric and then averaging with respect to the left action of the (compact) group $W(n-1)$. Notice that $T_{id}\Sigma$, $\mathfrak {su}(1,1)$ and ${\mathfrak w(n-1)}$ span the full Lie algebra $\mathfrak {su}(n,1)$, and, hence, $P$ is a local diffeomorphism on a neighborhood of $(id,id,id)$. More precisely, by choosing appropriately small $\epsilon_0>0$ and $r>0$, we have that the restriction of $P$ to the neighborhood $$\{w\in W(n-1): d(w,id)<r\}\times \Sigma\times \{u\in PSU(1,1): d(u, id)<r\}$$ is a local diffeomorphism onto its image. Further, because $P(w'w,\sigma, uu')=w'P(w,\sigma, u)u'$, we obtain that each point $P(w',id,u')$ has a neighborhood of uniform size (with respect to the metric $d$) entirely contained in the image of $P$. It remains to check that $P$ is one-to-one. Let $$ \delta_0=\sup_{\sigma\in\Sigma}d(id,\sigma). $$ Note that by choosing a smaller $\epsilon_0$ we can make $\delta_0>0$ as small as desired. Assume that $P(w_1,\sigma_1,u_1)=P(w_2,\sigma_2, u_2)$, {\it i.e.,} \begin{equation} \label{eq_1} w_2^{-1}w_1\sigma_1=\sigma_2u_2u_1^{-1}. \end{equation} Then, using~(\ref{eq_1}), the triangle inequality and the invariance properties of $d$, $$ d(w_2^{-1}w_1,u_2u_1^{-1})\le d(w_2^{-1}w_1, w_2^{-1}w_1\sigma_1)+d(\sigma_2u_2u_1^{-1}, u_2u_1^{-1})= d(id, \sigma_1)+d(\sigma_2, id)\le2\delta_0. $$ Recall that $W(n-1)\times PSU(1,1)$ is (explicitly) properly embedded in $SU(n,1)$. Hence the last inequality implies that both $w_2^{-1}w_1$ and $u_2u_1^{-1}$ are close to $id$. On the other hand, we have already shown that $P$ is a local diffeomorphism on a neighborhood of the identity. Hence~(\ref{eq_1}) implies that $w_2^{-1}w_1=id$, $u_2u_1^{-1}=id$ and $\sigma_1=\sigma_2$, proving that $P$ is injective. Finally, we let $\Gamma_1$ act on $\Sigma\times PSU(1,1)$ by $\gamma_1\colon(\sigma, u)\mapsto (\sigma, u\gamma_1)$.
Our parametrization is equivariant with respect to the right action of $\Gamma_1$ and hence descends to a parametrization of a neighborhood of $T^1N\subset T^1M$ by $\Sigma\times PSU(1,1)/\Gamma_1\simeq \Sigma\times T^1N\simeq \mathbb D_{\epsilon_0}\times T^1N$.\footnote{Notice that, in particular, we have shown that the normal bundle of $T^1N$ in $T^1M$ is trivial. This happens because $W(n-1)\cap PSU(1,1)=\{Id\}$. It was pointed out to us by Mike Davis that in general the normal bundle of $N$ in $M$ is twisted and the twisting is controlled by the Chern class.} \subsection{Proof of Corollary~\ref{cor_main}} Corollary~\ref{cor_main} follows from the Main Theorem provided that we verify the locally fiberwise assumption with respect to our parametrization. We write $v$ as a pair of column vectors, $v=(v_1,v_2)$, which parametrizes $\Sigma$. That is, $$ A(v_1,v_2)=\left(\begin{array}{cc}0& v\\ -J_0\bar v^\intercal & 0\end{array}\right) $$ and $\sigma(v_1,v_2)=\exp A(v_1,v_2)$. Notice that \begin{multline*} d_t\sigma(v_1,v_2)d_t^{-1}=d_t\exp A(v_1,v_2)d_t^{-1}\\ =\exp d_t A(v_1,v_2)d_t^{-1}=\exp A(e^{-t}v_1,e^tv_2)=\sigma(e^{-t}v_1,e^tv_2). \end{multline*} Now we can deduce the formula for the geodesic flow in the coordinates $(v_1,v_2,u)\in\Sigma\times PSU(1,1)$: \begin{multline*} d_t(v_1,v_2,u)=W(n-1) d_t\sigma(v_1,v_2)u=W(n-1) d_t\sigma(v_1,v_2)d_t^{-1} d_tu\\ =(e^{-t}v_1,e^tv_2, d_tu). \end{multline*} We conclude that with respect to the coordinates $(v_1,v_2,u)$ the geodesic flow is the product of a $(4n-4)$-dimensional hyperbolic saddle and the geodesic flow on a holomorphic curve. This verifies the locally fiberwise assumption of the Main Theorem for $d_t$ on $\mathcal{U}$. Finally, to see that the partially hyperbolic flow $\tilde \phi^t$ can be chosen to be arbitrarily close to $\hat\phi^t\colon\widehat{T^1M}\to\widehat{T^1M}$ in the $C^\infty$ topology, recall that we obtain $\tilde \phi^t$ by blowing up the reparametrized flow $\phi^t_\rho$. The reparametrization is localized in the neighborhood of $T^1N$ and is given by~(\ref{eq_slow_down}). The function $\rho$ has to be chosen so that~(\ref{eq_domination}) holds: \begin{equation*} \left(\frac{\lambda'}{\mu'}\right)^{\rho_0}>\max(\lambda, \mu^{-1}). \end{equation*} In the current setting $(\lambda')^{-1}=\mu'=e$ and $\lambda^{-1}=\mu=e^2$. Hence any value of $\rho_0<1$ works. It follows that the function $\rho$ can be chosen to be arbitrarily close to $1$ in the $C^\infty$ topology. Therefore $\phi^t_\rho$ can be arbitrarily $C^\infty$-close to $\phi^t$ and, accordingly, $\tilde\phi^t$ can be arbitrarily $C^\infty$-close to $\hat\phi^t$. \subsection{Proof of Corollary~\ref{cor_main2}} Corollary~\ref{cor_main2} does not immediately follow from Addendum~\ref{add}. The reason is that the pull-back $p^*vol$ of the Liouville volume form under the parametrization $p$ has the form $$ \alpha(v_1,v_2)\,\omega_0\wedge vol_{T^1N}, $$ where $\omega_0$ is the standard volume form on $\mathbb D_{\epsilon_0}$ and $vol_{T^1N}$ is the Liouville volume form on $T^1N$. Indeed, because the Liouville measure comes from the Haar measure on $SU(n,1)$ and $p$ is equivariant with respect to the right action of $PSU(1,1)$, the density $\alpha$ is independent of the $u$-coordinate. However, the dependence on $v_1$ and $v_2$ is non-trivial. Hence Addendum~\ref{add} does not apply directly (cf. Remark~\ref{remark2}). Our approach is to replace the flow $\phi^t$ with a different flow $\bar \phi^t$ to which Addendum~\ref{add} can be applied.
More precisely, on the neighborhood $\mathbb D_{\epsilon_0}\times T^1N$ we will let \begin{equation*} \label{new_flow} \bar \phi^t=\bar h\circ \phi^t\circ \bar h^{-1}, \end{equation*} where $\bar h=(h, id_{T^1N})$ and $h$ is a $C^1$-small diffeomorphism which tapers to the identity near the boundary of $\mathbb D_{\epsilon_0}$. Let $\omega_1=\alpha(v_1,v_2)\omega_0$. By rescaling $\omega_0$ if needed, we may assume that $\alpha(0,0)=1$. Denote by $a^t$ the saddle flow, $a^t(v_1,v_2)=(e^{-t}v_1, e^tv_2)$. Note that, because $\alpha$ is continuous and $a^t$-invariant, we also have $\alpha(0,v_2)=\alpha(v_1,0)=1$. \begin{lemma} \label{lemmata} For all sufficiently small $\epsilon_1\in (0,\epsilon_0)$ there exists a diffeomorphism $h\colon \mathbb D_{\epsilon_1}\to h(\mathbb D_{\epsilon_1})\subset \mathbb D_{\epsilon_0}$ such that $h_*\omega_1=\omega_0$ and $h$ commutes with the saddle flow, when defined: $$h\circ a^t=a^t\circ h.$$ \end{lemma} Before proving the lemma, we first finish the proof of Corollary~\ref{cor_main2}. First extend $h\colon \mathbb D_{\epsilon_1}\to h(\mathbb D_{\epsilon_1})$ to a diffeomorphism $h\colon \mathbb D_{\epsilon_0}\to \mathbb D_{\epsilon_0}$ which equals the identity near the boundary. Then replace the geodesic flow $\phi^t$ with the new flow $\bar \phi^t$ by replacing the restriction $\phi^t|_{ \mathbb D_{\epsilon_0}\times T^1N}$ with $(h\circ a^t\circ h^{-1}, \phi^t_{T^1N})$. Clearly $\bar\phi^t$ is smoothly conjugate to $\phi^t$. Hence $\bar\phi^t$ is partially hyperbolic with a $C^1$ splitting. Further, $T^1N$ remains $\bar\phi^t$-invariant and, because $h$ commutes with $a^t$ on $\mathbb D_{\epsilon_1}$, we have $$ \bar \phi^t(v_1,v_2,u)=\phi^t(v_1,v_2,u)=(a^t(v_1,v_2),\phi^t_{T^1N}(u)) $$ for $(v_1,v_2)\in \mathbb D_{\epsilon_1}$. Hence the locally fiberwise assumption is also verified for $\bar \phi^t$. On the neighborhood $\mathbb D_{\epsilon_1}\times T^1N$ the $\bar\phi^t$-invariant volume has the form $\bar h_*(\omega_1\wedge vol_{T^1N})=h_*\omega_1\wedge vol_{T^1N}=\omega_0\wedge vol_{T^1N}$, and hence the assumption of Addendum~\ref{add} is also verified. We conclude that Addendum~\ref{add} applies to $\bar\phi^t$ and yields Corollary~\ref{cor_main2}. \hfill $\square$ Hence it only remains to prove the Lemma. \begin{proof}[Proof of Lemma~\ref{lemmata}] The idea of the proof is to perform an $a^t$-equivariant Moser trick.\footnote{While such a trick is standard in the context of equivariant cohomology, when the acting group is compact, see e.g.~\cite{GS}, we were unable to locate any prior work on a ``locally equivariant'' Moser trick. While we only carry it out here for the saddle singularity, presumably it is much more general.} To obtain the diffeomorphism $h$ such that $h_*\omega_1=\omega_0$, consider the path $\omega_s=(1-s)\omega_0+s\omega_1$, $s\in[0,1]$. Then, by the Poincar\'e Lemma, there exists a form $\eta$ such that $$ d\eta=\omega_1-\omega_0=\gamma\omega_0, \,\,\,\gamma=\alpha-1. $$ Further, we can choose $\eta$ to be $a^t$-invariant; that is, $\mathcal{L}_X\eta=0$, where $X=\partial a^t/\partial t$. We proceed with the proof assuming this fact, which we will verify later via a direct calculation. Because the $\omega_s$ are non-degenerate forms, the equation $$ \iota_{Y_s}\omega_s=-\eta $$ uniquely defines a ``time-dependent'' vector field $Y_s$.
Then, by Cartan's formula, we have for every $s\in[0,1]$ $$ \mathcal{L}_{Y_s}\omega_s=(\iota_{Y_s}\circ d+d\circ \iota_{Y_s})\omega_s=d\iota_{Y_s}\omega_s=-d\eta=-\frac{d}{ds}\,\omega_s. $$ Hence by integrating $Y_s$ we obtain a one-parameter family of diffeomorphisms $h_s$ such that $$ (h_s)_*\omega_0=\omega_s. $$ Recall that the volume forms $\omega_s$ are invariant under $X$, {\it i.e.,} $\mathcal{L}_X\omega_s=0$. Hence, using $\mathcal{L}_X\eta=0$, $$ 0=\mathcal{L}_X(\iota_{Y_s}\omega_s)=\iota_{Y_s}(\mathcal{L}_X\omega_s)+\iota_{\mathcal{L}_XY_s}\omega_s=\iota_{\mathcal{L}_XY_s}\omega_s, $$ which implies that $[X, Y_s]=\mathcal{L}_XY_s=0$ because $\omega_s$ is non-degenerate. It follows that $a^t$ commutes with $h_s$, as posited (the flows of commuting vector fields commute). Note that $h_s(0,0)=(0,0)$. It remains to set $h=h_1^{-1}$ (so that $h_*\omega_1=\omega_0$) and restrict it to a sufficiently small disk $ \mathbb D_{\epsilon_1}$ such that $ h(\mathbb D_{\epsilon_1})\subset \mathbb D_{\epsilon_0}$. Hence, to finish the proof of the Lemma it remains to show that the form $\eta$ can be chosen to be $a^t$-invariant. For the sake of notation we prove this fact only when $\dim\mathbb D_{\epsilon_0}=4$. The general case can be addressed in the same way. We use coordinates $(x_1,x_2,x_3, x_4)$. Then $\omega_0=dx_1\wedge dx_2\wedge dx_3\wedge dx_4$ and the generator of $a^t$ is given by $$ X=-x_1\frac{\partial}{\partial x_1}-x_2\frac{\partial}{\partial x_2}+x_3\frac{\partial}{\partial x_3}+x_4\frac{\partial}{\partial x_4}. $$ First let $\eta_0=x_1dx_2\wedge dx_3\wedge dx_4$. Then $d\eta_0=\omega_0$ and, using the Cartan formula $\mathcal{L}_X\eta_0=\iota_X\omega_0+d\iota_X\eta_0$, it is straightforward to verify that $\mathcal{L}_X\eta_0=0$, {\it i.e.,} $\eta_0$ is $a^t$-invariant. Our goal now is to find an $a^t$-invariant function $\beta$ such that $d(\beta\eta_0)=\gamma\omega_0$; then $\eta=\beta\eta_0$ is the posited $a^t$-invariant primitive. We have $$ d(\beta\eta_0)=\beta\, d\eta_0+d\beta\wedge\eta_0=\beta\omega_0+x_1\frac{\partial\beta}{\partial x_1}\omega_0. $$ Hence we need to solve the equation $$ \beta+x_1\frac{\partial\beta}{\partial x_1}=\frac{\partial}{\partial x_1}(x_1\beta)=\gamma $$ for $\beta$. Then $$ \beta(x_1, x_2, x_3, x_4)=\frac{1}{x_1}\int_0^{x_1}\gamma(q,x_2,x_3,x_4)dq $$ is a solution. We check that $\beta$ is $a^t$-invariant. Let $\Gamma=\Gamma(x_1,x_2,x_3,x_4)=\int_0^{x_1}\gamma(q,x_2,x_3,x_4)dq$. Because $\gamma$ is $a^t$-invariant we have \begin{multline*} 0=\int_0^{x_1}X\gamma(q,x_2,x_3,x_4)dq=-\int_0^{x_1}q\frac{\partial}{\partial q}\gamma(q,x_2,x_3, x_4)dq-x_2 \frac{\partial}{\partial x_2} \Gamma+x_3 \frac{\partial}{\partial x_3} \Gamma+x_4 \frac{\partial}{\partial x_4} \Gamma\\ =-x_1\gamma(x_1,x_2,x_3,x_4)+\Gamma(x_1,x_2,x_3,x_4)-x_2 \frac{\partial}{\partial x_2} \Gamma+x_3 \frac{\partial}{\partial x_3} \Gamma+x_4 \frac{\partial}{\partial x_4} \Gamma =\Gamma+X\Gamma, \end{multline*} where we used integration by parts and the fundamental theorem of calculus. Now differentiating $x_1\beta=\Gamma$ with respect to $X$ gives $$ X(x_1)\beta+x_1X\beta=X\Gamma, $$ which yields $$ x_1X\beta=X\Gamma+x_1\beta=X\Gamma+\Gamma=0. $$ Hence $X\beta=0$. Finally, by the product formula, $$ \mathcal{L}_X(\beta\eta_0)=X(\beta)\eta_0+\beta\mathcal{L}_X\eta_0=0. $$ \end{proof}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} There are a number of astrophysical systems in which the magnetic Rayleigh-Taylor instability (RTI) is expected to be important, for example accretion onto magnetized compact objects (Arons \& Lea 1976; Wang \& Nepveu 1983; Wang, Nepveu, \& Robertson 1984), buoyant bubbles generated by radio jets in clusters of galaxies (Robinson et al. 2005; Jones \& De Young 2005; Ruszkowski et al. 2007), the emergence of magnetic flux from the solar interior and the formation of flux tubes (Isobe et al. 2005; 2006; and references therein), and both at the contact discontinuity between the shocked circumstellar medium and ejecta in supernova remnants (Jun \& Norman 1996a; b), and in the thin shell of ejecta swept up by a pulsar wind (Hester et al. 1996, hereafter H96; Bucciantini et al. 2004). For the idealized case of two inviscid, perfectly conducting fluids separated by a contact discontinuity with a uniform magnetic field ${\bf B}$ parallel to the interface undergoing constant acceleration $g$, a linear analysis (Chandrasekhar 1961) demonstrates that for modes parallel to the magnetic field there is a critical wavelength \begin{equation} \lambda_c = \frac{B^2}{g(\rho_h - \rho_l)} \end{equation} below which instability is completely suppressed, where $\rho_h$ and $\rho_l$ are the densities in the heavy and light fluids respectively, and we have chosen a system of units in which the magnetic permeability $\mu=1$. At larger wavelengths, the growth rate is reduced compared to the hydrodynamic case, and there is a peak growth rate occurring at a wavelength $\lambda_{\rm max} = 2\lambda_c$. Equation (1) can also be thought of as a condition on the magnetic field: instability on a scale $L$ parallel to the field requires $B < B_c \equiv [Lg(\rho_h - \rho_l)]^{1/2}$. Modes perpendicular to the field are unaffected, and have the same growth rate and stability condition as in pure hydrodynamics. The highly anisotropic nature of the growth rate of modes parallel versus perpendicular to the field suggests that it is important to study the nonlinear regime of the magnetic RTI in full three dimensions. One of the most compelling applications of the magnetic RTI is to the structure of the optical filaments in the Crab nebula (H96). As the low density, highly magnetized synchrotron nebula powered by the Crab pulsar sweeps up the stellar ejecta, the interface between the two is RT unstable, resulting in radially oriented filaments that point to the center of the synchrotron nebula. H96 have suggested that the long, widely spaced filaments observed by HST are a consequence of suppression of short wavelength modes due to the magnetic field in the synchrotron plasma, since the filaments bear no resemblance to the turbulent mixing layer that results from the RTI in hydrodynamics (Dimonte et al. 2004, hereafter D04), but are a better fit to the morphology that results from the magnetic RTI in two dimensions (Jun, Norman, \& Stone 1995). However, in purely hydrodynamic simulations of the RTI in the spherically expanding shell swept up by the pulsar wind, Jun (1998) was able to reproduce the morphology and separation of the fingers remarkably well, suggesting that geometrical effects might be important. Since the simulations were performed in two dimensions assuming axial symmetry, it is unclear if the isolated fingers will persist in three-dimensional hydrodynamics, or whether MHD effects are indeed essential. More recently, Bucciantini et al.
(2004) have presented the most realistic numerical models of the filaments in the Crab nebula to date, using two-dimensional MHD simulations of the expanding shell and nebula. In their more realistic treatment of the conditions at the unstable interface, they find that fields near equipartition can completely suppress the RTI. However, as they point out, because of the anisotropic nature of the magnetic RTI, three-dimensional effects are critical and need to be included in future studies. Since fully three-dimensional MHD simulations in a spherical geometry which can follow the expanding shell of ejecta are computationally challenging, it is worthwhile to begin investigation of three-dimensional effects in the idealized plane-parallel case. Recently, we have described an extensive study of the nonlinear evolution of the magnetic RTI in a three-dimensional planar geometry, focusing on the effect of varying the field strength on the growth rate of fingers and bubbles at the interface, and on the amount of mixing between the heavy and light fluids (Stone \& Gardiner 2007, hereafter Paper I). To facilitate comparison with previous experimental and computational studies of the hydrodynamic RTI (D04), a relatively modest difference in density between the fluids was chosen, that is $\rho_h/\rho_l = 3$. In this paper, we extend the study by considering a more astrophysically relevant density ratio, $\rho_h/\rho_l = 10$, and by focusing on the effect of strong magnetic fields (in the sense that $\lambda_c \sim L$, where $L$ is the size of the computational domain) on the suppression of the RTI in three dimensions. A number of studies of magnetic buoyancy instabilities in three dimensions have been reported, both in the context of the emergence of new magnetic flux from the solar photosphere (Wissink et al. 2000; Fan 2001; Isobe et al. 2005; 2006), and the nonlinear evolution of the Parker instability in the galactic disk (Kim, Ostriker, \& Stone 2002; Kosi\'{n}ski \& Hanasz 2007). In these studies, the magnetic field is strong enough that the ratio of thermal to magnetic pressure is $\beta \sim 1$, so that the magnetic field not only plays a significant role in the support of the initial equilibrium state, but also is responsible for driving buoyant motions. In contrast, we study weak fields in the sense that $\beta \gg 1$, so that the magnetic field plays almost no role in the vertical equilibrium, and the RTI is driven by the buoyancy of the fluid. Our goal is to study how magnetic fields affect the evolution of the classical RTI. Our primary conclusions are that in three dimensions, uniform magnetic fields do not suppress the RTI due to the growth of interchange modes perpendicular to the field. In fact, since magnetic fields suppress secondary Kelvin-Helmholtz instabilities and therefore mixing between the heavy and light fluids, the growth rate of bubbles and fingers is actually enhanced in the magnetic RTI compared to the hydrodynamic case. We explore a variety of initial field configurations, including uniform fields, uniform fields in the light fluid only, and fields with a rotation at the interface, and we show that well separated, long fingers reminiscent of the optical filaments in the Crab nebula can be generated if the magnetic field direction changes through large angles over a distance short compared to $\lambda_c$.
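As a quick numerical illustration of these scalings, the following minimal script (ours, not part of the original study) evaluates equation (1) and the critical field $B_c$ for the dimensionless parameters adopted in \S 2 below, taking the $\mu=1$ units at face value.
\begin{verbatim}
# Critical field and wavelength for the magnetic RTI, eq. (1).
rho_h, rho_l = 10.0, 1.0   # heavy/light densities (Section 2)
g, L = 0.1, 0.1            # vertical acceleration, horizontal box size

B_c = (L * g * (rho_h - rho_l)) ** 0.5    # no unstable modes within L if B > B_c
B_0 = 0.6 * B_c                           # initial field strength used in the runs
lam_c = B_0**2 / (g * (rho_h - rho_l))    # shortest unstable parallel wavelength

print(f"B_c = {B_c:.2f}, B_0 = {B_0:.3f}, lambda_c/L = {lam_c/L:.2f}")
# -> lambda_c/L = 0.36, consistent with the ~0.35 quoted in Section 2;
#    the fastest-growing parallel mode lies at lambda_max = 2*lambda_c.
\end{verbatim}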
\section{Method} We solve the equations of ideal MHD with a constant vertical acceleration ${\bf g} = (0,0,g)$ \begin{eqnarray} \frac{\partial \rho}{\partial t} + {\bf\nabla\cdot} \left(\rho{\bf v}\right) & = & 0 \label{eq:cons_mass} \\ \frac{\partial \rho {\bf v}}{\partial t} + {\bf\nabla\cdot} \left(\rho{\bf vv} - {\bf BB}\right) + {\bf \nabla} P^{*} & = & \rho {\bf g} \\ \frac{\partial {\bf B}}{\partial t} - {\bf\nabla}\times \left({\bf v} \times {\bf B}\right) & = & 0 \\ \frac{\partial E}{\partial t} + \nabla\cdot((E + P^*) {\bf v} - {\bf B} ({\bf B \cdot v})) & = & \rho {\bf v} \cdot {\bf g} \label{eq:cons_energy} \end{eqnarray} The total pressure $P^* \equiv P + ({\bf B \cdot B})/2$, where $P$ is the gas pressure. The total energy density $E$ is \begin{equation} E \equiv \epsilon + \rho({\bf v \cdot v})/2 + ({\bf B \cdot B})/2 ~, \end{equation} where $\epsilon$ is the internal energy density. We use an ideal gas equation of state for which $P = (\gamma - 1) \epsilon$, where $\gamma$ is the ratio of specific heats. We use $\gamma=5/3$ in this paper. In relativistic plasmas such as synchrotron nebulae $\gamma=4/3$ would be more appropriate. However, given our choice for the numerical value of $g$ and the size of the computational domain (see below), the flows induced by the magnetic RTI are subsonic and nearly incompressible. Thus, varying the adiabatic index should have little effect on the results reported here. The three-dimensional computational domain is of size $L \times L \times 2L$, where $L=0.1$. Periodic boundary conditions are used in the transverse ($x-$ and $y-$) directions, and reflecting boundary conditions are used at the top and bottom. The origin of the $z-$coordinate is centered in the domain, so that the computations span $-0.1 \leq z \leq 0.1$. The upper half of the domain ($z>0$) is filled with heavy fluid of density $\rho_h=10$, while in the lower half ($z<0$) the density of the light fluid is $\rho_l=1$. Thus, the Atwood number \begin{equation} A \equiv \frac{\rho_h - \rho_l}{\rho_h + \rho_l} = \frac{9}{11}. \end{equation} Most of the experimental studies of the hydrodynamic RTI used to validate computational methods (D04) use $A=1/2$. In Paper I we studied the magnetic RTI with $A=1/2$; in this paper we study the high Atwood number regime which is more relevant to most astrophysical systems. Initially the gas is in magnetohydrostatic equilibrium, with the amplitude of the gas pressure chosen so that the sound speed in the light fluid is $c_s=1$ at the interface, thus \begin{equation} P^*(z) = \frac{3}{5} - g \rho z + B^{2}/2 \end{equation} The sound crossing time in the light fluid at the interface is $t_s =0.1$. We choose $g=0.1$, thus the ratio of the free-fall velocity to the sound speed is $\sqrt{gL}/c_s = 0.1$, implying the induced flows should be nearly incompressible. The magnetic field is initialized with an amplitude $B_{0}$ that is chosen to be a fixed fraction of the critical field strength $B_c$ above which there are no unstable modes within $L$; we choose $B_{0} \approx 0.6B_c$. From equation (1), the critical wavelength below which all modes are suppressed is $\lambda_c/L \approx 0.35$. The field is always initially parallel to the interface, but has a variety of different initial configurations which will be described along with the results of each individual simulation in \S 3. The ratio of the gas to magnetic pressure at the interface is $\beta = 480$ in all the runs.
Thus, although we study strong fields in the sense that $\lambda_c \sim L$, the energy density in the field is far below equipartition, and the field plays little role in the initial vertical equilibrium. Increasing the size of the computational domain $L$ to accommodate a larger $\lambda_c$ associated with stronger, near-equipartition fields, or simply lowering the sound speed to decrease $\beta$ in the present simulations, will both produce flows in which $\sqrt{gL}/c_s$ is increased, and which are therefore more compressible. To seed the RTI, zone-to-zone perturbations are added to the vertical velocity $v_z$ throughout the volume, with an amplitude that is kept small compared to the sound speed and is tapered toward the vertical boundaries; thus $\delta v_z = A_0 R \left[1 + \cos{(\pi z/L)}\right]$, where $A_0=0.005$ and $R$ is a random number between $-1$ and $1$, so that the perturbation vanishes at $z=\pm L$. The maximum perturbed velocity is only 1\% of the sound speed in the light fluid at the interface. The computations presented in this paper use Athena, a new MHD code that implements a recently developed Godunov method for compressible MHD (Gardiner \& Stone 2005; 2007). A complete description of the algorithm, including the results of an extensive series of test problems, is given in these references. All of the simulations use a grid of $256 \times 256 \times 512$, which means the critical wavelength $\lambda_c$ is resolved with nearly 100 grid points. Our numerical resolution is much higher than used in previous work (for example, the few 3D simulations reported in Jun, Norman \& Stone 1995), and uses stronger initial fields. In Paper I, we presented a comprehensive convergence study of our numerical algorithms for the magnetic RTI in two dimensions, focused on the amount of mass mixing due to numerical effects. For single mode perturbations, features such as the shape and height of the interface at a fixed time were converged with 32 or more grid points per wavelength. The amount of mixing between heavy and light fluids was also found to converge to zero at first order, independent of the magnetic field strength. First order convergence is consistent with mixing being proportional to the width of the interface between the heavy and light fluids (which cannot be smaller than one grid cell). With multimode perturbations, the degree of mixing does not converge to zero, because at higher resolution there are more small scale distortions in the interface which increase its surface area. Convergence of the mixing to zero with multimode perturbations therefore requires the introduction of surface tension or viscosity to create a fixed small scale below which the interface is smooth. Instead, in this paper we compute all solutions at the highest resolution we can afford (so that they are all at the same, high Reynolds and magnetic Reynolds numbers), and focus on the {\em comparison} of features with different field strengths and geometries that occur at these Reynolds numbers. In this way, we can isolate the effects of changing field strength or geometry from the effect of changing the numerical diffusion. \section{Results} We describe the results from simulations that use a variety of different initial magnetic field configurations. \subsection{Uniform Field versus Hydrodynamics} We begin with the evolution in a uniform magnetic field parallel to the interface and along the $x-$axis, ${\bf B} = (B_{0},0,0)$.
For comparison purposes, we also describe the results of a hydrodynamical calculation, computed with the same parameters, grid, and numerical algorithm. Hereafter, we refer to the uniform field case as run U, and the hydrodynamical simulation as run H. Figure 1 shows isosurfaces of the density, along with slices of the density at the edges of the computational domain, at two times during the evolution for both runs H and U. The hydrodynamic case shows the typical evolution of the RTI into a turbulent mixing layer (D04). In hydrodynamics, short wavelength modes grow fastest, thus at early times the instability is dominated by bubbles and fingers at small scales. Secondary Kelvin-Helmholtz instabilities, associated with the shear between the rising and descending plumes, give the tips a ``mushroom-cap'' appearance, and cause some of the fingers to break up. At late times, mergers between bubbles favor growth of structure at larger scales, while secondary instabilities continue to distort the plumes and cause mixing. Note the large fraction of fluid at intermediate densities (green colors) at late times in the hydrodynamic case. In the uniformly magnetized simulation run U, the early nonlinear phase of the RTI shows the strongly anisotropic structure of modes introduced by the magnetic field. Perpendicular to the field (along the $y-$axis), interchange modes grow fastest at short wavelength, whereas along the field short wavelengths are suppressed. As a result, the interface develops a filamentary structure that is strongly reminiscent of the structure reported by Isobe et al. (2005; 2006) in simulations of flux tubes emerging from the solar photosphere. At late times, fluid flowing along flux tubes collects into bubbles and fingers at the tips (similar to the nonlinear evolution of the Parker instability, Kim et al. 1998), which are then wrinkled by interchange instability at their surface. This produces large-scale smooth bubbles. Slices along the edges of the domain reveal far less mixing than in the hydrodynamic case. Note that in three dimensions the magnetic RTI in a uniform field does not result in isolated, long fingers comparable to the observations of the Crab (H96). One measure of the rate of growth of the RTI is the time evolution of the height $h$ of bubbles from the interface. Self-similar arguments (D04) predict that \begin{equation} h = \alpha Agt^2 \end{equation} where $\alpha$ is a dimensionless constant. The experimentally measured value is $\alpha = 0.057 \pm 0.008$. Without specialized front-tracking algorithms that can prevent mixing between the fluids at the grid scale, most numerical methods give a value for $\alpha$ which is about a factor of two smaller (D04, Paper I). Figure 2 plots the location of the tips of the rising bubbles as a function of time in both runs H and U. At any instant in time, we define the vertical location of the tips of the bubbles as the point where the horizontally averaged fraction of the heavy fluid \begin{equation} \langle f_h \rangle = \int_x \int_y f_h dx dy/L^2 \end{equation} is 0.95, where for incompressible fluid with $\rho_h=10$ and $\rho_l=1$ the fraction of heavy fluid in any cell is $f_h = (\rho -1)/9$. (To account for the effects of compressibility, we choose $f_h=0.95$ rather than one to mark the boundary of the mixing region.) From figure 2, we see that after an initial rise, the increase in $h$ in both hydrodynamics and MHD follows the expected self-similar scaling equation (9).
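A minimal NumPy sketch of this bubble-height diagnostic may make it concrete (the function name, array layout, ascending-$z$ ordering, and the fit noted at the end are our assumptions about implementation details not specified in the text):
\begin{verbatim}
import numpy as np

def bubble_height(rho, z, rho_h=10.0, rho_l=1.0, threshold=0.95):
    """Lowest z at which the horizontally averaged heavy-fluid fraction
    <f_h> still reaches `threshold`; bubbles have not risen past this z.
    rho has shape (nz, ny, nx); z holds the nz cell heights, ascending."""
    f_h = (rho - rho_l) / (rho_h - rho_l)   # f_h = (rho - 1)/9 here
    f_h_avg = f_h.mean(axis=(1, 2))         # horizontal average over x, y
    return z[np.where(f_h_avg >= threshold)[0].min()]

# The self-similar constant alpha of equation (9), h = alpha*A*g*t**2,
# then follows from a linear fit, e.g. alpha = np.polyfit(A*g*t**2, h, 1)[0].
\end{verbatim}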
In hydrodynamics, the slope $\alpha =0.03$, whereas in MHD the slope $\alpha =0.05$, where we have ignored the final point in both cases since it is undoubtedly affected by the reflecting boundary conditions at the top of the domain ($h/L=1$). It is clear that the bubbles rise {\em faster} in MHD than in the hydrodynamic RTI, in agreement with the results at $A=1/2$ (Paper I). As discussed in \S 3.5, this is primarily due to the reduction of mixing in the MHD case. \subsection{Field in Light Fluid Only} In the magnetic RTI associated with some astrophysical systems, such as the interface between the pulsar wind nebula and the supernova ejecta in the Crab nebula, only the light fluid is expected to be strongly magnetized. Given that the results in \S 3.1 show that strong, uniform fields do not suppress the RTI, it is unlikely that a strong field in the light fluid only will inhibit instability. Nonetheless, it is of interest to investigate the structure of the nonlinear regime in this case. Figure 3 plots isosurfaces of the density, along with slices of the density at the edges of the computational domain, at two times during the evolution of a simulation in which the magnetic field is uniform, parallel to the interface, and along the $x-$axis, ${\bf B} = (B_{0},0,0)$ in the light ($\rho=1$) fluid only, with ${\bf B} = 0$ everywhere else. As before, we choose $B_0 = 0.6 B_{c}$. The gas pressure is increased in the heavy fluid above the interface so that the total pressure is continuous, that is, exact vertical equilibrium is maintained initially. We refer to this calculation as run LF hereafter. It is instructive to compare the structures observed in figure 3 with the uniform field case (bottom row of figure 1). At the early time in run LF, the fingers and bubbles are not elongated along the field as in run U. Instead, the structure is nearly isotropic, similar to the hydrodynamic case but with less small scale structure. At late time, large smooth bubbles emerge in run LF that appear isotropic. Overall, the three-dimensional structure of the fingers and bubbles in run LF is intermediate between the hydrodynamic and uniformly magnetized runs. The density slices at the edge of the domain show much less mixing than in run H. The height of the bubbles and degree of mixing (revealed by the density slices at the edge of the domain) show much more similarity to run U; these will be analyzed further in \S 3.5. Once again, we see that in three dimensions, strong uniform fields in the light fluid are unable to inhibit the RTI. \subsection{Fields with a Discontinuous Rotation} In most astrophysical systems, there is no reason to expect that the magnetic field has the same geometry in both the light and heavy fluids. Since only unstable modes parallel to the magnetic field are suppressed, rotating the field near the interface will inhibit modes in multiple directions. To investigate this regime we have performed simulations in which the magnetic field is rotated discontinuously through large angles at the interface. In the first simulation, hereafter referred to as run R45, the field is rotated through $45^{\circ}$, that is ${\bf B} = (B_{0},0,0)$ in the light fluid ($z<0$), and ${\bf B} = (B_{0}/\sqrt{2},B_0/\sqrt{2},0)$ in the heavy fluid ($z>0$). In the second simulation, hereafter referred to as run R90, the field is rotated through $90^{\circ}$, that is ${\bf B} = (B_{0},0,0)$ in the light fluid, and ${\bf B} = (0,B_0,0)$ in the heavy fluid. In both cases, there is a current sheet at the interface.
Figure 4 plots isosurfaces of the density, along with slices of the density at the edges of the computational domain, at two times during the evolution of both runs R45 and R90. At early times in both cases, filamentary structures appear at an angle roughly half-way between the direction of the field in the heavy and light fluids (about $22^{\circ}$ with respect to the $x-$axis in R45, $45^{\circ}$ with respect to the $x-$axis in R90), most likely because the magnetic tension forces, which are proportional to ${\bf k}\cdot{\bf B}$, are minimized in this direction. Analysis of the magnetic field and velocity perturbations at this time shows flow occurs along the field lines into the ridges. Pure interchange modes are no longer possible with rotated fields, and the growth of perturbations requires ${\bf k}\cdot{\bf B} \ne 0$ in either the light or heavy fluids, or both. Note the amplitude of perturbations is much smaller in R90 at early times in comparison to R45, and only long wavelength modes are present. By $t/t_s=40$, the interface in both R45 and R90 is strongly distorted by the RTI. Interestingly, the structure of modes at late times is quite different from previous cases. Isolated, large scale bubbles dominate, with very smooth surfaces and bulbous tips. The spacing between bubbles is roughly the critical wavelength $\lambda_c$. The structure of R90 is particularly interesting. The fingers in this case are nearly isotropic, and have a length which significantly exceeds $\lambda_c$. The surface of the bubbles is extremely smooth, whereas in R45 there is some evidence for wrinkling due to interchange modes. The interface between the light and heavy fluids is remarkably thin in R90. At the faces of the volume, the density slices reveal very little material at densities intermediate to the values of the isosurfaces (at $\rho=9.9$ and 1.1 respectively). Thus, the faces of the volume are transparent, and the interior of the bubbles is clearly visible. Contrast this to run H in figure 1, where the slice at the edge of the domain revealed a turbulent mixing layer. A more quantitative analysis of mixing in all the runs will be presented in \S 3.5. \subsection{Fields with Continuous Rotation} In the previous section, the direction of the magnetic field was changed discontinuously at the interface, resulting in a current sheet. It is possible that in many astrophysical systems, the direction of the field varies smoothly on many different scales. To investigate the effect this might have on the magnetic RTI, we consider the case where the field amplitude is constant everywhere, while the direction is rotated through a large angle (we choose $90^{\circ}$) over a finite vertical distance $L_{rot}$. More specifically, for $z<-L_{rot}/2$ the field is ${\bf B} = (B_{0},0,0)$, for $-L_{rot}/2<z<L_{rot}/2$ the direction of the field varies linearly with $z$ from along the $x-$axis to along the $y-$axis while the amplitude is fixed at $B_{0}$, and for $z>L_{rot}/2$ the field is ${\bf B} = (0,B_{0},0)$. Note this geometry results in a current layer of constant amplitude in the region $-L_{rot}/2<z<L_{rot}/2$. If $L_{rot} \ll \lambda_{c}$ we expect this initial configuration to evolve similarly to the discontinuous rotation run R90 studied in \S3.3, while if $L_{rot} \gg \lambda_{c}$ it will evolve like the uniform field case run U studied in \S3.1. Here we choose $L_{rot}/\lambda_{c} = 0.5$, and hereafter refer to this calculation as run C90.
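For concreteness, a short sketch of this initial field (our own illustration: the linear-in-$z$ rotation at fixed amplitude is as described above, while the function name, trigonometric parameterization, and array handling are ours):
\begin{verbatim}
import numpy as np

def initial_field_C90(z, B0, L_rot):
    """Run C90: |B| = B0 everywhere; the direction rotates linearly with z
    from the x-axis (z <= -L_rot/2) through 90 degrees to the y-axis
    (z >= +L_rot/2).  The limit L_rot -> 0 recovers run R90."""
    theta = np.clip((z / L_rot + 0.5) * np.pi / 2.0, 0.0, np.pi / 2.0)
    return B0 * np.cos(theta), B0 * np.sin(theta), np.zeros_like(theta)
\end{verbatim}
Because the amplitude is fixed while the direction turns, this construction produces the constant-amplitude current layer in $-L_{rot}/2<z<L_{rot}/2$ noted above.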
In fact, we find at late times the structure that emerges from the magnetic RTI in run C90 is remarkably similar to that produced in run R90. For example, at $t/t_s=40$, isolated smooth bubbles are produced with similar sizes and spacing as observed in figure 4. Conversely, we find at early times there is little suppression of interchange modes. This is not surprising: the fastest growing modes occur at the largest wavenumbers, and therefore have wavelengths much smaller than $L_{rot}$. On these scales, the early evolution of the interface is as if the field were uniform (run U). Figure 5 plots the height of bubbles in run C90 versus the uniform field case run U. The evolution of both is very similar. Our results confirm the intuition that changes in the direction of the field at the interface must be on very small scales to inhibit the interchange modes. \subsection{Mixing} The amount of mixing between the heavy and light fluids strongly affects the rate at which bubbles and fingers are displaced from the interface (D04, Paper I). The presence of even a weak field can, through the action of tension forces at small scales, significantly reduce mixing in comparison to hydrodynamics (Paper I). Here we investigate mixing in the simulations presented above. Figure 6 plots the height of bubbles above the interface, defined using the point at which $\langle f_h \rangle=0.95$, for runs U, LF, R45 and R90. In every case, at late times the height $h$ grows as $t^2$, as expected (equation 9). However, in R45, and especially R90, growth is delayed. The slopes of the lines, as measured by the dimensionless constant $\alpha$, are remarkably similar, $\alpha = 0.050 \pm 0.005$. The decrease in the slope at late time in each model is most likely an influence of the upper (reflecting) boundary condition rather than a divergence from the self-similar evolution. It is useful to define a mixing parameter $\Theta$ as \begin{equation} \Theta =4\langle f_h f_l \rangle \end{equation} The peak value of $\Theta$ is one, and occurs when $f_h=f_l=1/2$, that is in regions that are fully mixed. In regions that are not mixed, $\Theta=0$. Figure 7 plots the profile of $\Theta$ versus height in runs H, U, and R90. Note that in the hydrodynamic case run H, the mixing parameter is close to the theoretical maximum near the original location of the interface $z=0$. This quantifies the result which is evident from a visual inspection of figure 1, namely that the hydrodynamic RTI results in a turbulent mixing zone which is dominated by material at intermediate densities. On the other hand, the uniformly magnetized case run U shows far less mixing than run H, again a fact which is evident from the lower panels of figure 1. Finally, run R90 shows the least mixing, with a peak value of $\Theta$ which is five times smaller than the peak value in run H. At the peak of $\Theta$ in run R90, the horizontally averaged fraction of heavy fluid is $\langle f_h \rangle = 0.2$, indicating the fingers of heavy fluid occupy a much smaller volume than the bubbles of light fluid. Again, all of these results are evident from figure 4, where density slices at the edge of the domain show the mixing layer between the two fluids to be very thin, and that the bubbles of light fluid fill most of the volume. \subsection{Magnetic Field Evolution} Self-similar arguments predict that the height $h$ of bubbles and fingers should grow in proportion to $t^2$ (equation 9).
Since the amount of gravitational binding energy released by the descending plumes of heavy fluid is proportional to $h^2$ (the energy released is the product of the mass involved in the flow and the distance it falls, both of which are proportional to $h$), we expect the kinetic and magnetic energies in the RTI to grow in proportion to $t^{4}$. Figure 8 plots each component of the volume averaged kinetic and magnetic energies, normalized by the initial volume averaged magnetic energy $B_{0}^{2}/2$, in runs U and R90 versus $t^4$. The magnetic energies associated with the horizontal components of the field have their initial values subtracted as appropriate, thus the plot shows the fractional change in the energies. Note that at early times, the curves are straight lines, indicating the expected scaling with $t^4$ is recovered. In each case the vertical components of the energies dominate, and in the horizontal directions there is rough equipartition between kinetic and magnetic energies. The kinetic energy associated with the $y-$component of the velocity is larger than that of the $x-$component in run U, since motions perpendicular to the field are favored by interchange modes, which we have shown are important in strong uniform fields. The amplification of the vertical field is larger in run R90 in comparison to run U, although the total magnetic energy in all components of the field is roughly the same at late times in both cases. This is another indication that run R90 leads to ordered, vertical flows and columns, whereas larger amplitude horizontal flows (and therefore more mixing and less ordered fingers) are produced in uniform fields. In both cases the magnetic RTI leads to significant amplification of magnetic energy. It is worth emphasizing that the time evolution of volume averaged quantities as shown in figure 8 is controlled by a number of dimensionless parameters, including the ratio of the critical wavelength to the size of the computational domain $\lambda_c/L$ and the ratio of the free fall to the sound crossing time $\sqrt{\lambda_c/g}/t_{s}$. We have studied strong fields in the sense that $\lambda_c/L \sim 1$. If the calculations were repeated with identical parameters but in a much larger domain, then the evolution would resemble the weak field simulations presented in Paper I. That is, if the calculations presented here were continued in a much larger domain, so that the height of the fingers and bubbles $h \gg \lambda_c$, then the flow would become more hydrodynamic, a turbulent mixing zone would emerge, and once the vertical field is a large fraction of the initial horizontal value, the $t^4$ scaling of energies is broken (Paper I). \section{Summary and Discussion} We have shown that strong, uniform magnetic fields cannot suppress the RTI in three dimensions. In the linear regime only long wavelength modes parallel to the magnetic field are unstable; interchange modes perpendicular to the field are unaffected, and grow at the same rate as in hydrodynamics. We have shown that in the nonlinear regime this leads to a highly anisotropic structure. At late times, flow of plasma along field lines produces large bubbles much as in the Parker instability (Kim et al. 1998), which in turn become wrinkled by secondary interchange modes. In fact, in one respect strong magnetic fields actually increase the growth rate of the RTI in the nonlinear regime, in comparison to hydrodynamics.
Magnetic fields inhibit secondary instabilities and mixing between the light and heavy fluid. In turn, the reduction of mixing causes bubbles (fingers) to rise (fall) more rapidly. In fact, the tension force associated with even weak fields can suppress mixing on small scales, and increase the growth rate of bubbles and fingers (Paper I). The suppression of a turbulent mixing layer with even a weak magnetic field could be relevant to a number of astrophysical systems, for example the evolution of supernova remnants (Jun \& Norman 1996a; b), or the mixing of metals from early generations of stars into the intergalactic medium. Although uniform magnetic fields do not suppress the RTI, we have shown that if the direction of the field changes through a large angle at the interface, this can delay instability, and significantly alter the structures that emerge in the nonlinear regime. We have studied field geometries that have both discontinuous rotations of the field at the interface, and continuous rotations over a finite vertical length $L_{rot}$ at the interface. When $L_{rot}/\lambda_c \leq 1$, the nonlinear regime in both these cases is similar, and consists of isolated, smooth, long fingers and bubbles. There are several obvious applications of the magnetic RTI to astrophysical systems. The first is to the penetration of infalling plasma into the magnetosphere of an accreting neutron star (Arons \& Lea 1976; Wang \& Nepveu 1983; Wang, Nepveu, \& Robertson 1984), or to the confinement of the plasma along field lines at the polar caps (Litwin, Brown, \& Rosner 2001). A related problem, confinement of strong vertical flux tubes at the galactic center, has been investigated by Chandran (2001). In each of these cases (except the last), the field is rigidly anchored at a boundary, whereas we have studied the magnetic RTI with periodic boundary conditions in both horizontal directions. The nonlinear evolution of interchange modes will probably be strongly affected by no-slip boundary conditions on the magnetic field at the edges of the domain, so our results may only have limited applicability to these systems. The second is to the stability of buoyant bubbles generated by radio jets in clusters of galaxies. Robertson et al. (2004) and Jones \& De Young (2005) have presented two-dimensional simulations of the morphology of magnetized, buoyant bubbles. However, it is clear that three-dimensional effects will be very important in this problem, due to the very different behavior of modes perpendicular versus parallel to the field. Recent work in 3D by Ruszkowski et al. (2007) confirms that magnetic fields are unable to suppress shredding of bubbles in three dimensions unless the coherence length of the field is larger than the size of the bubble. In fact, magnetic fields in cluster gas can alter the dynamics in ways that go beyond the obvious effects of magnetic stresses. Due to the long mean-free-paths of particles, anisotropic heat conduction and viscosity are important in hot cluster gas. Balbus (2000) has shown that the convective stability criterion is fundamentally altered in a plasma with anisotropic heat conduction (see also Chandran \& Dennis 2006). Numerical simulations of the nonlinear regime of this instability (Parrish \& Stone 2005; 2007) reveal vigorous convective motions that are quenched only when the plasma becomes isothermal. Thus, inclusion of magnetic fields in the dynamics of buoyant bubbles alters the basic plasma dynamics in ways that warrant further investigation.
Finally, our results have application to the optical filaments being swept up by the pulsar wind in the Crab nebula (H96). It is tempting to compare the long, well-separated fingers generated with rotated fields (run R90, figure 4) with the filaments. However, due to its orientation, figure 4 shows the morphology of the rising bubbles of light fluid, whereas the observations reveal the morphology of the descending fingers of heavy fluid. In figure 9 we plot isosurfaces of the density at $\rho=1.1$, and slices along the face of the computational domain showing only regions where $\rho > 1.1$, with the orientation flipped relative to figure 4, that is with the descending fingers of heavy fluid oriented upward. Note the long, thin fingers of dense gas along the $y-z$ plane are in good agreement with the morphology of the fingers in the Crab. This calculation includes field in both the heavy and light fluids, although in the Crab only the light fluid (synchrotron nebula) is expected to be strongly magnetized. It is an open question whether a strong field in the light fluid only, whose direction changes on scales small compared to $\lambda_c$, can reproduce the structures seen in figure 9. Note that a uniform field in the light fluid only (run LF, see figure 3) results in structure markedly different from that in figure 9. Of course, to accurately model the fingers in the Crab nebula, it is important to include density compression due to cooling, to study fields near equipartition (which will also increase the importance of compressibility), and perhaps most importantly, to include the geometrical effects produced by the spherically expanding shell. Previous two-dimensional studies have shown that purely hydrodynamical instability in the appropriate geometry can produce the structure of the Crab filaments (Jun 1998), and more recently it has been shown that strong fields in this same geometry significantly alter the picture (Bucciantini et al. 2004). In this paper, we have emphasized the importance of three dimensional effects on the magnetic RTI. It will be important to extend these fully three-dimensional results to the expanding wind geometry appropriate to the Crab. \acknowledgements We thank Jeff Hester for discussions. Simulations were performed on the Teragrid cluster at NCSA, the IBM Blue Gene at Princeton University, and on computational facilities supported by NSF grant AST-0216105. Financial support from DoE grant DE-FG52-06NA26217 is acknowledged.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Methodology} This section first gives a broad briefing of SR methods (Section 2.1), then explains the procedure through which low-resolution data are obtained (Section 2.2), and lastly focuses on the three types of models developed to super resolve the Arctic sea ice concentration (Section 2.3). \subsection{SR Background} Over the past two decades, SR, the process of obtaining high-resolution images from their low-resolution counterparts, has been widely researched in a variety of fields, such as satellite imaging, medical image processing, facial image analysis, text image analysis, sign plate reading, biometric recognition, etc. Based on the context of applications, SR techniques can be grouped into a broad taxonomy, as suggested in Fig.1 in \cite{nasrollahi2014super}. The SR problem we aim to solve in this work is to reconstruct the high-resolution field of a geophysical variable given its low-resolution counterpart. Specifically, we take a single low-resolution image as input and produce its corresponding high-resolution reconstruction as output. Thus, according to \cite{nasrollahi2014super}, our problem falls into the category of a "Single Image" in a "Spatial Domain". The single-image SR problem is inherently "ill-posed" because the solution is not unique given a single low-resolution image. To mitigate this issue, prior knowledge is needed to constrain the solution space. In our case, the true high-resolution fields for the training set must be available to train the SR model, which once trained can then be used to reconstruct the high-resolution field when it is unavailable. To distill the prior, a state-of-the-art strategy is example-based learning \citep{freeman2002example}, which extracts the similarities among the sub-patches of a set of images and learns the mapping between the low- and high-resolution versions of these patches \citep{yang2014single}. Using the sparse-coding method \citep{yang2010image} as a representative of example-based learning methods, \cite{dong2015image} showed that the pipeline of example-based learning is equivalent to a CNN, which directly learns an end-to-end mapping between low- and high-resolution images with little manual pre- or post-processing. In this spirit of end-to-end mapping, a variety of machine learning techniques have been proposed and shown to reach state-of-the-art efficiency in different contexts, such as RCN \citep{kim2016deeply} and RF-REG \citep{schulter2015fast,dou2018medical}. While the aforementioned published works mostly focus on public benchmark image data sets, SR in climate-related work is not new. For instance, \cite{keating2012new} and \cite{keating2015upper} used an empirical stochastic parameterization of ocean turbulence within a dynamical model to super resolve the ocean velocity and sea surface temperature given coarse-resolution observations. \cite{bolton2019applications} tested whether the unresolved subgrid turbulent processes can be revealed by low-resolution model data using a CNN. \subsection{Data processing} In this paper, we aim to reconstruct finer-scaled sea ice concentrations from low-resolution fields. In practice, we utilize two years of daily sea ice concentration fields from a high-resolution dynamical sea ice model to train and evaluate the ML models. These are the high-resolution fields that our SR model is meant to reconstruct.
To resemble sparse satellite measurements or low-resolution model output that we might want to super-resolve, we first coarse-grain the high-resolution dynamical model output by employing a two-dimensional spatial filter that is a simple average around a given grid cell $(i, j)$ with a downgrading factor $m$. Specifically, \begin{equation} c_L(i,j)=\frac{1}{(m+1)^2}\sum_{k=-m/2}^{m/2} \ \sum_{l=-m/2}^{m/2} c_H(i+l,j+k), \label{eq:coarsen} \end{equation} where $c_L(i,j)$ is the low-resolution field and $c_H(i,j)$ is the high-resolution field. Each high-resolution grid cell appears exactly once in an average, so the size of the low-resolution field is reduced by a factor of $m+1$ in each direction. For example, downgrading with $m = 4$ averages $5\times5$ high-resolution grid cells (extending two cells to either side of the center in each direction) into one low-resolution grid cell. We present results with $m=4$ in Section 2, and explore various downscaling factors in Section 3. In addition to coarse-graining, we adopted the traditional patch-based operations \citep{ram2013image}, which have been shown to be effective in SR at preserving the local texture of an image without being constrained to a single pixel. In the context of sea ice concentration, for example, patches as samples differ from each other due to different land/sea geography. Each high-resolution field is broken into $P$ non-overlapping patches prior to coarse graining. Hence, the low-resolution counterparts are also on patches, and the input to the ML model is $P$ times the number of daily fields. The sea ice concentration we use is from a historical simulation of the Community Earth System Model Version 2 (CESM2) with CICE5 as the sea ice component \citep{danabasoglu2020community}. The resolution of the sea ice in the Arctic is approximately 5 km. We divided the Arctic domain (north of 48$^{\circ}$N) into 1067 patches of $32\times32$ grid cells, each of which has at least 10\% coverage by non-zero sea ice concentrations based on a one-year average. We acquired daily fields from CESM2 in 2007 and 2008, for a total of 778,910 patches. Among these patches, the last 64,020 patches (about 10\%) are used for validation and testing, 32,010 for each. \subsection{Model architectures} Distilled from the broad research literature reviewed in Section 2.1, three representative ML methods are selected for this study to reconstruct the high-resolution Arctic sea ice concentrations. Specifically, we create SR models with CNN, RCN, and RF-REG, which are illustrated in Fig.\ref{f:arcs} and documented in detail below. The training for both CNN and RCN is implemented in Python with the Tensorflow/Keras library (https://keras.io). The training for RF-REG uses the Python Scikit-Learn 0.24.2 library (https://scikit-learn.org). The baseline estimate we define for each SR model to beat is the high-resolution field reconstructed by bilinear interpolation of the low-resolution field. In fact, we design the SR models to input low-resolution patches and output the difference between the high-resolution patch and the baseline bilinear interpolated patch (e.g., see Fig.\ref{f:IO}).
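A minimal sketch of this input/target construction may be helpful (our own illustration: the function names are ours, we assume a patch that tiles evenly into blocks since edge handling is not specified in the text, and we use \texttt{scipy}'s order-1 spline interpolation as a stand-in for whatever bilinear routine was actually used):
\begin{verbatim}
import numpy as np
from scipy.ndimage import zoom   # order=1 gives a bilinear-style baseline

def coarse_grain(c_high, m=4):
    """Eq. (1): average non-overlapping (m+1) x (m+1) blocks, so each
    high-resolution cell is used once and each dimension shrinks by m+1."""
    b = m + 1
    ny, nx = c_high.shape
    assert ny % b == 0 and nx % b == 0, "patch must tile evenly into blocks"
    return c_high.reshape(ny // b, b, nx // b, b).mean(axis=(1, 3))

def training_pair(c_high, m=4):
    """Input and target for the SR models: the low-resolution patch L and
    the residual H - H_b between the truth and the interpolated baseline."""
    c_low = coarse_grain(c_high, m)
    h_b = zoom(c_low, m + 1, order=1)    # back to the high-resolution grid
    return c_low, c_high - h_b
\end{verbatim}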
\begin{figure*} \centering \noindent\includegraphics[scale=0.45,trim={0cm 0cm 0cm 0cm}]{architectures.pdf} \caption{The architectures for (a) CNN, (b) RCN, and (c) RF-REG with $m$=4.} \label{f:arcs} \end{figure*} \begin{figure*} \centering \noindent\includegraphics[scale=0.45,trim={0cm 0cm 0cm 0cm}]{IO.pdf} \caption{Sample input and output with $m$=4 for our SR models. (a) $H_b$, the bilinearly interpolated high resolution as baseline. (b) $H$, the original high resolution. (c) $L$, the coarse-grained low resolution as input. (d) $H-H_b$, the difference between $H$ and $H_b$ as the prediction target. (e) Sea ice concentration patches of 32$\times$32 selected (red squares) over the Arctic.} \label{f:IO} \end{figure*} \subsubsection{CNN} A convolutional neural network (CNN) is a class of deep-learning neural network that has been widely applied to image analysis. It breaks down an image composed of complex patterns into smaller and simpler pattern feature blocks by employing a mathematical operation called ``convolution''. When $m$=4, the CNN model (Fig.\ref{f:arcs}(a)) takes a low-resolution image patch of size $8$-by-$8$ as input and then processes it through three consecutive repetitions of a block unit: a two-dimensional (2D) convolutional layer followed by a LeakyReLU (Leaky Rectified Linear Unit) layer. The output of this thrice-repeated block unit is then passed to a 2D max-pooling layer before being flattened to a one-dimensional (1D) vector of length 1600. The final step is to regress the flattened 1D vector to another 1D vector of size 1024, which is the flattened high-resolution 32$\times$32 patch. \subsubsection{RCN} To alleviate the issue of vanishing/exploding gradients with minimal redundant recursions, \cite{kim2016deeply} introduced a ``deeply-recursive convolutional network (RCN)'' which recursively connects intermediate outputs to the terminal layer through ``skip-connections''. Following this work, we construct an RCN adapted from the CNN architecture, as shown in Fig.\ref{f:arcs}(b). In the RCN architecture, intermediate predictions at each layer are output through a skip-connection and simultaneously supervised with respect to the truth to learn the optimal weights for each prediction. RCN resembles CNN in its basic blocks but adds the skip-connections, which flatten the outputs from intermediate layers, map them through dense layers to 1D vectors of size 1024 as individual predictions, and then weight these predictions via supervised learning with respect to the truth. \subsubsection{RF-REG} \cite{dou2018medical} proposed a random forest classifier that selects a regressor that matches a low-resolution patch to its counterpart in the high-resolution space. We name this method ``Random Forest-Regression'' (RF-REG). The training algorithm is shown in Fig.\ref{f:arcs}(c) and summarized as follows. \begin{itemize} \item Part I: Learn multiple regression models \item[] Step 1: Randomly and evenly divide all the training samples/patches into $j$ classes \item[] Step 2: Construct $j$ Ridge regression models with Tikhonov regularization that map the low-resolution images to their high-resolution counterparts. \item[] Step 3: Reconstruct all training samples and calculate the reconstruction errors using each of the $j$ regression models respectively. \item[] Step 4: Regroup the training samples according to the reconstruction errors calculated from Step 3. \item[] Step 5: Repeat Steps 2--4 until the reconstruction errors converge.
\item Part II: Train an RF to classify which of the $j$ regression models in Part I a given patch belongs to. \end{itemize} As a result, when a new low-resolution patch comes in, we pass it directly to the RF to decide which regression model to use for the high-resolution reconstruction. In this work we apply a Ridge regression with a regularization strength of 1.0 and an RF classification with a maximum tree depth of 2. \section{Results} \begin{figure*} \centering \noindent\includegraphics[scale=0.65,trim={4cm 4cm 4cm 6cm}]{compare.pdf} \caption{The average RMSEs in \% of predictions by the baseline (gray), CNN (blue), RCN (green) and RF-REG (red) on the test set, grouped by downsampling factors of 2, 4 and 6. Since the distribution of RMSE is not normal and is skewed towards smaller values, we use the 25th and 75th percentiles to represent the spread around the average (black vertical lines). The computation of RMSEs does not include land or non-ice-covered ocean.} \label{f:compare} \end{figure*} We trained the three ML-based SR models, each with three downsampling factors ($m=2$, 4, and 6). The baseline estimates from bilinear interpolation plus the ML methods yield a total of 12 reconstructions. The performance of each reconstruction is evaluated by the root mean square error (RMSE) (Fig.\ref{f:compare}) with the original high-resolution CESM2 output as the "truth". Finally, we average over the cells with non-zero ice concentration of all samples for all times within the test period to arrive at a single RMSE for each reconstruction for comparison (Fig.\ref{f:compare}). We find that the reconstructions from all three ML-based models surpass the baseline at each downsampling factor, with CNN having the smallest RMSE. The reason RCN does not surpass CNN might be that we use leaky ReLU as the activation function in both CNN and RCN. Leaky ReLU is known to mitigate the vanishing-gradient problem, suggesting it is an important factor for this problem, and therefore the skip-connection component in RCN does not appear to offer any additional advantage. The RMSE increases with downsampling factor for all reconstructions, in general, indicating, unsurprisingly, that the lower the input resolution, the harder it is to reconstruct the high-resolution fields. The relative improvement of the reconstructions from the three ML-based SR models over the baseline decreases with downscaling factor, suggesting that the benefit of ML over bilinear interpolation decreases for harder problems. However, the relative RMSE for CNN compared to the baseline is 0.7 even when $m=6$, which indicates that CNN is arguably worth the trouble. \section{Conclusions} In this work we present three machine-learning methods, i.e. CNN, RCN and RF-REG, in SR models to reconstruct a high-resolution sea ice concentration field from the corresponding coarse-grained low-resolution field. Compared to a baseline estimate from bilinear interpolation, all three ML-based SR models show superior performance, with CNN being the best. In addition, when applied to SST, CNN still surpasses the baseline estimate, which suggests that this technique holds promise for other geophysical variables in general. We envision a possible application of the SR methods in this study to the development of parameterizations in low-resolution Earth system models, where high-resolution fields are reconstructed from the resolved model state and used to quantify a sub-grid scale process.
For sea ice, this might include parameterizations of sea ice growth from unresolved small-scale ice-free or thin-ice areas, such as small leads and polynyas. Another potential application is to satellite observations when transmission of satellite imagery is rate-limited, especially for instruments that resolve fine spatial scales and/or have numerous spectral bands per pixel. Images at high resolution could be transmitted intermittently, with more numerous coarse-grained low-resolution images in the interim. The high-resolution images could then be used to train an ML-based SR model for reconstructing high-resolution images from the more common low-resolution images. However, this application might require significant adaptation of the SR concept. For example, if the input information consists purely of satellite tracks, then 1D convolutions would be needed instead of the current 2D convolutional network. Also, since satellite tracks collect data at different time instants, the additional complexity of the time dimension would also need to be taken into account. \competinginterests{Both authors declare no competing interests for this work.} \bibliographystyle{copernicus}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{introduction} The narrow line regions (NLRs), which extend to scales of several hundred parsecs to kiloparsecs around the galactic center, are the only structures of active galactic nuclei (AGNs) for which spatially resolved observations are possible. Therefore NLRs are often investigated as an important tool to study the ionization state of the interstellar medium (ISM) and/or chemical evolution on galactic scales (e.g., \cite{2006A&A...447..863N}). Although it is widely accepted that the NLR is photoionized by ionizing photons radiated from the central engine, the possibility of shock ionization induced by a jet in off-nucleus regions cannot be excluded, since ionizing photons decrease with distance from the nucleus (e.g., \cite{2007ApJ...666..794F}). Thus, quantifying how much shocks contribute to the ionization of NLRs is very important both for our understanding of AGN structure and for assessing the utility of NLRs as a tool to investigate galactic-scale phenomena. Furthermore, the shock ionization of NLRs has been attracting increasing attention. Recent dramatic progress in theoretical simulations and observational studies of galaxy formation and evolution allows a quantitative comparison between the two. In this context, a serious problem has arisen: theoretical simulations predict too many massive galaxies, due to long-lasting star formation, in contrast to the early quenching of star formation observed in massive galaxies (e.g., \cite{2006MNRAS.365...11C,2006MNRAS.370..645B}). This problem cannot be solved even if negative feedback on star formation from supernovae is included, and so AGN feedback is considered a potential solution: a massive galaxy likely hosts a supermassive black hole at its center, and inflow of ISM onto the supermassive black hole triggers AGN activity, which releases vast gravitational potential energy into the ISM, resulting in the suppression of star formation (e.g., \cite{2005ApJ...635L..13S,2007MNRAS.380..877S}). However, how the AGN activity transmits its energy to the ISM remains a mystery. One possible physical mechanism of AGN feedback is shock ionization of the ISM, i.e., the AGN deposits its energy into the ISM through shock heating induced by a jet. In previous studies of NLRs, line-ratio diagnostics to distinguish between shock ionization and photoionization have been examined. Optical diagnostics, however, can hardly discriminate between the two mechanisms, because the optical NLR spectra predicted by photoionization and shock ionization models are very similar to each other \citep{1995ApJ...455..468D,1996ApJS..102..161D,2008ApJS..178...20A}. The near-infrared line ratio [Fe II]1.257$\mu$m/[P II]1.188$\mu$m is one of the most powerful indicators for discriminating between photoionization and shock ionization. Both lines have similar critical densities and excitation temperatures, i.e., this line ratio is roughly proportional to the ratio of the gas-phase abundances of iron and phosphorus. Iron is a well-known refractory species and is strongly depleted onto dust grains, whereas phosphorus is a non-refractory species. Photoionization alone (including H II regions and NLRs excited by ionizing photons from young stars and AGN central sources, respectively) is relatively incapable of destroying the tough iron-based grains, while these are easily sputtered by shocks.
The [Fe II]1.257$\mu$m/[P II]1.188$\mu$m ratio, therefore, is high ($\gtrsim 20$) in fast shock-excited regions and low ($\lesssim 2$) in normal photoionized regions \citep{2001A&A...369L...5O}. The actual ionization state of NLRs would be determined by the combination of photoionization and shock ionization, and so the observed line ratios are expected to change with location in the NLR, ranging from [Fe II]1.257$\mu$m/[P II]1.188$\mu$m $\sim 2$ to $\sim 20$. NGC 1068 is one of the nearest AGNs; it has a compact radio jet around the nucleus and a spatially extended radio lobe \citep{1983ApJ...275....8W}. This radio structure coincides well with the morphology of the NLR \citep{1997ApJ...487..560C}. Therefore, NGC 1068 is an ideal object with which to investigate the spatial distribution of [Fe II]1.257$\mu$m/[P II]1.188$\mu$m. In this paper, we adopt 14.4 Mpc as the distance to NGC 1068, for which 1$\arcsec$ corresponds to 70 pc. \section{Theoretical [Fe II]1.257$\mu$m/[P II]1.188$\mu$m ratio} Before describing the details of the observations and data reduction, we summarize the theoretical background of the [Fe II]1.257$\mu$m and [P II]1.188$\mu$m emission lines. The emission line intensity is proportional to the product of the density of the ion responsible for the emission line process ($n_{i}$) and the electron density ($n_{e}$), multiplied by a function $f$ giving the rate of the process. Thus the intensity ratio of two emission lines radiated from ions 1 and 2 is written as \begin{equation} \frac{I(\lambda _{1})}{I(\lambda _{2})} = \frac{n_{i,1}f_{1}}{n_{i,2}f_{2}}, \end{equation} assuming the same spatial distribution for both ions \citep{1989agna.book.....O}. The function $f$ involves the rate of emission-line photons produced in the radiative transition from the excited level to the ground level, written as \begin{equation} \frac{n_{1}A_{10}}{n_{0}} = A_{10}\frac{q_{01}(T)}{q_{10}(T)} \left [1+\frac{A_{10}}{n_{e}q_{10}(T)} \right]^{-1}. \end{equation} Here $A_{10}$ is the radiative transition probability, and $q_{01}$ and $q_{10}$ are the collisional excitation and de-excitation rates, which involve the collision strength ($\Omega$). Thus, $f$ can be calculated if $A$ and $\Omega$ are known. \cite{2001A&A...369L...5O} derived \begin{equation} \frac{n({\rm Fe})}{n({\rm P})} \lesssim \frac{n({\rm Fe}^{+})}{n({\rm P}^{+})} \sim 2\cdot \frac{I([{\rm Fe\ II}]1.257\mu{\rm m})}{I([{\rm P\ II}]1.188\mu{\rm m})}, \label{abundance} \end{equation} using the collision strengths and transition probabilities of [Fe II]1.257$\mu$m and [P II]1.188$\mu$m \citep{1970RSPSA.318..531K,1995A&A...293..953Z,1982MNRAS.199.1025M,1988A&A...193..327N}. For a solar abundance ratio of $n({\rm Fe})/n({\rm P}) \sim 100$ and typical depletion factors of Fe ($\sim 0.01$) and P ($\sim 1.0$, since P is a non-refractory species), equation (\ref{abundance}) gives [Fe II]1.257$\mu$m/[P II]1.188$\mu$m close to unity, although the depletion factor of iron differs from object to object. Indeed, [Fe II]1.257$\mu$m/[P II]1.188$\mu$m is $\lesssim 2$ in normal photoionized regions, e.g., $\sim 2$ in the Orion Bar \citep{2000A&A...364..301W}. This is also true for NLRs ionized by AGN radiation, because even the ionizing photons from the AGN central source can hardly destroy the tough iron-based grains in the NLR. However, if shocks exist, the grains are easily destroyed and the gas-phase iron abundance increases. As a result, shock-ionized gas exhibits a high [Fe II]1.257$\mu$m/[P II]1.188$\mu$m ratio.
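A toy evaluation of equation (\ref{abundance}) makes these limiting cases explicit (a sketch of ours, assuming the solar $n({\rm Fe})/n({\rm P}) \sim 100$ quoted above; the specific line ratios fed in are purely illustrative):
\begin{verbatim}
def gas_phase_fe_p(line_ratio):
    """Gas-phase Fe/P implied by eq. (3): n(Fe+)/n(P+) ~ 2 * I(FeII)/I(PII)."""
    return 2.0 * line_ratio

SOLAR_FE_P = 100.0
for r in (1.0, 2.0, 20.0, 50.0):        # photoionized ... fully sputtered
    fe_p = gas_phase_fe_p(r)
    print(f"[FeII]/[PII] = {r:4.1f} -> gas-phase Fe/P ~ {fe_p:5.1f} "
          f"(fraction of Fe in gas ~ {fe_p / SOLAR_FE_P:.0%})")
\end{verbatim}
A ratio near unity thus corresponds to $\sim 99\%$ of the iron locked in grains, while complete grain destruction drives the ratio toward $\sim 50$, as discussed next.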
If we assume that iron-based grains are completely sputtered by shocks, the ratio becomes $\sim 50$ for solar abundance. This is comparable to the values measured in supernova remnants (e.g., $\gtrsim 20$ for LMC-N63A and LMC-N49, reported by \cite{2001A&A...369L...5O}). The line ratio actually observed in the NLR of an AGN is expected to lie between $\sim 2$ and $\sim 20$, since the ionization state would be determined by the combination of photoionization and shock ionization, if both are present, as mentioned in section \ref{introduction}. \section{Observations and data reduction} \label{Observation} Long-slit spectroscopy was carried out from November 8 to 12, 2009 with ISLE \citep{2006SPIE.6269E.118Y,2008SPIE.7014E.106Y}, a near-infrared imager and spectrograph for the Cassegrain focus of the 1.88 m telescope at Okayama Astrophysical Observatory (OAO). The camera used for the spectroscopic observations has a projected scale of 0$\arcsec$.25/pixel. The spectrum of NGC 1068 was obtained with a slit of 2$\arcsec$.0 (= 8 pixels) width and the $J$-band grating, which yields a $1.11 - 1.32\mu$m spectrum with a dispersion of 0.166 nm/pix. The spectral resolution is $\sim$ 1300, measured from an OH emission line at the central wavelength. The slit was oriented E-W (i.e., position angle = 90$^{\circ}$) and centered on the $J$-band continuum peak of NGC 1068 (Fig. \ref{figure1}). We note that the position angle is fixed to 90$^{\circ}$ in the ISLE spectroscopic mode. Therefore, the slit was not placed along the major axis of the NLR or the direction of the radio structure, both of which lie at a position angle of $\sim$ 30$^{\circ}$ \citep{2006AJ....132..620D,2010ApJ...708..419C}; it lies outside the nominal NLR bicone and away from the axis of the radio emission. Unfortunately, other slit positions north and south of the nucleus could not be completed due to bad weather conditions. The acquisition consisted of a series of two 2-minute exposures with the object set at different positions along the slit, followed by dome flats and Argon and Xenon calibration lamps. The seeing size was 1$\arcsec$.0 - 2$\arcsec$.0. Since the weather conditions were not good throughout the run, we excluded poor data from the total 6-hour exposure on source, resulting in an effective on-source exposure time of 4.4 hours. Standard data reduction was performed for all selected spectra, i.e., dark frame subtraction, flat fielding, wavelength calibration, and sky subtraction, using the IRAF software. To correct for the atmospheric spectral response and the instrumental efficiency, the spectra of NGC 1068 were divided by the spectra of A-type ratioing stars (HIP5310, HIP10795, HIP14077, and HIP22774) observed at the same airmass. In this reduction, the blackbody continuum and the Pa$\beta$ absorption feature of the ratioing stars were removed by spectral fitting with a blackbody function and a Voigt profile, respectively. The assumed effective temperatures of the ratioing stars are 8270, 7500, 8200, and 9230 K for HIP5310 (A3V), HIP10795 (A7V), HIP14077 (A5V), and HIP22774 (A1V), respectively. \section{Results and discussion} \label{Results} The obtained 2-D spectra of NGC 1068 are displayed in Fig. \ref{figure2}. We detected the [Fe II]1.257$\mu$m and [P II]1.188$\mu$m lines as well as Pa$\beta$ and [S IX]1.252$\mu$m. The spatial extents of [Fe II]1.257$\mu$m and [P II]1.188$\mu$m are $\sim 14\arcsec$ and $\sim 7\arcsec$, respectively. These values are clearly larger than the typical seeing size of $1\arcsec .0-2\arcsec .0$.
Thus, we concluded that spatially extended [Fe II]1.257$\mu$m and [P II]1.188$\mu$m emission was successfully detected. The line fluxes were measured by spectral fitting with the IRAF $specfit$ task \citep{1994asp...61...437}, assuming a single Gaussian plus an underlying linear continuum for each emission line. We summarize the relative line fluxes, normalized by [P II]1.188$\mu$m, in Table \ref{table}; they were extracted from the central 2$\arcsec$.0 region and the neighboring regions to the east and west. The detailed spatial distribution of the [Fe II]1.257$\mu$m/[P II]1.188$\mu$m line ratio is shown in Fig. \ref{figure3} (a). \cite{2001A&A...369L...5O} reported that the line ratio in the central $\sim$ 2$\arcsec$ region of NGC 1068 is about 1.5. Since this value corresponds to the photoionization regime discussed above, they concluded that in the central region most iron is locked into grains and shock excitation is not the primary origin of the [Fe II] line emission. This explanation is relatively straightforward, because there would be a large number of ionizing photons near the nucleus, enough to dominate the ionization of the surrounding gas. Our measurement of [Fe II]1.257$\mu$m/[P II]1.188$\mu$m $\sim$ 1.3 in the central 2$\arcsec$ region is consistent with this value. However, this argument may not be valid in off-nucleus regions. We found that [Fe II]1.257$\mu$m/[P II]1.188$\mu$m increases with distance from the central continuum peak. While the observed line ratios around the nucleus are consistent with the prediction of photoionization models, the ratios at 3$\arcsec$ $-$ 4$\arcsec$ east and west of the nucleus ($\sim$ 560 pc) are slightly higher than the value of [Fe II]1.257$\mu$m/[P II]1.188$\mu$m typical of photoionized regions. Although only two studies have been devoted to the spatial distribution of [Fe II]1.257$\mu$m/[P II]1.188$\mu$m ratios in NLRs (NGC 4151 by \cite{2009MNRAS.394.1148S} and Mrk 1066 by \cite{2010MNRAS.404..166R}), similar results were reported in both cases. \cite{2009MNRAS.394.1148S} found that the [Fe II]1.257$\mu$m/[P II]1.188$\mu$m ratios are higher at $\sim$ 130 pc from the nucleus of NGC 4151 ($\sim 6$) than in the nucleus itself ($\sim 2$). They also pointed out a possible spatial correlation between [Fe II]1.257$\mu$m/[P II]1.188$\mu$m and the radio continuum structure. They suggested that shocks induced by the radio jet release the Fe locked in grains and produce an enhancement of the [Fe II] emission in off-nucleus regions. Similarly, \cite{2010MNRAS.404..166R} found that Mrk 1066 presents [Fe II]1.257$\mu$m/[P II]1.188$\mu$m $\sim 3$ at most locations within $\sim 470$ pc of the nucleus, but that in some regions close to the borders of the radio continuum structure this ratio reaches values up to 9.5. They concluded that shocks seem to play a more important role in these regions. Fig. \ref{figure3} (b) shows the VLA 4.86 GHz flux density as a function of distance from the nucleus of NGC 1068, extracted from the same slit aperture as our near-infrared observation with OAO/ISLE. The radio data were obtained from the NRAO Science Data Archive\footnotemark[1]. \footnotetext[1]{https://archive.nrao.edu/archive/e2earchivex.jsp} In the off-nucleus regions of NGC 1068 we find a possible association between [Fe II]1.257$\mu$m/[P II]1.188$\mu$m and the radio continuum, as in NGC 4151 and Mrk 1066. The higher off-nucleus ratios in NGC 1068 are likely attributable to a mild contribution of shock ionization to the ionized gas.
This may indicate that the interaction between the jet and the ISM forms an expanding cocoon which drives shock waves propagating perpendicular to the jet axis (e.g., \cite{1974MNRAS.166..513S}), while photoionization by the central engine is dominant near the nucleus. \section{Conclusion} The line ratio [Fe II]1.257$\mu$m/[P II]1.188$\mu$m in the near-infrared wavelength range is a useful tool for examining dust destruction by shocks. We investigated the spatial distribution of this ratio in the NLR of the nearby Seyfert galaxy NGC 1068 with OAO/ISLE. [Fe II]1.257$\mu$m/[P II]1.188$\mu$m near the nucleus is close to unity, consistent with a previous observation and with the ratio in a normal photoionized region. This indicates that photoionization by ionizing photons radiated from the central engine is dominant near the nucleus. We found that the ratio increases with distance from the nucleus and is slightly higher at 3$\arcsec$ $-$ 4$\arcsec$ east and west of the nucleus than the ratios typical of a photoionized region. We also found a possible spatial association between [Fe II]1.257$\mu$m/[P II]1.188$\mu$m and the radio continuum around $\sim 560$ pc from the nucleus. These findings suggest a higher contribution of shock ionization induced by the radio jet in the off-nucleus regions. Besides NGC 1068, a spatial correlation between [Fe II]1.257$\mu$m/[P II]1.188$\mu$m and the radio continuum on several-hundred-parsec scales has recently been reported for NGC 4151 and Mrk 1066. Applying this kind of analysis to a larger number of AGNs will be key to revealing ongoing AGN feedback phenomena. \\ We would like to thank Nozomu Kawakatu for his meaningful comments on the interpretation of the observed data. This work was supported by the Publications Committee of the National Astronomical Observatory of Japan (NAOJ) and the Grant-in-Aid for the Global COE Program \lq\lq The Next Generation of Physics, Spun from Universality and Emergence\rq\rq from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan. T.N. acknowledges financial support through the Research Promotion Award of Ehime University and the Kurata Memorial Hitachi Science and Technology Foundation. K.M. acknowledges financial support from the Japan Society for the Promotion of Science (JSPS) through the JSPS Research Fellowships for Young Scientists. \onecolumn \begin{figure} \begin{center} \FigureFile(80mm,80mm){figure-1.eps} \end{center} \caption{ $J$-band image of NGC 1068 obtained with OAO/ISLE in our observation. The long-slit position (P.A. = 90$^{\circ}$) is shown by two solid lines. }\label{figure1} \end{figure} \begin{figure} \begin{center} \FigureFile(160mm,80mm){figure-2.eps} \end{center} \caption{ 2-D spectra in the $J$ band extracted from the central $\pm$15$\arcsec$ region (a) and the continuum-subtracted spectrum (b). }\label{figure2} \end{figure} \onecolumn \begin{figure} \begin{center} \FigureFile(80mm,80mm){figure-3.eps} \end{center} \caption{ [Fe II]1.257$\mu$m/[P II]1.188$\mu$m line ratio (top) and VLA 4.86 GHz flux density (bottom) as a function of distance from the continuum peak. Arrows in the top panel are lower limits calculated from the 3 $\sigma$ noise level around undetected [P II]1.188$\mu$m.
}\label{figure3} \end{figure} \begin{table} \caption{Relative line fluxes normalized by [P II]1.188$\mu$m}\label{table} \begin{center} \begin{tabular}{lccc} \hline Line ID&East 3$\arcsec$.0&Central 2$\arcsec$.0&West 3$\arcsec$.0\\ \hline ${\rm [P\ II]}$1.188$\mu$m&1.0$\pm$0.29&1.0$\pm$0.02&1.0$\pm$0.08\\ ${\rm [S\ IX]}$1.252$\mu$m&1.05$\pm$0.02&1.01$\pm$0.02&0.73$\pm$0.07\\ ${\rm [Fe\ II]}$1.257$\mu$m&1.79$\pm$0.02&1.33$\pm$0.05&1.63$\pm$0.07\\ Pa$\beta$&2.35$\pm$0.04&2.89$\pm$0.05&2.55$\pm$0.08\\ \hline \end{tabular}\\ Spectra were extracted from the central 2$\arcsec$.0 region and the neighboring 3$\arcsec$.0 regions to the east and west. \end{center} \end{table} \bigskip \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Appendix} The uniqueness proof in section \ref{inverse_problem} requires results on the density of a certain subset of functions, and we give two ways to look at this through different formulations; namely, the Stone-Weierstrass and M\"untz-Sz\'asz theorems. We give the statements of these results below. The Stone-Weierstrass theorem is a generalization of Weierstrass' result of 1885 that the polynomials are dense in $C[0,1]$ and was proved by Stone some 50 years later, \cite{Stone:1948}. If $X$ is a compact Hausdorff space and $C(X)$ denotes the real-valued continuous functions on $X$, with the topology of uniform convergence, then the question is: when is a subalgebra $A(X)$ dense? A crucial notion is that of separation of points; a set $A$ of functions defined on $X$ is said to {\it separate points\/} if, for every $x,y\in X$, $x\not=y$, there exists a function $f\in A$ such that $f(x) \not= f(y)$. Then we have \begin{theorem}(Stone--Weierstrass). Suppose $X$ is a compact Hausdorff space and $A$ is a subalgebra of $C(X)$ which contains a non-zero constant function. Then $A$ is dense in $C(X)$ if and only if it separates points. \end{theorem} The proof can be found in standard references, for example, \cite[Theorem 4.45]{Folland:2013}. The M\"untz-Sz\'asz theorem (1914-1916) is also a generalization of the Weierstrass approximation theorem; it gives a condition under which one can ``thin out'' the polynomials and still maintain a dense set. \begin{theorem}(M\"untz--Sz\'asz) Let $\Lambda := \{\lambda_j\}_1^\infty$ be a sequence of real positive numbers. Then the span of $\{1,x^{\lambda_1},x^{\lambda_2},\ldots\,\}$ is dense in $C[0,1]$ if and only if $\sum_1^\infty\frac{1}{\lambda_j} = \infty$. \end{theorem} For example, the even powers $\{1,x^2,x^4,\ldots\}$ ($\lambda_j=2j$) remain dense since $\sum_j 1/(2j)$ diverges, whereas the square powers $\{1,x,x^4,x^9,\ldots\}$ ($\lambda_j=j^2$) are not dense since $\sum_j 1/j^2 < \infty$. This result can be generalized to the $L^p[0,1]$ spaces for $1\leq p\leq \infty$, see \cite{BorweinErdelyi:1996}. \section{Preliminary material}\label{sec:eur} Let $\Omega$ be an open bounded domain in $\mathbb{R}^d$ with a smooth ($C^2$ will be more than sufficient) boundary $\partial\Omega$ and let $T>0$ be a fixed constant. $\mathcal{L}$ is a strongly elliptic, self-adjoint operator with smooth coefficients defined on $\Omega$, \begin{equation*} \mathcal{L} u = \sum_{i,j=1}^d a_{ij}(x)\frac{\partial^2 u}{\partial x_i\partial x_j} + c(x) u \end{equation*} where $a_{ij}(x)\in C^1(\overline{\Omega})$, $c(x)\in C(\overline{\Omega})$, $a_{ij}(x)=a_{ji}(x)$ and $\sum_{i,j=1}^d a_{ij}\xi_i\xi_j \geq \delta \sum_{i=1}^d \xi_i^2$ for some $\delta>0$, all $x\in\overline\Omega$ and all $\xi=(\xi_1,\,\ldots\,\xi_d)\in \mathbb{R}^d$. To avoid unnecessary complications for the main theme we will make the assumption of homogeneous Dirichlet boundary conditions on $\partial\Omega$, so that the natural domain for $\mathcal{L}$ is $H^2(\Omega)\cap H_0^1(\Omega)$. Then $-\mathcal{L}$ has a complete, orthonormal system of eigenfunctions $\{\psi_n\}_1^\infty$ in $L^2(\Omega)$ with $\psi_n\in H^2(\Omega)\cap H_0^1(\Omega)$ and with corresponding eigenvalues $\{\lambda_n\}$ such that $0<\lambda_1\leq \lambda_2\leq\dots \leq\lambda_n \to \infty$ as $n\to\infty$. The nonhomogeneous term will be taken to satisfy $f(x,t)\in C(0,T; H^2(\Omega))$. This can be weakened to assume only $L^p$ regularity in time, but as shown in \cite{LiLiuYamamoto:2015} this requires more delicate analysis. The initial value satisfies $u_0(x)\in H^2(\Omega)$. We will use $\langle\cdot,\cdot\rangle$ to denote the inner product in $L^2(\Omega)$.
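As a concrete illustration of this setting (the special case taken up in Section~\ref{sect:representation}), for $\Omega=(0,1)$ and $\mathcal{L}u=u_{xx}$ with homogeneous Dirichlet conditions, the eigensystem of $-\mathcal{L}$ is
\begin{equation*}
\psi_n(x)=\sqrt{2}\,\sin(n\pi x),\qquad \lambda_n=n^2\pi^2,\qquad n\in\mathbb{N}^+,
\end{equation*}
which is orthonormal and complete in $L^2(0,1)$.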
Throughout this paper we will, following \cite{Kochubei:2008}, make the following assumptions on the distributed derivative parameter $\mu.$ \noindent \begin{assumption}\label{mu_assumption} $$\mu\in C^1[0,1],\ \mu(\alpha)\ge 0,\ \mu(1)\ne 0.$$ \end{assumption} \begin{remark} From these conditions it follows that there exist a constant $C_\mu>0$ and an interval $(\beta_0,\beta)\subset (0,1)$ such that $\mu(\alpha)\ge C_\mu$ on $(\beta_0,\beta)$. This will be needed in our proof of the representation theorem in Section~\ref{sect:representation}. \end{remark} We will use the Djrbashian-Caputo version for $D^{(\mu)}$: $D^{(\mu)} u=\int_0^1 \mu(\alpha)\partial_t^\alpha u {\rm d}\alpha$ with $\partial_t^\alpha u=\frac{1}{\Gamma(1-\alpha)}\int_0^t(t-\tau)^{-\alpha} \frac{d}{d\tau}u(x,\tau){\rm d}\tau$ and so \begin{equation}\label{eqn:dist_frac_der} D^{(\mu)}u=\int_0^t \left[\int_0^1 \frac{\mu(\alpha)}{\Gamma(1-\alpha)} (t-\tau)^{-\alpha} {\rm d}\alpha\right]\frac{d}{d\tau}u(x,\tau){\rm d}\tau := \int_0^t \eta(t-\tau)\frac{d}{d\tau}u(x,\tau){\rm d}\tau, \end{equation} where \begin{equation}\label{eqn:dist_frac_eta} \eta(s)=\int_0^1 \frac{\mu(\alpha)}{\Gamma(1-\alpha)} s^{-\alpha} {\rm d}\alpha. \end{equation} Thus our distributed differential equation (DDE) model in this paper will be \begin{equation}\label{eqn:model_pde} \begin{aligned} D^{(\mu)} u(x,t) - \mathcal{L} u(x,t) &= f(x,t), &&\quad x\in\Omega,\quad t\in(0,T);\\ u(x,t) &= 0, &&\quad x\in\partial\Omega,\quad t\in(0,T);\\ u(x,0) &= u_0(x), &&\quad x\in\Omega.\\ \end{aligned} \end{equation} \subsection{A Distributional ODE} Our first task is to analyze the ordinary distributed fractional order equation \begin{equation}\label{eqn:dODE} D^{(\mu)} v(t)=-\lambda v(t),\ v(0)=1,\ t\in (0,T) \end{equation} and to show there exists a unique solution. We will need some preliminary analysis to determine the integral operator that serves as the inverse for $D^{(\mu)}$, in analogy with the Riemann-Liouville derivative being inverted by the Abel operator. If we now take the Laplace transform of $\eta$ in \eqref{eqn:dist_frac_eta} then we have \begin{equation}\label{eqn:Phi} (\L\eta) (z)=\frac{\Phi(z)}{z},\quad \text{where }\ \Phi(z)=\int_0^1 \mu(\alpha)z^\alpha {\rm d}\alpha. \end{equation} Formally, for a single order $\mu(\alpha)=\delta(\alpha-\alpha_0)$ (outside the class of Assumption~\ref{mu_assumption}, but instructive) one obtains $\Phi(z)=z^{\alpha_0}$, and the kernel $\kappa$ defined in the next lemma reduces to $\kappa(t)=t^{\alpha_0-1}/\Gamma(\alpha_0)$, so that $I^{(\mu)}$ becomes the classical Abel fractional integral. \par The next lemma introduces an operator $I^{(\mu)}$ to analyze the distributed ODE \eqref{eqn:dODE}. \begin{lemma}\label{lem:kappa} Define the operator $I^{(\mu)}$ as $$ I^{(\mu)} \phi(t)=\int_0^t \kappa(t-s)\phi(s){\rm d}s,\quad \text{where }\ \kappa (t)=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} \frac{e^{zt}}{\Phi(z)}{\rm d}z. $$ Then the following conclusions hold: \begin{itemize} \item [(1)] $D^{(\mu)}I^{(\mu)} \phi(t)=\phi(t),\ I^{(\mu)}D^{(\mu)} \phi(t)=\phi(t)-\phi(0)$ for $\phi\in C^1(0,T);$ \item [(2)] $\kappa(t)\in C^\infty (0,\infty)$ and \begin{equation}\label{eqn:kappa_inequality} \kappa(t)=|\kappa(t)|\le C \ln {\frac{1}{t}}\ \ \text{for sufficiently small}\ t>0. \end{equation} \end{itemize} \end{lemma} \begin{proof} This is \cite[Proposition~3.2]{Kochubei:2008}. We remark that the results in that paper include further estimates on $\kappa$ that require additional regularity on $\mu$; for the bound \eqref{eqn:kappa_inequality}, however, only $C^1$ regularity of $\mu$ is needed. \end{proof} \begin{remark} In \cite[Proposition~3.2]{Kochubei:2008}, if either the condition $\mu(0)\ne 0$ or $\mu(\alpha)\sim a\alpha^v$ as $\alpha\to 0^+$ ($a>0,\ v>0$) is added, then $\kappa$ is completely monotone.
This property is not explicitly used in this paper; however, as we remark after the uniqueness result, this condition on $\kappa$ could be a useful basis for a reconstruction algorithm. \end{remark} \par With $I^{(\mu)}$, we have the following results. \begin{lemma}\label{lem:existence_uniqueness_u_n} For each $\lambda>0$ there exists a unique $u(t)$ which satisfies \eqref{eqn:dODE}. \end{lemma} \begin{proof} Lemma~\ref{lem:kappa} implies that \eqref{eqn:dODE} is equivalent to $$u(t)=-\lambda I^{(\mu)} u(t)+1 =: A_1 u.$$ Now the asymptotic and smoothness results for $\kappa(t)$ in Lemma~\ref{lem:kappa} give $\kappa\in L^1(0,T)$; in particular, there exists $t_1\in (0,T)$ such that $$\|\kappa\|_{L^1(0,t_1)}<\frac{1}{\lambda}.$$ Hence, given $\phi_1, \phi_2\in L^1(0,t_1)$, \begin{equation*} \begin{split} \|A_1(\phi_1)-A_1(\phi_2)\|_{L^1(0,t_1)} &\le \lambda \int_0^{t_1} \int_0^t |\kappa(t-s)|\cdot |\phi_1(s)-\phi_2(s)|\, ds dt\\ &= \lambda\int_0^{t_1} |\phi_1(s)-\phi_2(s)| \int_s^{t_1} |\kappa(t-s)|\, dt ds\\ &\le \lambda\int_0^{t_1} |\phi_1(s)-\phi_2(s)|\cdot \|\kappa\|_{L^1(0,t_1)} ds\\ &= \lambda \|\kappa\|_{L^1(0,t_1)} \cdot\|\phi_1-\phi_2\|_{L^1(0,t_1)}. \end{split} \end{equation*} From the fact that $0<\lambda\|\kappa\|_{L^1(0,t_1)}<1$, $A_1$ is a contraction map on $L^1(0,t_1)$ and so, by the Banach fixed point theorem, there exists a unique $u_{1}(t)\in L^1(0,t_1)$ that satisfies $u_{1}=A_1u_{1}$. For each $t\in (t_1,2t_1)$, we have \begin{equation*} u(t) = 1 -\lambda I^{(\mu)} u(t) = 1 -\lambda \int_{t_1}^{t} \kappa(t-s) u(s)\,ds -\lambda \int_{0}^{t_1} \kappa(t-s) u(s)\,ds. \end{equation*} Since $u=u_{1}$ on $(0, t_1)$, which is now known, we have \begin{equation*} \begin{split} u(t)=-\lambda \int_{t_1}^{t} \kappa(t-s) u(s)\,ds+1 -\lambda \int_{0}^{t_1} \kappa(t-s) u_1(s)\,ds:=A_2 u \end{split} \end{equation*} for each $t\in (t_1,2t_1)$. Given $\phi_1, \phi_2\in L^1(t_1,2t_1)$, it holds that \begin{equation*} \begin{split} \|A_2(\phi_1)-A_2(\phi_2)\|_{L^1(t_1,2t_1)} &\le \lambda \int_{t_1}^{2t_1} \int_{t_1}^{t} |\kappa(t-s)|\cdot |\phi_1(s)-\phi_2(s)| ds dt\\ &=\lambda \int_{t_1}^{2t_1} |\phi_1(s)-\phi_2(s)| \int_{s}^{2t_1} |\kappa(t-s)| {\rm d} t {\rm d} s\\ &\le \lambda \int_{t_1}^{2t_1} |\phi_1(s)-\phi_2(s)| \cdot \|\kappa\|_{L^1(0,t_1)} {\rm d} s\\ &=\lambda \|\kappa\|_{L^1(0,t_1)} \cdot \|\phi_1-\phi_2\|_{L^1(t_1,2t_1)}. \end{split} \end{equation*} Hence, $A_2$ is also a contraction map on $L^1(t_1,2t_1)$, which shows that there exists a unique $u_{2}(t)\in L^1(t_1,2t_1)$ such that $u_{2}=A_2 u_{2}$. Repeating this argument shows that there exists a unique solution $u\in L^1(0,T)$ of the distributed ODE \eqref{eqn:dODE}, which completes the proof. \end{proof} \begin{lemma}\label{lem:u_is_cm} $u(t)\in C^{\infty}(0,T)$ is completely monotone, which gives $0\le u(t)\le 1$ on $[0,T]$. \end{lemma} \begin{proof} This lemma is a special case of \cite[Theorem 2.3]{Kochubei:2008}.
\end{proof} \section{Existence, uniqueness and regularity} \subsection{Existence and uniqueness of weak solution for DDE \eqref{eqn:model_pde} \label{sect:model_general}} \par We state the definition of a weak solution as follows. \begin{definition} $u(x,t)$ is a weak solution to DDE \eqref{eqn:model_pde} in $L^2(\Omega)$ if $u(\cdot,t)\in H_0^1(\Omega)$ for $t\in(0,T)$ and for any $\psi(x)\in H^2(\Omega)\cap H_0^1(\Omega)$, \begin{equation*} \begin{split} &\langle D^{(\mu)} u(x,t),\psi(x)\rangle-\langle\mathcal{L} u(x,t),\psi(x)\rangle=\langle f(x,t),\psi(x)\rangle, \ t\in(0,T);\\ &\langle u(x,0),\psi(x)\rangle = \langle u_0(x),\psi(x)\rangle. \end{split} \end{equation*} \end{definition} \par Then Lemma \ref{lem:existence_uniqueness_u_n} gives the following corollary. \begin{corollary}\label{cor:existence_uniqueness} There exists a unique weak solution $u^*(x,t)$ of DDE \eqref{eqn:model_pde} and the representation of $u^*(x,t)$ is \begin{equation}\label{eqn:weak solution} \begin{aligned} u^*(x,t)=&\sum_{n=1}^{\infty} \Big[\langle u_0,\psi_n\rangle u_n(t) +\langle f(\cdot,0),\psi_n\rangle I^{(\mu)} u_n(t)\\ &+\int_0^t \langle\frac{\partial}{\partial t} f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\Big]\psi_n(x), \end{aligned} \end{equation} where $u_n(t)$ is the unique solution of the distributed ODE \eqref{eqn:dODE} with $\lambda=\lambda_n$. \end{corollary} \begin{proof} \par Completeness of $\{\psi_n(x): n\in\mathbb{N}^+\}$ in $L^2(\Omega)$ and direct calculation show that the representation \eqref{eqn:weak solution} is a weak solution of DDE \eqref{eqn:model_pde}, while the uniqueness of $u^*$ follows from Lemma \ref{lem:existence_uniqueness_u_n}. \end{proof} \subsection{Regularity} \par The next two lemmas concern the regularity of $u^*$ and $D^{(\mu)} u^*$. \begin{lemma}\label{lem:regularity_u} \begin{equation*} \|u^*(x,t)\|_{C([0,T];H^2(\Omega))} \le C\big(\|u_0\|_{H^2(\Omega)}+\|f(\cdot, 0)\|_{H^2(\Omega)} +T^{1/2} |f|_{H^1([0,T];H^2(\Omega))}\big) \end{equation*} where $C>0$ depends on $\mu$, $\mathcal{L}$ and $\Omega$, and $|f|_{H^1([0,T];H^2(\Omega))}= \|\frac{\partial f}{\partial t}\|_{L^2([0,T];H^2(\Omega))}$. \end{lemma} \begin{proof} \par Fix $t\in(0,T)$. Then \begin{equation*} \begin{aligned} \|u^*(x,t)\|_{H^2(\Omega)} \le &\ \big\|\sum_{n=1}^{\infty}\langle u_0,\psi_n\rangle u_n(t)\psi_n(x)\big\|_{H^2(\Omega)} &&:=I_1\\ &+\big\|\sum_{n=1}^{\infty}\langle f(\cdot,0),\psi_n\rangle I^{(\mu)} u_n(t)\psi_n(x)\big\|_{H^2(\Omega)} &&:=I_2\\ &+\big\|\sum_{n=1}^{\infty}\int_0^t \langle\frac{\partial}{\partial t} f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\ \psi_n(x)\big\|_{H^2(\Omega)} &&:=I_3. \end{aligned} \end{equation*} We estimate each of $I_1$, $I_2$, and $I_3$ in turn using Lemmas~\ref{lem:kappa} and \ref{lem:u_is_cm}, where in each case $C>0$ is a generic constant that depends only on $\mu$, $\mathcal{L}$ and $\Omega$. \begin{equation*}\label{eqn:I_1} \begin{aligned} I_1^2&=\big\|\sum_{n=1}^{\infty}\langle u_0,\psi_n\rangle u_n(t)\psi_n(x)\big\|_{H^2(\Omega)}^2 \le C\big\|\mathcal{L}\big(\sum_{n=1}^{\infty}\langle u_0,\psi_n\rangle u_n(t)\psi_n(x)\big)\big\|_{L^2(\Omega)}^2\\ &=C\big\|\sum_{n=1}^{\infty}\lambda_n\langle u_0,\psi_n\rangle u_n(t)\psi_n(x)\big\|_{L^2(\Omega)}^2 =C\sum_{n=1}^{\infty} \lambda_n^2\langle u_0,\psi_n\rangle^2 u^2_n(t)\\ &\le C\sum_{n=1}^{\infty} \lambda_n^2\langle u_0,\psi_n\rangle^2 =C\big\|\mathcal{L} u_0\big\|_{L^2(\Omega)}^2\le C\|u_0\|_{H^2(\Omega)}^2.
\end{aligned} \end{equation*} \begin{equation*} \begin{aligned} I_2^2&=\big\|\sum_{n=1}^{\infty}\langle f(\cdot,0),\psi_n\rangle I^{(\mu)} u_n(t)\psi_n(x)\big\|_{H^2(\Omega)}^2 \le C\sum_{n=1}^{\infty}\lambda_n^2\langle f(\cdot,0),\psi_n\rangle^2 (I^{(\mu)} u_n(t))^2\\ &\le C\sum_{n=1}^{\infty}\lambda_n^2\langle f(\cdot,0),\psi_n\rangle^2 \Bigl(\int_0^t|\kappa(\tau)| \cdot|u_n(t-\tau)|{\rm d}\tau\Bigr)^2 \\ &\le C \sum_{n=1}^{\infty}\lambda_n^2\langle f(\cdot,0),\psi_n\rangle^2 \Bigl(\int_0^t |\kappa(\tau)|{\rm d}\tau\Bigr)^2\\ &\le C\sum_{n=1}^{\infty}\lambda_n^2\langle f(\cdot,0),\psi_n\rangle^2 \|\kappa\|_{L^1(0,T)}^2 \le C\|\kappa\|_{L^1(0,T)}^2 \|f(\cdot, 0)\|_{H^2(\Omega)}^2. \end{aligned} \end{equation*} \begin{equation*}\label{eqn:I_3} \begin{aligned} \quad I_3^2&=\big\|\sum_{n=1}^{\infty}\int_0^t \langle\frac{\partial}{\partial t} f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\ \psi_n(x)\big\|_{H^2(\Omega)}^2\\ &\le C\sum_{n=1}^{\infty}\left[\int_0^t \lambda_n \langle\frac{\partial}{\partial t} f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\right]^2\\ &\le C\sum_{n=1}^{\infty}\left[\int_0^t \lambda_n|\langle\frac{\partial}{\partial t} f(\cdot,\tau),\psi_n\rangle|\cdot |I^{(\mu)} u_n(t-\tau)|{\rm d}\tau\right]^2\\ &\le C\|\kappa\|_{L^1(0,T)}^2 \sum_{n=1}^{\infty} \int_0^t \lambda_n^2|\langle\frac{\partial}{\partial t} f(\cdot,\tau),\psi_n\rangle|^2 {\rm d}\tau \cdot \int_0^t 1^2 {\rm d}\tau\\ &\le C T\|\kappa\|_{L^1(0,T)}^2 \int_0^T \sum_{n=1}^{\infty} \lambda_n^2|\langle\frac{\partial}{\partial t} f(\cdot,\tau),\psi_n\rangle|^2{\rm d}\tau \le CT\|\kappa\|_{L^1(0,T)}^2 \int_0^T \big\|\frac{\partial}{\partial t} f(\cdot,\tau)\big\|_{H^2(\Omega)}^2{\rm d}\tau\\ &=CT\|\kappa\|_{L^1(0,T)}^2 |f|^2_{H^1([0,T];H^2(\Omega))}. \end{aligned} \end{equation*} Hence, \begin{equation*} \begin{split} \|u^*(x,t)\|_{C([0,T];H^2(\Omega))} &\le C\|u_0\|_{H^2( \Omega)}+C\|\kappa\|_{L^1(0,T)} \|f(\cdot, 0)\|_{H^2(\Omega)}\\ &\qquad+CT^{1/2}\|\kappa\|_{L^1(0,T)} |f|_{H^1([0,T];H^2(\Omega))}\\ &\le C\big(\|u_0\|_{H^2(\Omega)}+\|f(\cdot, 0)\|_{H^2(\Omega)} +T^{1/2} |f|_{H^1([0,T];H^2(\Omega))}\big). \end{split} \end{equation*} Due to the fact that $\kappa$ is determined by $\mu$, the constant $C$ above only depends on $\mu$, $\mathcal{L}$ and $\Omega$. \end{proof} \begin{lemma}\label{lem:regularity_Dmu} \begin{equation*} \begin{split} \|D^{(\mu)} u^*\|_{C([0,T];L^2(\Omega))} \le C\left(\|u_0\|_{H^2(\Omega)}+T^{1/2}|f|_{H^1([0,T];H^2(\Omega))} +\|f\|_{C([0,T];H^2(\Omega))}\right), \end{split} \end{equation*} where $C>0$ only depends on $\mu$, $\mathcal{L}$ and $\Omega$. 
\end{lemma} \begin{proof} \par For each $t\in(0,T)$, \begin{equation*} \begin{split} D^{(\mu)} u^*(x,t)&=-\sum_{n=1}^{\infty}\lambda_n\langle u_0,\psi_n\rangle u_n(t)\psi_n(x) -\sum_{n=1}^{\infty}\lambda_n\langle f(\cdot,0),\psi_n\rangle I^{(\mu)} u_n(t)\psi_n(x)\\ &\quad-\sum_{n=1}^{\infty}\lambda_n\int_0^t \langle\frac{\partial}{\partial t} f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\ \psi_n(x) +f(x,t), \end{split} \end{equation*} which implies \begin{equation*} \begin{aligned} \qquad\|D^{(\mu)} u^*\|_{L^2(\Omega)}& \le \|\sum_{n=1}^{\infty}\lambda_n\langle u_0,\psi_n\rangle u_n(t)\psi_n(x)\|_{L^2(\Omega)} +\|\sum_{n=1}^{\infty}\lambda_n\langle f(\cdot,0),\psi_n\rangle I^{(\mu)} u_n(t)\psi_n(x)\|_{L^2(\Omega)}\\ &\quad+\|\sum_{n=1}^{\infty}\lambda_n\int_0^t \langle\frac{\partial}{\partial t} f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\ \psi_n(x)\|_{L^2(\Omega)} +\|f(\cdot,t)\|_{L^2(\Omega)}. \end{aligned} \end{equation*} Arguing as in the estimates for $I_1$, $I_2$ and $I_3$ in the proof of Lemma~\ref{lem:regularity_u}, we obtain \begin{equation*} \|\sum_{n=1}^{\infty}\lambda_n\langle u_0,\psi_n\rangle u_n(t)\psi_n(x)\|_{L^2(\Omega)}^2 =\sum_{n=1}^{\infty} \lambda_n^2\langle u_0,\psi_n\rangle^2u_n^2(t) \le C\|u_0\|_{H^2(\Omega)}^2, \end{equation*} \begin{equation*} \begin{split} \|\sum_{n=1}^{\infty}\lambda_n\langle f(\cdot,0),\psi_n\rangle I^{(\mu)} u_n(t)\psi_n(x)\|_{L^2(\Omega)}^2 &=\sum_{n=1}^{\infty}\lambda_n^2\langle f(\cdot,0),\psi_n\rangle^2 (I^{(\mu)} u_n(t))^2\\ &\le C\|\kappa\|_{L^1(0,T)}^2 \|f(\cdot, 0)\|_{H^2(\Omega)}^2\\ &\le C\|\kappa\|_{L^1(0,T)}^2 \|f\|_{C([0,T];H^2(\Omega))}^2 \end{split} \end{equation*} and \begin{equation*} \begin{split} &\|\sum_{n=1}^{\infty}\lambda_n\int_0^t\langle\frac{\partial}{\partial t} f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\ \psi_n(x)\|_{L^2(\Omega)}^2\\ =&\sum_{n=1}^{\infty}\left[\int_0^t \lambda_n\langle\frac{\partial}{\partial t} f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\right]^2 \le CT\|\kappa\|_{L^1(0,T)}^2 |f|^2_{H^1([0,T];H^2(\Omega))}. \end{split} \end{equation*} Therefore, \begin{equation*} \begin{split} \|D^{(\mu)} u^*\|_{C([0,T];L^2(\Omega))} \le C\left(\|u_0\|_{H^2(\Omega)}+T^{1/2}|f|_{H^1([0,T];H^2(\Omega))} +\|f\|_{C([0,T];H^2(\Omega))}\right), \end{split} \end{equation*} where $C$ depends only on $\mu$, $\mathcal{L}$ and $\Omega$. \end{proof} The main theorem of this section follows from Corollary~\ref{cor:existence_uniqueness} and Lemmas~\ref{lem:regularity_u} and \ref{lem:regularity_Dmu}. \begin{theorem}[Main theorem for the direct problem]\label{main} There exists a unique weak solution $u^*(x,t)$ in $L^2(\Omega)$ of the DDE~\eqref{eqn:model_pde} with the representation \eqref{eqn:weak solution} and the following regularity estimate \begin{equation*} \begin{aligned} \quad\|u^*\|_{C([0,T];H^2(\Omega))} &+ \|D^{(\mu)} u^*\|_{C([0,T];L^2(\Omega))}\\ &\le C\Big(\|u_0\|_{H^2(\Omega)}+T^{1/2}|f|_{H^1([0,T];H^2(\Omega))} +\|f\|_{C([0,T];H^2(\Omega))}\Big), \end{aligned} \end{equation*} where $C>0$ depends only on $\mu$, $\mathcal{L}$ and $\Omega$. \end{theorem} \section*{Acknowledgment} The authors were partially supported by NSF Grant DMS-1620138. \bibliographystyle{abbrv} \section{Introduction}\label{sec:intro} Classical Brownian motion as formulated in Einstein's 1905 paper \cite{Einstein:1905b} can be viewed as a random walk in which the dynamics are governed by an uncorrelated, Markovian, Gaussian stochastic process.
The key assumption is that a change in the direction of motion of a particle is random and that the mean-squared displacement over many changes is proportional to time, $\langle x^2\rangle = C t$. This leads easily to the underlying differential equation being the heat equation. In fact we can generalize this situation to the case of a continuous time random walk ({\sc ctrw}) where the length of a given jump, as well as the waiting time elapsing between two successive jumps, follows a given probability density function. In one spatial dimension, the picture is as follows: a walker moves along the $x$-axis, starting at a position $x_0$ at time $t_0=0$. At time $t_1$, the walker jumps to $x_1$, then at time $t_2$ jumps to $x_2$, and so on. We assume that the temporal and spatial increments $\,\Delta t_n = t_n - t_{n-1}$, $\,\Delta x_n = x_n-x_{n-1}$ are independent, identically distributed random variables, following probability density functions $\psi(t)$ and $\lambda (x)$, which are known as the waiting time distribution and the jump length distribution, respectively. Namely, the probability of $\Delta t_n$ lying in any interval $[a,b]\subset (0,\infty)$ is $ P(a<\Delta t_n<b) = \int_a^b \psi(t)\,dt$ and the probability of $\Delta x_n$ lying in any interval $[a,b]\subset \mathbb{R}$ is $P(a<\Delta x_n <b) = \int_a^b \lambda(x)\,dx$. For given $\psi$ and $\lambda$, the position $x$ of the walker can be regarded as a step function of $t$. It is easily shown using the Central Limit Theorem that provided the first moment, or characteristic waiting time $T$, defined by $T = \mu_1(\psi)=\int_0^\infty t\psi(t)\,dt$, and the second moment, or jump length variance $\Sigma$, $\mu_2(\lambda)=\int_{-\infty}^\infty x^2\lambda(x)\,dx$, are finite, then the long-time limit again corresponds to Brownian motion. On the other hand, when the random walk involves correlations, non-Gaussian statistics or a non-Markovian process (for example, due to ``memory'' effects) the diffusion equation will fail to describe the macroscopic limit. For example, if we retain the assumption that $\Sigma$ is finite but relax the condition on a finite characteristic waiting time so that $\psi(t) \sim A/t^{1+\alpha}$ as $t\to\infty$, where $0<\alpha\leq 1$, then we get very different results. Such probability density functions are often referred to as ``heavy-tailed.'' If in fact we take \begin{equation}\label{eqn:frac_dist} \psi(t) = \frac{A_\alpha}{B_\alpha+t^{1+\alpha}} \end{equation} then again it can be shown, \cite{MontrollWeiss:1965,KlafterSokolov:2011}, that the effect is to modify the Einstein formulation $\langle x^2\rangle = C t$ to $\langle x^2\rangle = C t^\alpha$. This leads to a {\it subdiffusive\/} process and, importantly, provides a tractable model where the partial differential equation is replaced by one with a fractional derivative in time of order $\alpha$. Such objects have been a steady source of investigation over the last almost 200 years, beginning in the 1820s with the work of Abel and continuing first by Liouville and then by Riemann.
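As a quick numerical check of the subdiffusive scaling $\langle x^2\rangle = Ct^\alpha$ discussed above, the following sketch simulates a {\sc ctrw} with Pareto-type waiting times of tail index $\alpha$ and unit jumps (illustrative assumptions standing in for the specific density \eqref{eqn:frac_dist}); the log-log slope of the output is close to $\alpha$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def ctrw_msd(alpha, n_walkers=2000, t_max=1e4, n_obs=20):
    # Mean-squared displacement of a CTRW with heavy-tailed waiting
    # times P(dt > s) ~ s^(-alpha), 0 < alpha < 1, and +/-1 jumps.
    obs = np.logspace(1.0, np.log10(t_max), n_obs)
    msd = np.zeros(n_obs)
    for _ in range(n_walkers):
        t, x, k = 0.0, 0, 0
        while k < n_obs:
            dt = (1.0 - rng.uniform()) ** (-1.0 / alpha)  # Pareto draw
            while k < n_obs and obs[k] < t + dt:
                msd[k] += x * x   # walker sits at x until the next jump
                k += 1
            t += dt
            x += 1 if rng.uniform() < 0.5 else -1
    return obs, msd / n_walkers

# Example: t, m = ctrw_msd(0.5); fitting np.polyfit(np.log(t),
# np.log(m), 1) gives a slope near 0.5, in line with <x^2> ~ t^alpha.
\end{verbatim}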
The fractional derivative operator can take several forms, the most usual being either the Riemann-Liouville $^R\!D_0^\alpha$, based on Abel's original singular integral operator, or the Djrbashian-Caputo $^C\!D^\alpha_0$ version, \cite{Djrbashian:1989}, which reverses the order of the Riemann-Liouville formulation \begin{equation}\label{eqn:frac_ders} \begin{aligned} ^R\!D^\alpha_0 u &=\frac{1}{\Gamma(n-\alpha)}\frac{d^n}{dx^n}\int_0^x(x-t)^{n-\alpha-1}u(t)\,dt,\\ ^C\!D^\alpha_0u &= \frac{1}{\Gamma(n-\alpha)}\int_0^x(x-t)^{n-\alpha-1}u^{(n)}(t)\,dt.\\ \end{aligned} \end{equation} The Djrbashian-Caputo derivative tends to be more favored by practitioners since it allows the specification of initial conditions in the usual way. Nonetheless, the Riemann-Liouville derivative enjoys certain analytic advantages, including being defined for a wider class of functions and possessing a semigroup property. Thus the fractional-anomalous diffusion model gives rise to the fractional differential equation \begin{equation} \label{eqn:basic_one_term} \partial_t^\alpha u - \mathcal{L} u = f(x,t),\qquad x\in\Omega,\ t\in (0,T) \end{equation} where $\mathcal{L}$ is a uniformly elliptic differential operator on an open domain $\Omega\subset\mathbb{R}^d$ and $\partial_t^\alpha$ is one of the above fractional derivatives. The governing function for the fractional derivative becomes the Mittag-Leffler function $E_{\alpha,\beta}(z)$, which generalizes the exponential function that forms the key component of the fundamental solution in the classical case $\alpha=\beta=1$: \begin{equation}\label{eqn:mlf} E_{\alpha,\beta}(z) = \sum_{k=0}^\infty \frac{z^k}{\Gamma(\alpha k + \beta)}. \end{equation} For the typical examples described here we have $0<\alpha\leq 1$ and $\beta$ a positive real number, although further generalization is certainly possible; see, for example, \cite{GorenfloKilbasMainardiRogosin:2014}. During the past two decades, differential equations involving fractional-order derivatives have received increasing attention in applied disciplines. Such models are known to capture more faithfully the dynamics of anomalous diffusion processes in amorphous materials, e.g., viscoelastic materials, porous media, diffusion on domains with fractal geometry, and option pricing models. These models also describe certain diffusion processes more accurately than Gaussian-based Brownian motion and have particular relevance to materials exhibiting memory effects. As a consequence, we can obtain fundamentally different physics. There has been significant progress on both mathematical methods and numerical algorithm design and, more recently, attention has been paid to inverse problems. This has shed considerable light on the new physics appearing; see \cite{JinRundell:2015,SokolovKlafterBlumen:2002}. Of course, such a specific form for $\psi(t)$ as that given by \eqref{eqn:frac_dist} is rather restrictive, as it assumes a quite specific scaling factor between the space and time distributions, and there is no reason to expect nature to be so kind as to require only a single value for $\alpha$. One approach around this is to take a finite sum of such terms, each corresponding to a different value of $\alpha$. This leads to a model where the time derivative is replaced by a finite sum of fractional derivatives of orders $\alpha_j$ and, by analogy, to the law $\langle x^2\rangle = g(t,\alpha)$ where $g$ is a finite sum of fractional powers.
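Since $E_{\alpha,\beta}$ reappears throughout what follows, we note that for moderate $|z|$ it can be evaluated directly from the truncated series \eqref{eqn:mlf}; the sketch below is illustrative only (a production algorithm would switch to asymptotic or integral representations for large $|z|$):
\begin{verbatim}
from math import gamma, exp

def mittag_leffler(z, alpha, beta=1.0, kmax=120):
    # Truncated series sum_k z^k / Gamma(alpha*k + beta); adequate
    # for moderate |z| since the Gamma factor eventually dominates
    # the geometric growth of z^k.
    return sum(z ** k / gamma(alpha * k + beta) for k in range(kmax))

# Sanity check: E_{1,1}(z) = exp(z).
assert abs(mittag_leffler(1.0, 1.0) - exp(1.0)) < 1e-12
\end{verbatim}
We return now to the finite-sum model.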
This formulation replaces the single-order fractional derivative by a finite sum $\sum_1^m q_j\partial_t^{\alpha_j} u$, where a linear combination of $m$ fractional powers has been taken. Physically this represents a fractional diffusion model that assumes diffusion takes place in a medium in which there is no single scaling exponent; for example, a medium in which there are memory effects over multiple time scales. This seemingly simple device leads to considerable complications. For one, we have to use the so-called multi-index Mittag-Leffler function $E_{\alpha_1,\,\ldots\,\alpha_m,\beta_1,\,\ldots\,\beta_m}(z)$ in place of the two-parameter $E_{\alpha,\beta}(z)$, and this adds complexity not only notationally but also in proving the required regularity results for the basic forward problem of knowing $\Omega$, $\mathcal{L}$, $f$, $u_0$ and recovering $u(x,t)$; see \cite{LiYamamoto:2015,LiLiuYamamoto:2015} and the references within. It is also possible to generalize beyond the finite sum by taking the so-called distributed fractional derivative, \begin{equation}\label{eqn:distributional_der-def} \partial_t^{(\mu)} u(t) = \int_0^1 \mu(\alpha) \partial_t^\alpha u(t) \,d\alpha. \end{equation} Thus the finite sum derivative can be obtained by taking $\mu(\alpha) = \sum_{j=1}^m q_j\delta(\alpha-\alpha_j)$. See \cite{Naber:2003,Kochubei:2008,MMPG:2008,Luchko:2009b,li2016analyticity} for several studies incorporating this extension. This in turn allows a more general probability density function $\psi$ in \eqref{eqn:frac_dist} and hence a more general form for $g(t,\alpha)$. The purpose of this paper is to analyze this distributed extension of equation~\eqref{eqn:basic_one_term}, and the paper is organized as follows. First, we demonstrate existence, uniqueness and regularity results for the solution of the distributed fractional derivative model on a cylindrical region in space-time $\Omega\times[0,T]$, where $\Omega$ is a bounded, open set in $\mathbb{R}^d$. Second, in the case of one spatial variable, $d=1$, we set up representation theorems for the solution analogous to that for the heat equation itself, \cite{Cannon:1984}, and extended to the case of a single fractional derivative in \cite{RundellXuZuo:2013}. Section~\ref{sec:eur} looks at the assumptions to be made on the various terms in \eqref{eqn:distributional_der-def} and utilizes these to show existence, uniqueness and regularity results for the direct problem; namely, given $\Omega$, $\mathcal{L}$, $f$, $u_0$ and the function $\mu=\mu(\alpha)$, to solve \eqref{eqn:model_pde} for $u(x,t)$. Section~\ref{sect:representation} will derive several representation theorems for this solution, and these will be used in the final section to formulate and prove a uniqueness result for the associated inverse problem to be discussed below. \medskip However, there is an obvious question for all of these models: what is the value of $\alpha$? Needless to say, there has been much work done on this; experiments have been set up to collect additional information that allows a best fit for $\alpha$ in a given setting. One of the earliest works here is from 1975, \cite{Scher_Montroll:1975}, and was based in part on the Montroll-Weiss random walk model \cite{MontrollWeiss:1965}. See also \cite{HatanoHatano:1998}. Mathematically, the recovery in models with a single value for $\alpha$ turns out to be relatively straightforward, provided we are able to choose the type of data being measured.
This would be chosen to allow us to rely on the known asymptotic behavior of the Mittag-Leffler function for both small and large arguments. An exception here is when we also have to determine $\alpha$ as well as an unknown coefficient, in which case the combined problem can be decidedly more complex. See, for example, \cite{cheng2009uniqueness, Li2013Simultaneous, RundellXuZuo:2013}. Amongst the first papers in this direction with a rigorous existence and uniqueness analysis is \cite{HatanoNakagawaWangYamamoto:2013}. The multi-term case, although similar in concept, is quite nontrivial, but has been treated in \cite{LiYamamoto:2015,LiLiuYamamoto:2015}. In these papers the authors were able to prove an important uniqueness theorem: if given additional data consisting of the value of the normal derivative $\frac{\partial u}{\partial\nu}$ at a fixed point $x_0\in\partial\Omega$ for all $t$, then the sequence of pairs $\{q_j,\alpha_j\}_{j=1}^m$ can be uniquely recovered. The main result of the current paper in this direction is in Section~\ref{inverse_problem}, where we show that the uniqueness results of \cite{LiYamamoto:2015,LiLiuYamamoto:2015} can be extended to recover a suitably defined exponent function $\mu(\alpha)$. \section{Determining the distributed coefficient $\mu(\alpha)$} \label{inverse_problem} In this section we state and prove two uniqueness theorems for the recovery of the distributed derivative $\mu$. We show that by measuring the solution along a time trace at a fixed location $x_0$ one can use this data to uniquely recover $\mu(\alpha)$. This time trace can be one where the sampling point is located within the interior of $\Omega=(0,1)$ and we measure $u(x_0,t)$, or we measure the flux at $x^\star$, $u_x(x^\star,t)$, where $0<x^\star\leq 1$. This latter case therefore includes measuring the flux on the right-hand boundary $x=1$. First we give the definition of the admissible set $\Psi$, in accordance with Assumption~\ref{mu_assumption}. \begin{definition}\label{def:Psi} Define the set $\Psi$ by \begin{equation*} \Psi:=\{\mu\in C^1[0,1]:\ \mu\ge 0, \ \mu(1)\ne 0,\ \mu(\alpha)\ge C_{\Psi}>0 \ \text{on}\ (\beta_0, \beta_1)\}, \end{equation*} where the constant $C_{\Psi}>0$ and the interval $(\beta_{0}, \beta_{1})\subset(0,1)$ depend only on $\Psi.$ \end{definition} We introduce the functions $F(y;x_0)$ and $F_f(y;x^\star)$ in the next two lemmas. \begin{lemma}\label{lem:F} Define the function $F(y;x_0)\in C^1((0,\infty),\mathbb{R})$ as $$ F(y;x_0)=\frac{e^{(x_0-2)y}-e^{-x_0 y}}{2(1-e^{-2y})},$$ where $x_0\in (0,1)$ is a constant. Then the function $F(y;x_0)$ is strictly increasing on the interval $(\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}, \infty)\subset (0,\infty).$ \end{lemma} \begin{proof} Since $x_0\in (0,1)$, we have $e^{(x_0-2)y}-e^{-x_0 y}<0$ and $2(1-e^{-2y})>0$ on $(0,\infty)$. A direct calculation now yields $$\frac{d}{dy}(e^{(x_0-2)y}-e^{-x_0 y})=(x_0-2)e^{(x_0-2)y}+x_0e^{-x_0 y}>0$$ for $y\in (\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}, \infty)$. Hence $e^{(x_0-2)y}-e^{-x_0 y}$ is negative and strictly increasing on $(\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}, \infty)$. The function $2(1-e^{-2y})$ is obviously both positive and strictly increasing on $(\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}, \infty)$. Hence the function $F(y;x_0)$ is also strictly increasing on $(\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}, \infty)$, which completes the proof.
\end{proof} \begin{lemma}\label{lem:F_f} For the inverse problem with flux data, define the function $F_f(y;x^\star)\in C^1((0,\infty),\mathbb{R})$ as $$F_f(y;x^\star)=\frac{y e^{(x^\star-2)y}+ye^{-x^\star y}} {2(1-e^{-2y})},$$ where $x^\star\in (0,1]$ is a constant. Then the function $F_f(y;x^\star)$ is strictly decreasing on the interval $(1/x^\star,\infty)\subset (0,\infty).$ \end{lemma} \begin{proof} A direct computation gives \begin{equation*} \begin{split} \frac{\partial F_f}{\partial y}(y;x^\star ) &=\frac{((x^\star -2)y+1)e^{(x^\star -2)y}+(1-x^\star y)e^{-x^\star y}}{2(1-e^{-2y})^2}\\ &\quad +\frac{(-x^\star y-1)e^{(x^\star -4)y} +((x^\star -2)y-1)e^{(-x^\star -2)y}}{2(1-e^{-2y})^2}, \end{split} \end{equation*} hence $\frac{\partial F_f}{\partial y}(y;x^\star )<0$ for $y\in (1/x^\star,\infty)$, and the proof is complete. \end{proof} For the important lemmas to follow, we need the Stone--Weierstrass and M\"untz--Sz\'asz theorems; see the appendix for statements and references for these results. The next result shows that the set $\{(n r)^x: n\in\mathbb{N}^+\}$ is complete in $L^2[0,1]$ for any positive integer $r$. We give two proofs of this important lemma. \begin{lemma}\label{lem:dense} For each $r\in \mathbb{N}^+,$ the vector space spanned by the set of functions $\{(nr)^x:\ n\in\mathbb{N}^+\}$ is dense in the space $L^2[0,1],$ i.e. $$\overline{span\{(nr)^x: n\in\mathbb{N}^+\}}=L^2[0,1]$$ with respect to the $L^2$ norm. In other words, the set $\{(nr)^x: n\in\mathbb{N}^+\}$ is complete in $L^2[0,1].$ \end{lemma} \begin{proof} Clearly, $span\{(nr)^x: n\in\mathbb{N}^+\}$ satisfies all the conditions of the Stone--Weierstrass theorem, so that its closure with respect to the uniform norm is either $C[0,1]$ or $\{f\in C[0,1]:f(x_0)=0\}$ for some $x_0\in[0,1].$ Both alternatives yield that $span\{(nr)^x:\ n\in\mathbb{N}^+\}$ is dense in $C[0,1]$ with respect to the $L^2$ norm, which together with the fact that $C[0,1]$ is dense in $L^2[0,1]$ gives that $span\{(nr)^x: n\in\mathbb{N}^+\}$ is dense in $L^2[0,1]$, completing the proof. As a second proof, if for some $h\in C[0,1]$, $\int_0^1 (n r)^x h(x)\,dx = 0$ for all $n\in \mathbb{N}^+$, then $\int_0^1 e^{x\log(r n) } h(x)\,dx = 0$ and with the change of variables $y = e^{x}$ this becomes $\int_1^e y^{\log(r n)}\tilde h(y)\,dy = 0$ for all $n\in \mathbb{N}^+$ where $\tilde h(y) = h(\log(y))/y$. Since $\sum_{n=1}^\infty 1/\log(r n)$ diverges, the M\"untz-Sz\'asz theorem shows that $\tilde h =0$ and hence $h(x) = 0$. \end{proof} We now have the main result of this paper. \begin{theorem}[Uniqueness theorem for the inverse problem]\label{thm:uniqueness_mu} In the DDE~\eqref{eqn:one_dim_model}, set $u_0=g_1=f=0$ and let $g_0$ satisfy the following condition \begin{equation*}\label{eqn:condition_g_0_v1} (\L g_0)(z)\ne 0\ \text{for}\ z\in(0,\infty). \end{equation*} Given $\mu_1$, $\mu_2\in \Psi$, denote the two weak solutions with respect to $\mu_1$ and $\mu_2$ by $u(x,t;\mu_1)$ and $u(x,t;\mu_2)$, respectively. Then for any $x_0\in (0,1)$ and $x^\star\in(0,1]$, either \begin{equation*}\label{interior_data} u(x_0,t;\mu_1)=u(x_0,t;\mu_2) \end{equation*} or \begin{equation*}\label{flux_data} \frac{\partial u}{\partial x} (x^\star,t;\mu_1)=\frac{\partial u}{\partial x}(x^\star,t;\mu_2),\ t\in(0,\infty) \end{equation*} implies $\mu_1=\mu_2$ on $[0,1]$.
\end{theorem} \begin{proof} For the first case, $u(x_0,t;\mu_1)=u(x_0,t;\mu_2)$, fix $x_0\in (0,1)$; then Theorem~\ref{thm:representation} yields \begin{equation*} u(x_0,t;\mu_k)=-2\int_0^t \overline{\theta}_{(\mu_k)}(x_0,t-s)g_0(s) \,ds, \qquad k=1,\; 2, \end{equation*} which implies $$ \int_0^t \overline{\theta}_{(\mu_1)}(x_0,t-s)g_0(s)\,ds =\int_0^t \overline{\theta}_{(\mu_2)}(x_0,t-s)g_0(s)\,ds. $$ Taking the Laplace transform in $t$ on both sides of the above equality gives $$\Big(\L(\overline{\theta}_{(\mu_1)}(x_0,\cdot))\Big)(z)\cdot (\L g_0)(z) =\Big(\L(\overline{\theta}_{(\mu_2)}(x_0,\cdot))\Big)(z)\cdot (\L g_0)(z).$$ Since $(\L g_0)(z)\ne 0$ on $(0,\infty)$, it follows that $$\Big(\L(\overline{\theta}_{(\mu_1)}(x_0,\cdot))\Big)(z) =\Big(\L(\overline{\theta}_{(\mu_2)}(x_0,\cdot))\Big)(z), \ \text{for}\ z\in (0,\infty).$$ This result and \eqref{eqn:L_theta} then give $$ \frac{e^{(x_0-2)\Phi_1^{1/2}(z)}-e^{-x_0\Phi_1^{1/2}(z)}}{2(1-e^{-2\Phi_1^{1/2}(z)})} =\frac{e^{(x_0-2)\Phi_2^{1/2}(z)}-e^{-x_0\Phi_2^{1/2}(z)}} {2(1-e^{-2\Phi_2^{1/2}(z)})}, \ z\in(0,\infty), $$ where $$\Phi_j (z)=\int_0^1 \mu_j(\alpha)z^\alpha {\rm d}\alpha,\quad j=1,2.$$ The definition of $\Psi$ and the fact that $z\in (0,\infty)$ yield $\Phi_j^{1/2}(z)\in (0,\infty)$, and hence we can rewrite the above equality as \begin{equation}\label{eqn:equality_F} F(\Phi_1^{1/2}(z);x_0)=F(\Phi_2^{1/2}(z);x_0),\ z\in (0,\infty), \end{equation} where the function $F$ comes from Lemma~\ref{lem:F}. Since $x_0\in (0,1)$, it is obvious that $\displaystyle{\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}>0}$. Then we can pick a large $N^*\in\mathbb{N}^+$ such that \begin{equation*} \int_{\beta_0}^{\beta_1} C_\Psi\cdot (N^*)^\alpha {\rm d}\alpha >\left(\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}\right)^2, \end{equation*} which together with the definition of $\Psi$ gives that for each $z\in (0,\infty)$ with $z\ge N^*,$ $\Phi_j(z)\in (0,\infty)$ and $$\Phi_j^{1/2}(z)>\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)},\quad j=1,2.$$ This result means that \begin{equation}\label{eqn:inequality_Phi} \Phi_j^{1/2}(nN^*)>\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)},\ j=1,2,\ n\in\mathbb{N}^+. \end{equation} Lemma~\ref{lem:F} shows that $F(\cdot;x_0)$ is strictly increasing on the interval $\bigl(\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}, \infty\bigr)$, which together with \eqref{eqn:equality_F} and \eqref{eqn:inequality_Phi} yields $$\Phi_1^{1/2}(nN^*)=\Phi_2^{1/2}(nN^*),\ n\in\mathbb{N}^+,$$ that is, $\Phi_1(nN^*)=\Phi_2(nN^*)$ for $n\in\mathbb{N}^+$; consequently, we have \begin{equation*} \int_0^1 (\mu_1(\alpha)-\mu_2(\alpha)) (nN^*)^\alpha {\rm d}\alpha=0, \ n\in\mathbb{N}^+. \end{equation*} We can rewrite the above result as $\,\langle \mu_1(\alpha)-\mu_2(\alpha),(nN^*)^\alpha\rangle=0$ for $n\in\mathbb{N}^+$.
From the completeness of $\{(nN^*)^\alpha: n\in\mathbb{N}^+\}$ in $L^2[0,1]$, which is ensured by Lemma~\ref{lem:dense}, we have $\mu_1-\mu_2=0$ in $L^2[0,1]$, that is, $\,\|\mu_1-\mu_2\|_{L^2[0,1]}=0$, which together with the continuity of $\mu_1$ and $\mu_2$ shows that $\mu_1=\mu_2$ on $[0,1].$ For the case of $\frac{\partial u}{\partial x} (x^\star,t;\mu_1)=\frac{\partial u}{\partial x}(x^\star,t;\mu_2),$ following \eqref{eqn:L_theta} we have \begin{equation*}\label{eqn:L_theta_x} \begin{split} &\quad\ \L\left(\frac{\partial \overline{\theta}_{(\mu)}}{\partial x}(x,t)\right) =\L \left[\kappa * \left(\frac{\partial ^3}{\partial t\partial x^2} \sum_{m=-\infty}^{\infty} G_{(\mu)}(x,t)\right)\right]\\ &=\L \left[\kappa *\L^{-1}\left(\sum_{m=-1}^{-\infty} \frac{\Phi^{3/2}(z)}{2} e^{\Phi^{1/2}(z)(x+2m)} +\sum_{m=0}^{\infty}\frac{\Phi^{3/2}(z)}{2} e^{-\Phi^{1/2}(z)(x+2m)}\right)\right]\\ &=\frac{1}{\Phi(z)}\left(\sum_{m=-1}^{-\infty} \frac{\Phi^{3/2}(z)}{2}e^{\Phi^{1/2}(z)(x+2m)} +\sum_{m=0}^{\infty} \frac{\Phi^{3/2}(z)}{2}e^{-\Phi^{1/2}(z)(x+2m)}\right)\\ &=\frac{\Phi^{1/2}(z)e^{(x-2)\Phi^{1/2}(z)}+\Phi^{1/2}(z)e^{-x\Phi^{1/2}(z)}} {2(1-e^{-2\Phi^{1/2}(z)})}. \end{split} \end{equation*} Following the proof for the case $u(x_0,t;\mu_1)=u(x_0,t;\mu_2)$, we can deduce $\mu_1=\mu_2$ from the above result and Lemmas \ref{lem:F_f} and \ref{lem:dense}. \end{proof} \begin{remark} In this paper we have considered only the uniqueness question for the function $\mu(\alpha)$. Certainly, one would like to know under what conditions this function can be effectively recovered from the given data. Clearly this is an important question, but we caution that there are many difficulties, especially with a mathematical analysis of the stability of $\mu$ in terms of the overposed data, either $u(x_0,t)$ or $\frac{\partial u}{\partial x}(x^\star,t)$. One can certainly employ the representation result of section~\ref{sect:representation} to obtain a nonlinear integral equation for $\mu$, but the analysis of this is unclear. An alternative approach would be to restrict the function $\mu$ as in Lemma~\ref{lem:kappa} to ensure that $\kappa$ is completely monotone, and hence use Bernstein's theorem to obtain an integral representation for this function. We hope to address some of these questions in subsequent work. \end{remark} \section{Representation of the DDE solution for one spatial variable} \label{sect:representation} In this section, we will establish a representation result for the special case $\Omega=(0,1)$, $\mathcal{L}u=u_{xx}$ in \eqref{eqn:model_pde} \begin{equation}\label{eqn:one_dim_model} \begin{cases} D^{(\mu)} u-u_{xx}=f(x,t),\ 0<x<1,\ 0<t<\infty;\\ u(x,0)=u_0(x),\ 0<x<1;\\ u(0,t)=g_0(t),\ 0\le t< \infty;\\ u(1,t)=g_1(t),\ 0\le t< \infty, \end{cases} \end{equation} where $g_0,g_1\in L^2(0,\infty)$ and $f(x,\cdot)\in L^1(0,\infty)$ for each $x\in(0,1)$. We can obtain the fundamental solution by Laplace and Fourier transforms. First, we extend the finite domain to an infinite one and impose a homogeneous right-hand side, i.e. we consider the following model \begin{equation*} \begin{cases} D^{(\mu)} u-u_{xx}=0,\ -\infty<x<\infty,\ 0<t<\infty;\\ u(x,0)=u_0(x),\ -\infty<x<\infty. \end{cases} \end{equation*} Next we take the Fourier transform $\mathcal{F}$ with respect to $x$ and, denoting $(\mathcal{F} u)(\xi,t)$ by $\tilde{u}(\xi,t)$, obtain \begin{equation*} D^{(\mu)} \tilde{u}(\xi,t) +\xi^2\tilde{u}(\xi,t)=0.
\end{equation*} Then, taking the Laplace transform $\L$ with respect to $t$ and denoting $(\L \tilde{u})(\xi,z)$ by $\hat{\tilde{u}}(\xi,z)$, we obtain \begin{equation*} \int_0^1 \mu(\alpha)\left(z^\alpha\hat{\tilde{u}}(\xi,z)- z^{\alpha-1}\tilde{u}_0(\xi)\right)\,d\alpha +\xi^2\hat{\tilde{u}}(\xi,z)=0, \end{equation*} that is, \begin{equation*} \hat{\tilde{u}}(\xi,z)=\frac{\Phi(z)/z}{\Phi(z)+\xi^2}\tilde{u}_0(\xi), \end{equation*} where $\Phi(z)$ comes from \eqref{eqn:Phi}. Then we have \begin{equation*} \begin{aligned} u(x,t)=\mathcal{F}^{-1}\!\circ\L^{-1}( \hat{\tilde{u}}(\xi,z)) &=\frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{i x\xi}\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} e^{zt} \frac{\Phi(z)/z}{\Phi(z)+\xi^2}\tilde{u}_0(\xi) \,dz \,d\xi\\ &=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} e^{zt} \int_{-\infty}^{+\infty} \frac{1}{2\pi}e^{ i x\xi}\frac{\Phi(z)/z}{\Phi(z)+\xi^2} \tilde{u}_0(\xi) \,d\xi\,dz\\ &=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} e^{zt} \big(\mathcal{F}^{-1}(\frac{\Phi(z)/z}{\Phi(z)+\xi^2})*u_0\big)(x)\,dz, \end{aligned} \end{equation*} where the integral above is taken along the usual Bromwich path, that is, a line in the complex plane parallel to the imaginary axis, $z=\gamma + it$, $-\infty<t<\infty$; see \cite{WhittakerWatson:1962}. The last equality follows from the Fourier transform formula for convolutions, and $\gamma$ can be an arbitrary positive number, due to the fact that $z=0$ is a singular point of the function $\frac{\Phi(z)/z}{\Phi(z)+\xi^2}$. Throughout the remainder of this paper we will use $\gamma$ to denote a strictly positive constant which is larger than $e^{1/\beta}$; the role of the number $e^{1/\beta}$ will become clear in the proof of Lemma \ref{lem:phi}. We shall assume that the argument of $z$ for the Laplace transforms ranges from $-\pi$ to $\pi$, that is, $z\in\Lambda:=\{z\in \mathbb{C}:arg(z)\in (-\pi, \pi]\}$. For $\Phi(z)$, we have the following result, which will be central to the rest of the paper; it can be shown by using the Cauchy-Riemann equations in polar form. \begin{lemma} $\Phi(z)$ is analytic on $\mathbb{C}\setminus\!\{0\}$. \end{lemma} \par In the next two lemmas, we obtain important properties of $\Phi(z).$ \begin{lemma}\label{lem:re_phi} $\;{\displaystyle \operatorname{Re}(\Phi^{1/2}(z))\ge \frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|,\ \operatorname{Re} z=\gamma>0}$. \end{lemma} \begin{proof} $\gamma>0$ implies that $\operatorname{Re} z>0$, i.e. $arg(z)\in (-\frac{\pi}{2},\frac{\pi}{2})$, which together with $0<\alpha<1$ and $\mu(\alpha)\ge 0$ yields $\operatorname{Re} \Phi(z)\ge 0$, i.e. $arg(\Phi(z))\in (-\frac{\pi}{2},\frac{\pi}{2})$. This gives $arg(\Phi^{1/2}(z))\in (-\frac{\pi}{4},\frac{\pi}{4})$. Hence, $$\operatorname{Re}(\Phi^{1/2}(z))=\cos(arg(\Phi^{1/2}(z)))|\Phi^{1/2}(z)| \ge\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|,$$ which completes the proof. \end{proof} \begin{lemma}\label{lem:phi} $$\;{\displaystyle C_{\mu, \beta}\frac{\gamma^\beta-\gamma^{\beta_0}}{\ln \gamma} \le C_{\mu, \beta} \frac{|z|^\beta-|z|^{\beta_0}}{\ln |z|} \le|\Phi(z)|\le C\frac{|z|-1}{\ln |z|}},$$ for $z$ such that $\operatorname{Re} z=\gamma>e^{1/\beta}>0$. \end{lemma} \begin{proof} For the right-hand side of the inequality, $\mu(\alpha)\in C^1[0,1]$ obviously implies that there exists a $C>0$ such that $|\mu(\alpha)|\le C$ on $[0,1]$. Hence, \begin{equation*} \begin{aligned} |\Phi(z)|\le \int_0^1 |\mu(\alpha)|\cdot |z|^\alpha \,d\alpha \le C \int_0^1 |z|^\alpha \,d\alpha=C\frac{|z|-1}{\ln|z|}.
\end{aligned} \end{equation*} \par For the left-hand side, write $z=r e^{i\theta}$. Since $\operatorname{Re} z=\gamma>0$, we have $\theta\in(-\frac{\pi}{2},\frac{\pi}{2})$, and then \begin{equation*} \begin{aligned} |\Phi (z)|&\ge \operatorname{Re}(\Phi(z))=\int_0^1 \mu(\alpha) r^\alpha \cos(\theta \alpha) \,d\alpha \\ &\ge C_\mu \int_{\beta_0}^\beta r^\alpha \cos(\theta\alpha)\,d\alpha \ge C_\mu \cos(\beta\theta) \int_{\beta_0}^\beta r^\alpha \,d\alpha\\ &\ge C_\mu \cos(\frac{\beta\pi}{2}) \int_{\beta_0}^\beta |z|^\alpha \,d\alpha =C_{\mu, \beta} \frac{|z|^\beta-|z|^{\beta_0}}{\ln |z|}. \end{aligned} \end{equation*} Recalling that $|z|\ge \gamma>e^{1/\beta}$, we have $\frac{|z|^\beta-|z|^{\beta_0}}{\ln |z|} \ge \frac{\gamma^\beta-\gamma^{\beta_0}}{\ln \gamma}$, since the function $\frac{x^\beta-x^{\beta_0}}{\ln x}$ is increasing on the interval $(e^{1/\beta},+\infty)$. \end{proof} Now we are in a position to calculate the complex integral $\mathcal{F}^{-1}\bigl(\frac{\Phi(z)/z}{\Phi(z)+\xi^2}\bigr)$. \begin{lemma}\label{lem:inversefourier} $\;{\displaystyle \mathcal{F}^{-1}(\frac{\Phi(z)/z}{\Phi(z)+\xi^2}) =\frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)|x|}}$. \end{lemma} \begin{proof} From the inverse Fourier transform formula we have \begin{equation*} \begin{aligned} \mathcal{F}^{-1}\Bigl(\frac{\Phi(z)/z}{\Phi(z)+\xi^2}\Bigr) =\frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{i x\xi} \frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi. \end{aligned} \end{equation*} We denote the contour from $-R$ to $R$ by $C_0$, and the semicircles with radius $R$ in the upper and lower half planes by $C_{R^+}$ and $C_{R^{-}}$, respectively. Also, let $C_+$, $C_-$ be the closed contours which consist of $C_0, C_{R^+}$ and $C_0, C_{R^-}$, respectively. For the case of $x>0$, working on the closed contour $C_+$, we have \begin{equation*} \begin{aligned} \quad\frac{1}{2\pi}\int_{-\infty}^{+\infty}\!\!\!e^{ i x\xi} \frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi &=\lim_{R\to \infty} \frac{1}{2\pi}\oint_{C_+}\!\! e^{ i x\xi} \frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi -\lim_{R\to \infty} \frac{1}{2\pi}\int_{C_R^+} \!\!e^{ i x\xi} \frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi\\ &=\lim_{R\to \infty} \frac{1}{2\pi}\oint_{C_+}\!\!e^{ i x\xi} \frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi, \end{aligned} \end{equation*} where the second limit is $0$ by Jordan's lemma. Since $0<\alpha<1$ and $\gamma>0$, by our assumptions we have $\operatorname{Re}(\Phi(z))\ge 0$, which in turn leads to $\operatorname{Re}(\Phi^{1/2}(z))\ge 0$. Then there is only one singular point $\xi=i\Phi^{1/2}(z)$ in $C_+$, which lies in the upper half plane. By the residue theorem \cite{WhittakerWatson:1962}, we have \begin{equation*} \lim_{R\to \infty} \frac{1}{2\pi}\oint_{C_+}e^{ i x\xi} \frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi =\lim_{R\to \infty} 2\pi i\frac{1}{2\pi} e^{ixi\Phi^{1/2}(z)} \frac{\Phi(z)/z}{2i\Phi^{1/2}(z)} =\frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)x}. \end{equation*} For the case of $x<0$, we choose the closed contour $C_-$. Since $\operatorname{Re}(\Phi^{1/2}(z))\ge 0$, it follows that $\xi=-i\Phi^{1/2}(z)$ is the unique singular point in $C_-$. Then a similar calculation gives \begin{equation*} \begin{aligned} \quad\frac{1}{2\pi}\int_{-\infty}^{+\infty}\!\!\! e^{ i x\xi} \frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi &=-\lim_{R\to \infty} \frac{1}{2\pi}\oint_{C_-}\!\! e^{ i x\xi} \frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi +\lim_{R\to \infty} \frac{1}{2\pi}\int_{C_R^-}\!\! e^{ i x\xi} \frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi\\ &=-\lim_{R\to \infty} \frac{1}{2\pi}\oint_{C_-}\!\!
e^{ i x\xi} \frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi\\ &= \frac{\Phi^{1/2}(z)}{2z}e^{\Phi^{1/2}(z)x}. \end{aligned} \end{equation*} Therefore, $$ \mathcal{F}^{-1}\Bigl(\frac{\Phi(z)/z}{\Phi(z)+\xi^2}\Bigr) =\frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)|x|}, $$ which completes the proof. \end{proof} \subsection{The fundamental solution $\,G_{(\mu)}(x,t)$} With the above lemma, we have \begin{equation*} \begin{aligned} u(x,t)&=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} e^{zt} \int_{-\infty}^{+\infty} \frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)|x-y|} u_0(y)\,dy\,dz\\ &=\int_{-\infty}^{+\infty} \left[\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi^{1/2}(z)}{2z} e^{zt-\Phi^{1/2}(z)|x-y|}\,dz\right] u_0(y)\,dy. \end{aligned} \end{equation*} Then we can define the fundamental solution $G_{(\mu)}(x,t)$ as \begin{equation}\label{G_mu} G_{(\mu)}(x,t)=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi^{1/2}(z)}{2z} e^{zt-\Phi^{1/2}(z)|x|}\,dz. \end{equation} The following three lemmas provide some important properties of $G_{(\mu)}(x,t)$. \begin{lemma}\label{lem:pointwise} The integral defining $G_{(\mu)}(x,t)$ is convergent for each $(x,t)\in (0,\infty)\times(0,\infty)$. \end{lemma} \begin{proof} \par Given $(x,t)\in (0,\infty)\times(0,\infty)$, with Lemmas \ref{lem:re_phi} and \ref{lem:phi}, we have \begin{equation*} \begin{aligned} |G_{(\mu)}(x,t)| &\le \frac{1}{4\pi} \int_{\gamma-i\infty}^{\gamma+i\infty} |\frac{\Phi^{1/2}(z)}{z}|\cdot|e^{zt}|\cdot|e^{-\Phi^{1/2}(z)|x|}|\,dz\\ &= \frac{1}{4\pi} \int_{\gamma-i\infty}^{\gamma+i\infty} \frac{|\Phi^{1/2}(z)|}{|z|}e^{\gamma t} e^{-\operatorname{Re}(\Phi^{1/2}(z))|x|} \,dz\\ &\le \frac{1}{4\pi} \int_{\gamma-i\infty}^{\gamma+i\infty} \frac{|\Phi^{1/2}(z)|}{|z|}e^{\gamma t} e^{-\frac{\sqrt{2}}{2}|x||\Phi^{1/2}(z)|} \,dz\\ &\le\frac{Ce^{\gamma t}}{4\pi}\int_{\gamma-i\infty}^{\gamma+i\infty} (|z|\ln|z|)^{-1/2} e^{-C_{\mu, \beta}|x|(\frac{|z|^\beta-|z|^{\beta_0}}{\ln |z|})^{1/2}} \,dz\\ &\le \frac{Ce^{\gamma t}}{4\pi (\ln\gamma)^{1/2}} \int_{\gamma-i\infty}^{\gamma+i\infty} |z|^{-1/2} e^{-C_{\mu, \beta}|x|(\frac{C|z|^\beta}{\ln |z|})^{1/2}} \,dz<\infty. \end{aligned} \end{equation*} \end{proof} \begin{lemma}\label{eqn:G_smooth} $G_{(\mu)}(x,t)\in C^\infty((0,\infty)\times(0,\infty))$. \end{lemma} \begin{proof} Fix $(x,t)\in (0,\infty)\times(0,\infty)$. Then for small $|\epsilon_x|, |\epsilon_t|$ we have \begin{equation*} \begin{aligned} \quad |G_{(\mu)}(x+\epsilon_x,t+\epsilon_t)-G_{(\mu)}(x,t)| &\le |G_{(\mu)}(x+\epsilon_x,t+\epsilon_t)-G_{(\mu)}(x,t+\epsilon_t)|\\ &\quad+ |G_{(\mu)}(x,t+\epsilon_t)-G_{(\mu)}(x,t)|. \end{aligned} \end{equation*} For $|G_{(\mu)}(x+\epsilon_x,t+\epsilon_t)-G_{(\mu)}(x,t+\epsilon_t)|$, the following holds \begin{equation*} \begin{aligned} &\quad|G_{(\mu)}(x+\epsilon_x,t+\epsilon_t)-G_{(\mu)}(x,t+\epsilon_t)|\\ &\le \frac{1}{2\pi} \int_{\gamma-i\infty}^{\gamma+i\infty} |\frac{\Phi^{1/2}(z)}{2z}|\cdot|e^{zt+z\epsilon_t}| \cdot|e^{-\Phi^{1/2}(z)|x/2|}|\cdot |e^{-\Phi^{1/2}(z)(\frac{x}{2}+\epsilon_x)} -e^{-\Phi^{1/2}(z)(x/2)}|\ \,dz.
\end{aligned} \end{equation*} From the proof of Lemma~\ref{lem:pointwise}, we have \begin{equation*} \begin{aligned} |e^{-\Phi^{1/2}(z)(\frac{x}{2}+\epsilon_x)}-e^{-\Phi^{1/2}(z)(x/2)}| &\le |e^{-\Phi^{1/2}(z)(\frac{x}{2}+\epsilon_x)}|+|e^{-\Phi^{1/2}(z)(x/2)}|\\ &\le e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|(\frac{x}{2}+\epsilon_x)} +e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|(x/2)} \le 2, \end{aligned} \end{equation*} and $$ \frac{1}{2\pi} \int_{\gamma-i\infty}^{\gamma+i\infty} |\frac{\Phi^{1/2}(z)}{2z}|\cdot|e^{zt+z\epsilon_t}| \cdot|e^{-\Phi^{1/2}(z)|x/2|}|\ \,dz<\infty. $$ Hence, after setting $e_1(z,\epsilon_x)=|e^{-\Phi^{1/2}(z)(\frac{x}{2}+\epsilon_x)} -e^{-\Phi^{1/2}(z)(x/2)}|,$ we can apply Lebesgue's dominated convergence theorem to deduce that \begin{equation*} \begin{aligned} &\lim_{\epsilon_x\to 0}|G_{(\mu)}(x+\epsilon_x,t+\epsilon_t) -G_{(\mu)}(x,t+\epsilon_t)|\\ \le& \lim_{\epsilon_x\to 0} \frac{1}{2\pi} \int_{\gamma-i\infty}^{\gamma+i\infty} |\frac{\Phi^{1/2}(z)}{2z}|\cdot|e^{zt+z\epsilon_t}| \!\cdot\!|e^{-\Phi^{1/2}(z)|x/2|}|\!\cdot e_1(z,\epsilon_x)\ \,dz\\ =& \frac{1}{2\pi} \int_{\gamma-i\infty}^{\gamma+i\infty} |\frac{\Phi^{1/2}(z)}{2z}|\!\cdot\!|e^{zt+z\epsilon_t}| \!\cdot\!|e^{-\Phi^{1/2}(z)|x/2|}|\!\cdot\! \lim_{\epsilon_x\to 0}e_1(z,\epsilon_x)\ \,dz=0. \end{aligned} \end{equation*} A similar argument also shows that $\lim_{\epsilon_t\to 0}|G_{(\mu)}(x,t+\epsilon_t)-G_{(\mu)}(x,t)|=0$. From this we deduce that $\lim_{\epsilon_x,\ \epsilon_t\to 0} |G_{(\mu)}(x+\epsilon_x,t+\epsilon_t)-G_{(\mu)}(x,t)|=0$, which shows that $G_{(\mu)}(x,t)\in C((0,\infty)\times(0,\infty))$. Similarly, following the proof of Lemma~\ref{lem:pointwise} and the above limiting argument, we obtain $$G_{(\mu)}(x,t) \in C^n((0,\infty)\times(0,\infty))\quad\text{for all}\ n\in\mathbb{N},$$ which leads to $G_{(\mu)}(x,t)\in C^\infty((0,\infty)\times(0,\infty))$ and completes the proof. \end{proof} \begin{lemma}\label{lem:delta} \begin{equation*} \lim_{t\to 0} G_{(\mu)}(x,t)=\delta (x). \end{equation*} \end{lemma} \begin{proof} \par Fix $x\ne 0$. For each $t\in (0,\infty)$, \begin{equation*} \left|\frac{\Phi^{1/2}(z)}{2z}\right|\cdot |e^{zt-\Phi^{1/2}(z)|x|}| \le e^{\gamma t} \left|\frac{\Phi^{1/2}(z)}{2z}\right|\cdot|e^{-\Phi^{1/2}(z)|x|}|. \end{equation*} The proof of Lemma \ref{lem:pointwise} shows that $$\int_{\gamma-i\infty}^{\gamma+i\infty} \left|\frac{\Phi^{1/2}(z)}{2z}\right| \cdot|e^{-\Phi^{1/2}(z)|x|}|\,dz<\infty,$$ so by the dominated convergence theorem we can deduce that \begin{equation}\label{eqn:equality_2} \begin{aligned} \lim_{t\to 0} G_{(\mu)}(x,t)&= \lim_{t\to 0}\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} \frac{\Phi^{1/2}(z)}{2z} e^{zt-\Phi^{1/2}(z)|x|}\,dz\\ &=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} \frac{\Phi^{1/2}(z)}{2z} \lim_{t\to 0}e^{zt-\Phi^{1/2}(z)|x|}\,dz\\ &=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} \frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)|x|}\,dz, \end{aligned} \end{equation} for each $x\ne 0$. Letting $z=\gamma+mi$, we have \begin{equation}\label{eqn:equality_1} \begin{aligned} \lim_{t\to 0} G_{(\mu)}(x,t) =\frac{1}{4\pi} \int_{-\infty}^{+\infty}\frac{\Phi^{1/2}(\gamma+mi)}{\gamma+mi} e^{-\Phi^{1/2}(\gamma+mi)|x|}\,dm. \end{aligned} \end{equation} Recalling the definition of the closed contour $C_-$ and the proof of Lemma~\ref{lem:inversefourier}, we see that the function $\frac{\Phi^{1/2}(\gamma+mi)}{\gamma+mi}e^{-\Phi^{1/2}(\gamma+mi)|x|}$ is analytic inside $C_-$.
Then \begin{equation*} \begin{aligned} \int_{-\infty}^{+\infty}\frac{\Phi^{1/2}(\gamma+mi)}{\gamma+mi} e^{-\Phi^{1/2}(\gamma+mi)|x|}\,dm &=\lim_{R\to \infty}\int_{C_{R^-}}\!\!\!\frac{\Phi^{1/2}(\gamma+mi)}{\gamma+mi} e^{-\Phi^{1/2}(\gamma+mi)|x|}\,dm\\ &=\lim_{R\to \infty}\int_{-\pi}^0 Rie^{i\theta}\frac{\Phi^{1/2}(\gamma+Rie^{i\theta})} {\gamma+Rie^{i\theta}} e^{-\Phi^{1/2}(\gamma+Rie^{i\theta})|x|}\,d\theta, \end{aligned} \end{equation*} where $m=Re^{i\theta}$. Since $\operatorname{Re} (\gamma+Rie^{i\theta})=\gamma-R\sin\theta\ge \gamma>0$ for $\theta\in(-\pi,0)$, following the proofs of Lemmas~\ref{lem:re_phi} and \ref{lem:phi}, we can deduce that \begin{equation*} \begin{aligned} \operatorname{Re} (\Phi^{1/2}(\gamma+Rie^{i\theta})) &\ge \frac{\sqrt{2}}{2} |\Phi^{1/2}(\gamma+Rie^{i\theta})|\\ &\ge \frac{\sqrt{2}}{2}\Bigl(C_{\mu, \beta} \frac{|\gamma+Rie^{i\theta}|^\beta -|\gamma+Rie^{i\theta}|^{\beta_0}}{\ln |\gamma+Rie^{i\theta}|}\Bigr)^{1/2} \ge C \Bigl(\frac{R^\beta-R^{\beta_0}}{\ln R}\Bigr)^{1/2}, \end{aligned} \end{equation*} and $$|\Phi^{1/2}(\gamma+Rie^{i\theta})| \le \Bigl(C\frac{|\gamma+Rie^{i\theta}|-1}{\ln |\gamma+Rie^{i\theta}|}\Bigr)^{1/2} \le C\Bigl(\frac{R}{\ln R}\Bigr)^{1/2}$$ for large $R$. Hence, as $R\to \infty$, \begin{equation*} \begin{aligned} \Bigl|Rie^{i\theta}\frac{\Phi^{1/2}(\gamma+Rie^{i\theta})} {\gamma+Rie^{i\theta}}&e^{-\Phi^{1/2}(\gamma+Rie^{i\theta})|x|}\Bigr|\\ &\le |\frac{Rie^{i\theta}}{\gamma+Rie^{i\theta}}| \!\cdot\! |\Phi^{1/2}(\gamma+Rie^{i\theta})| \!\cdot\! |e^{-\Phi^{1/2}(\gamma+Rie^{i\theta})|x|}|\\ &\le C\Bigl(\frac{R}{\ln R}\Bigr)^{1/2}\!\cdot\! e^{-C \bigl(\frac{R^\beta-R^{\beta_0}}{\ln R}\bigr)^{1/2}|x|} \to 0, \end{aligned} \end{equation*} which implies $$\left|\int_{-\infty}^{+\infty}\frac{\Phi^{1/2}(\gamma+mi)}{\gamma+mi} e^{-\Phi^{1/2}(\gamma+mi)|x|}\,dm\right| \le \lim_{R\to\infty}\pi\, C\Bigl(\frac{R}{\ln R}\Bigr)^{1/2} e^{-C \bigl(\frac{R^\beta-R^{\beta_0}}{\ln R}\bigr)^{1/2}|x|} =0.$$ The above result and \eqref{eqn:equality_1} show that \begin{equation}\label{eqn:equality_3} \lim_{t\to 0} G_{(\mu)}(x,t)=0\ \text{for}\ x\ne 0. \end{equation} Now we are in a position to calculate $\int_{-\infty}^\infty \lim_{t\to 0} G_{(\mu)}(x,t) \,dx$. Equation~\eqref{eqn:equality_2} gives \begin{equation*} \begin{aligned} \int_{-\infty}^\infty \lim_{t\to 0} G_{(\mu)}(x,t) \,dx &=\int_{-\infty}^0 \lim_{t\to 0} G_{(\mu)}(x,t) \,dx +\int_0^\infty \lim_{t\to 0} G_{(\mu)}(x,t) \,dx\\ &=\int_{-\infty}^0 \frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} \frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)|x|}\,dz \,dx\\ &\qquad+\int_0^\infty \frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} \frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)|x|}\,dz \,dx\\ &=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}\int_{-\infty}^0 \frac{\Phi^{1/2}(z)}{2z}e^{\Phi^{1/2}(z)x}\,dx \,dz\\ &\qquad+\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}\int_0^\infty \frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)x}\,dx \,dz. \end{aligned} \end{equation*} Now Lemma~\ref{lem:re_phi} and the fact that $\operatorname{Re} z=\gamma>0$ show that \begin{equation*} \begin{aligned} &\int_{-\infty}^0\frac{\Phi^{1/2}(z)}{2z}e^{\Phi^{1/2}(z)x}\,dx =\frac{e^{\Phi^{1/2}(z)x}}{2z}\Big |_{-\infty}^0=\frac{1}{2z},\\ &\int_{0}^\infty\frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)x}\,dx =\frac{e^{-\Phi^{1/2}(z)x}}{2z}\Big |_\infty^0=\frac{1}{2z}.\\ \end{aligned} \end{equation*} Therefore, ${\displaystyle\; \int_{-\infty}^\infty \lim_{t\to 0} G_{(\mu)}(x,t) \,dx =\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} \frac{1}{2z}\cdot 2\,dz=1 }$, which together with \eqref{eqn:equality_3} yields the conclusion.
\end{proof} Lemma~\ref{lem:delta} allows us to make the definition \begin{equation}\label{initial_G_mu} G_{(\mu)}(x,0)=\lim_{t\to 0} G_{(\mu)}(x,t)=\delta (x). \end{equation} \subsection{The Theta functions: $\theta_{(\mu)}(x,t)$ and $\overline{\theta}_{(\mu)}(x,t)$} One very useful way to represent solutions to initial value problems for a parabolic equation is through the $\theta$-function \cite{Cannon:1984}. For the case of the heat equation, if we let $K(x,t)$ denote the fundamental solution, then we set $\theta(x,t) = \sum_{m=-\infty}^{\infty}K(x+2m,t)$. The value of this function lies in the following result. If $u_t-u_{xx}=0$, $u(0,t)=f_0(t)$, $u(1,t)=f_1(t)$, $u(x,0)=u_0(x)$, then $u(x,t)$ has the representation \begin{equation} \begin{aligned} u(x,t) &= \int_0^1[\theta(x-\xi,t)-\theta(x+\xi,t)]u_0(\xi)\,d\xi\\ &\quad -2\int_0^t \frac{\partial\theta}{\partial x}(x,t-\tau)f_0(\tau)\,d\tau +2\int_0^t \frac{\partial\theta}{\partial x}(x-1,t-\tau)f_1(\tau)\,d\tau. \end{aligned} \end{equation} A generalization to the case of the fractional equation $D_t^\alpha u -u_{xx} = 0$ for a fixed $\alpha$, $0<\alpha\leq 1$, can be found in \cite{RundellXuZuo:2013}. Our aim is to extend this representation result to the distributed fractional order case. \begin{definition}\label{def:theta_func} We define, for each $\mu(\alpha)$ which satisfies Assumption \ref{mu_assumption}, $${\displaystyle\; \theta_{(\mu)}(x,t)=\sum_{m=-\infty}^{\infty} G_{(\mu)}(x+2m,t)}.$$ \end{definition} \par The uniform convergence and smoothness properties of $\theta_{(\mu)}(x,t)$ are established by the next lemma. \begin{lemma}\label{lem:uniform_theta} $\theta_{(\mu)}(x,t)$ is an even function of $x$, and the series converges uniformly on $(0,2)\times (0,T)$ for any positive $T$. Consequently, $\theta_{(\mu)}(x,t)\in C^\infty((0,2)\times (0,\infty))$. \end{lemma} \begin{proof} The even symmetry follows directly from the definitions of $G_{(\mu)}(x,t)$ and $\theta_{(\mu)}(x,t)$. \par Given a positive $T$, fix $(x,t)\in (0,2)\times (0,T)$; by Lemma \ref{lem:re_phi} we have \begin{equation}\label{leftsum} \begin{aligned} \sum_{|m|>N}|G_{(\mu)}(x+2m,t)| &\le \frac{1}{2\pi } \int_{\gamma-i\infty}^{\gamma+i\infty} \big|\frac{\Phi^{1/2}(z)}{2z}\big|\,|e^{zt}|\sum_{|m|>N} e^{-\operatorname{Re}(\Phi^{1/2}(z))|x+2m|} \,dz\\ &\le \frac{1}{2\pi } \int_{\gamma-i\infty}^{\gamma+i\infty} \big|\frac{\Phi^{1/2}(z)}{2z}\big|e^{\gamma t}\sum_{|m|>N} e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)||x+2m|} \,dz.
\end{aligned} \end{equation} For the series $\sum_{|m|>N} e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)||x+2m|}$, summing the two geometric tails and applying Lemma~\ref{lem:phi} gives \begin{equation*} \begin{aligned} &\sum_{|m|>N}e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)||x+2m|}\\ =&\ (1-e^{-\sqrt{2}|\Phi^{1/2}(z)|})^{-1} (e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|(2N+2+x)}+ e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|(2N+2-x)})\\ =&\ \frac{e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|(2N-2)}} {1-e^{-\sqrt{2}|\Phi^{1/2}(z)|}} e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|} (e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|(3+x)}+ e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|(3-x)})\\ \le&\ 2 (1-e^{-\sqrt{2}(C_{\mu, \beta}\frac{\gamma^\beta-\gamma^{\beta_0}} {\ln \gamma})^{1/2}})^{-1} (e^{-\frac{\sqrt{2}}{2}(C_{\mu, \beta}\frac{\gamma^\beta-\gamma^{\beta_0}} {\ln \gamma})^{1/2}})^{2N-2} e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|}\\ \le&\ A_\gamma C_\gamma^{2N-2}e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|}, \end{aligned} \end{equation*} where $$A_\gamma=2 (1-e^{-\sqrt{2}(C_{\mu, \beta}\frac{\gamma^\beta-\gamma^{\beta_0}} {\ln \gamma})^{1/2}})^{-1}, \quad 0<C_\gamma=e^{-\frac{\sqrt{2}}{2}(C_{\mu,\beta}\frac{\gamma^\beta-\gamma^{\beta_0}} {\ln \gamma})^{1/2}}<1$$ depend only on $\gamma>0$. Inserting the above result into \eqref{leftsum} yields $$ \sum_{|m|>N}|G_{(\mu)}(x+2m,t)| \le \frac{1}{2\pi} \int_{\gamma-i\infty}^{\gamma+i\infty} \big|\frac{\Phi^{1/2}(z)}{2z}\big|e^{\gamma t} A_\gamma C_\gamma^{2N-2}e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|}\,dz. $$ Meanwhile, from the proof of Lemma~\ref{lem:pointwise}, we have $$\int_{\gamma-i\infty}^{\gamma+i\infty} \big|\frac{\Phi^{1/2}(z)}{2z}\big| e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|}\,dz<\infty.$$ Therefore, $$ \sum_{|m|>N}|G_{(\mu)}(x+2m,t)| \le CC_\gamma^{2N-2}, $$ where the constant $C$ depends only on $T$ and $\gamma$, and $0<C_\gamma<1$ depends only on $\gamma$. We conclude from this that for each $\epsilon>0$ there exists a sufficiently large $N\in\mathbb{N}$, independent of $x$ and $t$, such that $$ \sum_{|m|>N}|G_{(\mu)}(x+2m,t)|<\epsilon \ \text{for each}\ (x,t)\in (0,2)\times (0,T), $$ which implies the uniform convergence of the series. The smoothness result then follows from Lemma~\ref{eqn:G_smooth} and the uniform convergence. \end{proof} \par Now we introduce the definition of $\overline{\theta}_{(\mu)}(x,t)$ and state some of its properties. \begin{definition} \begin{equation*}\label{eqn:theta_bar} \overline{\theta}_{(\mu)}(x,t) =\left(I^{(\mu)} \frac{\partial ^2 \theta_{(\mu)}}{\partial t \partial x} \right)(x,t),\ (x,t)\in (0,2)\times(0,\infty). \end{equation*} \end{definition} \begin{lemma}\label{lem:dmu_xx} $D^{(\mu)}\theta_{(\mu)}(x,t)=(\theta_{(\mu)}(x,t))_{xx}$,\quad $D^{(\mu)}\overline{\theta}_{(\mu)}(x,t)=(\overline{\theta}_{(\mu)}(x,t))_{xx}$ . \end{lemma} \begin{proof} \par The first equality follows from the fact that $D^{(\mu)} G_{(\mu)}(x,t)=(G_{(\mu)}(x,t))_{xx}$ and the uniform convergence of the series representation.
For the second equality, Lemma~\ref{lem:kappa} yields $D^{(\mu)}\overline{\theta}_{(\mu)}=D^{(\mu)}I^{(\mu)} \frac{\partial ^2 \theta_{(\mu)}}{\partial t \partial x} =\frac{\partial ^2 \theta_{(\mu)}}{\partial t \partial x}$, and this together with the first equality and Lemma~\ref{lem:uniform_theta} then gives \begin{equation*} \begin{aligned} (\overline{\theta}_{(\mu)})_{xx} &=I^{(\mu)} \frac{\partial ^2 }{\partial t \partial x}(\frac{\partial^2\theta_{(\mu)}}{\partial x^2}) =I^{(\mu)} \frac{\partial ^2 }{\partial t \partial x} D^{(\mu)}\theta_{(\mu)} =I^{(\mu)} \frac{\partial }{\partial t} D^{(\mu)} (\frac{\partial \theta_{(\mu)}}{\partial x})\\ &=\kappa * \frac{\partial }{\partial t} [\eta * \frac{\partial^2 \theta_{(\mu)}}{\partial t\partial x} ] =\kappa * \eta * \frac{\partial^3 \theta_{(\mu)}}{\partial t^2\partial x} +\kappa * \eta\cdot \frac{\partial^2 \theta_{(\mu)}}{\partial t\partial x} (x,0)\\ &=\int_0^t \frac{\partial^3 \theta_{(\mu)}}{\partial \tau^2 \partial x}(x,\tau) \,d\tau +\frac{\partial^2 \theta_{(\mu)}}{\partial t\partial x} (x,0)\\ &=\frac{\partial^2 \theta_{(\mu)}}{\partial t\partial x} (x,t) -\frac{\partial^2 \theta_{(\mu)}}{\partial t\partial x} (x,0) +\frac{\partial^2 \theta_{(\mu)}}{\partial t\partial x} (x,0) =\frac{\partial^2 \theta_{(\mu)}}{\partial t\partial x}, \end{aligned} \end{equation*} which shows that the second equality holds. \end{proof} \begin{lemma}\label{lem:boundary_overline_theta} For each $\psi(t)\in L^2(0,\infty)$, we have \begin{equation*} \begin{split} &\int_0^t \overline{\theta}_{(\mu)}(0+,t-s)\psi(s)\,ds=-\frac{1}{2} \psi(t), \quad \int_0^t \overline{\theta}_{(\mu)}(1-,t-s)\psi(s)\,ds = 0,\\ &\int_0^t \overline{\theta}_{(\mu)}(0-,t-s)\psi(s)\,ds=\frac{1}{2} \psi(t), \quad \int_0^t \overline{\theta}_{(\mu)}(-1+,t-s)\psi(s)\,ds = 0,\quad t\in (0,\infty). \end{split} \end{equation*} \end{lemma} \begin{proof} Fix $(x,t)\in (0,1)\times(0,\infty)$; then computing the Laplace transform yields \begin{equation}\label{eqn:L_theta} \begin{aligned} \L(\overline{\theta}_{(\mu)}(x,t)) &=\L \Bigl[\kappa * \Bigl(\frac{\partial ^2}{\partial t\partial x} \sum_{m=-\infty}^{+\infty} G_{(\mu)}(x+2m,t)\Bigr)\Bigr]\\ &=\L \Bigl[\kappa *\Bigl(\sum_{m=-\infty}^{-1} \frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi(z)}{2} e^{zt+\Phi^{1/2}(z)(x+2m)}\,dz\\ &\quad-\sum_{m=0}^{+\infty} \frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi(z)}{2} e^{zt-\Phi^{1/2}(z)(x+2m)}\,dz\Bigr)\Bigr]\\ &=\L (\kappa)\cdot \L\Bigl(\sum_{m=-\infty}^{-1} \frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi(z)}{2} e^{zt+\Phi^{1/2}(z)(x+2m)}\,dz\\ &\quad-\sum_{m=0}^{+\infty} \frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi(z)}{2} e^{zt-\Phi^{1/2}(z)(x+2m)}\,dz\Bigr)\\ &=\frac{1}{\Phi(z)}\Bigl(\sum_{m=-\infty}^{-1} \frac{\Phi(z)}{2}e^{\Phi^{1/2}(z)(x+2m)} -\sum_{m=0}^{+\infty} \frac{\Phi(z)}{2}e^{-\Phi^{1/2}(z)(x+2m)}\Bigr)\\ &=\frac{e^{(x-2)\Phi^{1/2}(z)}-e^{-x\Phi^{1/2}(z)}} {2(1-e^{-2\Phi^{1/2}(z)})}, \end{aligned} \end{equation} where the last equality follows from the fact that $\operatorname{Re} (\Phi^{1/2}(z))>0$, which is in turn ensured by Lemma~\ref{lem:re_phi}. Therefore, \begin{equation*} \begin{aligned} &\L\left(\int_0^t \overline{\theta}_{(\mu)}(0+,t-s)\psi(s) \,ds\right) =\L(\overline{\theta}_{(\mu)}(0+,t))\L(\psi(t)) =-\frac{1}{2}\L(\psi(t));\\ &\L\left(\int_0^t \overline{\theta}_{(\mu)}(1-,t-s)\psi(s) \,ds\right) =\L(\overline{\theta}_{(\mu)}(1-,t))\L(\psi(t))=0.
\end{aligned} \end{equation*} \par For $(x,t)\in (-1,0)\times(0,\infty)$, we have \begin{equation*} \begin{aligned} \L(\overline{\theta}_{(\mu)}(x,t)) &=\L \Bigl[\kappa *\Bigl(\sum_{m=-\infty}^{0} \frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi(z)}{2} e^{zt+\Phi^{1/2}(z)(x+2m)}\,dz\\ &\quad-\sum_{m=1}^{+\infty} \frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi(z)}{2} e^{zt-\Phi^{1/2}(z)(x+2m)}\,dz\Bigr)\Bigr]\\ &=\frac{1}{\Phi(z)}\Bigl(\sum_{m=-\infty}^{0} \frac{\Phi(z)}{2}e^{\Phi^{1/2}(z)(x+2m)} -\sum_{m=1}^{+\infty} \frac{\Phi(z)}{2}e^{-\Phi^{1/2}(z)(x+2m)}\Bigr)\\ &=\frac{e^{x\Phi^{1/2}(z)}-e^{-(x+2)\Phi^{1/2}(z)}} {2(1-e^{-2\Phi^{1/2}(z)})}, \end{aligned} \end{equation*} which gives $\L(\overline{\theta}_{(\mu)}(0-,t))=\frac{1}{2}$ and $\L(\overline{\theta}_{(\mu)}(-1+,t))=0$, and completes the proof. \end{proof} \subsection{Representation of the solution to the initial-boundary value problem} We will build the representation of the solution in this subsection from four terms expressed via the theta functions: the initial condition, the values of $u$ at the boundaries $x=0$ and $x=1$, and the nonhomogeneous term $f$. \begin{definition}\label{eqn:theta_kernels} \begin{equation*} \begin{aligned} u_1(x,t)&=\int_0^1(\theta_{(\mu)}(x-y,t)-\theta_{(\mu)}(x+y,t))u_0(y) \,dy;\\ u_2(x,t)&=-2\int_0^t \overline{\theta}_{(\mu)}(x,t-s)g_0(s) \,ds;\\ u_3(x,t)&=2\int_0^t \overline{\theta}_{(\mu)}(x-1,t-s) g_1(s)\,ds;\\ u_4(x,t)&=\int_0^1\int_0^t[\theta_{(\mu)}(x-y,t-s)- \theta_{(\mu)}(x+y,t-s)]\cdot [\frac{\partial}{\partial t}I^{(\mu)} f(y,s)] \,ds\,dy. \end{aligned} \end{equation*} \end{definition} \par The following four lemmas give some properties of $u_j,\ j=1,2,3,4$. \begin{lemma}\label{lem:u1u2u3u4} $\;{\displaystyle D^{(\mu)} u_j=\frac{\partial^2 u_j}{\partial x^2},\ j=1,2,3}$, $\;{\displaystyle D^{(\mu)} u_4=\frac{\partial^2 u_4}{\partial x^2}+f(x,t)}$, where $(x,t)\in (0,1)\times(0,\infty)$. \end{lemma} \begin{proof} For $u_1$, by Lemma~\ref{lem:dmu_xx}, we have \begin{equation*} \begin{aligned} D^{(\mu)} u_1 &= \int_0^1(D^{(\mu)}\theta_{(\mu)}(x-y,t)-D^{(\mu)}\theta_{(\mu)}(x+y,t))u_0(y) \,dy\\ &=\int_0^x(D^{(\mu)}\theta_{(\mu)}(x-y,t)-D^{(\mu)}\theta_{(\mu)}(x+y,t))u_0(y) \,dy\\ &\quad+ \int_x^1(D^{(\mu)}\theta_{(\mu)}(x-y,t)-D^{(\mu)}\theta_{(\mu)}(x+y,t))u_0(y) \,dy\\ &=\int_0^x\Big[\theta_{(\mu)}(x-y,t)-\theta_{(\mu)}(x+y,t)\Big]_{xx}u_0(y) \,dy\\ &\quad+\int_x^1\Big[\theta_{(\mu)}(x-y,t)-\theta_{(\mu)}(x+y,t)\Big]_{xx}u_0(y) \,dy\\ &=\int_0^1\Big[\theta_{(\mu)}(x-y,t)-\theta_{(\mu)}(x+y,t)\Big]_{xx}u_0(y) \,dy= \frac{\partial^2 u_1}{\partial x^2}. \end{aligned} \end{equation*} For $u_2$, \begin{equation*} \begin{aligned} D^{(\mu)} u_2 &=\eta * \frac{\partial u_2}{\partial t} =-2\eta * \frac{\partial}{\partial t}(\overline{\theta}_{(\mu)}*g_0) =-2\eta *(\frac{\partial}{\partial t}\overline{\theta}_{(\mu)})*g_0 -2(\eta *g_0)\cdot \overline{\theta}_{(\mu)}(x,0)\\ &=-2D^{(\mu)} \overline{\theta}_{(\mu)} *g_0 =-2(\overline{\theta}_{(\mu)})_{xx}*g_0 =(-2\overline{\theta}_{(\mu)}*g_0)_{xx} =(u_2)_{xx}, \end{aligned} \end{equation*} where the term involving $\overline{\theta}_{(\mu)}(x,0)$ vanishes since $\overline{\theta}_{(\mu)}(x,0)=0$ for $x\in(0,1)$. In an analogous fashion to the above argument, we deduce that $D^{(\mu)} u_3=(u_3)_{xx}$.
For $u_4$, using Lemmas~\ref{lem:delta}, \ref{lem:kappa} and \ref{lem:uniform_theta} we obtain \begin{equation*} \begin{aligned} D^{(\mu)} u_4&=\eta * \frac{\partial u_4}{\partial t} =\eta * \frac{\partial}{\partial t} \Bigl(\int_0^1 [\theta_{(\mu)}(x-y,\cdot)-\theta_{(\mu)}(x+y,\cdot)] * [\frac{\partial}{\partial t}I^{(\mu)} f(y,\cdot)]\,dy\Bigr)\\ &=\eta *\Bigl(\int_0^1 \frac{\partial}{\partial t}[\theta_{(\mu)}(x-y,\cdot)-\theta_{(\mu)}(x+y,\cdot)] * [\frac{\partial}{\partial t}I^{(\mu)} f(y,\cdot)]\,dy\Bigr)\\ &\quad+ \eta *\Bigl(\int_0^1 [\theta_{(\mu)}(x-y,0)-\theta_{(\mu)}(x+y,0)] \cdot[\frac{\partial}{\partial t}I^{(\mu)} f(y,t)]\,dy\Bigr)\\ &=\int_0^1 \eta*\frac{\partial}{\partial t}[\theta_{(\mu)}(x-y,\cdot)-\theta_{(\mu)}(x+y,\cdot)] * [\frac{\partial}{\partial t}I^{(\mu)} f(y,\cdot)]\,dy\\ &\quad +\eta *\Bigl(\int_0^1 [\delta(x-y)-\delta(x+y)] \cdot[\frac{\partial}{\partial t}I^{(\mu)} f(y,t)]\,dy\Bigr)\\ &=\int_0^1D^{(\mu)} [\theta_{(\mu)}(x-y,\cdot)-\theta_{(\mu)}(x+y,\cdot)] * [\frac{\partial}{\partial t}I^{(\mu)} f(y,\cdot)]\,dy +\eta *\frac{\partial}{\partial t}I^{(\mu)} f(x,t)\\ &=\int_0^1[\theta_{(\mu)}(x-y,\cdot)-\theta_{(\mu)}(x+y,\cdot)]_{xx} * [\frac{\partial}{\partial t}I^{(\mu)} f(y,\cdot)]\,dy +D^{(\mu)}I^{(\mu)} f(x,t)\\ &=(u_4)_{xx}+f(x,t). \end{aligned} \end{equation*} \end{proof} \begin{lemma}\label{lem:initial_u} $\;{\displaystyle \lim_{t\to 0} u_1(x,t)=u_0(x)}$, $\;{\displaystyle \lim_{t\to 0} u_j(x,t)=0}$ for $j=2,3,4$, $x\in (0,1)$. \end{lemma} \begin{proof} \par For each $x\in (0,1)$, Lemma~\ref{lem:uniform_theta} and Equation~\eqref{initial_G_mu} yield that \begin{equation*} \begin{aligned} \lim_{t\to 0} u_1&= \int_0^1(\theta_{(\mu)}(x-y,0)-\theta_{(\mu)}(x+y,0))u_0(y)\,dy\\ &=\int_0^1 \sum_{m=-\infty}^\infty(\delta(x-y+2m)-\delta(x+y+2m))u_0(y)\,dy =\int_0^1 \delta(x-y)u_0(y)\,dy =u_0(x). \end{aligned} \end{equation*} The other results follow directly from the definitions of $u_2$, $u_3$ and $u_4$. \end{proof} \begin{lemma}\label{lem:boundary_u14} $\;u_j(0,t)=u_j(1,t)=0$, for $\,j=1,4$ and $t\in (0,\infty)$. \end{lemma} \begin{proof} Since $\theta_{(\mu)}(x,t)$ is even in $x$, as stated in Lemma~\ref{lem:uniform_theta}, we have \begin{equation*} u_1(0,t)=\int_0^1(\theta_{(\mu)}(0-y,t)-\theta_{(\mu)}(0+y,t)) u_0(y)\,dy=0. \end{equation*} We also have \begin{equation*} \begin{aligned} u_1(1,t)&=\int_0^1(\theta_{(\mu)}(1-y,t)-\theta_{(\mu)}(1+y,t)) u_0(y)\,dy\\ &=\int_0^1(\theta_{(\mu)}(y-1,t)-\theta_{(\mu)}(1+y,t)) u_0(y)\,dy\\ &=\int_0^1\!\Big[\!\sum_{m=-\infty}^\infty G_{(\mu)}(y-1+2m,t)- \!\!\sum_{m=-\infty}^{\infty}G_{(\mu)}(y+1+2m,t)\Big]u_0(y)\,dy\\ &=\int_0^1\!\Big[\!\sum_{q=-\infty}^\infty G_{(\mu)}(y+1+2q,t)- \!\!\sum_{m=-\infty}^{\infty}G_{(\mu)}(y+1+2m,t)\Big]u_0(y)\,dy=0, \end{aligned} \end{equation*} where $q=m-1$. The same argument yields the conclusion for $u_4$. \end{proof} \begin{lemma}\label{lem:boundary_u23} $u_2(0,t)=g_0(t)$, $u_2(1,t)=0$, $u_3(0,t)=0$, $u_3(1,t)=g_1(t)$, for $t\in (0,\infty)$. \end{lemma} \begin{proof} The proof follows directly from Lemma \ref{lem:boundary_overline_theta}. \end{proof} \par Now we can state the following. \begin{theorem}[Representation theorem]\label{thm:representation} There exists a unique solution $u(x,t)$ of Equations~\eqref{eqn:one_dim_model}, which has the representation $\;{\displaystyle u(x,t)=\sum_{j=1}^4 u_j}$.
\end{theorem} \begin{proof} The existence follows from Lemmas~\ref{lem:u1u2u3u4}, \ref{lem:initial_u}, \ref{lem:boundary_u14} and \ref{lem:boundary_u23}, while the uniqueness is ensured by Corollary \ref{cor:existence_uniqueness}. \end{proof}
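\par As a purely illustrative complement to the analysis above, the Bromwich integral \eqref{G_mu} can be evaluated numerically. The following Python sketch does so under the assumed constant weight $\mu(\alpha)\equiv 1$, for which $\Phi(z)=\int_0^1 z^\alpha\,d\alpha=(z-1)/\ln z$; the abscissa $\gamma=3$, the truncation bound $S$ and the grid sizes are ad hoc choices made for this sketch, not values prescribed by the theory. Since $\int_{-\infty}^{\infty}G_{(\mu)}(x,t)\,dx$ is the inverse Laplace transform of $1/z$, the computed total mass should be close to $1$ for $t>0$.
\begin{verbatim}
import numpy as np

def Phi(z):
    # Phi(z) = int_0^1 mu(alpha) z^alpha d(alpha) with mu(alpha) = 1,
    # i.e. the closed form (z - 1)/log(z) (principal branch).
    return (z - 1.0) / np.log(z)

def G(x, t, gamma=3.0, S=500.0, n=100001):
    # Trapezoid rule on the truncated Bromwich line z = gamma + i*s.
    # Convergence for x near 0 relies on oscillatory cancellation,
    # so the truncation error there is only moderate.
    s = np.linspace(-S, S, n)
    z = gamma + 1j * s
    root = np.sqrt(Phi(z))      # principal root, Re >= 0 on the line
    vals = root / (2.0 * z) * np.exp(z * t - root * abs(x))
    ds = s[1] - s[0]
    integral = (vals.sum() - 0.5 * (vals[0] + vals[-1])) * ds
    # dz = i*ds cancels the i in 1/(2*pi*i), leaving 1/(2*pi).
    return (integral / (2.0 * np.pi)).real

# Mass check: the fundamental solution should integrate to ~1 over x.
xs = np.linspace(-8.0, 8.0, 400)   # grid chosen to avoid x = 0 exactly
gs = [G(x, t=0.5) for x in xs]
dx = xs[1] - xs[0]
print("approximate total mass:", (sum(gs) - 0.5 * (gs[0] + gs[-1])) * dx)
\end{verbatim}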
{ "redpajama_set_name": "RedPajamaArXiv" }
\begin{abstract} Sleep staging is of great importance in the diagnosis and treatment of sleep disorders. Recently, numerous data-driven deep learning models have been proposed for automatic sleep staging. They mainly rely on the assumption that training and testing data are drawn from the same distribution, which may not hold in real-world scenarios. Unsupervised domain adaptation (UDA) has been recently developed to handle this domain shift problem. However, previous UDA methods applied to sleep staging have two main limitations. First, they rely on a totally shared model for the domain alignment, which may lose the domain-specific information during feature extraction. Second, they only align the source and target distributions globally without considering the class information in the target domain, which hinders the classification performance of the model. In this work, we propose a novel adversarial learning framework to tackle the domain shift problem in the unlabeled target domain. First, we develop unshared attention mechanisms to preserve the domain-specific features in the source and target domains. Second, we design a self-training strategy to align the fine-grained class distributions for the source and target domains via target domain pseudo labels. We also propose dual distinct classifiers to increase the robustness and quality of the pseudo labels. The experimental results on six cross-domain scenarios validate the efficacy of our proposed framework for sleep staging and its advantage over state-of-the-art UDA methods. \end{abstract} \begin{IEEEkeywords} sleep stage classification, domain adaptation, adversarial training, attention mechanism, self-training, dual classifiers \end{IEEEkeywords} \section{Introduction} Sleep stage classification is crucial to identify sleep problems and disorders in humans. This task refers to the classification of one or many different signals, including electroencephalography (EEG), electrocardiogram (ECG), electrooculogram (EOG) and electromyogram (EMG), into one of five sleep stages, namely, wake (W), rapid eye movement (REM), non-REM stage 1 (N1), non-REM stage 2 (N2), and non-REM stage 3 (N3). EEG recordings are usually split into 30-second segments, where each segment is classified manually into one of the above stages by specialists \cite{bands}. Despite being mastered by many specialists, the manual annotation process is tedious and time-consuming, especially with the large amount of collected EEG data. In recent years, data-driven deep learning approaches have been developed, relying on the availability of massive amounts of labeled data for training. Therefore, many deep learning methods have been proposed recently to perform sleep staging automatically \cite{deepsleepnet,tnnls_cnn_paper,seqsleepnet,attnSleep_paper}. These methods implemented different network structures to process EEG data and trained classification models relying on the availability of large datasets. Since these methods achieved decent performance, they were expected to be a step forward in reducing the reliance on the manual scoring process. However, many sleep labs were found to keep relying on manual scoring of EEG data \cite{attentive_sleep_Staging,phan2020towards}. The main reason is the high variation between the public training data and the data generated in the sleep labs due to several factors, \textit{e.g.}, different measuring locations on the skull and different sampling rates for measuring devices.
This is well-known as the \textit{domain shift} problem, i.e., the training (\emph{source}) and testing (\emph{target}) data have different distributions. Consequently, these models suffer a significant performance degradation when trained on public datasets and tested on sleep lab data. In addition, it is difficult for these labs to annotate large enough EEG datasets to re-train the models. A typical solution for the above issues is to employ transfer learning approaches \cite{phan2020towards,channel_mismatch}. For instance, Phan \textit{et al.} \cite{phan2020towards} applied transfer learning from a large dataset to a different and relatively smaller one. Their approach includes pre-training the model on the large dataset and then fine-tuning it on the smaller dataset. Similarly, the authors in \cite{channel_mismatch} studied the channel mismatch problem while transferring the knowledge from one dataset to another. However, these transfer learning methods require the availability of labeled data from the target domain to fine-tune the model. In reality, the target domain may be completely unlabeled, and it is thus impractical to fine-tune the models. Unsupervised domain adaptation (UDA) is a special scenario of transfer learning that aims to minimize the mismatch between the source and target distributions without using any target domain labels. So far, a limited number of studies have investigated UDA in the context of sleep stage classification. For example, Chambon \textit{et al.} \cite{da_sleep} improved the feature transferability between source and target domains using optimal transport domain adaptation. Nasiri \textit{et al.} \cite{attentive_sleep_Staging} used adversarial training based domain adaptation to improve the transferability of features. However, these methods still suffer from the following limitations. First, they rely on shared models (i.e., same architectures with same weights) to extract features from both source and target domains. This may lose the domain-specific features of both source and target domains, which can be harmful to the classification task on the target domain. Second, these approaches only align the global distribution between source and target domains without considering the mismatch of the fine-grained class distribution between the domains. As such, target samples belonging to one class can be misaligned to an incorrect class in the source domain. To tackle the aforementioned challenges, we propose an \textbf{A}dversarial \textbf{D}omain \textbf{A}daptation with \textbf{S}elf-\textbf{T}raining (\textbf{ADAST}) framework for EEG-based sleep stage classification. We first propose a domain-specific attention module to preserve both the source-specific and the target-specific features. Second, to align with the fine-grained class distribution of the unlabeled target domain, we propose a self-training strategy that provides a supervisory signal via target domain pseudo labels. Hence, we can adapt the classification decision boundaries according to the target domain classes. Moreover, we design distinct dual classifiers to improve the robustness of the target domain pseudo labels. The main contributions of this work are summarized as follows: \begin{itemize} \item We propose a novel adversarial domain adaptation framework called ADAST for sleep stage classification to handle the domain shift issue across different datasets.
\item ADAST utilizes an unshared domain-specific attention module to preserve the key features in both source and target domains during adaptation, which can boost the classification performance. \item ADAST incorporates a dual-classifier based self-training to align the fine-grained class distribution of the unlabeled target domain, which enforces the classification decision boundaries to adapt to the target domain classes. \item Extensive experiments demonstrate that our ADAST achieves superior performance for cross-domain sleep stage classification against state-of-the-art UDA methods. \end{itemize} \section{Related Works} \subsection{Sleep Stage Classification} Automatic sleep staging with single-channel EEG has been widely studied in the literature. In particular, deep learning based methods \cite{deepsleepnet,attnSleep_paper,seqsleepnet} have shown great advances through end-to-end feature learning. These methods design different network structures to extract the features from EEG data and capture the temporal dependencies. Several studies explored convolutional neural networks (CNNs) for feature extraction from EEG data. For example, Supratak \textit{et al.} \cite{deepsleepnet} proposed two CNN branches to extract different frequency features in EEG signals. Li \textit{et al.} \cite{cnn_se_Sleep} proposed to adopt a CNN supported by a squeeze and excitation block to extract the features from multi-epoch EEG data. Eldele \textit{et al.} \cite{attnSleep_paper} developed a multi-resolution CNN with adaptive feature recalibration to extract representative features. Additionally, Qu \textit{et al.} \cite{residual_attn} proposed multiple residual CNN blocks to learn feature mappings. These works further handled the temporal dependencies either by using recurrent neural networks (RNNs) as in \cite{deepsleepnet}, or by adopting the multi-head self-attention approach as a faster and more efficient alternative \cite{attnSleep_paper,residual_attn}. Instead of using CNNs, some works adopted RNNs. The authors in \cite{seqsleepnet} designed an end-to-end hierarchical RNN architecture, which consists of an attention-based recurrent layer that handles the short-term features within EEG epochs, followed by another recurrent layer to capture the epoch-wise features. Some other researchers proposed different ways to handle EEG data. For example, Phan \textit{et al.} \cite{xsleepnet} used both the raw EEG signal and its time-frequency image to design joint multi-view learning from both representations. Additionally, Jia \textit{et al.} \cite{graphsleepnet} proposed a graph-based approach for sleep stage classification, where graph and temporal convolutions were utilized to extract spatial features and capture the transition rules, respectively. Neng \textit{et al.} \cite{ccrrsleepnet} handled the EEG data at the frame, epoch and sequence levels to extract a mixture of features that would improve the classification performance. Despite the success of these methods in handling complex EEG data, their performance for cross-domain (e.g., cross-dataset) sleep stage classification is limited due to the domain shift issue. Therefore, much research has been directed toward transfer learning approaches to handle this issue. \subsection{Transfer Learning for Sleep Staging} Some works studied the problem of personalized sleep staging~\cite{personalized_1,personalized_2} to improve the classification accuracy for individual subjects within the same dataset using transfer learning.
For a dataset with two-night recordings for each subject, they pretrained the model by excluding the two nights of the test subject. Next, the first night is used to fine-tune the model and the second night is used for evaluation. However, few works have been proposed for the cross-dataset scenario, i.e., training a model on subjects from one dataset and testing on different subjects from another dataset. Phan \textit{et al.} \cite{phan2020towards} studied the data-variability issue with the availability of a large source dataset and a different, labeled but insufficient target dataset. They trained their model on the source dataset, and fine-tuned it on the smaller target dataset. With a similar problem setting, Phan \textit{et al.} \cite{channel_mismatch} proposed to use deep transfer learning to overcome the problem of channel mismatch between the two domains. These methods either require large source datasets to increase their generalization ability or a labeled target dataset to fine-tune their models. Unsupervised domain adaptation (UDA) approaches were proposed to address these issues by aligning the features from different domains. These approaches can be categorized into discrepancy-based approaches and adversarial-based approaches. The discrepancy-based approaches attempt to minimize the distance between the source and target distributions. For example, Maximum Mean Discrepancy (MMD) \cite{mmd} and CORrelation ALignment (CORAL) \cite{coral} align the first and the second order statistics, respectively. On the other hand, adversarial-based approaches mimic the adversarial training proposed in the generative adversarial network (GAN) \cite{goodfellow_gan}. Nasiri \textit{et al.} \cite{attentive_sleep_Staging} considered the problem of data transferability between two datasets, where models suffer poor generalization across subjects/datasets. They proposed adversarial training along with local and global attention mechanisms to extract the transferable individual information for unsupervised domain adaptation. \begin{figure*} \centering \includegraphics[width=\textwidth]{imgs/overall_architecture22.pdf} \caption{Overall architecture of the proposed ADAST framework. The shared feature extractor consists of three convolutional blocks, where each block contains 1D-convolution, batch normalization, non-linear ReLU activation and MaxPooling. The two classifiers share the same architecture, but we apply a similarity constraint on their weights to push them away from being identical to each other (best viewed in colors, as blocks with similar colors represent shared components).} \label{Fig:end-to-end} \end{figure*} \section{Method} \subsection{Preliminaries} In this work, we focus on the problem of unsupervised cross-domain adaptation for EEG-based sleep staging. In this setting, we have access to a labeled source dataset ${X}_s= \{(\mathbf{x}_s^i,y_s^i)\}_{i=1}^{n_s}$ of $n_s$ labeled samples, and an unlabeled target dataset ${X}_t= \{(\mathbf{x}_t^j)\}_{j=1}^{n_t}$ of $n_t$ target samples. The source and target domains are sampled from the source distribution $P_s(X_s)$ and the target distribution $P_t(X_t)$, respectively. The source and target domains have different marginal distributions (i.e., $P_s \neq P_t$), yet they share the same label space $Y=\{1,2, \dots, K\}$, where $K$ is the number of classes (i.e., sleep stages). The domain adaptation scenario aims to transfer the knowledge from a labeled source domain to a domain-shifted unlabeled target domain.
In the context of EEG data, both $\mathbf{x}_s^i$ and $\mathbf{x}_t^j$ $\in \mathbb{R}^{1 \times T}$, where the number of electrodes/channels is $1$ since we use single-channel EEG data, and $T$ represents the number of timesteps in the 30-second EEG epochs. \subsection{Overview} As shown in Fig.~\ref{Fig:end-to-end}, our proposed framework consists of three main components, namely the domain-specific attention, the adversarial training and the dual-classifier based self-training. First, the domain-specific attention plays an important role in refining the extracted features so that each domain preserves its key features. Second, the adversarial training step leverages a domain discriminator to align the source and target features. Particularly, the domain discriminator network is trained to distinguish between the source and target features, while the feature extractor is trained to confuse the domain discriminator by generating domain-invariant features. Finally, the self-training strategy utilizes the target domain pseudo labels to adapt the classification decision boundaries according to the target domain classes. The dual classifiers are incorporated to improve the quality and robustness of the pseudo labels. Further details about each component are provided in the following subsections. \subsection{Domain-specific Attention} Our proposed framework extracts domain-invariant features by using a shared CNN-based feature extractor, i.e., $F_s(\cdot) = F_t(\cdot) = F(\cdot)$. However, relying solely on this shared architecture may not preserve the key features of each domain. Therefore, we propose an unshared attention module to effectively capture domain-specific information and hence refine the extracted features for both source and target domains. For each position in the feature space, the attention module calculates the weighted sum of the features at all positions with little computational cost. Thus, the features at each position incorporate fine details from distant portions of the feature map. Formally, an input source sample $\mathbf{x}_s \in \mathbb{R}^{1 \times T}$ is passed through the feature extractor to generate the source features, i.e., $F(\mathbf{x}_s) = (\mathbf{f}_{s1}, \dots, \mathbf{f}_{sl})\in \mathbb{R}^{d \times l}$, where $d$ is the number of CNN channels, and $l$ is the length of the features. Inspired by \cite{sagan}, we deploy a convolutional attention mechanism as shown in Fig.~\ref{fig:self-attn}. The attention operation starts by obtaining a new representation for the features at each position by using two 1D-convolutions, i.e., $H_1$ and $H_2$. Specifically, given $\mathbf{f}_{si}, \mathbf{f}_{sj} \in \mathbb{R}^{d}$, which are the feature values at the positions $i$ and $j$, they are transformed into $\mathcal{Z}_{si} = H_1(\mathbf{f}_{si})$ and $\mathcal{Z}_{sj} = H_2(\mathbf{f}_{sj})$. The attention scores are calculated as follows. \begin{equation} \mathcal{V}_{ji} = \frac{\exp (\mathcal{Z}_{si}^\top \mathcal{Z}_{sj})}{\sum_{k=1}^{l} \exp(\mathcal{Z}_{sk}^\top \mathcal{Z}_{sj})}. \label{eqn:attn_map} \end{equation} Here, the attention score $\mathcal{V}_{ji}$ indicates the extent to which the $j^{th}$ position attends to the $i^{th}$ position in the feature map. The output of the attention layer is $\mathcal{O}_{s} = (\mathbf{o}_{s1}, \dots, \mathbf{o}_{sj}, \dots, \mathbf{o}_{sl}) \in \mathbb{R}^{d \times l}$, where \begin{equation} \mathbf{o}_{sj} = \sum_{i=1}^{l} \mathcal{V}_{ji} {\mathbf{f}_s}_i.
\label{eqn:attn_out} \end{equation} We denote the attention process in Equations~\ref{eqn:attn_map} and \ref{eqn:attn_out} as $A(\cdot)$, such that $\mathcal{O}_s = A_s(F(\mathbf{x}_s))$. The same process applies to the target domain data flow to train $A_t$. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{imgs/self-attention22.pdf} \end{center} \caption{Design of the domain-specific attention module.} \label{fig:self-attn} \end{figure} \subsection{Adversarial Training} Given the learned source and target representations, which preserve the domain-specific features, adversarial training is employed to align the source and target domains. Inspired by the generative adversarial network (GAN) \cite{goodfellow_gan}, we aim to solve a minimax objective between the feature extractor and the domain discriminator. Specifically, the domain discriminator is trained to classify between the source and target features, while the feature extractor tries to generate indistinguishable representations for both source and target domains. By doing so, the classifier trained on the source domain can generalize well on the target domain. However, with the minimax objective, the discriminator can saturate quickly, resulting in a gradient vanishing problem \cite{adda}. To address this issue, we train our model using a standard GAN loss with inverted labels \cite{goodfellow_gan}. Formally, the domain discriminator, $D$, classifies the input features to be either from the source or the target domain. Thus, $D$ can be optimized using a standard cross entropy loss with the labels indicating the domain of the data point. The objective of this operation $\mathcal{L}_{D}$ can be defined as: \begin{align} \min_D \mathcal{L}_{\mathrm{D}}= &-\mathbb{E}_{\mathbf{x}_{s} \sim P_{s}}[\log D(A_s(F(\mathbf{x}_{s})))] \nonumber \\ &-\mathbb{E}_{\mathbf{x}_{t} \sim P_{t}}[\log (1-D(A_t(F(\mathbf{x}_{t}))))], \label{eqn:train_disc} \end{align} where $\mathcal{L}_{\mathrm{D}}$ is used to optimize the domain discriminator separately so that it discriminates between the source and target features. On the other hand, the feature extractor and the domain-specific attention are trained to confuse the discriminator by mapping the target features to be similar to the source ones. The objective function can be described as: \begin{align} \min_{F,A_s,A_t} \mathcal{L}_{\mathrm{adv}} = &-\mathbb{E}_{\mathbf{x}_{s} \sim P_{s}}[\log (1-D(A_s(F(\mathbf{x}_{s}))))] \nonumber \\ &-\mathbb{E}_{\mathbf{x}_{t} \sim P_{t}}[\log D(A_t(F(\mathbf{x}_{t})))]. \label{eqn:adv_train} \end{align} Notably, only $\mathcal{L}_{\mathrm{adv}}$, which optimizes the feature extractor and the domain-specific attentions, is added to the overall objective function to ensure that the model is able to generate domain-invariant features. \subsection{Dual Classifier based Self-Training} After the adversarial training step, the global distributions of the source and target domains are aligned. However, the classes of the different domains may still be misaligned, which can deteriorate the performance. Hence, there is a need to align the fine-grained class distributions between the source and target domains. To address this issue, we propose a novel self-training strategy supported by dual classifiers. To apply self-training, we first use the model to produce pseudo labels for the target domain. Next, we train the model with these pseudo labels, which act as a supervisory signal to adapt the decision boundaries according to the target domain classes.
However, the generated pseudo labels might be noisy and inefficient. Therefore, at the end of training, we generate new pseudo labels and use them to retrain the model. This process is repeated for $r$ iterations. Due to the domain shift between the source and target domains, we aim to further improve the robustness of the generated pseudo labels. Therefore, we jointly train dual classifiers $C_1$ and $C_2$ that share the same architecture. Our dual-classifier approach has two main benefits. First, it makes the model less sensitive to the variance in the training data. Second, the average prediction vector of the two classifiers decreases the probability of low-confident predictions. Since the two classifiers share the same architecture, we need to ensure their diversity and make sure they do not converge to the same predictions during training. Thus, we add a regularization term $ | \theta_{C_1}^\intercal \theta_{C_2} |$ on the weights of the two classifiers, as inspired by \cite{tri_training}, where $\theta_{C_1}$, $\theta_{C_2}$ represent the weights of $C_1$ and $C_2$, respectively. This regularization term ensures the diversity of the two classifiers and helps them to produce different yet correct predictions. The final prediction vector is the average of the predictions of both classifiers. Formally, in each iteration, we first calculate the average probability $\mathbf{p}_{t}$ of the two classifiers, and the corresponding target pseudo labels $\hat{y}_{t}$, as follows. \begin{align} & \mathbf{p}_{t} = \frac{1}{2} \left[C_1(A_t(F(\mathbf{x}_{t}))) + C_2(A_t(F(\mathbf{x}_{t})))\right], \label{eqn:pt} \\ & \hat{y}_{t} = \operatorname{argmax}(\mathbf{p}_{t}). \label{eqn:y_pseudo} \end{align} The target classification loss $\mathcal{L}_{\mathrm{cls}}^{t}$ based on the above pseudo labels is defined as follows. \begin{align} \min_{F,A_t,C_1,C_2} \mathcal{L}_{\mathrm{cls}}^{t}= -\mathbb{E}_{\mathbf{x}_{t} \sim P_{t}} \sum_{k=1}^K \mathbbm{1}_{[\hat{y}_t = k]} \log \mathbf{p}_{t}^k, \label{eqn:trg_cls} \end{align} where $\mathbbm{1}$ is the indicator function, which equals 1 when the condition is met and 0 otherwise. The target classification loss $\mathcal{L}_{\mathrm{cls}}^{t}$ optimizes the feature extractor $F$, the target domain-specific attention $A_t$, as well as the dual classifiers $C_1$ and $C_2$. Similarly, the source classification loss $\mathcal{L}_{\mathrm{cls}}^{s}$, which depends on the source labels $y_s$, is formalized as follows. \begin{align} &\mathbf{p}_{s} = \frac{1}{2}~[C_1(A_s(F(\mathbf{x}_{s}))) + C_2(A_s(F(\mathbf{x}_{s})))], \\ &\min_{F,A_s,C_1,C_2} \mathcal{L}_{\mathrm{cls}}^{s}= -\mathbb{E}_{(\mathbf{x}_{s},y_{s}) \sim P_{s}} \sum_{k=1}^K \mathbbm{1}_{[y_s = k]} \log \mathbf{p}_{s}^k , \label{eqn:src_cls} \end{align} where the source classification loss $\mathcal{L}_{\mathrm{cls}}^{s}$ optimizes the feature extractor $F$, the source domain-specific attention $A_s$, as well as the dual classifiers $C_1$ and $C_2$. To sum up, we integrate the adversarial loss with the source and target classification losses and the regularization of the dual classifiers in one overall objective function as follows. \begin{align} \mathcal{L}_{\mathrm{overall}} = \mathcal{L}_{\mathrm{adv}} + \mathcal{L}_{\mathrm{cls}}^s + \lambda_1 \mathcal{L}_{\mathrm{cls}}^t + \lambda_2 | \theta_{C_1} ^\intercal \theta_{C_2} |.
\label{eqn:overall} \end{align} Since the adversarial training and the source classification are two essential modules, we set their weights to one and tune the two hyperparameters $\lambda_1$ and $\lambda_2$ to control the contributions of the target classification loss and the classifier regularization. Overall, the three losses are integrated to guide the feature extractor to generate domain-invariant features, while allowing the domain-specific attentions to preserve the key features of each domain. Additionally, the dual classifiers are diversified using the regularization term. \section{Experiments} \subsection{Datasets} We evaluate the proposed framework on three challenging datasets, namely Sleep-EDF\footnote{https://physionet.org/physiobank/database/sleep-edfx/} (\textbf{EDF} for short), SHHS-1 (\textbf{S1}) and SHHS-2 (\textbf{S2}). These three datasets represent distinct domains due to their differences in sampling rates and EEG channels. The EDF dataset contains PSG recordings of 20 healthy subjects (10 males and 10 females). Each PSG recording consists of two EEG channels, namely Fpz-Cz and Pz-Oz, with a sampling rate of 100 Hz. We adopted the EEG recordings from the Fpz-Cz channel following previous studies \cite{deepsleepnet,seqsleepnet,attnSleep_paper}. Both S1 and S2 are derived from the SHHS dataset\footnote{https://sleepdata.org/datasets/shhs} \cite{shhs_ref1,shhs_ref2}. SHHS is a multi-center cohort study conducted to assess the cardiovascular and other consequences of sleep-disordered breathing. The subjects in the SHHS dataset suffered from different diseases, such as cardiovascular diseases and lung diseases. The S1 dataset contains the data from the patients' first visits during 1995 to 1998, while the S2 dataset contains the data from their second visits in 2011. Each PSG file in both datasets contains data from two EEG channels, namely C4-A1 and C3-A2, where we only adopt the C4-A1 channel recordings for both datasets. We selected subjects from the S1 and S2 datasets such that 1) they contain different patients, 2) subjects from the S2 dataset have a sampling rate of 250 Hz, and 3) the subjects have an Apnea Hypopnea Index (AHI) $<1$ to eliminate the bias to sleep disorders~\cite{AHI_reference}. Notably, we down-sampled the data from the S1 and S2 datasets such that the sequence length is the same as in the EDF dataset, i.e., 30 seconds $\times$ 100 Hz ($T=3000$). We preprocessed the three datasets by 1) merging stages N3 and N4 into one stage (N3) according to the AASM standard, and 2) including only 30 minutes of wake stage periods before and after the sleep~\cite{deepsleepnet}. Table \ref{tbl:datasets} shows a brief summary of the above three datasets before down-sampling. \input{tables/datasets} \input{tables/baselines_comparison} \subsection{Experimental Settings} To evaluate the performance of our model and the baseline models, we employed the classification accuracy (ACC) and the macro-averaged F1-score (MF1). These two metrics are defined as follows: \begin{align} &ACC = \frac{\sum_{i=1}^{K}TP_i}{M}, \label{equ:acc}\\ &MF1 = \frac{1}{K} \sum_{i=1}^{K} \frac{2 \times Precision_i \times Recall_i}{Precision_i + Recall_i} \label{equ:f1}, \end{align} where $Precision_i = \frac{TP_i}{TP_i + FP_i}$, and $ Recall_i = \frac{TP_i}{TP_i + FN_i} $. $TP_i, ~FP_i,~ TN_i$, and $FN_i$ denote the True Positives, False Positives, True Negatives, and False Negatives for the $i$-th class, respectively; $M$ is the total number of samples and $K$ is the number of classes.
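As a quick illustration (not taken from the original implementation), these two metrics can be computed with scikit-learn; the label arrays below are hypothetical per-epoch stage labels encoded as $0$--$4$ for W, N1, N2, N3 and REM.
\begin{verbatim}
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical per-epoch labels: 0=W, 1=N1, 2=N2, 3=N3, 4=REM.
y_true = np.array([0, 1, 2, 2, 3, 4, 1, 0])
y_pred = np.array([0, 1, 2, 1, 3, 4, 1, 2])

acc = accuracy_score(y_true, y_pred)             # ACC: sum_i TP_i / M
mf1 = f1_score(y_true, y_pred, average="macro")  # MF1: unweighted mean of
                                                 # the per-class F1 scores
print(f"ACC = {acc:.4f}, MF1 = {mf1:.4f}")
\end{verbatim}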
All the experiments were repeated 5 times with different random seeds for model initialization, and we report the average performance (i.e., ACC and MF1) with the standard deviation. We performed \textit{subject-wise} splits of the data from the three domains, i.e., we split them into 60\%, 20\%, 20\% for training, validation and testing, respectively. Note that all the data from one subject were assigned to only one of the three sets under the \textit{subject-wise} splits. In particular, we used the training parts of the source and target domains while training our model. We used the validation and test parts of the target domain for validation and testing. Following \cite{tri_training,dirt}, we used the validation split of the target domain to select the best hyperparameters of our model. We tuned the parameters $\lambda_1, \lambda_2$ in the range $\{0.00001, 0.0001, 0.001, 0.01, 0.1, 1\}$ and set their values as $\lambda_1=0.01$ and $\lambda_2=0.001$. For self-training, we set the maximum number of iterations $r$ to 2, as the performance of the model was found to converge. We used the Adam optimizer with a learning rate of 1e-3 that is decayed by 0.1 after 10 epochs, a weight decay of 3e-4, $\beta_1 = 0.5$, $\beta_2 = 0.99$, and a batch size of 128. All the experiments were performed with PyTorch 1.7 on an NVIDIA GeForce RTX 2080 Ti GPU. The source code and supplementary material are available at \href{https://github.com/emadeldeen24/ADAST}{https://github.com/emadeldeen24/ADAST}. \subsection{Baselines} To assess our proposed ADAST model, we compared it against seven state-of-the-art domain adaptation baselines, listed below. In particular, Deep CORAL, MDDA and DSAN are discrepancy-based methods, while DANN, ADDA, CDAN and DIRT-T are adversarial-based methods. \begin{itemize} \item \textbf{Deep CORAL} \cite{deep_coral}: it extends CORAL \cite{coral} to learn a nonlinear transformation that aligns the correlations of layer activations in deep neural networks. \item \textbf{MDDA} \cite{mdda}: it applies MMD and CORAL on multiple classification layers to minimize the discrepancy between the source and target domains. \item \textbf{DSAN} \cite{dsan}: it incorporates a local MMD loss to align the same-class sub-domain distributions. \item \textbf{DANN} \cite{dann}: it jointly trains a feature extractor and a domain classifier by negating the gradient from the domain classifier with a gradient reversal layer (GRL). \item \textbf{ADDA} \cite{adda}: it performs a similar operation to DANN but inverts the labels instead of using a GRL. \item \textbf{CDAN} \cite{cdan}: it minimizes the cross-covariance between feature representations and classifier predictions. \item \textbf{DIRT-T} \cite{dirt}: it combines virtual adversarial domain adaptation (VADA) with a teacher to refine the decision boundary for the target domain. \end{itemize} We used our backbone feature extractor for all the baselines to ensure a fair comparison. We tuned the hyperparameters of the baselines to achieve their best performance. We also included \textbf{Source-Only} in the experiments, which was trained on the source domain and directly tested on the target domain without adaptation. It represents the lower bound in our experiments. \subsection{Experimental Results} \label{sec:exp_results} Table~\ref{tbl:baselines_comparison} shows the comparison results among the various methods. Overall, all the domain adaptation methods, including discrepancy-based and adversarial-based methods, achieve better performance than Source-Only.
\subsection{Baselines} To assess our proposed ADAST model, we compared it against seven state-of-the-art domain adaptation baselines, listed below. In particular, Deep CORAL, MDDA and DSAN are discrepancy-based methods, while DANN, ADDA, CDAN and DIRT-T are adversarial-based methods. \begin{itemize} \item \textbf{Deep CORAL} \cite{deep_coral}: it extends CORAL \cite{coral} to learn a nonlinear transformation that aligns the correlations of layer activations in deep neural networks. \item \textbf{MDDA} \cite{mdda}: it applies MMD and CORAL on multiple classification layers to minimize the discrepancy between the source and target domains. \item \textbf{DSAN} \cite{dsan}: it incorporates a local MMD loss to align the same-class sub-domain distributions. \item \textbf{DANN} \cite{dann}: it jointly trains the feature extractor and the domain classifier by negating the gradient from the domain classifier with a gradient reversal layer (GRL). \item \textbf{ADDA} \cite{adda}: it performs a similar operation to DANN, but inverts the domain labels instead of using a GRL. \item \textbf{CDAN} \cite{cdan}: it minimizes the cross-covariance between feature representations and classifier predictions. \item \textbf{DIRT-T} \cite{dirt}: it combines virtual adversarial domain adaptation (VADA) with a teacher model to refine the decision boundary for the target domain. \end{itemize} We used our backbone feature extractor for all the baselines to ensure a fair comparison, and we tuned the hyperparameters of the baselines to achieve their best performance. We also included \textbf{Source-Only} in the experiments, which was trained on the source domain and directly tested on the target domain without adaptation. It represents the lower bound in our experiments. \subsection{Experimental Results} \label{sec:exp_results} Table~\ref{tbl:baselines_comparison} shows the comparison results among the various methods. Overall, all the domain adaptation methods, both discrepancy-based and adversarial-based, achieve better performance than Source-Only. This indicates the importance of using domain adaptation to address the domain shift problem in cross-dataset sleep stage classification. We noticed that the three methods that consider the class-conditional distribution, i.e., CDAN, DIRT-T and DSAN, outperform the ones that globally align the source and target domains, i.e., DANN, Deep CORAL, ADDA and MDDA. This indicates that considering the class distribution, especially for imbalanced sleep data, is important for achieving better classification performance on the target domain. Our proposed ADAST achieves superior performance over all the baselines in terms of both mean accuracy and F1-score in four out of six cross-domain scenarios, for two reasons. First, our ADAST, similar to CDAN, DIRT-T and DSAN, also considers the class-conditional distribution. In particular, ADAST explores the target domain classes using the proposed self-training strategy with dual classifiers. Second, ADAST preserves domain-specific features using the unshared attention module, which improves the performance. As shown in Table \ref{tbl:baselines_comparison}, the performance of our model is lower than that of most baselines in the scenario S1$\rightarrow$EDF. Note that we used the same value of $\lambda_1$ (i.e., 0.01) for all six scenarios, which may be suboptimal for some of them. We found that the quality of the pseudo labels is poor in the S1$\rightarrow$EDF scenario, so a smaller $\lambda_1$ should be used to reduce the contribution of the target classification loss. By tuning $\lambda_1$ from 0.01 down to $10^{-6}$, the mean accuracy and MF1 of our ADAST in the scenario S1$\rightarrow$EDF increase from 75.94\% and 63.33\% to 78.50\% and 64.73\%, respectively. Please refer to Fig. S.1c in the supplementary material for more details. We also observed interesting results while investigating the different cross-dataset scenarios. The various methods usually achieve better performance in the cross-domain scenario S1$\rightarrow$S2 than in EDF$\rightarrow$S2 (and similarly, S2$\rightarrow$S1 is better than EDF$\rightarrow$S1). To explain this, as shown in Table \ref{tbl:datasets}, S1 and S2 are closer to each other, as they share the same EEG channel. Meanwhile, EDF has a different EEG channel and sampling rate, and is thus a distant domain from S1 and S2. These results indicate that distant domain adaptation is still very challenging. Finally, we observed that S1$\rightarrow$EDF is easier than S2$\rightarrow$EDF, probably because S1 and EDF have similar sampling rates. \input{tables/ablation} \subsection{Ablation Study} We assessed the contribution of each component in our ADAST framework, namely the unshared domain-specific attention module (\textbf{ATT}), the dual classifiers (\textbf{DC}) and self-training (\textbf{ST}). In particular, we conducted an ablation study, reported in Table~\ref{tbl:ablation}, showing the results of different variants of ADAST. The results support three main conclusions. First, using the proposed domain-specific attention benefits the overall performance, as it helps to preserve the domain-specific features. Second, self-training improves the classification performance by $\sim$1.1\%. This improvement shows the benefit of incorporating the target domain class information, via the pseudo labels, when adjusting the classification boundaries. Third, adding the dual classifiers benefits the overall classification performance, as it reduces the variance induced by the training data.
Moreover, combining the dual classifiers with self-training further improves the performance by 2.5\%, as it improves the quality of the pseudo labels. \begin{figure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{imgs/source_only_c_to_a_src_trg.png} \caption{} \label{fig:src_trg_alignment:a} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{imgs/my_method_c_to_a_src_trg.png} \caption{} \label{fig:src_trg_alignment:b} \end{subfigure} \caption{UMAP feature space visualization showing the source and target domain alignment using (a) Source-Only, and (b) our ADAST, applied to the scenario S2$\rightarrow$EDF.} \label{fig:src_trg_alignment} \end{figure} \begin{figure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{imgs/source_only_c_to_a_trg_only.png} \caption{} \label{fig:trg_classification:a} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{imgs/my_method_c_to_a_trg_only.png} \caption{} \label{fig:trg_classification:b} \end{subfigure} \caption{UMAP feature space visualization showing the target domain classification performance after (a) Source-Only, and (b) our ADAST alignment, applied to the scenario S2$\rightarrow$EDF.} \label{fig:trg_classification} \end{figure} \subsection{Representation Visualization} In Section \ref{sec:exp_results}, the results illustrate the advantages of our proposed ADAST framework over the initial Source-Only performance. To make the comparison more intuitive, we visualized the feature representations learned during the training process using UMAP \cite{umap}. First, we investigated the alignment quality, where Fig.~\ref{fig:src_trg_alignment} visualizes the source and target alignment in the scenario S2$\rightarrow$EDF. In particular, Fig.~\ref{fig:src_trg_alignment:a} shows the Source-Only alignment, and Fig.~\ref{fig:src_trg_alignment:b} shows the alignment of our ADAST framework. In these figures, the red dots represent the source domain, and the blue dots denote the target domain. We observe that the Source-Only alignment is not very effective, as there are many disjoint patches that are not well aligned with the target domain. In contrast, our ADAST framework aligns the two domains into an arc-shaped region, which increases their overlap and makes the two domains less distinguishable from each other. Additionally, we investigated the target domain classification performance after the alignment in the aforementioned scenario in Fig.~\ref{fig:trg_classification}. In particular, Fig.~\ref{fig:trg_classification:a} shows the Source-Only performance, and Fig.~\ref{fig:trg_classification:b} the performance after our alignment. We noticed that the Source-Only alignment generates many overlapping samples from different classes, which degrades the target domain classification performance. On the other hand, our ADAST framework improves the discrimination between the classes, making them more distinct from each other. This is achieved with the aid of the self-training strategy.
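The visualization workflow is simple to reproduce; the following sketch (our assumed pipeline with placeholder features, not the released script) projects the extracted features to two dimensions with UMAP and colors them by domain:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
import umap   # pip install umap-learn

# Placeholders: in practice these are the feature-extractor outputs.
src_feats = np.random.randn(500, 128)
trg_feats = np.random.randn(500, 128)

emb = umap.UMAP(n_components=2, random_state=0).fit_transform(
    np.vstack([src_feats, trg_feats]))
plt.scatter(emb[:500, 0], emb[:500, 1], s=4, c="red",
            label="source (S2)")
plt.scatter(emb[500:, 0], emb[500:, 1], s=4, c="blue",
            label="target (EDF)")
plt.legend()
plt.show()
\end{verbatim}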
\subsection{Sensitivity Analysis} \textbf{Effect of target classification loss.} Since the self-training process relies on target domain pseudo labels, it is not practical to assign a high weight to the target classification loss, as the pseudo labels are expected to carry some uncertainty. Therefore, we studied the effect of varying the weight $\lambda_1$ assigned to the target classification loss, as shown in Fig.~\ref{fig:sens_trg_cls}. Notably, when $\lambda_1$ is very small (i.e., $\lambda_1$ = 1e-6), it renders the self-training ineffective, and the performance becomes very close to the case without self-training. As we gradually increase the value of $\lambda_1$, the overall performance improves until we reach the optimal value of $\lambda_1=0.01$. Increasing $\lambda_1$ further deteriorates the performance, as the model is heavily penalized based on the pseudo labels, which may contain incorrect labels. \begin{figure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{imgs/trg_cls_sens_analysis.png} \caption{} \label{fig:sens_trg_cls} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{imgs/dissim_sens_analysis.png} \caption{} \label{fig:sens_dissim} \end{subfigure} \caption{Sensitivity analysis of the different values of $\lambda_1$ and $\lambda_2$ in Eq.~\ref{eqn:overall}.} \label{fig:sens_analysis} \end{figure} \textbf{Effect of classifier weight constraint.} Since the dual classifiers share the same architecture, it is important to keep their predictions relatively different, but without a large gap. The classifier weight constraint is the factor that keeps this distance within an acceptable margin; hence, it is important to study the effect of this term and how its weight $\lambda_2$ should be selected. We analyzed the performance of our model with different $\lambda_2$ values, as illustrated in Fig.~\ref{fig:sens_dissim}. When $\lambda_2$ is very small, the two classifiers behave very similarly, yielding performance close to that of a single classifier. The performance gradually improves as $\lambda_2$ increases, since the two classifiers tend to make different classification decisions. The best performance is achieved with $\lambda_2=0.001$. However, as the value increases beyond this threshold (i.e., 0.001), the overall performance degrades. This happens because the weights of the two classifiers become overly dissimilar, moving them away from the correct predictions. \section{Conclusions} In this paper, we proposed a novel adversarial domain adaptation architecture for sleep stage classification using single-channel raw EEG signals. We tackled the problem of domain shift that arises when a model is trained on one dataset (i.e., the source domain) and tested on another, out-of-distribution dataset (i.e., the target domain). We developed unshared attention mechanisms to preserve domain-specific features. We also proposed a dual classifier based self-training strategy, which helps the model adapt the classification boundaries to the target domain using robust pseudo labels. The experiments performed on six cross-domain scenarios generated from three public datasets demonstrate that our model achieves superior performance over state-of-the-art domain adaptation methods. Additionally, the ablation study shows that the dual classifier based self-training is the main contributor to the improvement, as it considers the class-conditional distribution in the target domain. \bibliographystyle{unsrt}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Nowadays, deep neural network computations are widely used in many applications, and accordingly there is a large body of research aiming at privacy-preserving, secure deep neural network computation. Among these efforts, federated learning is one of the well-known and successful approaches, as it does not expose training data to the public~\cite{pieee21park,iotj20kwon}. Owing to its privacy-preserving nature, federated learning is well suited to medical applications. In distributed medical systems (hospitals, biomedical research institutes, etc.), large amounts of patient data exist. Training a deep neural network on such data typically requires all of it to be gathered in one place. However, gathering and transporting patient data is strictly regulated by privacy laws. As a result, each medical system must conduct computations with its own local data. To deal with this issue, this paper proposes a federated learning framework which trains models with the data stored in each end-system \textit{locally} for privacy-preserving computation while maintaining learning accuracy. Among various federated learning algorithms, this paper considers split learning, which separates the deep neural network computation into several parts~\cite{splitlearning}. The basic system architecture for split learning is illustrated in Fig.~\ref{fig:fig1}~\cite{dsn19jeon}. As shown in Fig.~\ref{fig:fig1}, the training data located in the end-system cannot be seen at the centralized server; thus, privacy-preserving deep neural network computation is possible. However, to the best of our knowledge, multiple end-systems have not yet been considered in the split learning literature. With multiple end-systems, the training data are located in spatially separated individual end-systems and the deep neural network training is temporally separated, as shown in Fig.~\ref{fig:fig1}; our proposed system is therefore named \textit{spatio-temporal split learning}. In this proposed spatio-temporal split learning, each end-system contains several hidden layers, and the outputs of the last local hidden layer of each end-system are delivered to the centralized server. The server holds the remaining hidden layers and the output layer of the deep neural network; thus, the rest of the computation is carried out centrally. Based on our performance evaluation with CIFAR-10 classification, we observe that spatio-temporal split learning achieves near-optimal performance while preserving data privacy. \begin{figure}[t!] \centerline{\includegraphics[width=85mm]{dsn21_fig1.pdf}} \caption{Basic system architecture for split learning.} \vspace{-3mm} \label{fig:fig1} \end{figure} \section{Spatio-Temporal Split Learning Framework} Our proposed spatio-temporal split learning framework is illustrated in Fig.~\ref{fig:fig2}. As shown in Fig.~\ref{fig:fig2}, multiple end-systems exist, and the outputs of each end-system's first hidden layers are delivered to the centralized server. The original raw data is thus never shared; only an encoded representation (produced by the first hidden layer computation) leaves the end-system. Note that sharing the outputs of the first hidden layers with the centralized server does not expose the original raw data. Deep neural network computation with the remaining hidden layers and the output layer is then conducted at the server; thus, all training data is used to train a single deep neural network (a conceptual sketch of this split computation is given below).
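The following minimal PyTorch sketch (our own conceptual illustration; the layer sizes are assumptions) shows this split: the end-system runs only its local front layers and ships the resulting activations, never the raw data, to the server, which completes the forward pass:
\begin{verbatim}
import torch
import torch.nn as nn

# Front layers: these stay on the end-system and see the raw data.
local_part = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                           nn.ReLU(), nn.MaxPool2d(2))
# Remaining layers: these stay on the centralized server.
server_part = nn.Sequential(nn.Flatten(),
                            nn.Linear(16 * 16 * 16, 512), nn.ReLU(),
                            nn.Linear(512, 10))

x = torch.randn(8, 3, 32, 32)      # private local batch (CIFAR-10 size)
activations = local_part(x)        # only these encodings leave the device
logits = server_part(activations)  # server finishes the computation
\end{verbatim}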
Therefore, in theory, our proposed spatio-temporal split learning framework achieves near-optimal performance by using all data in a single network, at the cost of a small performance sacrifice caused by the individual first hidden layers in each end-system. This performance degradation grows when more hidden layers are placed in the end-systems, a tradeoff between privacy and learning performance. \begin{figure}[t!] \centerline{\includegraphics[width=85mm]{dsn21_fig2.pdf}} \caption{Our proposed spatio-temporal split learning framework.} \label{fig:fig2} \end{figure} The centralized server requires a queue while gathering the outputs of the first hidden layers from geo-distributed end-systems. If an end-system is located very far from the centralized server, the parameters from that end-system can arrive at the server late or sparsely. The learning performance can then be biased by the differing arrival patterns across end-systems. Thus, parameter scheduling is required depending on the application, i.e., a queue data structure needs to be defined. \begin{figure}[t!] \centerline{\includegraphics[width=85mm]{dsn21_fig3.pdf}} \caption{The CNN used for CIFAR-10 classification in our performance evaluation.} \label{fig:fig3} \end{figure} \begin{table}[t!]% \caption{Accuracy Results} \label{accuracy} \small \begin{center} \begin{tabular}{r|r} \toprule[1.0pt] Layers at end-systems & Accuracy \\ \midrule Nothing (all layers are in the server) & $71.09$\,\% \\ $L_{1}$ & $68.18$\,\% \\ $L_{1}, L_{2}$ & $67.92$\,\% \\ $L_{1}, L_{2}, L_{3}$ & $66.00$\,\% \\ $L_{1}, L_{2}, L_{3}, L_{4}$ & $65.66$\,\% \\ \bottomrule[1.0pt] \end{tabular} \end{center} \vspace{-5mm} \end{table} \section{Primary Evaluation Results} In order to show that our proposed spatio-temporal split learning framework achieves near-optimal performance while preserving data privacy, CIFAR-10 classification is conducted with a convolutional neural network (CNN). In our CNN, as illustrated in Fig.~\ref{fig:fig3}, 5 convolution layers (Conv2D) and 5 max-pooling layers (MaxPooling2D) are used. The layers $L_{1},\cdots,L_{5}$ (each a Conv2D followed by a MaxPooling2D) use $16$, $32$, $64$, $128$, and $256$ filters, respectively, on $32\times 32$ inputs. The last two dense layers (including the output layer) have $512$ and $10$ units (a Keras-style sketch of this CNN is given at the end of this section). Our performance evaluation is then conducted while varying which layers $L_{i}, i\in\{1,\cdots,5\}$, are placed in the end-systems. As shown in Table~\ref{accuracy}, a classification accuracy of $71.09$\,\% is obtained when all layers are in the centralized server (the global model). If $L_{1}$ is located in the end-systems, the accuracy degrades from $71.09$\,\% to $68.18$\,\%, i.e., by $2.91$\,\%. In exchange for this small performance degradation, the original raw data at the end-systems is never exposed or shared; thus, the privacy of the training data is preserved. In the worst case in our experiments, where the layers $L_{1},\cdots,L_{4}$ are in the end-systems, the accuracy is $65.66$\,\%, i.e., only a $5.43$\,\% degradation. This shows that our proposed spatio-temporal split learning works well. As shown in Fig.~\ref{fig:privacy}, the original CIFAR-10 training images may still be recognizable after only the Conv2D in $L_{1}$ (even though they are blurred), as in Fig.~\ref{fig:privacy}(b); however, the subsequent max-pooling effectively hides the original images, as in Fig.~\ref{fig:privacy}(c).
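For concreteness, the CNN of Fig.~\ref{fig:fig3} can be written as the following Keras-style sketch (our own reconstruction; the $3\times 3$ kernel size and the activation functions are assumptions, since the text only fixes the filter counts, the dense widths, and the $32\times 32$ input):
\begin{verbatim}
from tensorflow.keras import layers, models

model = models.Sequential([layers.Input(shape=(32, 32, 3))])
for n_filters in (16, 32, 64, 128, 256):   # L1, ..., L5
    model.add(layers.Conv2D(n_filters, 3, padding="same",
                            activation="relu"))
    model.add(layers.MaxPooling2D(2))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation="relu"))
model.add(layers.Dense(10, activation="softmax"))  # 10 CIFAR-10 classes
model.summary()
\end{verbatim}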
\section{Summary and Future Work} In this paper, a novel privacy-preserving split learning framework with multiple end-systems, called \textit{spatio-temporal split learning}, is proposed to avoid sharing original raw data among end-systems during deep neural network training. In our proposed framework, multiple end-systems share one centralized server; the end-systems hold the first hidden layers and the centralized server holds the remaining layers. The framework is spatially separated in that data comes from multiple end-systems, and temporally separated in that the split learning computation is divided between the end-systems and the server. The performance of the proposed framework is evaluated, and the results verify that it achieves near-optimal performance without sharing original raw data, thereby preserving privacy. \section*{Acknowledgment} This research was funded by the Ministry of Health and Welfare (HI19C0572) and the National Research Foundation of Korea (2019R1A2C4070663). S. Jung and S. Yoo are corresponding authors. \begin{figure}[t!] \centering \setlength{\tabcolsep}{2pt} \renewcommand{\arraystretch}{0.2} \begin{tabular}{p{0.31\linewidth}p{0.02\linewidth}p{0.31\linewidth}p{0.02\linewidth}p{0.31\linewidth}} \tabularnewline \tabularnewline \includegraphics[page=1, width=0.95\linewidth]{fig3a_org.png} & {} & \includegraphics[page=1, width=0.95\linewidth]{fig3b_h1.png} & {} & \includegraphics[page=1, width=0.95\linewidth]{fig3c_h1maxpool.png} \\ \tabularnewline \tabularnewline \centering(a) Original & {} & \centering(b) Conv2D in $L_{1}$ & {} & \centering(c) $L_{1}$ \tabularnewline \end{tabular} \caption{Images captured during deep neural network computation.} \label{fig:privacy} \vspace{-5mm} \end{figure}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Low-mass X-ray binaries (LMXBs) often show quasi-periodic oscillations (QPOs) in their X-ray flux. High-frequency QPOs (HF QPOs) in black-hole sources often appear in pairs, and their frequencies remain constant in time with a frequency ratio of 3 : 2. The kHz QPOs in neutron-star X-ray sources also frequently appear in pairs, but their frequencies vary with time, and the frequency ratio is not fixed at 3 : 2. The time variations of the pair of kHz QPOs are not only correlated with each other, but also correlated with the time variation of low-frequency QPOs (LF QPOs) (Boutloukos et al. 2006). The purpose of this paper is to examine whether these correlated time variations of QPOs in neutron-star X-ray binaries can be described as disk oscillations resonantly excited on warped disks. In disks deformed by some external forces, excitation of disk oscillations by resonant processes is generally expected. A well-known example is the tidal instability in cataclysmic variables (Whitehurst 1988; Hirose and Osaki 1990; Lubow 1991). Another example is the excitation of spiral patterns in ram-pressure-deformed galactic disks (Tosa 1994; Kato and Tosa 1994). In the context of high-frequency QPOs, a warp is one of the most conceivable deformations of disks. Based on this, we proposed a model in which HF QPOs in black-hole X-ray binaries and kHz QPOs in neutron-star X-ray binaries are disk oscillations resonantly excited on disks deformed by a warp (e.g., Kato 2003, 2004a,b; Klu{\' z}niak et al. 2004; Kato 2005a,b; Kato and Fukue 2006). In this warp model there are three possible combinations of i) type of resonance and ii) type of oscillations. Stability analyses for these three cases show that inertial-acoustic oscillations and/or g-mode oscillations are excited by their horizontal coupling with a warped disk (Kato 2004b). An overview of the resonant non-linear coupling processes between the oscillations and the warp, which feed back to the original oscillations so as to amplify or dampen them, is shown in figure 1 of Kato (2004b). With this resonant model in warped disks, we can account for the 3 : 2 frequency ratio of HF QPOs in black-hole X-ray sources (e.g., Kato 2004b; Kato and Fukue 2006). Although the above stability analyses by Kato (2004b) apply only to the case in which the warp has no precession, we think that the excitation of inertial-acoustic oscillations and/or g-mode oscillations still holds when the warp has precession. We further assume that the warp has a time-dependent precession in the case of neutron stars.\footnote{ In the case of black-hole sources we assume that the warp has no precession. This difference will be related to the difference in the surfaces of the central sources. If a surface is present, magnetic couplings and radiative couplings (e.g., Pringle 1996; Maloney et al. 1996) of the disk with the central source will cause precession of the warp. } Owing to the time-dependent precession, the resonant radius changes with time, and hence the frequencies of the resonant oscillations vary with time. During these frequency changes the oscillations remain correlated with each other in this warp model, as shown below, since the resonance occurs at the same radius for the different oscillation modes. In this paper we demonstrate that the observed correlations of kHz QPOs and LF QPOs in neutron-star X-ray binaries can be described by this warp model with time-dependent precession.
\section{Overview of the Resonant Oscillation Model in Warped Disks} We present here the main part of the warp model, although the model has several variants. Let us consider disk oscillations described by ($\omega$, $m$, $n$), where $\omega$ and $m$ are the angular frequency and the azimuthal wavenumber ($m=0,1,2,...$) of the oscillations, respectively, and $n$ is an integer ($n=0,1,2,...$) describing the number of nodes of the oscillations in the vertical direction (e.g., see Kato et al. 1998; Kato 2001). For a given set of ($\omega$, $m$, $n$), there are two different kinds of oscillation modes, except in the case of $n=0$. In the case of $n=0$ we have inertial-acoustic oscillations alone, while in the case of $n\geq 1$ we have two different modes of oscillations. One is the gravity mode, and the other is the corrugation mode ($n=1$) or the vertical-acoustic mode ($n\geq 2$) (see Kato et al. 1998; Kato 2001). Now, the disks are assumed to be warped with a time-dependent precession. The warp is a kind of global one-armed corrugation wave, and is described by ($\omega_{\rm p}$, 1, 1), where $\omega_{\rm p}$ is the angular frequency of the precession, $\omega_{\rm p}>0$ being prograde and $\omega_{\rm p}<0$ retrograde. \subsection{Resonant condition and resonant radius} Nonlinear resonant interaction between a warp with ($\omega_{\rm p}$, $1$, $1$) and an oscillation with ($\omega$, $m$, $n$) brings about oscillations described by ($\omega\pm \omega_{\rm p}$, $m\pm 1$, $n\pm 1$), where arbitrary combinations of $\pm$ are possible. (These oscillations are hereafter called intermediate oscillations.) These intermediate oscillations have resonant interaction with the disk at the particular radii where the dispersion relation of the intermediate oscillations is satisfied. There are two kinds of resonance, corresponding to the fact that two kinds of oscillation modes are described by the same dispersion relation, i.e., i) inertial-acoustic oscillations and gravity oscillations, and ii) vertical-acoustic oscillations. We call the resonance related to the former oscillations the horizontal resonance, and the resonance related to the latter oscillations the vertical resonance (e.g., see Kato 2004b). After the resonant interaction with the disk, the intermediate oscillations nonlinearly couple with the warp to feed back to the original oscillations ($\omega$, $m$, $n$) (see figure 1 of Kato 2004b). These nonlinear feedback processes amplify or dampen the original oscillations, since a resonance is involved in the feedback. Careful stability analyses of which resonance excites oscillations, and of which oscillations are excited, have been made for the case of no precession (Kato 2004b). The results show that inertial-acoustic oscillations and/or gravity oscillations are excited by the horizontal resonance (Kato 2004b). This result will not change even when there is precession. Hence, we hereafter restrict our attention to the above case, i.e., inertial-acoustic oscillations and/or gravity oscillations which have resonant interaction with the warped disk through the horizontal resonance. For the resonance to occur effectively, the place of resonance and the place where the oscillations predominantly exist must be the same. We find that in the case of no precession this condition is realized at the radius where $\kappa=\Omega/2$, $\kappa$ being the epicyclic frequency and $\Omega$ the angular velocity of disk rotation.
This can be simply extended to the case where the warp has precession, which gives the resonant condition as [see also Kato (2005a)\footnote {In Kato (2005a), $\omega_{\rm p}>0$ was retrograde, but here we adopt $\omega_{\rm p}>0$ for prograde precession.}] \begin{equation} \kappa={1\over 2}(\Omega + \omega_{\rm p}). \label{res} \end{equation} Hereafter, $\Omega$ is taken to be the Keplerian angular velocity, $\Omega_{\rm K}$, when numerical values are necessary. Equation (\ref{res}) gives the resonant radius, $r_{\rm r}$, as a function of $\omega_{\rm p}$, when the mass and spin of the central source are given. The $r_{\rm r}$--$\omega_{\rm p}$ relation is shown in figure 1 for the case in which the mass of the central source is $2M_\odot$ and the central source has no spin (i.e., the metric is the Schwarzschild one). Note that when the mass of the central source is smaller than $2M_\odot$ by a factor $\alpha$, the same $r_{\rm r}/r_{\rm g}$ is realized for a precession frequency faster by the factor $\alpha^{-1}$ than in the case of $2M_\odot$. \begin{figure} \begin{center} \FigureFile(80mm,80mm){figure-1.eps} \end{center} \caption{The resonant radius versus the frequency of precession. The mass of the central source is $2M_\odot$. The metric is taken to be the Schwarzschild one, i.e., $a_*=0$. In disks in which the warp has no precession, the resonance occurs at $4r_{\rm g}$. When the precession is prograde ($\omega_{\rm p}>0$), the resonant radius becomes smaller than $4r_{\rm g}$ as the precession frequency increases, while it becomes larger when the precession is retrograde ($\omega_{\rm p}<0$) and its absolute value increases. In the case of retrograde precession, however, the resonant radii are not unique for a given $\omega_{\rm p}$, as shown in the figure. That is, another resonance (a resonance at an outer radius) appears far out in the disk. The radius of the outer resonance moves inwards as the absolute value of the precession increases. The inner and outer resonances join together at a certain value of retrograde precession, and no resonance occurs for larger values of retrograde precession.} \end{figure} The resonant condition in the case of no precession, $\kappa=\Omega/2$, is realized at $4r_{\rm g}$ when there is no spin. Figure 1 shows that if the precession is prograde, the resonant radius becomes smaller than $4r_{\rm g}$ with increasing precession frequency, $\omega_{\rm p}$. If the precession is retrograde, however, we have resonance at two different radii. One is close to $4r_{\rm g}$ when the precession is slow, and it moves outward as the absolute value of $\omega_{\rm p}$ increases. The other is far out in the disk when the precession is slow, and this radius of resonance moves inward as the absolute value of the precession increases. At a certain value of retrograde precession both radii coincide, and no resonance occurs for faster retrograde precession. \subsection{Frequencies of resonant oscillations} Since inertial-acoustic oscillations or g-mode oscillations are concerned here, the place where the oscillations predominantly exist is the place where $(\omega-m\Omega)^2-\kappa^2\sim 0$ is satisfied (e.g., see Kato and Fukue 2006). In other words, the frequencies of the resonant oscillations are $(m\Omega\pm \kappa)_{\rm r}$, where various $m$'s are allowed and the subscript r denotes values at the resonant radius. The axisymmetric oscillations with $m=0$ will be observationally less interesting by the very nature of their symmetry.
Hence we consider only non-axisymmetric oscillations. Typical non-axisymmetric modes of the oscillations are those with $m=1$ and $m=2$. Hence, as the frequencies of such oscillations, we have $(\Omega-\kappa)_{\rm r}$ (i.e., $m=1$), $(2\Omega-\kappa)_{\rm r}$ (i.e., $m=2$), $(\Omega+\kappa)_{\rm r}$ (i.e., $m=1$), $(2\Omega+\kappa)_{\rm r}$ (i.e., $m=2$), and so on. For convenience, we introduce the following notations defined by \begin{equation} \omega_{\rm LL}=(\Omega-\kappa)_{\rm r}, \quad \omega_{\rm L}=(2\Omega-\kappa)_{\rm r}, \quad \omega_{\rm H}=(\Omega+\kappa)_{\rm r}. \label{2} \end{equation} Here, the relation between the frequencies of the resonant oscillations and those of the observed QPOs should be briefly discussed. In the case of black-hole X-ray binaries, we think that the one-armed oscillations with frequency $\omega_{\rm LL}$ will be observed at the two-fold frequency, $2\omega_{\rm LL}$, in addition to $\omega_{\rm LL}$ itself, for the following reasons (Kato and Fukue 2006). In black-hole sources, the high-frequency QPOs are observed only in the phase where the sources are in the steep power-law state (i.e., the very high state) (Remillard 2005). In such states, the disk region where the QPOs are excited will be inside a compact hot torus, and the observed QPO photons are those which are Comptonized in the torus. In such cases, the observed Comptonized high-energy photon flux from one-armed oscillations ($m=1$) will have two maxima during one cycle of the oscillations (see figures 2 -- 4 of Kato and Fukue 2006). This means that $2\omega_{\rm LL}$ will be observed with a large amplitude (in many cases with an amplitude larger than that of the oscillation at $\omega_{\rm LL}$), since the oscillation is one-armed (Kato and Fukue 2006). Based on this consideration, we have suggested that the observed pair QPO frequencies in black-hole sources are $\omega_{\rm L}$ and $2\omega_{\rm LL}$ (not $\omega_{\rm LL}$) (see, e.g., Kato 2004b; Kato and Fukue 2006). Even in the case of neutron-star X-ray binaries, a similar situation may exist. Including this possibility, we regard $\omega_{\rm H}$, $\omega_{\rm L}$, $2\omega_{\rm LL}$, and $\omega_{\rm LL}$ as the main candidates for the observed QPO frequencies in neutron stars. \subsection{Frequency-Frequency Relations} The frequencies $\omega_{\rm H}$, $\omega_{\rm L}$, $2\omega_{\rm LL}$, $\omega_{\rm LL}$, and $\omega_{\rm p}$ are given functions of the resonant radius $r_{\rm r}$, the spin parameter $a_*$, and the mass $M$ of the central source. Hence, eliminating $r_{\rm r}$ from these expressions for the frequencies, we obtain relations among $\omega_{\rm H}$, $\omega_{\rm L}$, $2\omega_{\rm LL}$, $\omega_{\rm LL}$, and $\omega_{\rm p}$. The parameters are $a_*$ and $M$. Figure 2 shows the relations, taking $\omega_{\rm L}$ as the abscissa, in the case of $a_*=0$ and $M=2M_\odot$. The straight line of the $\omega_{\rm L}$ -- $\omega_{\rm L}$ relation is also shown for comparison. This figure should be compared with figure 2.9 of van der Klis (2004). The latter shows the observed frequency correlations among various QPOs in neutron-star LMXBs. The comparison suggests that the observed frequency correlations of QPOs are qualitatively described by the present disk oscillation model. A closer comparison between the observations and the model is made in the next section. \begin{figure} \begin{center} \FigureFile(80mm,80mm){figure-2.eps} \end{center} \caption{Various frequencies as functions of $\omega_{\rm L}$.
The frequency of precession shown here, $\omega_{\rm p}$, is its absolute value. In the main part of this figure, the value of $\omega_{\rm p}$ is negative (retrograde). The values of the parameters adopted here are $M=2M_\odot$ and $a_*=0$. } \end{figure} \section{Correlation of QPO Frequencies} \subsection{Hectohertz QPOs} In atoll sources (less luminous neutron-star LMXBs), hectohertz QPOs (hHz QPOs) have been observed (see figure 2.9 of van der Klis 2004). Their frequencies are in the range of 100 -- 200 Hz, and they are seen in atoll sources in most states. Their presence in Z sources is, however, uncertain. Different from kHz QPOs, their frequencies are approximately constant and similar across sources (van der Klis 2004). The hectohertz QPOs can be interpreted in the present model as observations of the warp. Figure 2 shows that for a wide range of variations of $\omega_{\rm L}$, $\omega_{\rm p}$ is approximately constant at around 100 -- 200 Hz, which is consistent with the observational characteristics of the hHz QPOs. The fact that the value of $\vert\omega_{\rm p}\vert$ remains around 100 -- 200 Hz is related to the fact that the lower limit of $\omega_{\rm p}(<0)$ shown in figure 1 is around 100 -- 200 Hz. The lower limit of $\omega_{\rm p}(<0)$ depends on $a_*$. If $a_*>0$, the maximum value of $\vert \omega_{\rm p}\vert$ slightly decreases from that in the case of $a_*=0$, while it increases with decreasing mass of the central source. \subsection{Correlation of pair kHz QPOs} As argued in previous papers (e.g., Kato 2004b; Kato and Fukue 2006), we think that the high-frequency pair QPOs in black-hole LMXBs are oscillations at $\omega_{\rm L}$ and $2\omega_{\rm LL}$. Their frequency ratio is exactly 3 : 2, since we assume that in black-hole sources the warp has no precession.\footnote{ The reason why there is a difference in disk precession between black-hole and neutron-star sources is a subject to be clarified, but we suppose that it is related to the difference in the surfaces of the central sources. } In the case of neutron-star LMXBs, we assume that the warp has a time-dependent precession, although the pair oscillations are still $\omega_{\rm L}$ and $2\omega_{\rm LL}$. We examine here whether the frequency correlation of the kHz QPOs can be accounted for by this picture. Figure 3 shows $\omega_{\rm L}$ as a function of $2\omega_{\rm LL}$ for $a_*=0$ in the frequency range in which the ratio of the above two frequencies is around 3 : 2, i.e., the precession is not too large. For comparison, the $\omega_{\rm H}$ -- $2\omega_{\rm LL}$ relation is also shown. In figure 3, the mass $M$ of the central source is taken so that $2\omega_{\rm LL}$ becomes 600 Hz in the case of no precession. This means that we have adopted $M=2.4M_\odot$, since $2\omega_{\rm LL}$ is given by $2\omega_{\rm LL}=1.43\times 10^3(M/M_\odot)^{-1}$ Hz in the case of $a_*=0$ (e.g., see Kato and Fukue 2006). Bursa (2003, see also Abramowicz 2005, Klu{\'z}niak 2005) plotted the observed data of the pair QPOs of some typical neutron-star sources on a diagram of the upper kHz QPO frequency versus the lower kHz QPO frequency, in order to see how their time changes are correlated. The plots obtained by Bursa have been superposed on figure 3 by regarding the lower QPO frequencies as $2\omega_{\rm LL}$. Figure 3 shows that the observed correlated changes of the upper and lower QPO frequencies are qualitatively described by the correlated changes of $\omega_{\rm L}$ (or $\omega_{\rm H}$) and $2\omega_{\rm LL}$.
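These frequency relations are easy to evaluate numerically. The following Python sketch is our own illustration under assumed conventions: a Keplerian disk in the Schwarzschild metric with $\Omega_{\rm K}=(GM/r^3)^{1/2}$, $\kappa^2=\Omega_{\rm K}^2(1-3r_{\rm g}/r)$, $r_{\rm g}=2GM/c^2$, and all frequencies quoted as $\nu=\omega/2\pi$ in Hz. With these expressions, $\omega_{\rm p}=0$ recovers the resonance at $4r_{\rm g}$ and the value $2\omega_{\rm LL}=1.43\times 10^3(M/M_\odot)^{-1}$ Hz quoted above:
\begin{verbatim}
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def frequencies(r_over_rg, M_solar=2.4):
    """Frequencies (Hz) at radius r in a Keplerian Schwarzschild disk."""
    M = M_solar * Msun
    rg = 2 * G * M / c**2                           # Schwarzschild radius
    r = r_over_rg * rg
    nu_K = np.sqrt(G * M / r**3) / (2 * np.pi)      # Keplerian frequency
    nu_kap = nu_K * np.sqrt(1 - 3 / r_over_rg)      # epicyclic frequency
    return {"nu_p":  2 * nu_kap - nu_K,   # from the resonant condition
            "nu_LL": nu_K - nu_kap,
            "nu_L":  2 * nu_K - nu_kap,
            "nu_H":  nu_K + nu_kap}

f = frequencies(4.0)        # no precession: resonance at 4 r_g
print(f["nu_p"])            # -> 0.0
print(2 * f["nu_LL"])       # -> ~595 Hz, i.e., 1.43e3 / 2.4
\end{verbatim}
Sweeping \texttt{r\_over\_rg} over a range of radii then traces out curves of the type shown in figures 1 -- 3.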
\begin{figure} \begin{center} \FigureFile(80mm,80mm){figure-3.eps} \end{center} \caption{ Dependence of $\omega_{\rm H}$ and $\omega_{\rm L}$ on $2\omega_{\rm LL}$ in the case of $a_*=0$ and $M=2.4M_\odot$. The plots of the upper versus lower frequencies of the observed pair kHz QPOs obtained by Bursa (2003) are superposed, taking the frequency of the lower kHz QPOs as $2\omega_{\rm LL}$. This figure shows that the plots of the observed data lie between the two curves of the $\omega_{\rm H}$ -- $2\omega_{\rm LL}$ and $\omega_{\rm L}$ -- $2\omega_{\rm LL}$ relations. Bursa's diagram is taken from Abramowicz (2005). } \label{fig:3} \end{figure} \subsection{kHz QPOs in the low frequency range and LF QPOs} As shown in figure 1, resonance occurs at two different radii when the precession is retrograde. One is close to $4r_{\rm g}$, while the other is in an outer region. In some sources only the oscillations at the outer resonance will be observed, since the inner region of the disk may be highly perturbed from the steady state by magnetic and/or radiative disturbances from the central source. Here, we consider the characteristics of the resonant oscillations which are excited at the outer resonant radius. In order to compare them with observational results, the dependences of $\omega_{\rm H}$, $\omega_{\rm L}$, $\omega_{\rm p}$, and $2\omega_{\rm LL}$ on $\omega_{\rm LL}$ are shown in figure 4. In addition, the $(3\Omega+\kappa)_{\rm r}$ -- $\omega_{\rm LL}$ relation is drawn in figure 4, where $(3\Omega+\kappa)_{\rm r}$ is the frequency of one of the resonant oscillations with $m=3$. There are other oscillation modes with high frequency. Among such oscillations, the oscillations at $(3\Omega-\kappa)_{\rm r}$ (i.e., $m=3$) have frequencies close to $\omega_{\rm H}$, and thus they are not shown here. The resonant oscillations at $(2\Omega+\kappa)_{\rm r}$ are also not shown in figure 4, since the curve of the $(2\Omega+\kappa)_{\rm r}$ -- $\omega_{\rm LL}$ relation lies between the two curves of the $(3\Omega+\kappa)_{\rm r}$ -- $\omega_{\rm LL}$ and $\omega_{\rm H}$ -- $\omega_{\rm LL}$ relations. The straight line of $\omega_{\rm LL}$ -- $\omega_{\rm LL}$ is also shown in the figure. \begin{figure} \begin{center} \FigureFile(80mm,80mm){figure-4.eps} \end{center} \caption{Dependences of various frequencies of resonant oscillations on $\omega_{\rm LL}$. The frequencies considered here are in a low frequency range in order to compare them with those observed in Cir X-1. The uppermost curve is the $(3\Omega+\kappa)_{\rm r}$ -- $\omega_{\rm LL}$ relation. Figure 6 of Boutloukos et al. (2006), which shows the observed frequency -- frequency correlations in Cir X-1, has been superposed assuming that $\nu_{\rm LF}$ corresponds to $\omega_{\rm LL}$. } \end{figure} In order to examine whether the curves drawn in figure 4 can account for the observed frequency-frequency correlations, the plots of the observational data for Cir X-1 by Boutloukos et al. (2006) (figure 6 of their paper) are superposed on figure 4, assuming that the frequency of the LF QPOs corresponds to $\omega_{\rm LL}$. The superposed figure seems to show that our disk oscillation model can well describe the observed frequency correlations. That is, $\omega_{\rm LL}$ corresponds to the frequency of the LF QPOs and $\omega_{\rm L}$ to that of the lower kHz QPOs. The frequency $\omega_{\rm H}$, however, seems to be slightly lower than the observed upper frequency of the kHz QPOs.
As frequencies of resonant oscillations higher than $\omega_{\rm H}$, we have $(2\Omega+\kappa)_{\rm r}$, $(3\Omega+\kappa)_{\rm r}$, and so on. Among them, the frequency $(3\Omega+\kappa)_{\rm r}$ seems to describe the observed data of the upper kHz QPOs well. The reason why oscillations at $(3\Omega+\kappa)_{\rm r}$ dominate over those at $(2\Omega+\kappa)_{\rm r}$ and $\omega_{\rm H}$ is a subject to be discussed further. When the resonance occurs in an inner region of the disk, $2\omega_{\rm LL}$ is interpreted as the frequency of the lower kHz QPOs, as shown in figure 3. However, in the present case, in which the resonance occurs in an outer region, $\omega_{\rm LL}$ (and $2\omega_{\rm LL}$) is no longer the frequency of the lower kHz QPOs, since it is too low. It becomes the frequency of the LF QPOs. The reason why $\omega_{\rm LL}$, not $2\omega_{\rm LL}$, is the frequency of the LF QPOs will be the following. The outer region of the disk will not be covered by a hot torus. Thus, different from the case of black-hole X-ray sources, the resonant region will be outside a hot torus. In such a case, one-armed oscillations will be observed at the frequency of the oscillations themselves, not at the two-fold frequency. The inertial-acoustic oscillations with $m=1$, however, propagate inward from the resonant radius (see the propagation region shown in figure 6 of Kato and Fukue 2006). Hence, if the oscillations propagate inward and enter a hot torus, oscillations at the two-fold frequency will be observed (see Kato and Fukue 2006). This may be one possible cause of the occasional appearance of the two-fold frequency of $\omega_{\rm LL}$ in Cir X-1. In summary, the LF QPOs will be a manifestation of the oscillations at $\omega_{\rm LL}$, and the lower kHz QPOs ($\nu_{\ell}$ in the notation of Boutloukos et al. 2006) will be a mixture of $\omega_{\rm p}$, $\omega_{\rm L}$, and $\omega_{\rm H}$. The upper kHz QPOs ($\nu_u$ in the notation of Boutloukos et al. 2006) are suggested to be oscillations of higher-$m$ modes, such as those at $(3\Omega+\kappa)_{\rm r}$ or $(2\Omega+\kappa)_{\rm r}$. \section{Discussion} In this paper we have suggested that the pair kHz QPOs and the low-frequency QPOs in neutron-star LMXBs are qualitatively described, over a wide frequency range, as resonantly excited disk oscillations in warped disks. The oscillations are non-axisymmetric inertial-acoustic or g-mode oscillations with $m=1$, $m=2$, and sometimes $m=3$. Both inertial-acoustic oscillations and g-mode oscillations are possible candidates for the QPOs, but the former are the better candidates, since the magnitudes of the temperature and density variations associated with the oscillations are larger in the former than in the latter. In particular, the oscillations at $\omega_{\rm LL}$ will be inertial-acoustic oscillations, since they propagate inwards from the resonant radius and thus the observational appearance of the harmonic, $2\omega_{\rm LL}$, is conceivable (see subsection 3.3). In the present disk oscillation model, the cause of the correlated frequency changes of the various QPOs is the time change of the resonant radius resulting from the time-dependent precession of the warp. Has the warp been observed? We think that it has been observed as the hectohertz QPOs in atoll sources (see figure 2, in which $\omega_{\rm p}$ is around 100 Hz $\sim$ 200 Hz over a wide frequency range of $\omega_{\rm L}$). Furthermore, in sources in which the kHz QPOs appear in the low frequency region, precession of the warp will be a cause of the vertical spread of $\nu_\ell$ in figure 4.
The closeness of $\omega_{\rm p}$ and $\omega_{\rm L}$ in the low frequency region comes from the following situation. The resonant condition, $\kappa=(\Omega+\omega_{\rm p})/2$, gives $\omega_{\rm p}=(2\kappa-\Omega)_{\rm r}$, which is $\sim \Omega_{\rm r}$ when the resonant radius is far from the innermost region, since there $\kappa\sim\Omega$. On the other hand, $\omega_{\rm L}=(2\Omega-\kappa)_{\rm r}\sim \Omega_{\rm r}$ in such a region. It is emphasized that, depending on the frequency range in consideration, the counterparts of the oscillation modes to the observed QPOs are different. That is, in the case in which the frequencies of the kHz QPOs are high (subsection 3.2), the counterpart of the lower kHz QPOs is $\omega_{\rm LL}$ (more exactly, $2\omega_{\rm LL}$) (see figure 3). In the sources in which the frequencies of the kHz QPOs are low (subsection 3.3), however, the counterpart of the lower kHz QPOs is $\omega_{\rm L}$, not $\omega_{\rm LL}$. In the latter case, the oscillation at $\omega_{\rm LL}$ corresponds to $\nu_{\rm LF}$ (see figure 4). There are some problems to be examined or clarified further. First, it is known that the frequencies of the lower kHz QPOs, $\nu_\ell$, and those of the horizontal branch QPOs, $\nu_{\rm HBO}$, are correlated as $\nu_{\rm HBO} \sim 0.08 \nu_{\ell}$ (Psaltis et al. 1999; Belloni et al. 2002). In the framework of our present model, there are no oscillation modes corresponding to the horizontal branch QPOs. They might be related to resonant oscillations resulting from other types of resonance. Even in the framework of the resonant oscillation model in warped disks, there are other types of resonant oscillations (Kato 2004a; 2005b), although their excitation is uncertain. Second, a detailed inspection of figure 3 shows that the $\omega_{\rm L}$--$2\omega_{\rm LL}$ relation cannot always describe the observed data well. That is, for the sources whose lower frequencies are lower than 600 Hz (i.e., GX 340+0 and GX 5-1), the observed data generally lie above the $\omega_{\rm L}$--$2\omega_{\rm LL}$ curve on the diagram, rather close to the $\omega_{\rm H}$--$2\omega_{\rm LL}$ curve. A preliminary study seems to show that it is difficult to explain this deviation by adjusting the values of the parameters (the mass and spin of the central source). If we want to explain the difference by a deviation of the disk rotation from the Keplerian one, a rather large deviation is necessary. If $\omega_{\rm H}$ is taken as the upper QPO frequency below 900 Hz and $\omega_{\rm L}$ above 900 Hz, the qualitative agreement with observations becomes better. However, it is not clear whether such a choice of oscillation modes is physically acceptable. Some more consideration of the cause of the deviation is necessary, which is a subject for the future. \bigskip \leftskip=20pt \parindent=-20pt \par {\bf References} \par Abramowicz, M.A. 2005, Astron. Nachr., 326, No. 9 \par Belloni, T., Psaltis, D., \& van der Klis, M. 2002, ApJ, 572, 392 \par Boutloukos, S., van der Klis, M., Altamirano, D., Klein-Wolt, M., Wijnands, R., Jonker, P.G., \& Fender, R.P. 2006, astro-ph/0608089 \par Bursa, M. 2003, unpublished \par Hirose, M., \& Osaki, Y. 1990, PASJ, 42, 135 \par Kato, S. 2001, PASJ, 53, 1 \par Kato, S. 2003, PASJ, 55, 801 \par Kato, S. 2004a, PASJ, 56, 559 \par Kato, S. 2004b, PASJ, 56, 905 \par Kato, S. 2005a, PASJ, 57, L17 \par Kato, S. 2005b, PASJ, 57, 679 \par Kato, S., Fukue, J., \& Mineshige, S. 1998, Black-Hole Accretion Disks (Kyoto: Kyoto University Press) \par Kato, S., \& Tosa, M.
1994, PASJ, 46, 559 \par Kato, S., \& Fukue, J. 2006, PASJ, 58, 909 \par Klu{\' z}niak, W. 2005, Astron. Nachr., 326, 820 \par Klu{\' z}niak, W., Abramowicz, M. A., Kato, S., Lee, W. H., \& Stergioulas, N. 2004, ApJ, 603, L89 \par Lubow, S.H. 1991, ApJ, 381, 259 \par Psaltis, D., Belloni, T., \& van der Klis, M. 1999, ApJ, 520, 262 \par Remillard, R.A. 2005, Astron. Nachr., 326, 804 \par Tosa, M. 1994, ApJ, 426, L81 \par van der Klis, M. 2004, in Compact Stellar X-ray Sources, eds. W.H.G. Lewin \& M. van der Klis (Cambridge: Cambridge University Press) (astro-ph/0410551) \par Whitehurst, R. 1988, MNRAS, 232, 35 \par \bigskip\bigskip It is emphasized that in the low frequency regime considered here, $2\omega_{\rm LL}$ and $\omega_{\rm LL}$ are not the lower frequency of the pair QPOs; rather, $\omega_{\rm LL}$ represents the low-frequency QPOs (LF QPOs), which are much lower than the usual kHz pair QPOs. The presence of the first harmonic (i.e., $2\omega_{\rm LL}$) can be understood in the following way. In the present case, the resonant radius is far out in the disk and will be outside a central torus even if one exists. Thus, the observed frequency is $\omega_{\rm LL}$, not $2\omega_{\rm LL}$. However, the one-armed inertial-acoustic oscillation at $\omega_{\rm LL}$ has its propagation region inside the resonant radius (see figure 5 of Kato and Fukue 2006). Hence, propagating inwards, the oscillations may enter a central hot torus. In such a case it is natural that the first harmonic, $2\omega_{\rm LL}$, is also observed in the photons which have passed through the torus. Next, the frequency difference between the pair QPOs, i.e., $\omega_{\rm H}-\omega_{\rm L}$, is examined. In the sources in which the frequency ratio of the pair QPOs is close to 3 : 2, the difference decreases with increasing frequency (see, for example, figure 3). In Cir X-1, however, the frequency difference increases with increasing frequency of the kHz QPOs (Boutloukos and van der Klis 2006). We examine whether such a trend can be accounted for in our present model. Figure 5 shows the frequency difference as a function of $\omega_{\rm L}$ or $\omega_{\rm H}$. This figure should be compared with figure 11 of Boutloukos and van der Klis (2006). The comparison shows that our model can qualitatively account for the observational trend shown in their figure 11. \begin{figure} \begin{center} \FigureFile(80mm,80mm){deltafreq-freq.eps} \end{center} \caption{$a_*=0$. The abscissa for the thin curves is $\omega_{\rm L}$ and that for the thick curves is $\omega_{\rm H}$. Between two curves of the same thickness, the upper one is for $M=M_\odot$ and the lower one is for $M=2M_\odot$. } \label{fig:figure 5} \end{figure} Figure 1 is drawn with $a_*=0$ and $M=2.4M_\odot$, as mentioned before. In order to see how the $\omega_{\rm H}$--$2\omega_{\rm LL}$ and $\omega_{\rm L}$--$2\omega_{\rm LL}$ relations on the frequency-frequency diagram change when the mass is different from $2.4 M_\odot$, two cases of $M=2M_\odot$ and $3M_\odot$, with $a_*=0$, are shown in figures 2 and 3. To compare with figure 1, the $\omega_{\rm H}$--$2\omega_{\rm LL}$ and $\omega_{\rm L}$--$2\omega_{\rm LL}$ relations and the 3:2 curve drawn in figure 1 are duplicated in these figures by thin curves. The effects of an increase of $a_*$ on the frequency-frequency diagram are similar to those of a decrease of $M$. That is, let us consider a case in which $a_*>0$ with $M=2.4M_\odot$.
Then, the $\omega_{\rm H}$--$2\omega_{\rm LL}$ and $\omega_{\rm L}$--$2\omega_{\rm LL}$ curves shift on the frequency-frequency diagram, as in a case of smaller $M$. Hence, in a case of $a_*>0$, if the mass is larger than $M=2.4M_\odot$ by a certain amount, the $\omega_{\rm H}$--$2\omega_{\rm LL}$ and $\omega_{\rm L}$--$2\omega_{\rm LL}$ curves on the frequency-frequency diagram shift down and to the left, and the crossing point can pass through the point (600 Hz, 900 Hz). For example, when $a_*=0.3$ the required value of $M$ for which the two frequency-frequency curves pass through the point (600 Hz, 900 Hz) is $3.0M_\odot$. In this case of $a_*=0.3$ and $M=3.0M_\odot$, the $\omega_{\rm H}$--$2\omega_{\rm LL}$ and $\omega_{\rm L}$--$2\omega_{\rm LL}$ curves are rather close to those in figure 1 for $a_*=0$ and $M=2.4M_\odot$. Figure 1, with the help of figures 2 and 3, shows that the $\omega_{\rm L}$--$\omega_{\rm LL}$ relation can roughly describe Bursa's plots, but a systematic difference which cannot be removed by adjustments of $a_*$ and $M$ seems to remain. One conceivable reason is mentioned in the last section. In order to show how much change of $r_{\rm r}$ is necessary to obtain the required range of frequency change of $\omega_{\rm LL}$, the $r_{\rm r}$--$2\omega_{\rm LL}$ relation is shown in figure 2 for the two cases of $a_*=0$ and $a_*=0.3$. In this figure, the mass of the central source has been taken so that $2\omega_{\rm LL}$ is 600 Hz. That is, we adopt $M=2.4M_\odot$ and $M=3.0M_\odot$ for $a_*=0$ and $a_*=0.3$, respectively. \section{Possible Causes of Variation of Resonance Radius} The next problem is to consider the causes of the time change of the resonant radius. Two possible causes are conceivable. One is that the warp has precession and that it changes with time (Kato 2005a). The other is that the resonance between the disk oscillations and the warped disk occurs through vertical motions (not horizontal motions), and the resonant radius changes with time through a change of the vertical structure of the disk (Kato 2005b). \subsection{Precession Model of Warp} The warp is now assumed to have precession with frequency $\omega_{\rm p}$. Here, $\omega_{\rm p}<0$ corresponds to the case in which the precession is prograde, and $\omega_{\rm p}>0$ to a retrograde precession. Then, the nonlinear coupling between the disk oscillation of ($\omega$, $m$, $n$) and the warp of ($\omega_{\rm p}$, 1, 1) brings about intermediate oscillations characterized by ($\omega\pm\omega_{\rm p}$, $m\pm 1$, $n\pm 1$), where arbitrary combinations of $\pm$ are possible. These intermediate oscillations have resonant interaction with the disk at the radius where their dispersion relation is satisfied. In the case of the horizontal resonance, the resonant radius is given by the solution of (e.g., Kato 2004b) \begin{equation} [(\omega\pm\omega_{\rm p})-(m\pm 1)\Omega]^2-\kappa^2\sim 0. \label{3} \end{equation} As mentioned before, the radius where the resonance occurs and the radius where the oscillations predominate must be the same for the resonance to occur efficiently. That is, the resonant condition is obtained by requiring that equation (\ref{1}) and equation (\ref{3}) are satisfied simultaneously. After some manipulation, the condition is found to be \begin{equation} \kappa={1\over 2}(\Omega-\omega_{\rm p}). \label{4} \end{equation} In the case of no precession, $\omega_{\rm p}=0$, the resonant condition reduces to $\kappa=\Omega/2$.
When $\omega_{\rm p}\not=0$, the resonant condition changes from $\kappa=\Omega/2$, and the resonant radius, $r_{\rm r}$, shifts from $4r_{\rm g}$, even when $a_*=0$. The dependence of $r_{\rm r}$ on $\omega_{\rm p}$ is shown in figure 5 for $a_*=0$ and $a_*=0.3$. The observed data in figure 1 show that the variation of $2\omega_{\rm LL}$ in a single source is, at maximum, about 300 Hz. Figure 4 shows that such a variation of $2\omega_{\rm LL}$ is possible if the resonant radius changes in the range of $3.5r_{\rm g}$ to $5.3r_{\rm g}$ in the case of $a_*=0$. Figure 5 then shows that such a variation of $r_{\rm r}$ is possible if the range of variation of $\omega_{\rm p}$ is $-120$ Hz to $200$ Hz.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Often, phase I trials in diseases like cancer, osteoarthritis, and psoriasis aim to find the maximum tolerated dose (MTD), the highest dose with a toxicity rate lower than or close to a pre-specified target level, $p_T$. As in most statistical inference, an estimated MTD is usually produced to represent the true and unknown MTD. However, the estimation is always noisy, and the probability of toxicity for the estimated MTD is never exactly the same as $p_T$. For this reason, the statistical community has been considering interval-based inference to account for the variability in the toxicity estimates. For example, \citet{cheung2002simple} propose to treat any dose with toxicity probability in the ``indifference interval'' $(p_T - \delta, p_T + \delta)$ as an estimated MTD, as long as a small $\delta \in (0, 1)$ is agreed upon at the design stage by the clinical team. Later, in \citet{ji2007dose,ji2010modified} and \citet{ji2013modified}, the authors further developed the toxicity probability interval (TPI) and modified TPI (mTPI) methods, in which they formally proposed a decision-theoretic framework linking the dose-finding decisions of ``Stay'' (S), ``De-escalation'' (D), and ``Escalation'' (E) with the equivalence interval $EI=(p_T - \epsilon_1, p_T + \epsilon_2)$, the over-dosing interval $OI=(p_T+\epsilon_2, 1)$, and the under-dosing interval $UI=(0, p_T-\epsilon_1)$, respectively. For a given dose $d$, the authors calculate $Pr(p_d \in EI \mid data)$, $Pr(p_d \in OI \mid data)$, and $Pr(p_d \in UI \mid data)$, the three posterior probabilities that the toxicity rate $p_d$ belongs to each of the three dosing intervals, and associate the dose-finding decisions with these three posterior probabilities. Distinctively, inference in mTPI is directly linked to the posterior probabilities of the three dosing intervals, which is different from a class of other interval designs \citep{ivanova2007cumulative,oron2011dose,liu2015bayesian} that use a point estimate $\hat{p}_d$ and compare $\hat{p}_d$ with the three dosing intervals. That is, these interval designs do not directly calculate posterior probabilities of the intervals; they use the intervals as a thresholding device, and their inference is still based on a point estimate of $p_d$. Interval-based designs such as mTPI \citep{ji2010modified} are based on parametric models and use model-based inference for decision making. In \citet{ji2013modified} and \citet{yang2015integrated}, the superiority of the interval-based designs over the standard rule-based designs, such as the 3+3 design, is established using massive simulations and crowd sourcing. One critical and distinctive feature of mTPI is its ability to precalculate all the dose-finding decisions in advance, allowing investigators to examine the decisions before the trial starts. Therefore, even though it is a model-based design, mTPI exhibits the same simplicity and transparency as rule-based methods. However, some decision rules in mTPI could be debated in practice. For example, when the target toxicity probability is $p_T=0.3$ and 3 out of 6 patients treated at a dose experience dose-limiting toxicity (DLT) events, mTPI would suggest ``S'', i.e., stay at the current dose and enroll more patients to be treated at that dose. Since the empirical rate is 3/6, or 50\%, practitioners have argued that the decision should be ``D'', de-escalation, instead of ``S''. Another case is when $p_T=0.3$ and 2 out of 9 patients experience DLT events at a dose, for which mTPI would suggest ``S'' as well.
Investigators could argue that the decision should be ``E'', escalation, since the empirical rate is 2/9, or 22\%. For this reason, \citet{yang2015integrated} proposed an ad-hoc remedy that allows the decision rules in the mTPI design to be modified by users. While this feature allows great flexibility in practice, it lacks solid statistical justification and therefore cannot be properly assessed. To this end, we propose mTPI-2, an extension of mTPI that resolves the undesirable decisions under the current mTPI design. We show that the suboptimal rules listed above are consequences of Ockham's razor \citep{jefferys1992ockham}. Ockham's razor usually helps Bayesian inference to automatically achieve parsimony by favoring simpler models. However, in the case of dose finding with small sample size, Ockham's razor is too sharp and must be blunted. Otherwise, counterintuitive decisions, such as those listed above, will be generated as a consequence of parsimonious inference under Ockham's razor. In mTPI-2, we provide a new framework to blunt Ockham's razor, which leads to an improved decision table. The remainder of the paper is organized as follows. Section 2 is devoted to Ockham's razor and its role in interval-based designs. Section 3 proposes mTPI-2 as a solution to blunt Ockham's razor, with a few simple theoretical results. Section 4 examines the numerical performance of mTPI-2, in comparison to the mTPI design, using crowd sourcing. Section 5 introduces an online software tool that implements both methods, and Section 6 ends the manuscript with a discussion. \section{Ockham's Razor and Interval-Based Designs} As an accepted principle in science, Ockham's razor states that an explanation of the facts should be no more complicated than necessary \citep{thorburn1918myth,jefferys1990bayesian, good1967bayesian,mackay1992bayesian,jefferys1992ockham}. A direct impact of Ockham's razor is on model selection, where it favors ``smaller'' models if the data can be fit similarly well by different models. Usually, in model selection one considers multiple models $\{M_i; i = 1, \ldots, I\}$, and for each model $M_i$, a set of parameters $\theta_i$. Bayesian inference involving model selection typically requires a prior $p(M_i)$ for the candidate model $i$ and a prior $p(\theta_i \mid M_i)$ for the parameters $\theta_i$ that characterize model $M_i$. Formal posterior inference calculates the posterior probability of the model $p(M_i \mid data)$ and selects the model with the largest posterior probability. Numerous papers have shown that inference based on the posterior probability $p(M_i \mid data)$ automatically applies Ockham's razor, in that models with more parameters and larger parameter spaces are penalized. In general, Ockham's razor helps Bayesian inference by selecting more parsimonious models. However, in the case of interval-based designs for dose finding, such as mTPI, Ockham's razor is too sharp and leads to practically undesirable decisions. To see this, we first conduct a quick review of the mTPI design.
The mTPI design considers three intervals that partition the sample space $(0, 1)$ for the probability of toxicity $p_d$ at a given dose $d$: \begin{eqnarray} M_E: && p_d \in (0, p_T - \epsilon_1) \nonumber \\ M_S: &&p_d \in (p_T - \epsilon_1, p_T + \epsilon_2) \nonumber \\ M_D: &&p_d \in (p_T + \epsilon_2, 1) \label{eq:model} \end{eqnarray} The three intervals can be viewed as three models $M_i$ with index $i \in \{E, S, D\}$, where the three letters correspond to the dose-finding decisions taken if they are selected. For example, when $M_E$ is selected as the winning model, the corresponding decision is ``E'', to escalate from the current dose. Typically, $p_T$ ranges from $0.1$ to $0.3$ in phase I trials, and the $\epsilon$'s are usually small, say $\le 0.05$. In mTPI, the observed data are integers $(x_d, n_d)$, where $n_d$ and $x_d$ represent the numbers of patients treated at dose $d$ and those who have experienced DLT events, respectively. Given $p_d$, the probability of toxicity at dose $d$, $x_d$ follows a binomial distribution, $x_d \mid p_d \sim Bin(n_d, p_d)$. The mTPI design assumes that $p_d \sim Beta(1, 1)$, and the dose-finding decision rule for dose $d$ is given by \begin{equation} \mathcal{D}_{\mbox{mTPI}} = \arg \max_{i \in \{E, S, D\}} UPM(i, d) \label{eq:mTPI-rule} \end{equation} where \begin{equation} UPM(i, d) = \frac{Pr(M_i \mid \{x_d, n_d\})}{S(M_i)} \label{eq:UPM} \end{equation} is the posterior probability of the interval $M_i$ divided by the length of the interval. We first show that the decision rule $\mathcal{D}_{\mbox{mTPI}}$ is optimal if intervals $M_i$ are considered part of the candidate models in a model-selection framework. To see this, we introduce an additional parameter $m_d \in \{M_E, M_S, M_D\}$, which denotes the indicator of the three candidate models (intervals) to which $p_d$ belongs. In particular, Theorem 1 below shows that decision $\mathcal{D}_{\mbox{mTPI}}$ corresponds to the Bayes rule, the optimal decision rule that minimizes the posterior expected loss under a 0-1 loss function $\ell(a, m_d)$ \citep{berger19880}, defined by \begin{equation} \ell(a=i, m_d=M_j) = \left\{\begin{array}{cc}1, & \mbox{ if } i \ne j; \\ 0, & \mbox{ if } i = j, \end{array}\right. \quad \mbox{ for } i, j \in \{E, S, D\}. \label{eq:0-1loss-1} \end{equation} The loss function $\ell(a, m_d)$ states that the loss for taking action $i$ is 0 if model $M_i$ is the winning model, and 1 otherwise. \vskip 3ex \noindent {\bf Theorem 1.} {\it Given the sampling model $x_d \mid p_d \sim Bin(n_d, p_d)$ and priors \begin{eqnarray*} p_d \mid m_d = M_i &\sim& \frac{1}{S(M_i)} I(p_d \in M_i) \\ p(m_d = M_i) &=& \frac{1}{3} \end{eqnarray*} independently for all doses, and given the 0-1 loss function $\ell(i, M_j)$ in \eqref{eq:0-1loss-1} for the three decisions, where $i,j \in \{E, S, D\}$, the decision rule $\mathcal{D}_{\mbox{mTPI}}$ in \eqref{eq:mTPI-rule} is optimal in the sense that it minimizes the posterior expected loss. } \noindent The proof is given in Appendix A. \vskip 3ex \noindent The Bayes rule $\mathcal{D}_{\mbox{mTPI}}$ selects the action $i \in \{E, S, D \}$ corresponding to the model $M_i$ with the largest posterior probability. This inference is subject to Ockham's razor. As an example, when $x_d = 3$ and $n_d = 6$, the decision rule $\mathcal{D}_{\mbox{mTPI}}$ boils down to comparing $UPM(S, d)$ and $UPM(D, d)$, which involves the calculation of the posterior probability $Pr(M_i \mid \{x_d, n_d\})$ for $M_S=(p_T - \epsilon_1, p_T + \epsilon_2)$ and $M_D = (p_T + \epsilon_2, 1)$.
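As a concrete numerical check, the following minimal Python sketch (illustrative only and assuming the \texttt{scipy} library; it is not the software accompanying mTPI) evaluates the rule in \eqref{eq:mTPI-rule}--\eqref{eq:UPM} for this case.

\begin{verbatim}
# Minimal sketch of the mTPI rule in Eqs. (2)-(3); illustrative only.
# Setting: p_T = 0.3, eps1 = eps2 = 0.05, x_d = 3 DLTs among n_d = 6.
from scipy.stats import beta

pT, eps1, eps2 = 0.30, 0.05, 0.05
x, n = 3, 6
post = beta(1 + x, 1 + n - x)              # Beta(1,1) prior -> Beta(4,4)

intervals = {"E": (0.0, pT - eps1),        # M_E, under-dosing
             "S": (pT - eps1, pT + eps2),  # M_S, equivalence
             "D": (pT + eps2, 1.0)}        # M_D, over-dosing

# UPM(i,d): posterior interval probability divided by interval length
upm = {i: (post.cdf(hi) - post.cdf(lo)) / (hi - lo)
       for i, (lo, hi) in intervals.items()}
print(max(upm, key=upm.get), upm)
# -> 'S': UPM(S) ~ 1.29 beats UPM(D) ~ 1.23, although x/n = 0.5 > p_T
\end{verbatim}

The sketch reproduces the behavior analyzed next: the short equivalence interval wins even though the empirical rate is well above $p_T$.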
For each model, the size of the model is the length of the interval in the model. The model size $S(M_D) = (1-p_T -\epsilon_2)$ is usually larger than the size $S(M_S) = (\epsilon_1 + \epsilon_2)$, since usually $p_T$ is close to 0.3 or 0.16 and $\epsilon_{1, 2} \le 0.05$. The posterior probability $Pr(M_i \mid \{x_d, n_d\})$ can be written as a difference of incomplete beta functions evaluated at the boundaries of the two models. Some theoretical discussion of how $Pr(M_i \mid \{x_d, n_d\})$ depends on $x_d$, $n_d$, and the interval definitions is given in Appendix B. When $x_d=3$ and $n_d=6$, it can be shown that $UPM(S, d)$ is larger than $UPM(D, d)$ for $p_T=0.3$ and $\epsilon_1=\epsilon_2=0.05$. Consequently, even though the empirical rate $x_d/n_d = 0.5$ is greater than $p_T=0.3$, mTPI still prefers $S$, to stay at the current dose. In summary, due to Ockham's razor, which prefers the more parsimonious model, in this case model $M_S$ with a shorter interval length, mTPI chooses to stay at dose $d$ when $x_d=3$ and $n_d=6$. Theoretically, the exact proof depends on the convexity of the incomplete beta function, which remains an open question \citep{swaminathan2007convexity}. Instead, we provide a numerical illustration next. As an example that shows the effect of Ockham's razor, in Figure \ref{fig:razor-examp}, mTPI will select decision ``S'' even when $x_d=3$ out of $n_d=6$ patients experience DLT events and the posterior distribution is clearly peaked inside the interval $M_D$. \begin{figure} \begin{center} \includegraphics[scale=0.75]{beta.pdf} \caption{An example demonstrating the effect of Ockham's razor in mTPI. Shown is the posterior density of $p_d$ when $x_d=3$ and $n_d=6$. Even though the shape of the density suggests that dose $d$ might be above the MTD, e.g., the posterior mode is to the right of the equivalence interval (shown as the two vertical bars), the UPM for decision $S$ (stay) is still larger than the UPM for decision $D$ (de-escalate). Therefore, mTPI would still choose to ``Stay'' even though the shape of the posterior density of $p_d$ indicates otherwise. This is due to the larger size (longer length) of the interval $M_D$ than $M_S$ and Ockham's razor, which prefers the smaller model $M_S$.} \label{fig:razor-examp} \end{center} \end{figure} \section{A Solution to Blunt Ockham's Razor: mTPI-2} \subsection{Decision theoretic framework} We provide a solution to blunt Ockham's razor for mTPI and avoid undesirable decisions, such as $S$ when 3 out of 6 patients experience DLT at a given dose. Statistically speaking, there is nothing wrong with the current decision in mTPI, as the Bayesian inference takes into account the model complexity when choosing the optimal decision. However, for human clinical trials patient safety often outweighs statistical optimality. To this end, we modify the decision theoretic framework and blunt Ockham's razor. We call the new class of designs mTPI-2, since the framework is motivated by that of mTPI. We show next that the new framework blunts Ockham's razor and leads to safer and more desirable decision rules. Importantly, mTPI-2 preserves the same simple and transparent nature exhibited in mTPI, facilitating its practical implementation by both statisticians and clinicians. The basic idea is to divide the unit interval $(0, 1)$ into subintervals with equal length, given by $(\epsilon_1 + \epsilon_2)$.
This results in multiple intervals with the same length, which are considered multiple equal-sized models. See Figure \ref{fig:mTPI-2}. For clarity, we now denote by $EI$ the equivalence interval $(p_T - \epsilon_1, p_T+\epsilon_2)$, by $LI$ the set of intervals below $EI$, and by $HI$ the set of intervals above $EI$. For example, when $p_T=0.3$ and $\epsilon_1=\epsilon_2=0.05$, the equivalence interval is $EI = (0.25, 0.35)$, the $LI$ intervals are $$LI=\{M_1^{LI}=(0.15, 0.25) , \; M_2^{LI}=(0.05, 0.15), \; M_3^{LI}=(0, 0.05)\}, $$ and the $HI$ intervals are \begin{eqnarray*} HI & = & \{M_1^{HI}=(0.35, 0.45), \; M_2^{HI}=(0.45, 0.55), \; M_3^{HI}=(0.55, 0.65), \; M_4^{HI}=(0.65, 0.75), \;\\ && M_5^{HI}=(0.75, 0.85), M_6^{HI}=(0.85, 0.95), \; M_7^{HI}=(0.95, 1)\}. \end{eqnarray*} As in mTPI, if the equivalence interval $M^{EI}= (p_T - \epsilon_1, p_T + \epsilon_2)$ has the largest UPM, it is selected as the winning model and the dose-finding decision of mTPI-2 is $S$, to stay. If any interval $M_i^{HI}$ or $M_i^{LI}$ has the largest UPM, it is selected as the winning model and the dose-finding decision is $D$ or $E$, respectively. In Figure \ref{fig:mTPI-2}, for the same posterior density corresponding to $x_d=3$ and $n_d=6$, interval $M_2^{HI}$ exhibits the largest UPM and therefore the decision is now $D$. Note that the same decision theoretic framework as in mTPI is in place, except that now there are multiple intervals corresponding to $D$ or $E$, and the intervals all have the same length, thereby blunting Ockham's razor. \begin{figure} \begin{center} \includegraphics[scale=0.75]{mTPI-2.pdf} \caption{An example demonstrating the new framework of mTPI-2. Here, $EI$ is the equivalence interval $(p_T - \epsilon_1, p_T+\epsilon_2)$, $LI$ denotes the intervals below $EI$, and $HI$ denotes the intervals above $EI$. Interval $M_2^{HI}$ exhibits the largest UPM and therefore the decision is now $D$, to de-escalate.} \label{fig:mTPI-2} \end{center} \end{figure} \subsection{Optimal rule for mTPI-2} We again consider a 0-1 loss function $\ell(a, m_d)$, but now with multiple intervals and multiple decisions. As shown in Table \ref{tab:loss}, the loss function divides the parameter space $(0,1)$ of $p_d$ into $(k_1 + k_2 +3)$ intervals, with $(k_1 + 1)$ intervals below the equivalence interval $M^{EI}$ and $(k_2+1)$ intervals above $M^{EI}$. Except for the two boundary intervals $M_{k_1+1}^{LI}$ and $M_{k_2+1}^{HI}$, all the intervals have the same length $\delta=(\epsilon_1 + \epsilon_2)$. The loss $\ell(a, m_d)$ is a function of the action $a$, which selects any of the $(k_1 + k_2 + 3)$ intervals as the winning model, and of the parameter $m_d$, which indexes the model and takes one of the intervals $M_i$ as its value. Consider the statistical decision $a$ to select one interval as the winning interval into which the toxicity probability $p_d$ falls. Selecting a winning interval must then be translated into a dose-finding decision. To this end, we consider a deterministic mapping. Define $a^* \in \{E, S, D\}$ to be the dose-finding decision for the trial. Based on ethical considerations, whenever the statistical decision $a$ is in set $LI$, $EI$, or $HI$, the corresponding trial decision $a^*$ takes value $E$, $S$, or $D$, respectively. Mathematically, this means that \begin{equation} a^* = \left\{\begin{array}{ll} E, & \mbox{ if } a \in LI \\ S, & \mbox{ if } a = EI \\ D, & \mbox{ if } a \in HI . \end{array} \right.
\label{eq:a-star} \end{equation} The goal is to optimally select $a$, which then determines $a^*$. \begin{table}[htbp] \begin{center} \caption{A loss function of the dose-finding decisions $a$ and the model parameter $m_d$. Columns are the sample space of $m_d$, i.e., the candidate models given by the toxicity probability intervals, and rows are the action values for $a$ and $a^*$ \eqref{eq:a-star}. } \label{tab:loss} \resizebox{\textwidth}{!}{ \begin{tabular}{|l||c|c|c|c|c|c|c|} \hline \multicolumn{8}{|c|}{Loss function $\ell(a,m_d)$, for $a$ selecting a model in $\{LI, EI, HI \}$, where $m_d$ also takes an interval value in $\{LI, EI, HI\}$.} \\ \hline & \multicolumn{3}{|c|}{$m_d \in LI$: Intervals below the Equiv. Interval} & { $m_d=EI$: Equiv. Interval} & \multicolumn{3}{|c|}{$m_d \in HI$: Intervals above Equiv. Interval} \\ \hline Actions $a$, $a^*$ & $M_{k_1+1}^{LI}=(0, p_T - \epsilon_1 - k_1 \delta)$ & $\cdots$ & $M_{1}^{LI} = (p_T - \epsilon_1 - \delta, p_T - \epsilon_1)$ & $M^{EI} = (p_T - \epsilon_1, p_T + \epsilon_2)$ & $M_1^{HI}=(p_T + \epsilon_2, p_T + \epsilon_2 + \delta)$ & $\cdots$ & $M_{k_2+1}^{HI}=(p_T + \epsilon_2 + k_2 \delta, 1)$ \\ [2ex] \hline $a=M_{k_1+1}^{LI}, a^*=E$ & \cellcolor{Gray} 0 & 1 & 1& 1 & 1 &$1$ & $1$ \\ [2ex] \hline \multicolumn{8}{|c|}{$\cdots$$\cdots$}\\ [3ex]\hline $a=M_{1}^{LI}, a^*=E$ & $1$ & $1$ &\cellcolor{Gray} 0 & 1 & $1$ & $1$ & $1$\\ [2ex] \hline $a=M^{EI}, a^*=S$& $1$ & $1$ & $1$ & \cellcolor{Gray} 0 & 1& $\cdots$ & 1 \\ [2ex] \hline $a=M_1^{HI}, a^*=D$ & 1 & 1 & 1& 1 & \cellcolor{Gray} 0 &$1$ & $1$ \\ [2ex] \hline \multicolumn{8}{|c|}{$\cdots$$\cdots$}\\ [3ex] \hline $a=M_{k_2+1}^{HI}, a^*=D$ & $1$ & $1$ & 1 & 1 & $1$ & $\cdots$ & \cellcolor{Gray} 0 \\ [2ex] \hline \end{tabular} } \end{center} \end{table} Assume that given $n_d$, $x_d$ follows a binomial distribution, i.e., $f(x_d \mid n_d, p_d) \propto p_d^{x_d} (1-p_d)^{n_d - x_d}$. For $p_d$, given interval (model) $m_d=M_i$, assume a prior \begin{equation} p_d \mid m_d=M_i \sim Beta(1, 1)\, I(p_d \in M_i). \label{eq:prior} \end{equation} Assume the prior probability $p(m_d = M_i)$ is the same for all the models (intervals), where $M_i \in \cup\{LI, EI, HI\}$. Theorem 2 below provides the optimal decision rule for mTPI-2. \bigskip \noindent {\bf Theorem 2.} {\it The new Bayes rule $\mathcal{D}_{\mbox{mTPI-2}} \equiv \mathcal{D}_{a^*}$ that takes action $a^* \in \{E, S, D\}$ corresponds to the Bayes rule $\mathcal{D}_{a}$ that takes actions $a \in \{LI, EI, HI\}$. Under $\ell(a, m_d)$ in Table \ref{tab:loss} and the hierarchical model $\left\{f(x_d \mid n_d, p_d), f(p_d \mid m_d), p(m_d)\right\}$ above, $\mathcal{D}_{\mbox{mTPI-2}}$ is given by the following rule: \begin{itemize} \item If $M_{max} \equiv \arg\max_i Pr(m_d = M_i \mid \{x_d, n_d\}) = EI$, $\mathcal{D}_{\mbox{mTPI-2}} = S$, to Stay. \item If $M_{max} \equiv\arg\max_i Pr(m_d = M_i \mid \{x_d, n_d\}) \in LI$, $\mathcal{D}_{\mbox{mTPI-2}} = E$, to Escalate. \item If $M_{max} \equiv\arg\max_i Pr(m_d = M_i \mid \{x_d, n_d\}) \in HI$, $\mathcal{D}_{\mbox{mTPI-2}} = D$, to De-escalate. \end{itemize}} \noindent The proof is immediate given the fact that $\mathcal{D}_{a}$ is the Bayes rule for the loss function in Table \ref{tab:loss} and the definition in \eqref{eq:a-star}. \bigskip Theorem 2 states that the optimal rule is to first find the interval $M_{max}$ with the largest posterior probability.
If $M_{max}$ is the $EI$, the equivalence interval, stay at the current dose and treat the next cohort of patients at that dose; if $M_{max}$ is one of the intervals in $LI$, escalate to and treat the next cohort of patients at the next higher dose; if $M_{max}$ is one of the intervals in $HI$, de-escalate to and treat the next cohort of patients at the next lower dose. This decision rule minimizes the Bayes risk, i.e., the posterior expected loss. \bigskip \noindent {\bf Corollary 1:} The optimal decision $\mathcal{D}_{mTPI-2}$ is equivalent to the following procedure. Assume dose $d$ is the current dose being used for treatment. \begin{enumerate} \item Compute $UPM(i, d)$ in \eqref{eq:UPM} for each interval $M_i \in \cup \{LI, EI, HI\}$. Let $M_{max}$ be the interval with the largest $UPM$. \item If $M_{max}$ is the $EI$, in $LI$, or in $HI$, the optimal rule $\mathcal{D}_{mTPI-2}$ is to Stay, Escalate, or De-escalate, respectively. \end{enumerate} Proof: It suffices to note that $Pr(m_d = M_i \mid \{x_d, n_d\}) \propto UPM(i, d)$, which is immediate. \subsection{Design Algorithm} The implementation of the mTPI-2 design is as simple and transparent as that of mTPI. A decision table of all the optimal decisions in Corollary 1 can be precalculated. See Figure \ref{fig:tables} for an example for a trial with $p_T=0.3$ and $\epsilon_1=\epsilon_2=0.05$. The table in Figure \ref{fig:tables}(a) guides all the dose-assignment decisions throughout the trial. For example, suppose a trial has five candidate doses, and dose 3 is being used to treat patients. Then the possible doses for treating future patients are doses 2, 3, and 4. Record $n_3$ and $x_3$ as the number of patients treated and the number of patients who experienced DLT at dose 3, go to the table entry corresponding to row $x_3$ and column $n_3$, and treat the next cohort of patients based on the decision in the table. For example, if $x_3=3$ and $n_3=6$, the decision is $D$ in Figure \ref{fig:tables}(a), and the next patients will be treated at dose 2. Note that, in contrast, Figure \ref{fig:tables}(a) would suggest $S$ under mTPI, a decision that is suboptimal under mTPI-2. More discussion of Figure \ref{fig:tables} follows below. The full algorithm of mTPI-2 is given below, assuming patients are enrolled in cohorts of size $\ge 1.$ \begin{center} \fbox{\fbox{\parbox{6.5 in}{ \begin{description}% \item[] {\bf Optimal decision rule:} Suppose that the current dose is $d$, where $d \in \{1,\cdots,D\}$ indexes the candidate doses. After the toxicity outcomes of the most recent patient cohort are observed, denote by $(x_d, n_d)$ the current observed trial data. Select the dose for treating the next cohort among $\{(d-1), d, (d+1)\}$ based on the optimal rule $\mathcal{D}_{mTPI-2}$ in Corollary 1. There are two exceptions: if $d=1$, the next available doses are $\{d, (d+1)\}$; if $d=D$, the next available doses are $\{(D-1), D\}$.% \item[] {\bf Trial stopping rule:} Assume $n_1 > 0$. If $Pr(p_1 > p_T \mid \{x_1, n_1\}) > \xi$ for a large probability $\xi$, say $0.95$, terminate the trial due to excessive toxicity. Otherwise, terminate the trial when the maximum sample size is reached. In the special case of cohorts of size 1, do not apply the stopping rule $Pr(p_1 > p_T \mid \{x_1, n_1\}) > \xi$ until three or more patients have been evaluated at a dose. \item[] {\bf MTD selection:} At the end of the trial, select as the estimated MTD the dose with the smallest difference $|\hat{p}^*_d - p_T|$ among all the doses $d$ for which $n_d > 0$ and $Pr(p_d > p_T \mid \{x_d, n_d\}) < \xi$.
Here $\hat{p}^*_d$ is the isotonically transformed posterior mean of $p_d$, the same as in the mTPI design \citep{ji2010modified}. If two or more doses tie for the smallest difference, apply the following rule. Let $p^*$ denote the transformed posterior mean $\hat{p}_d^*$ of the tied doses. \begin{itemize} \item If $p^* < p_T$, choose the highest dose among the tied doses. \item If $p^* > p_T$, choose the lowest dose among the tied doses. \end{itemize} \end{description}% }}} \end{center} \section{Results} \subsection{Decision Tables With Bayes Factors} As interval designs, both mTPI and mTPI-2 generate a set of decisions based on the input values $p_T$, $\epsilon_1$, and $\epsilon_2$ from physicians. They are summarized in a tabular format, e.g., in Figure \ref{fig:tables}. Together, the three values define the equivalence interval $(p_T - \epsilon_1, p_T+\epsilon_2)$, where any dose with a toxicity probability falling into the interval can be considered an MTD. Doses with toxicity probabilities outside the interval are considered either too low or too high. In a dose-finding trial aiming at identifying the MTD, the decision table can be precalculated for any values of $p_T \in (0, 1)$ and $\epsilon_1, \epsilon_2 \ll p_T$, and a sample size, which determines the number of columns of the table. Suppose a sample size $maxN$ is decided for the trial. For each enumerated integer pair $(x, n)$, $0 \le x \le n \le maxN$, the decision $\mathcal{D}_{\mbox{mTPI-2}} \in \{D, S, E\}$ is precalculated. Figure \ref{fig:tables}(a) shows an example of the decision tables under both designs for $p_T=0.3$ and a sample size of 12. As can be seen, the main improvement of the mTPI-2 design over mTPI is the precise and ``faithful'' decisions that reflect physicians' input. For example, unlike mTPI, where a decision $S$ is given when $x_d=3$ toxicity events are observed out of $n_d=6$ patients, mTPI-2 recommends $D$, to de-escalate. Similarly, when $x_d=2$ and $n_d=9$, the decision becomes $E$ for mTPI-2 instead of $S$ for mTPI. In essence, mTPI-2 becomes a more ``nimble'' design due to the blunting of Ockham's razor. Specifically, mTPI favors the $EI$ and the decision $S$, to stay, simply because the equivalence interval has the shortest length and is preferred in the Bayesian inference due to Ockham's razor. In contrast, mTPI-2 avoids this by having equal-length intervals. Therefore, in Figure \ref{fig:tables}(a) the mTPI-2 design shows fewer $S$'s and more $D$'s and $E$'s. Figure \ref{fig:tables}(b) shows the distribution of different decisions between mTPI-2 and mTPI for different $p_T$ values and a large sample size of 30. As can be seen, all the differences are related to changing the decision $S$ in mTPI to a non-$S$ decision ($D$, $E$, or $DU$) in mTPI-2. In general, many $S$ decisions are changed to $D$ or $E$, corresponding to the green and blue bars, respectively. Also, when $p_T < 0.2$, there are no blue bars (hence no change from $S$ to $E$), which seems sensible since escalation is less likely when $p_T < 0.2$. In addition, when $p_T \le 0.2$, some $S$ decisions are changed to $DU$ (red bars). That is, some ``stay'' decisions in mTPI are changed to a composite decision in mTPI-2, which says, first, ``De-escalate'', and second, that the current dose is deemed too toxic and will be removed from the trial. This is a major modification to the dosing decision. We look into why there is such a big change below, starting with a quick numerical check.
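The following minimal Python sketch (illustrative only, assuming \texttt{numpy} and \texttt{scipy}; it is not the NGDF implementation) recomputes the winning interval under both designs for $p_T=0.1$, $\epsilon_1=\epsilon_2=0.05$, and $(x_d, n_d)=(3, 12)$; the reasoning is then spelled out.

\begin{verbatim}
# Minimal sketch, illustrative only: mTPI vs. mTPI-2 winning intervals
# for p_T = 0.1, eps1 = eps2 = 0.05, x_d = 3, n_d = 12.
import numpy as np
from scipy.stats import beta

pT, eps = 0.10, 0.05
x, n = 3, 12
post = beta(1 + x, 1 + n - x)           # posterior is Beta(4, 10)

def upm(lo, hi):                        # posterior mass / interval length
    return (post.cdf(hi) - post.cdf(lo)) / (hi - lo)

# mTPI: three intervals; the wide interval (0.15, 1) is penalized.
mtpi = {"E": upm(0.0, pT - eps), "S": upm(pT - eps, pT + eps),
        "D": upm(pT + eps, 1.0)}
print(max(mtpi, key=mtpi.get))          # -> 'S'

# mTPI-2: equal-length subintervals blunt the Ockham's razor penalty;
# edges[1:3] bound the EI (0.05, 0.15).
edges = np.concatenate(([0.0], np.arange(pT - eps, 1.0, 2 * eps), [1.0]))
upms = [upm(lo, hi) for lo, hi in zip(edges[:-1], edges[1:])]
k = int(np.argmax(upms))
print((edges[k], edges[k + 1]))         # -> ~(0.25, 0.35), in HI, so 'D'
\end{verbatim}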
Such a change occurs, for example, when $p_T=0.1$ and $x_d=3$ out of $n_d=12$ patients experience DLT. Under mTPI, the three intervals are $(0, 0.05)$, $(0.05, 0.15)$, and $(0.15, 1)$. Intuitively, the empirical toxicity rate equals $x_d/n_d = 0.25$, which is much higher than $p_T=0.1$, so $D$, de-escalate, should be preferred. However, based on mTPI the UPM for $S$ is the largest. The main reason is that the posterior distribution of $p_d$ is Beta(4, 10) given the data $(x_d=3, n_d=12)$, which has a very light right tail and puts tiny probability mass where $p_d > 0.7$. This allows Ockham's razor to sharply penalize the right interval $(0.15, 1)$, which is of length $0.85$. In contrast, the EI $(0.05, 0.15)$ only has a length of $0.1$. As a consequence, the UPM value for each of the three intervals, defined as the ratio of the interval's posterior probability mass to the interval length, favors the shorter interval $(0.05, 0.15)$ over $(0.15, 1)$, even though the posterior distribution puts most mass above 0.15. Therefore, mTPI gives an $S$ for $(x_d=3, n_d=12)$. However, the mTPI-2 design blunts Ockham's razor and uses sub-intervals with equal length. Based on the new statistical framework under mTPI-2, the winning subinterval is $(0.25, 0.35)$ and the optimal decision is $D$. In addition, under mTPI-2 the safety rule is invoked and therefore $U$ is added. In the case of mTPI, since the decision is $S$, the safety rule is not even evaluated (mTPI does not evaluate the safety rule unless the decision is $D$). For these reasons, when $x_d=3$ and $n_d=12$ at a given dose $d$, mTPI would stay ($S$) while mTPI-2 would de-escalate and remove dose $d$ from the trial (due to high toxicity). This example shows that mTPI-2 is a safer design than mTPI. In Figure \ref{fig:tables}(c) we show that the changes from mTPI decisions to mTPI-2 decisions are all compatible with the empirical toxicity rate $x_d/n_d$. That is, mTPI-2 only changes $S$ to $E$ when the empirical rate is lower than $p_T$, and $S$ to $D$ when the empirical rate is higher than $p_T$. Due to the principled decision-theoretic framework, mTPI-2 calculates the posterior probability $Pr(m_d = M_i \mid \{x_d, n_d\})$ for each of the intervals, $M_i \in \{LI, EI, HI\}$. Naturally, the Bayes factor (BF) between any two intervals can be calculated as $$ \mbox{BF}_{ij} = \frac{Pr(m_d = M_i \mid \{x_d, n_d\})}{Pr(m_d = M_j \mid \{x_d, n_d\})}, $$ assuming equal prior probability for each model $M_i$. A value close to 1 means there is only weak evidence supporting one model over the other. In mTPI-2, in addition to providing the winning decision in the table, we also display the BF of the winning decision versus the decision with the second largest posterior probability. Therefore, all these BFs are greater than 1, but a value close to 1, say $<1.05$, indicates uncertainty in the decision. Due to the small sample sizes of phase I trials, such weak decisions are not uncommon, as can be seen in Table \ref{tab:BF} below. \begin{figure}[htbp] \begin{center} \begin{tabular}{cc} \multicolumn{2}{c}{ \hskip -0.3in \includegraphics[scale=0.55,angle=0]{combined-mtpi-mtpi2.png}} \\ \multicolumn{2}{c}{(a) A combined decision table for mTPI and mTPI-2.
} \\ \hline \hskip -0.5in \includegraphics[scale=0.20]{mtpi2-mtpi-dec-diff.png} & \hskip -.3in \includegraphics[scale=0.38]{change-boxplot2.png} \\ \hskip -.3in {\small (b) Changes between mTPI and mTPI-2 for various $p_T$ values.} & {\small (c) A box-plot of $x_d/n_d - p_T$ values for the changes.} \end{tabular} \caption{An example of the optimal decision tables for mTPI and mTPI-2. (a) presents decisions for both mTPI and mTPI-2. For each ``Number of Patients'' (column), there are two subcolumns listing the decisions of mTPI and mTPI-2 side by side. Here, the target toxicity probability $p_T=0.30$ and $\epsilon_1=\epsilon_2=0.05.$ (b) summarizes the differences in decisions between mTPI and mTPI-2 with breakdowns over different $p_T$ values. For example, a blue bar denotes a change from decision $S$ in mTPI to decision $E$ in mTPI-2. (c) Boxplots of $(x_d/n_d - p_T)$ for the decisions that are changed from mTPI. The plots show that when $x_d/n_d < p_T$, decisions $S$ are changed to $E$; when $x_d/n_d > p_T$, decisions $S$ are changed to $D$ or $DU$. } \label{fig:tables} \end{center} \end{figure} \begin{table}[htbp] \caption{Decisions in mTPI-2 along with Bayes factors. For any decision that is not ``U'', a Bayes factor (BF) is provided comparing the winning decision and the second most likely decision. The BF value here is always greater than 1 since we calculate the BF of the winning decision versus the second best decision. A BF value closer to 1 indicates weaker evidence supporting the winning decision.}\label{tab:BF} \begin{center} \begin{tabular}{|l|l|lr|lr|lr|lr|}\hline && \multicolumn{8}{c|}{Number of Patients} \\ \cline{3-10} & & 3 & (BF) & 6 & (BF) & 9 & (BF) & 12 & (BF) \\[2pt] \hline \multirow{13}{*}{\begin{turn}{-270}{Number of DLTs}\end{turn} }& 0 & E & (2.12) & E &(4.47) & E & (9.38) & E &(19.56) \\ & 1 & S & (1.02) & E & (1.29) & E & (2.34) & E & (4.8) \\ & 2 & D & (2.32) & S & (1.04) & E & (1.12) & E & (1.64) \\ & 3 & U & & D & (1.68) & S & (1.06) & S & (1.03) \\ & 4 & & & U & & D & (1.45) & S & (1.08) \\ & 5 & & & U & & U & & D & (1.42) \\ & 6 & & & U & & U & & D & (2.73) \\ & 7 & & & & & U & & U & \\ & 8 & & & & & U & &U &\\ & 9 & & && & U & &U &\\ &10 &&&& &&& U &\\ & 11 &&&&&&& U &\\ & 12 &&&&&&& U &\\ \hline \end{tabular} \end{center} \end{table} \subsection{Simulation Studies} We conduct a comprehensive study that evaluates the performance of mTPI-2 and mTPI. Powered by crowd sourcing, we include a study based on 1,774 scenarios and 6,013,460 simulated trials, generated by 71 independent users of our existing tool, NGDF \citep{yang2015integrated}. NGDF is a web tool that allows users to design and simulate dose-finding trials based on various methods, including 3+3, CRM, and mTPI. We take the scenarios and simulation settings (including sample size and number of simulated trials per scenario) and simulate trials based on mTPI and mTPI-2. Therefore, the scenarios we use are from NGDF users, which constitutes a crowd-sourcing exercise. Crowd sourcing typically allows objective and unbiased assessment of various methods, since the evaluators are a large number of different users rather than the inventors themselves. We compare both methods in terms of reliability and safety, as described in \citet{ji2013modified}.
In particular, reliability is the average percentage of times that the true MTD is selected at the end of the trial, for a given scenario and across all the simulated trials; safety is the average percentage of patients treated at or below the true MTD, for a given scenario and across all the simulated trials. So for each method, we obtain 1,774 reliability values, one for each scenario. We then take pairwise differences between any two methods in their reliability values for the same scenario, and plot boxplots of the differences in the left half of Figure \ref{fig:comp}. Each boxplot corresponds to a unique $p_T$ value of the simulated trials. In the right half we show the boxplots for safety comparisons in the same manner. Figure \ref{fig:comp} shows that when $p_T \leq 0.2$, mTPI is slightly more reliable in identifying the true MTD than mTPI-2. However, when $p_T > 0.2$, mTPI-2 is more reliable. What stands out is that mTPI-2 is always safer than mTPI regardless of the $p_T$ value, which means that mTPI-2 has less chance of assigning patients to overly toxic doses than mTPI. In practice, mTPI-2 and mTPI are both easy to implement, only requiring 1) generating dose-assignment decision tables (e.g., in Figure \ref{fig:tables}a) prior to trial initiation and 2) following the decisions in the table during the course of the trial. \begin{figure}[htbp] \includegraphics[scale=0.3]{boxplot_mtpi2.png} \caption{Boxplots comparing the reliability and safety of mTPI and mTPI-2.} \label{fig:comp} \end{figure} \section{Software} We have implemented mTPI-2 as an online tool at \url{www.compgenome.org/NGDF}. It only requires a web browser, such as Google Chrome, to access. The same website hosts mTPI, 3+3, and a version of CRM, which allows head-to-head comparison between mTPI-2 and these designs. There is no need to download or maintain any software package, and the web tool can be accessed anywhere via the internet. In our experience, the web tool runs successfully on a tablet such as an iPad or a smartphone such as an iPhone. This capability allows investigators to use the design with great flexibility. A detailed user manual is provided on the website to assist new users. \section{Discussion} We present mTPI-2, an improved mTPI design, to reduce the effect of Ockham's razor on the posterior inference. The mTPI-2 design is based on a formal Bayesian decision-theoretic framework that adjusts for Ockham's razor. It mitigates some suboptimal decisions in mTPI and provides theoretically optimal and intuitively sound decision rules. As a result, mTPI-2 takes more refined actions that allow more efficient exploration of different doses in the dose-finding process. The mTPI-2 design hinges on the user-provided quantities $p_T$, $\epsilon_1$, and $\epsilon_2$. It treats any dose with toxicity probability smaller than $(p_T - \epsilon_1)$ or larger than $(p_T + \epsilon_2)$ as being lower or higher than the MTD, respectively. Therefore, these two values are the key input of the design and must be elicited from physicians. For example, one can ask the physician for the highest toxicity rate that would still warrant a dose escalation ($p_T - \epsilon_1$) and the lowest rate that would warrant a dose de-escalation ($p_T + \epsilon_2$). In this paper, we consider $\epsilon_1=\epsilon_2$. Intuitively, when the two $\epsilon$'s are not equal, the decisions can be altered in a nonsymmetric way, such as allowing more escalation than de-escalation or the opposite.
This is an ongoing research direction that we are currently pursuing. We focus on the comparison between mTPI and mTPI-2 in this paper. For readers interested in comparing mTPI-2 to the 3+3 design \citep{storer1989design} or the continual reassessment method (CRM, \citet{o1990continual}), we refer to \citet{ji2013modified} and \citet{yang2015integrated}, who compared mTPI to 3+3 and CRM through extensive simulation studies, which serves as an indirect comparison for mTPI-2. Innovatively, mTPI-2 is able to provide Bayes factors for each decision so that investigators can assess the uncertainty behind it. These Bayes factors may provide additional use for future work, such as allowing for randomization between two different decisions when the value of the Bayes factor comparing the two decisions is very close to 1. The size of the equivalence interval serves as an ``effect size'' for phase I dose-finding trials. This is an added benefit of interval-based designs, such as mTPI and mTPI-2. A narrower equivalence interval implies that the MTD must be identified with more precision, and therefore demands a larger sample size. The sample size will also depend on the number of doses in the trial and the cohort size; see \citet{ji2013modified} for a discussion. We intend to address the sample size issue in future work. \input{mTPI2_arXiv.bbl} \clearpage \section*{Appendix} \subsection*{A. Proof of Theorem 1} Recall that $S(M_i)$ is the interval length (size) of model $M_i$, $i \in \{E, S, D \}$. For example, for $M_E$, $S(M_E) = p_T - \epsilon_1.$ It suffices to show that the decision rule $\mathcal{D}_{\mbox{mTPI}}$ maximizes $E((1-\ell(i, M_j)) \mid \{x_d, n_d\})$, the posterior expected utility, where the utility is defined as one minus the 0-1 loss, i.e., $(1-\ell(i, M_j)).$ The posterior expected utility for action $i \in \{E, S, D\}$ at dose $d$ is given by \begin{eqnarray*} L(i, d) &=& \sum_{j \in \{E, S, D\}} (1-\ell(i, M_j)) p \left(M_j \mid (x_d, n_d) \right) \\ & \propto & \sum_{j \in \{E, S, D\}} (1-\ell(i, M_j)) \int p(x_d \mid n_d, p_d) p(p_d \mid M_j) p(M_j) dp_d \\ &=& \int p(x_d \mid n_d, p_d) p(p_d \mid M_i) p(M_i) d p_d \\ &\propto& \int_{l_i}^{h_i} \frac{1}{S(M_i)} p_d^{x_d} (1-p_d)^{n_d - x_d} d p_d \\ &\propto& \frac{Pr(M_i \mid \{x_d, n_d\})}{S(M_i)} \\ &=& UPM(i, d) \end{eqnarray*} Therefore, the decision rule \eqref{eq:mTPI-rule} given by \begin{equation} \mathcal{D}_{\mbox{mTPI}} = \arg \max_{i \in \{E, S, D\}} UPM(i, d) \end{equation} maximizes the posterior expected utility, which is equivalent to minimizing the posterior expected 0-1 loss. \clearpage \subsection*{B. Rate of the Incomplete Beta Function} We only need to consider the posterior probability of model $M_i$ in the calculation of the UPM, i.e., \begin{multline} Pr(M_i \mid \{x_d, n_d\}) \propto \frac{1}{S(M_i)} \int_{l_i}^{h_i} p_d^{x_d} (1-p_d)^{n_d - x_d} d p_d \\ \propto \frac{I_{h_i}(x_d + 1, n_d -x_d+1) - I_{l_i}(x_d +1, n_d - x_d +1)}{h_i - l_i} \label{eq:ibeta} \end{multline} where $$ I_{x}(p, q) = \frac{1}{B(p, q)} \int_0^x t^{p-1} (1-t)^{q-1} dt $$ is the incomplete beta function, with $$ B(p, q) = \int_0^1 t^{p-1} (1-t)^{q-1} dt. $$ Based on \citet{johnson2002continuous}, $$ I_x(p, q) \approx \Phi(z) $$ where \begin{multline} z=\frac{k}{|q-0.5-n(1-x)|} \left\{\frac{2}{1+(6n)^{-1}} \left[\left(q- 0.5 \right)\log \left\{\frac{q-0.5}{n(1-x)} \right\} + (p-0.5) \log\left\{\frac{p-0.5}{nx} \right\} \right] \right\}^{1/2}, \end{multline} $n= n_d -1$ and $k=n_d - x_d - 1/3 - (n_d + 1/3)(1-x)$.
When $x_d=3$ and $n_d =6$, the incomplete beta function can be shown to be approximated by $$\Phi\left(\mathrm{sgn}(x-0.5)\sqrt{-7\log(x(1-x))}\right).$$ Based on Feller (1968), this can be approximated by $$ I(x>0.5)\cdot\frac{1}{2} + \mathrm{sgn}(x-0.5)\cdot \frac{e^{-y^2/2}}{\sqrt{2 \pi}\, y}, \quad y = \sqrt{-7\log(x(1-x))}, $$ which equals $$ I(x>0.5)\cdot\frac{1}{2} + \mathrm{sgn}(x-0.5)\cdot\frac{\{x(1-x)\}^{1/14}}{\sqrt{-14 \pi \log(x(1-x))}}. $$ A numerical evaluation reveals that when $x$ takes values at 0.25, 0.35, and near 1, the expression in \eqref{eq:ibeta} favors model $M_D$, in which $h_i = 1$ and $l_i=0.35$, over model $M_S$, in which $h_i = 0.35$ and $l_i = 0.25$. Unfortunately, there is no general conclusion on the value of \eqref{eq:ibeta} for arbitrary $x_d$ and $n_d$ values, which makes the theoretical derivation difficult. The above derivation pushes forward the theoretical development for the incomplete beta function in that it gives the ratio $(x(1-x))^{1/14}/\sqrt{-\log(x(1-x))}$. However, the function is not monotone, having a mode at 0.5, which makes it difficult to evaluate the magnitude of \eqref{eq:ibeta} as a difference of two incomplete beta functions. It is known that the analytic expression of the incomplete beta function is still an open research question \citep{swaminathan2007convexity}. Therefore, we leave further theoretical development to future work. \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Introduction} The Hall coefficient $\mathit{R_\mathrm{H}}$ reveals properties of the band structure and effective carrier density in weakly interacting systems, determined by the shape of the Fermi surface and the angular dependence of the quasiparticle relaxation time \cite{mermin,geometry}. For strongly correlated materials, it may correspond less directly to the topology of the Fermi surface, since they generally lack well-formed quasiparticles. Such materials exhibit unusual behaviors incompatible with the quasiparticle picture. Cuprates display a large, $\mathit{T}$-linear resistivity \cite{linear-resistivity,linear-resistivity-2}, known as strange metallicity. In some materials, the magnetoresistance also shows an unusual linear $\mathit{T}$-dependence \cite{magnetoresistance1,magnetoresistance2,magnetoresistance3}. Recent experiments have shown that the Hall number may be closely related to the strange metallicity \cite{strangemetalhighfieldnh}. $\mathit{R_\mathrm{H}}$ of high-$\mathit{T}_\mathrm{c}$ cuprates has strong temperature and doping dependence, in contrast to what is expected for free electrons. Underdoped cuprates have positive $\mathit{R_\mathrm{H}}$ with complicated temperature dependence \cite{ono2007strong}. As doping increases, $\mathit{R_\mathrm{H}}$ decreases and becomes $\mathit{T}$-independent at high temperature \cite{cuprateHall}. In the heavily overdoped regime, $\mathit{R_\mathrm{H}}$ undergoes a sign change and becomes negative around $\mathit{p} = 0.3$ \cite{cupratehall2,negativehall}, in agreement with the doping-dependent shape of the Fermi surface reported from angle-resolved photoemission spectroscopy (ARPES) \cite{ARPES,arpes2}. The doping dependence of $\mathit{R_\mathrm{H}}$ has been studied in several experiments \cite{badoux2016change,balakirev2003signature, strangemetalhighfieldnh, PhysRevB.95.224517}, and different theoretical models have also been established to explain this anomalous doping dependence of $\mathit{R_\mathrm{H}}$ \cite{verret2017phenomenological,storey2016hall,PhysRevLett.117.187001,PhysRevB.73.174501,charlebois2017hall}. Finally, at low temperatures in the cuprates, the cotangent of the Hall angle, $\mathit{\cot(\theta_\mathrm{H})}$, simply has a quadratic temperature dependence \cite{chien1991effect,cuprateHall,PhysRevLett.89.037003}. Hubbard model calculations have revealed properties similar to those of high-$\mathit{T}_\mathrm{c}$ cuprates, including $\mathit{T}$-linear resistivity in the strange metal phase \cite{huang}. Quantum Monte Carlo (QMC) simulations of the Hubbard model reproduce the generic nature of the quasiparticle dispersion relation observed in some hole-doped cuprates, and demonstrate that it is mostly determined by the strong Coulomb repulsion, reflecting many-body correlations rather than simply a one-electron band structure \cite{PhysRevB.50.7215}. Including a next-nearest-neighbor hopping $\mathit{t'}=-0.15$ in the Hubbard model ($\mathit{U}=8\mathit{t}$), they find that the Fermi surface changes from a large hole-pocket centered at $(\pi,\pi)$ to an electron-pocket around $(0, 0)$ at $30\%$ doping. This implies that the shape of the Fermi surface numerically measured in this model agrees with the observed doping dependence of $\mathit{R_\mathrm{H}}$ in LSCO \cite{cupratehall2,negativehall}, if one assumes $\mathit{R_\mathrm{H}}$ is simply determined by the curvature of the Fermi surface.
A change from a hole-like Fermi surface to an electron-like Fermi surface from low doping to high doping has also been observed for the Hubbard model with only nearest-neighbor hopping ($\mathit{t'}=0$) and strong interactions by other QMC simulations \cite{grober2000anomalous}, dynamical cluster approximation (DCA) techniques \cite{maier2002angle}, and a self-consistent projection operator method (SCPM) \cite{PhysRevLett.94.156401}. Thus, we are motivated to calculate $\mathit{R_\mathrm{H}}$ in the Hubbard model to further investigate transport properties within the strange metal phase of cuprates. Numerical calculations of $\mathit{R_\mathrm{H}}$ have been attempted for a number of models and with various algorithms, such as the 2D Hubbard model in the high frequency limit \cite{highf} and the $\mathit{t}$-$\mathit{J}$ model with exact diagonalization \cite{tj2002}. In Ref. \cite{stanescu2003full}, it was demonstrated that $\mathit{R_\mathrm{H}}$ in a doped Mott insulator must change sign at $\mathit{p}<1/3$. $\mathit{R_\mathrm{H}}$ at high temperature and high frequency has been examined in the $\mathit{t}$-$\mathit{J}$ model \cite{shastry1993faraday}, where the authors focused on the high frequency limit rather than the DC limit, under the assumption that the high-frequency $\mathit{R_\mathrm{H}^*}$ is instantaneous, and thus closer to the semiclassical expression $1/\mathit{n}^*\mathit{e}$. However, in the Hubbard model the DC limit has been less well studied, especially using numerical techniques. In this work, we calculate the DC Hall coefficient using an expansion that expresses magneto-transport coefficients in terms of a sum of thermodynamic susceptibilities \cite{assa,assa2}, avoiding the challenges of numerical analytic continuation for obtaining DC transport properties. We use the unbiased and numerically exact determinant quantum Monte Carlo (DQMC) algorithm \cite{dqmc1,dqmc2} to calculate the leading order term of the expansion of $\mathit{R_\mathrm{H}}$ from Ref.~\cite{assa}. We find strong temperature and doping dependence of $\mathit{R_\mathrm{H}}$ in a parameter regime with strong interactions and no coherent quasiparticles, and show a good correspondence between the sign of the Hall coefficient and the shape of a quasi-Fermi surface. \section*{Results} \subsection*{Hall Coefficient} In Fig.~\ref{fig:hall}, at half filling, particle-hole symmetry of the Hubbard Hamiltonian gives rise to a zero Hall coefficient for all values of $\mathit{U}$, as expected. As the system is doped away from half filling and the particle-hole symmetry is broken, $\mathit{R_\mathrm{H}}$ becomes nonzero and temperature dependent. When $\mathit{U}$ is small, the system is expected to be weakly interacting, and the sign and magnitude of $\mathit{R_\mathrm{H}}$ are simply determined by the Fermi surface. Indeed, we see that for $\mathit{U}$ in the range between $4\mathit{t}$ and $8\mathit{t}$ in Fig.~\ref{fig:hall}, $\mathit{R_\mathrm{H}}$ has weak temperature dependence and is negative for all hole doping levels, corresponding to a well-defined electron-like Fermi surface. For these same $\mathit{U}$ values in Fig.~\ref{fig:dop}, $\mathit{R_\mathrm{H}}$ has a nearly linear doping dependence, consistent with the quasiparticle picture and Fig.~2 in Ref. \cite{assa2}. With strong Coulomb interactions $\mathit{U}=12\mathit{t}$ and $16\mathit{t}$, we have $\mathit{T} \ll \mathit{U}$, and $\mathit{R_\mathrm{H}}$ becomes strongly temperature dependent and can be positive.
\subsection*{Single-particle properties} To explore the connection between the Hall coefficient and the quasi-Fermi surface in strongly interacting systems, we investigate the spectral weight around $\mathit{\omega} = 0$ using $\mathit{G}(\mathbf{k},\mathit{\tau}=\mathit{\beta}/2)\mathit{\beta}$ as a proxy for $\mathit{A}(\mathbf{k}, \mathit{\omega}= 0)$ (see the ``Methods'' section), shown within the first Brillouin zone in Figs.~\ref{fig:green}\textbf{a-h}. For weak interactions, the peak of $\mathit{G}(\mathbf{k},\mathit{\tau}=\mathit{\beta}/2)\mathit{\beta}$ in momentum space marks the position of the Fermi surface. For fixed hole doping, as the interaction gets stronger and opens a large Mott gap above the Fermi energy, $\mathit{R_\mathrm{H}}$ becomes positive and the peak of $\mathit{G}(\mathbf{k},\mathit{\tau}=\mathit{\beta}/2)\mathit{\beta}$ moves toward the $(\pi,\pi)$ point and the dashed lines, which mark the Fermi surface position predicted under the Hubbard-I approximation \cite{hubbard1963electron,grober2000anomalous}. As $\mathit{U}$ becomes stronger, the Fermi surface changes from closed (a pocket centered at the $\Gamma$ point) to open (a pocket centered at the $M$ point). This evolution is shown for dopings $\mathit{p} = 0.05$ ($\mathit{n}=0.95$) and $\mathit{p}=0.1$ ($\mathit{n}=0.9$). Meanwhile, the spectral peak becomes broader, signaling that the Fermi surface becomes less well-defined as the interaction strength increases. However, we can still see a clear connection between $\mathit{R_\mathrm{H}}$ and the spectral weight, even without a well-defined Fermi surface or well-formed quasiparticles. When the Fermi pocket changes from electron-like to hole-like, the sign of $\mathit{R_\mathrm{H}}$ changes from negative to positive [c.f. Fig.~\ref{fig:hall}]. For fixed Hubbard $U$, as the doping level increases, the Fermi surface unsurprisingly moves back to $(0, 0)$ to enclose an electron pocket, while $\mathit{R_\mathrm{H}}$ decreases, returning to quasiparticle behavior. Within the low doping regime, the hole-like Fermi surface violates the Luttinger theorem, in agreement with other numerical results on the Hubbard model \cite{grober2000anomalous,maier2002angle,PhysRevLett.94.156401,sen2020mott,stanescu2004nonperturbative}. The peak of $\mathit{G}(\mathbf{k},\mathit{\tau}=\mathit{\beta}/2)\mathit{\beta}$ becomes better defined going away from the Mott insulator, either by doping or by decreasing $\mathit{U}$. The evolution of the Fermi pocket is similar to that seen in ARPES experiments \cite{ARPES,arpes2}. We also notice that for strong interactions, as the temperature decreases from $\mathit{T}=2\mathit{t}$ to $\mathit{T}\sim \mathit{t}/3$, the peak of $\mathit{G}(\mathbf{k},\mathit{\tau}=\mathit{\beta}/2)\mathit{\beta}$ moves from close to $(0, 0)$ out towards $(\pi,\pi)$, and then moves slightly back towards $(0, 0)$, which can correspond to the two sign changes of $\mathit{R_\mathrm{H}}$ as a function of temperature in Fig. \ref{fig:hall}. Similar temperature-dependent shifts of the $\mathit{A}(\mathbf{k},\mathit{\omega})$ peak position in momentum space are seen in a DMFT study \cite{deng2013bad}, and the DQMC method accounts for momentum-dependent self-energy effects. Examples of $\mathit{A}(\mathbf{k},\mathit{\omega})$ obtained from maximum entropy analytic continuation are shown in Fig.~\ref{fig:green}\textbf{i}.
Compared with Fig.~\ref{fig:green}\textbf{d}, as we move along the $\Gamma$-$X$-$M$ momentum curve, the location of the spectral weight peak crosses $\mathit{\omega} = 0$ between $X$ and $M$, indicating that our proxy $\mathit{G}(\mathbf{k}, \mathit{\beta}/2)$ properly represents the behavior of the spectral weight and that the Fermi pocket is hole-like. Figs.~\ref{fig:green}\textbf{j-k} show the electron pocket for both $\mathit{U/t}=8$ and $\mathit{U/t}=16$ at large hole-doping above $0.3$. The Fermi surface positions are similar, and the spectral weight peaks are sharp, meaning that $\mathit{A}(\mathbf{k},\mathit{\omega})$ at large doping is more coherent and consistent with a quasiparticle picture. In contrast to $\mathit{n}=0.95$, at $\mathit{n}=0.6$ the apparent Fermi surface closely follows the non-interacting Fermi surface and is minimally affected by increasing interaction strength. \subsection*{Hall Angle, Mobility and Mass} For completeness, we also calculate the Hall angle $\mathit{\cot(\theta_\mathrm{H})}$ and effective mass $\mathit{m}$ using $\mathit{R_\mathrm{H}}$ and $\mathit{\sigma_{xx}(\omega)}$ (see the ``Methods'' section), as shown in Fig.~\ref{fig:mm}. We observe a $\mathit{T^2}$ temperature dependence of $\mathit{\cot(\theta_\mathrm{H})}$ when the temperature is low compared with the bandwidth, for most dopings up to $\mathit{n}=0.9$ for $\mathit{U/t}=4$, and for temperatures higher than $\mathit{t}/3.5$ for $\mathit{U/t}=8$, similar to what has been observed for LSCO \cite{cuprateHall,cupratehall2,PhysRevB56R8530} and other cuprates \cite{PhysRevB503246}. For $\mathit{U/t} = 8$, the large error bars at the lowest temperature arise from a severe fermion sign problem \cite{PhysRevB.41.9301}, which limits the accessible temperatures. The upturn in $\mathit{\cot(\theta_\mathrm{H})}$ as temperature decreases for $\mathit{U}=4, \mathit{n}=0.95$ at the lowest temperatures probably results from anisotropy around the Fermi surface playing a much more significant role, considering this filling is relatively close to half filling \cite{PhysRevB.46.14297}. When $\mathit{U}$ is strong ($\mathit{U/t}=8$ in Fig.~\ref{fig:mm}\textbf{c}) and doping is small, $\mathit{\cot(\theta_\mathrm{H})}$ shows a peak around $\mathit{T}\sim \mathit{t}$ (the ratio exceeds $1.0$). Comparing this peak with the smooth $\mathit{\cot(\theta_\mathrm{H})}$ curve for $\mathit{U/t}=4$, we again see an indication that the Coulomb interaction strongly affects the temperature dependence of transport properties when $\mathit{T}\ll \mathit{U}$. The effective mass increases slightly as the temperature increases, and we observe that a stronger interaction leads to a heavier effective mass. The mass approaches the mass of a free electron, $\mathit{m}_\mathit{e}=\frac{1}{2\mathit{t}}$, at large doping and as the temperature tends to $0$, returning to a normal metal with well-defined quasiparticles, as one would expect. \section*{Discussion} In our results, we observe that when $\mathit{U}$ is large and doping is small, $\mathit{R_\mathrm{H}}$ in the Hubbard model exhibits complicated temperature and doping dependence. Along with the $\mathit{T}$-linear resistivity in the Hubbard model \cite{huang}, both phenomena suggest that strongly correlated electrons should not simply behave like coherent quasiparticles moving in a static band structure.
However, we also observe a correspondence between $\mathit{R_\mathrm{H}}$ and the topology of the Fermi surface, revealed by the proxy $\mathit{G}(\mathbf{k},\mathit{\beta}/2)\mathit{\beta}$. This is rather surprising, as the correspondence between $\mathit{R_\mathrm{H}}$ and Fermi surface topology is usually understood only in the quasiparticle picture for weakly interacting systems. Here, we have found that this correspondence is still well established even when strong correlations are present and the Fermi surface itself becomes ill-defined. The features of $\mathit{R_\mathrm{H}}$ are obtained from the single-band Hubbard model, using the unbiased and numerically exact DQMC algorithm. They directly show contributions to the Hall effect from the on-site Coulomb interaction and an effective $\mathit{t'}$, pushing $\mathit{R_\mathrm{H}}$ to change sign and show strong temperature dependence and complicated doping dependence. Comparing our $\mathit{R_\mathrm{H}}$ to that of cuprates \cite{cuprateHall,cupratehall2} at high temperatures, such as LSCO, $\mathit{R_\mathrm{H}}$ usually changes sign around $30\%$ hole doping. Underdoped cuprates at low temperature have complicated temperature dependence and an almost unbounded Hall coefficient towards half filling. Their low temperature behavior is affected jointly by the on-site Coulomb interaction and next-nearest-neighbor (NNN) hopping, as well as other experimental factors. However, our simulation corresponds to relatively high temperatures in LSCO experiments, at which the unbounded $\mathit{R_\mathrm{H}}$ has already dropped down to the scale $\sim 10^{-3}\mathrm{cm}^3\mathrm{C}^{-1}$. Nevertheless, around the point at which the sign changes, the order of magnitude of the ratio $\mathit{\delta R_\mathrm{H}}/\mathit{\delta p}$ in our $\mathit{R_\mathrm{H}}$ data for the Hubbard model is comparable to that of LSCO \cite{cuprateHall,cupratehall2, negativehall} at high temperatures. Furthermore, here we have only focused on the single-band Hubbard model with only nearest-neighbor hopping. Next-nearest-neighbor hopping can also deform the Fermi surface \cite{duffy1995influence} and affect $\mathit{R_\mathrm{H}}$. Thus far, we have only implemented the lowest order term of the effective expansion from Ref. \cite{assa}. Correction terms involve tens of thousands of Wick contractions and are not feasible to simulate given current computational capacity. However, our results regarding sign changes using the leading order term are consistent with various other methods, including coupling the Hamiltonian to an external magnetic field (Ding, J.~K. et al. Manuscript in preparation). \section*{Methods} \subsection*{Hall Coefficient} We calculate the Hall coefficient $\mathit{R_\mathrm{H}}$ in the doped Hubbard model on a 2D square lattice with periodic boundary conditions, defined by the Hamiltonian \begin{align*} \mathit{H} = -\mathit{t}\sum_{\langle \mathit{jk}\rangle ,\sigma} \mathit{c_{j,\sigma}^{\dagger} c_{k,\sigma}} \mathrm{e}^{\mathrm{i} \int_\mathit{j}^\mathit{k} \mathit{e}\mathbf{A}(\mathbf{r}) d\mathbf{r}} - \mu\sum_{\mathit{j},\mathit{\sigma}} \mathit{n}_{\mathit{j},\mathit{\sigma}} + \mathit{U}\sum_{\mathit{j}}\mathit{n}_{\mathit{j},\uparrow } \mathit{n}_{\mathit{j},\downarrow} \stepcounter{equation}\tag{\theequation}\label{eq:hubbard} \end{align*} where $\mathit{t}$ is the nearest-neighbor hopping energy, $\mathit{\mu}$ is the chemical potential, and $\mathit{U}$ is the on-site Coulomb interaction.
$\mathit{c}_{\mathit{j},\mathit{\sigma}}^{\dagger}$ stands for the creation operator for an electron on site $\mathit{j}$ with spin $\mathit{\sigma}$. $\mathit{n}_{\mathit{j},\mathit{\sigma}} \equiv \mathit{c}_{\mathit{j},\mathit{\sigma}}^{\dagger} \mathit{c}_{\mathit{j},\mathit{\sigma}}$ is the number operator. $\mathit{\theta}_{\mathit{jk}} = \int_\mathit{j}^\mathit{k} \mathit{e}\mathbf{A}(\mathbf{r})d\mathbf{r}$ is the Peierls phase factor. For a perpendicular field $\mathbf{B} = \mathit{B} \hat{z}$, we choose the vector potential $\mathbf{A}= -\mathit{\alpha} \mathit{B} \mathit{y}\hat{x} + (1-\mathit{\alpha}) \mathit{B} \mathit{x}\hat{y}$, with $\mathit{\alpha}$ parameterizing an arbitrary gauge choice. The DC Hall coefficient $\mathit{R_\mathrm{H}}$ \cite{assa,assa2} is expressed as \begin{align*} &\mathit{R_\mathrm{H}}^{(0)} = -\operatorname{Im}\frac{\mathit{e}^2\mathit{t}^2/\mathit{V}}{ (\int^{\mathit{\beta}}_0 d\mathit{\tau} \langle \mathit{j_x(\tau)j_x\rangle/V})^2 }\int^{\mathit{\beta}}_0 d\mathit{\tau}\big[-(1-\mathit{\alpha})\times\\ & \langle \mathit{j_y(\tau)} \sum_{\mathit{k},\mathit{\sigma}}(\mathit{c}_{\mathit{k}+\mathit{\delta\hat{x}},\mathit{\sigma}}^\dagger \mathit{c}_{\mathit{k}+\mathit{\delta\hat{y}},\mathit{\sigma}}+\mathit{c}_{\mathit{k},\mathit{\sigma}}^\dagger \mathit{c}_{\mathit{k}+\mathit{\delta\hat{x}}+\mathit{\delta\hat{y}},\mathit{\sigma}} - \mathrm{h.c.})\rangle \\ &+\mathit{\alpha} \langle \mathit{j_x(\tau)\sum_{k,\sigma}(c_{k+\delta\hat{x}+\delta\hat{y},\sigma}^\dagger c_{k,\sigma}+c_{k+\delta\hat{x},\sigma}^\dagger c_{k+\delta\hat{y},\sigma}} - \mathrm{h.c.})\rangle\big]\stepcounter{equation}\tag{\theequation}\label{eq:rh} \end{align*} where $\mathit{j_x}$ and $\mathit{j_y}$ are the current operators along the $\mathit{x}$ and $\mathit{y}$ directions. For example, $\mathit{j_x} =-\mathrm{i}\mathit{e}t \sum_{\mathit{k},\mathit{\sigma}}(\mathit{c_{k+\delta\hat{x},\sigma}^\dagger c_{k,\sigma}} - \mathrm{h.c.})$. By $C_4$ rotational symmetry, we notice that the magnitude of the term after $1-\mathit{\alpha}$ is equal to that of the term after $\mathit{\alpha}$, leaving the expression independent of $\mathit{\alpha}$ and gauge invariant. We use DQMC to calculate the susceptibilities in Eq.~\eqref{eq:rh} to obtain $\mathit{R_\mathrm{H}}^{(0)}$ (shown in Figs. \ref{fig:hall} and \ref{fig:dop}). We measure both unequal-time correlators in Eq.~\eqref{eq:rh} and combine them by selecting $\alpha=0.5$, as in Refs. \cite{assa,assa2}. Due to the fermion sign problem, a large number of measurements is required to cope with the small average sign, which limits the temperatures we can access. Nevertheless, we can reliably access temperatures below the spin exchange energy $\mathit{J}=4\mathit{t}^2/\mathit{U}$ for all doping levels. Finite-size effects are minimal in our results (Supplementary Fig. 1). Limitations of our method for evaluating $\mathit{R_\mathrm{H}}$ include: (1) the fermion sign problem, which constrains our ability to access lower temperatures; (2) the correction terms of the effective expansion, which involve a proliferation of Wick contractions and are not implemented given current computational capacity; and (3) next-nearest-neighbor hopping, which has not been taken into account. \subsection*{Single-Particle Properties} The spectral function $\mathit{A}(\mathbf{k},\mathit{\omega})$ at all frequencies can be computed by adopting standard maximum entropy analytic continuation \cite{ana1,ana2}.
Starting from the imaginary time Green's function data $\mathit{G}(\mathbf{k},\mathit{\tau}) = \langle \mathit{c}(\mathbf{k},\mathit{\tau})\mathit{c}^\dagger(\mathbf{k},0)\rangle$, we invert the relation \begin{equation} \mathit{G}(\mathbf{k},\mathit{\tau})=\int^{\infty}_{-\infty}d\mathit{\omega} \frac{ \mathrm{e}^{-\mathit{\tau} \mathit{\omega}}}{1 + \mathrm{e}^{-\mathit{\beta\omega}}} \mathit{A}(\mathbf{k},\mathit{\omega}). \end{equation} We also calculate a proxy for $\mathit{A}(\mathbf{k},\mathit{\omega}=0)$, showing the position of the Fermi surface without the need for analytic continuation. $\mathit{A}(\mathbf{k},\mathit{\omega}=0)$ can be approximated directly by $\mathit{G}(\mathbf{k},\mathit{\tau} = \mathit{\beta} /2)\mathit{\beta}$ (Fig.~\ref{fig:green}), since the kernel at $\mathit{\tau}=\mathit{\beta}/2$ places the largest weight on $\mathit{A}(\mathbf{k},\mathit{\omega}) = -\dfrac{1}{\pi}\operatorname{Im}\mathit{G}(\mathbf{k},\mathit{\omega})$ near $\mathit{\omega} = 0$. We see this from the relation \begin{align*} \mathit{G}(\mathbf{k},\mathit{\tau}=\mathit{\beta}/2) &= \langle \mathit{c}_{\mathbf{k}}(\mathit{\tau}=\mathit{\beta}/2)\mathit{c}_{\mathbf{k}}^\dagger \rangle =- \int \frac{d\mathit{\omega}}{\pi} \frac{1}{2\cosh(\mathit{\beta\omega}/2)} \operatorname{Im} \mathit{G}(\mathbf{k},\mathit{\omega}). \end{align*} \subsection*{Hall Angle and Mass} The Hall angle $\mathit{\theta_\mathrm{H}}$ is defined by $\cot\mathit{\theta_\mathrm{H}} = \mathit{\sigma}_{\mathit{xx}}/\mathit{\sigma_{xy}}$. Thus, from $\mathit{R_\mathrm{H}}\big|_{\mathit{B}=0} = \mathit{\sigma_{xy}}/(\mathit{\sigma}^{2}_{\mathit{xx}} \mathit{B})\big|_{\mathit{B}=0}$ and the DC optical conductivity $\mathit{\sigma_{xx}}$, we can evaluate the Hall angle with \begin{equation} \cot(\mathit{\theta_\mathrm{H}})\mathit{B}\big|_{\mathit{B}=0} = \frac{1}{\mathit{R_\mathrm{H}} \mathit{\sigma_{xx}}}\bigg|_{\mathit{B}=0}. \end{equation} Under the assumption of a single quasiparticle Fermi pocket, we can use the Drude theory of metals to write $\mathit{R_\mathrm{H}} = 1/(\mathit{n}^{*}\mathit{e})$ and $\mathit{\sigma_{xx}} = \mathit{n}^{*}\mathit{e}\mathit{\mu}$, where $\mathit{\mu}$ is the effective mobility with the convention that $n^*$ is negative for electrons and positive for holes, so that the mobility is simply \begin{equation} \mathit{\mu} = \mathit{\sigma_{xx}} \times \mathit{R_\mathrm{H}} \end{equation} which itself is related to the Hall angle by $\cot(\mathit{\theta_\mathrm{H}})\mathit{B}\big|_{\mathit{B}=0} = 1/\mathit{\mu}$. The optical conductivity $\mathit{\sigma_{xx}(\omega)}$ of the Hubbard model has already been investigated with DQMC and maximum entropy analytic continuation \cite{huang}, whose methods we adapt here. With the relaxation time $\mathit{\tau}$ obtained from the inverse width of the Drude peak of $\mathit{\sigma_{xx}(\omega)}$, the effective mass of carriers (Figs.~\ref{fig:mm}\textbf{e-h}) can be evaluated under Drude theory using $\mathit{\sigma_{xx}} = -\dfrac{\mathit{n}^{*}\mathit{e}^2\mathit{\tau}}{\mathit{m}}$. Thus we have the expression \begin{equation} \mathit{m} =- \frac{\mathit{\tau} \mathit{e}}{\mathit{R_\mathrm{H}} \mathit{\sigma_{xx}}}. \end{equation} There are different ways to determine the relaxation time $\mathit{\tau}$ (or frequency $\mathit{\omega_\tau}$) from $\mathit{\sigma_{xx}(\omega)}$. Here we choose the frequency $\mathit{\omega_\tau}$ where $\mathit{\sigma_{xx}(\omega_\tau)} = \mathit{\sigma_{xx}(\omega=\mathrm{0})}/2$.
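To make the extraction concrete, the following minimal Python sketch reproduces the pipeline on synthetic input; the Drude form, frequency grid and parameter values are illustrative placeholders of our choosing, not simulation data.
\begin{verbatim}
import numpy as np

# Illustrative sketch of the carrier-mass extraction described above.
# All numbers are synthetic placeholders, not DQMC data.
e = 1.0                     # carrier charge (natural units)
R_H = -0.5                  # assumed Hall coefficient, R_H = 1/(n* e)
omega = np.linspace(0.0, 10.0, 2001)
tau_true, sigma0 = 2.0, 1.5
sigma_xx = sigma0 / (1.0 + (omega * tau_true) ** 2)  # Drude-like form

# Relaxation rate from the half-width of the Drude peak:
# omega_tau is where sigma_xx drops to half its DC value.
idx = np.argmin(np.abs(sigma_xx - 0.5 * sigma_xx[0]))
omega_tau = omega[idx]
tau = 1.0 / omega_tau

mu = sigma_xx[0] * R_H                 # mobility, mu = sigma_xx * R_H
m = -tau * e / (R_H * sigma_xx[0])     # mass, m = -tau e / (R_H sigma_xx)
cot_theta_B = 1.0 / (R_H * sigma_xx[0])  # cot(theta_H) * B at B = 0

print(f"omega_tau = {omega_tau:.3f}, tau = {tau:.3f} (true {tau_true})")
print(f"mobility mu = {mu:.3f}, effective mass m = {m:.3f}")
\end{verbatim}
The same steps apply to the measured $\mathit{\sigma_{xx}(\omega)}$, with the half-height criterion replaced by the local-minimum or fitting variants discussed below.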
A special point in Fig.~\ref{fig:mm}\textbf{g} is $\mathit{U/t}=8, \mathit{n}=0.95, \mathit{T/t}=1$. For these parameters, $\sigma_{xx}(\omega)$ has a significant high-frequency peak centered around $\mathit{ \omega \sim U}$, so the Drude peak does not decay to half of its zero frequency value before increasing again \cite{huang}. In this case, we select $\mathit{\omega_\tau}$ as the local minimum of $\mathit{\sigma_{xx}(\omega)}$ between the zero frequency Drude peak and the high-frequency peak at around $\mathit{ \omega \sim U}$, where the ratio at the minimum is $\mathit{\sigma_{xx}(\omega_\tau)}/ \mathit{\sigma_{xx}(\omega=\mathrm{0})} = 0.655$. We can also fit the frequency dependence of $\mathit{\sigma_{xx}(\omega)}$ to a zero frequency Lorentzian and a high-frequency Lorentzian or Gaussian, which yield $1.04\mathit{\tau}_0$ (Lorentzian) and $0.91\mathit{\tau}_0$ (Gaussian), where $\mathit{\tau}_0$ is the value obtained from the local minimum method. Using these different methods changes the effective mass result only slightly and does not affect the features in Figs.~\ref{fig:mm}\textbf{e-h}. \section*{Error analysis} For our Hall coefficient results, we use jackknife resampling to calculate standard errors. Error bars represent $1$ standard error. Error bars for measurements involving $\mathit{\sigma_{xx}(\omega)}$ represent random sampling errors, determined by the bootstrap resampling standard deviation \cite{huang}; these error bars represent $1$ bootstrap standard error. \section*{Data availability} Data supporting this manuscript are stored on the Sherlock cluster at Stanford University and are available from the corresponding author upon request. \section*{Code availability} Source code for the simulations can be found at \url{https://doi.org/10.5281/zenodo.3923215}. \section*{Acknowledgements} We acknowledge helpful discussions with A. Auerbach, I. Khait, D. Scalapino, E. Berg, Y. Schattner, S. Kivelson and X.X. Huang. \emph{Funding:} This work was supported by the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering. EWH was supported by the Gordon and Betty Moore Foundation EPiQS Initiative through the grant GBMF 4305. Computational work was performed on the Sherlock cluster at Stanford University and on resources of the National Energy Research Scientific Computing Center, supported by the U.S. DOE under Contract no. DE-AC02-05CH11231. \section*{Author contributions} WOW performed numerical simulations and analyzed data. EWH and TPD conceived the project. All authors assisted in data interpretation and contributed to writing the manuscript. \section*{Competing interests} The authors declare no competing interest.
\section{Introduction} Let $G$ be a Lie group. In the theory of simplicial manifolds, there is a well-known simplicial manifold $NG$ called the nerve of $G$. The de Rham complex $\Omega^*(NG(*))$ on it is a double complex, and the cohomology of its total complex is isomorphic to $H^*(BG)$. In \cite{Bot}, Bott proved that the cohomology of its horizontal complex $\Omega^q(NG(*))$ is isomorphic to the continuous cohomology $H_c^*(G;S^q{\mathcal G}^*)$ for any fixed $q$. On the other hand, for a subgroup $H$ of $G$ we can construct a bisimplicial manifold $NG(*) \rtimes NH(*)$ and the de Rham complex $\Omega^*(NG(*) \rtimes NH(*))$ on it. This complex is a triple complex, and the cohomology of its total complex is isomorphic to $H^*(B(G \rtimes H))$ \cite{Suz}. In this paper, we show that the cohomology of the total complex of the double complex $\Omega^q(NG(*) \rtimes NH(*))$ is isomorphic to the continuous cohomology $H_c^*(G \rtimes H;S^q{\mathcal G}^* \times S^q{\mathcal H}^*)$ for any fixed $q$. \section{Review of the simplicial de Rham complex} In this section we recall the relation between the simplicial manifold $NG$ and the classifying space $BG$. We also recall the notion of the equivariant version of the simplicial de Rham complex. \subsection{The double complex on a simplicial manifold} For any Lie group $G$, we have simplicial manifolds $NG$, $PG$ and a simplicial $G$-bundle $\gamma : PG \rightarrow NG$ as follows:\\ \par $NG(q) = \overbrace{G \times \cdots \times G }^{q-times} \ni (g_1 , \cdots , g_q ) :$ \\ face operators \enspace ${\varepsilon}_{i} : NG(q) \rightarrow NG(q-1) $ $$ {\varepsilon}_{i}(g_1 , \cdots , g_q )=\begin{cases} (g_2 , \cdots , g_q ) & i=0 \\ (g_1 , \cdots ,g_i g_{i+1} , \cdots , g_q ) & i=1 , \cdots , q-1 \\ (g_1 , \cdots , g_{q-1} ) & i=q \end{cases} $$ \par \medskip $PG (q) = \overbrace{ G \times \cdots \times G }^{q+1 - times} \ni (\bar{g}_1 , \cdots , \bar{g}_{q+1} ) :$ \\ face operators \enspace $ \bar{\varepsilon}_{i} : PG(q) \rightarrow PG(q-1) $ $$ \bar{{\varepsilon}} _{i} (\bar{g}_1 , \cdots , \bar{g}_{q+1} ) = (\bar{g}_1 , \cdots , \bar{g}_{i} , \bar{g}_{i+2}, \cdots , \bar{g}_{q+1}) \qquad i=0,1, \cdots ,q $$ \par \medskip We define $\gamma : PG \rightarrow NG $ as $ \gamma (\bar{g}_1 , \cdots , \bar{g}_{q+1} ) = (\bar{g}_1 {\bar{g}_2}^{-1} , \cdots , \bar{g}_{q} {\bar{g}_{q+1}}^{-1} )$.\\ To any simplicial manifold $ \{ X_* \}$, we can associate a topological space $\parallel X_* \parallel $ called the fat realization, defined as follows: $$ \parallel X_* \parallel \enspace \buildrel \mathrm{def} \over = \coprod _{n} {\Delta}^{n} \times X_n / \enspace ( {\varepsilon}^{i} t , x) \sim ( t , {\varepsilon}_{i} x).$$ Here ${\Delta}^{n}$ is the standard $n$-simplex and ${\varepsilon}^{i}$ is a face map of it. It is well-known that $\parallel \gamma \parallel : \parallel PG \parallel \rightarrow \parallel NG \parallel$ is the universal bundle $EG \rightarrow BG$ (see \cite{Dup2} \cite{Mos} \cite{Seg}, for instance). \\ Now we introduce a double complex on a simplicial manifold. \begin{definition} For any simplicial manifold $ \{ X_* \}$ with face operators $\{ {\varepsilon}_* \}$, we have a double complex ${\Omega}^{p,q} (X) := {\Omega}^{q} (X_p) $ with derivatives as follows: $$ \delta := \sum _{i=0} ^{p+1} (-1)^{i} {\varepsilon}_{i} ^{*} , \qquad d' := (-1)^{p} \times {\rm the \enspace exterior \enspace differential \enspace on \enspace }{ \Omega ^*(X_p) } .$$ \end{definition} $\hspace{31em} \Box $ For $NG$ and $PG$ the following holds.
\begin{theorem}[\cite{Bot2} \cite{Dup2} \cite{Mos}] There exist ring isomorphisms $$ H^*({\Omega}^{*} (NG)) \cong H^{*} (BG ), \qquad H^*({\Omega}^{*} (PG)) \cong H^{*} (EG ). $$ Here ${\Omega}^{*} (NG)$ and ${\Omega}^{*} (PG)$ mean the total complexes. \end{theorem} $\hspace{31em} \Box $ \subsection{Equivariant version} When a Lie group $H$ acts on a manifold $M$, there is the complex of equivariant differential forms ${\Omega}_H ^{*} (M) := ( {\Omega} ^{*} (M) \otimes S\mathcal{H}^*)^H$ with a suitable differential $d_H$ (\cite{Ber} \cite{Car}). Here $\mathcal{H}$ is the Lie algebra of $H$ and $S\mathcal{H}^*$ is the algebra of polynomial functions on $\mathcal{H}$. This is called the Cartan Model. When $M$ is a Lie group $G$, we can define a double complex ${\Omega}^{*} _H (NG(*))$ below in the same way as in Definition 2.1. $$ \begin{CD} {\Omega}^{p}_H (G ) \\ @AA{-d_H}A \\ {\Omega}^{p-1}_H (G )@>{{\varepsilon}_{0} ^{*} - {\varepsilon}_{1} ^{*} +{\varepsilon}_{2} ^{*} }>>{\Omega}^{p-1}_H (NG(2))\\ @.@AA{d_H}A\\ @.{\Omega}^{p-2}_H (NG(2))\\ @.@. \ddots \\ @.@.@.{\Omega}^{1}_H (NG(p)) \\ @.@.@.@AA{(-1)^p d_H }A\\ @.@.@.{\Omega}^{0}_H (NG(p))@>{ \sum _{i=0} ^{p+1} (-1)^{i} {\varepsilon}_{i} ^{*}}>> {\Omega}^{0}_H (NG(p+1)) \end{CD} $$ \section{The cohomology of the horizontal complex} First, we recall the description of the cohomology of groups in terms of resolutions due to Hochschild and Mostow \cite{Ho}. \begin{theorem}[\cite{Ho}] If $G$ is a topological group and $M$ is a topological $G$-module, then the continuous cohomology $H_c(G;M)$ is isomorphic to the cohomology of the invariant complex $$ {\rm Inv}_G M \rightarrow {\rm Inv}_G X_0 \rightarrow {\rm Inv}_GX_1 \rightarrow \cdots$$ for any continuously injective resolution $M \rightarrow X_0 \rightarrow X_1 \rightarrow \cdots$ of $M$. \end{theorem} $\hspace{31em} \Box $ Now we recall the result of Bott in \cite{Bot}, which gives the cohomology of the horizontal complex of $\Omega^*(NG)$. \begin{theorem}[Bott,\cite{Bot}] For any fixed $q$, $$H^{p+q}_{\delta}(\Omega^q(NG)) \cong H^{p}_c(G;S^q{\mathcal{G}^*}).$$ Here $\mathcal{G}$ is the Lie algebra of $G$. \end{theorem} \begin{proof} Let $\Sigma \mathcal{G}^*$ denote the suspension of $\mathcal{G}^*$. Then there exists the following isomorphism: $$\Omega^q(NG(n)) \cong {\rm Inv}_G[ \Omega^0(PG(n)) \times \Lambda^q \Sigma \mathcal{G}^*(n) ].$$ Before we consider the cohomology $H_{\delta}^*({\rm Inv}_G[k\{ \Omega^0(PG) \times \Lambda^q \Sigma \mathcal{G}^* \}])$, we observe the complex $\mathfrak{P}^q_{\delta}G:=k\{\Omega^0(PG(*)) \times \Lambda^q \Sigma \mathcal{G}^*(*)\}$. \begin{lemma} $$H_{\delta}(\Omega^0(PG(n))) \cong \begin{cases} {\mathbb R} & (n=0)\\ 0 & {\rm otherwise} \end{cases},~~~~~ H_{\delta}(\Lambda^q \Sigma \mathcal{G}^*(n)) \cong \begin{cases} S^q{\mathcal G}^* & (n=q)\\ 0 & {\rm otherwise}. \end{cases}$$ Hence $$H_{\delta}^n({\mathfrak P} ^qG)\ \cong \begin{cases} S^q{\mathcal G}^* & (n=q)\\ 0 & {\rm otherwise}. \end{cases}$$ \end{lemma} Since the cochain complex $${\mathfrak P} ^qG:\Omega^0(PG(0))\times \Lambda^q \Sigma \mathcal{G}^*(0) \xrightarrow{{\delta}_0} \Omega^0(PG(1))\times \Lambda^q \Sigma \mathcal{G}^*(1) \xrightarrow{{\delta}_1} \cdots$$ is continuously injective, we obtain the following continuously injective resolution of $S^q{\mathcal G}^*$ from Lemma 3.1.
$$ S^q{\mathcal G}^*(={\rm Ker}{\delta}_q/{\rm Im}{\delta}_{q-1}) \xrightarrow{\delta_q} (\Omega^0(PG(q+1))\times \Lambda^q \Sigma \mathcal{G}^*(q+1))/{\rm Im}{\delta}_{q} $$ $$ \xrightarrow{{\delta}_{q+1}} \Omega^0(PG(q+2))\times \Lambda^q \Sigma \mathcal{G}^*(q+2) \xrightarrow{{\delta}_{q+2}} \cdots~~~~~({\rm exact}).$$ Therefore $H^{p}_c(G;S^q{\mathcal{G}^*})$ is equal to the $p$-th cohomology of the complex below. $$ {\rm Inv}_G S^q {\mathcal{G}^*} \xrightarrow{\delta_q} {\rm Inv}_G [\Omega^0(PG(q+1))\times \Lambda^q \Sigma \mathcal{G}^*(q+1)/{\rm Im}{\delta}_{q}]$$ $$ \xrightarrow{{\delta}_{q+1}} {\rm Inv}_G [\Omega^0(PG(q+2))\times \Lambda^q \Sigma \mathcal{G}^*(q+2)] \xrightarrow{{\delta}_{q+2}}\cdots$$ So we obtain the following isomorphism. $$ H^{p}_c(G;S^q{\mathcal{G}^*}) \cong H^{p+q}_{\delta}({\rm Inv}_G [k \{ \Omega^0(PG)\times \Lambda^q \Sigma \mathcal{G}^* \}]).$$ \end{proof} \begin{corollary}[Bott,\cite{Bot}] If $G$ is compact, $$H^{p}_{\delta}(\Omega^q(NG)) \cong \begin{cases} {\rm Inv}_GS^q\mathcal{G}^* & (p=q)\\ 0 & {\rm otherwise.} \end{cases}$$ \end{corollary} $\hspace{31em} \Box $ \bigskip \section{The triple complex on a bisimplicial manifold} In this section we construct a triple complex on a bisimplicial manifold.\\ A bisimplicial manifold is a sequence of manifolds with horizontal and vertical face and degeneracy operators which commute with each other. A bisimplicial map is a sequence of maps commuting with horizontal and vertical face and degeneracy operators. For a subgroup $H$ of $G$, we define a bisimplicial manifold $NG(*) \rtimes NH(*)$ as follows: \par $$NG(p) \rtimes NH(q) := \overbrace{G \times \cdots \times G }^{p-times} \times \overbrace{H \times \cdots \times H }^{q-times}. $$ Horizontal face operators \enspace ${\varepsilon}_{i}^{G} : NG(p) \rtimes NH(q) \rightarrow NG(p-1) \rtimes NH(q) $ are the same as the face operators of $NG(p)$. Vertical face operators \enspace ${\varepsilon}_{i}^{H} : NG(p) \rtimes NH(q) \rightarrow NG(p) \rtimes NH(q-1) $ are $$ {\varepsilon}_{i}^{H}(\vec{g}, h_1 , \cdots , h_q )=\begin{cases} (\vec{g}, h_2 , \cdots , h_q ) & i=0 \\ (\vec{g}, h_1 , \cdots ,h_i h_{i+1} , \cdots , h_q ) & i=1 , \cdots , q-1 \\ (h_{q}\vec{g}h_{q} ^{-1}, h_1 , \cdots , h_{q-1} ) & i=q. \end{cases} $$ Here $\vec{g}=(g_1, \cdots , g_p)$. We define a bisimplicial map $\gamma_{\rtimes} : P{G}(p) \times P{H}(q) \rightarrow NG(p) \rtimes NH(q) $ as $ \gamma_{\rtimes} (\vec{\bar{g}}, \bar{h}_1, \cdots ,\bar{h}_{q+1} ) = (\bar{h}_{q+1}\gamma (\vec{\bar{g}}) \bar{h}^{-1} _{q+1} , \gamma (\bar{h}_1, \cdots, \bar{h}_{q+1}))$. Now we fix a semi-direct product operator $\cdot_{\rtimes}$ of $G \rtimes H$ as $(g, h) \cdot_{\rtimes} (g', h') := (ghg'h^{-1} , hh')$; then $G \rtimes H$ acts on $ PG(p) \times P{H}(q)$ on the right as $(\vec{\bar{g}},\vec{\bar{h}})\cdot(g,h) = (h^{-1}\vec{\bar{g}}gh, \vec{\bar{h}}h)$, and $\parallel \gamma_{\rtimes} \parallel$ is a model of $E(G \rtimes H) \rightarrow B(G \rtimes H)$.
\begin{definition} For a bisimplicial manifold $NG(*) \rtimes NH(*)$, we have a triple complex as follows: $${\Omega}^{p,q,r} (NG(*) \rtimes NH(*)) := {\Omega}^{r} (NG(p) \rtimes NH(q)) $$ Derivatives are: $$ \delta_G := \sum _{i=0} ^{p+1} (-1)^{i} ({{\varepsilon}^G _{i}}) ^{*} , \qquad \delta_H := \sum _{i=0} ^{q+1} (-1)^{i} ({{\varepsilon}^H _{i}}) ^{*} \times (-1)^{p} $$ $$ d' := (-1)^{p+q} \times {\rm the \enspace exterior \enspace differential \enspace on \enspace }{ \Omega ^*(NG(p) \rtimes NH(q)) }.$$ \end{definition} $\hspace{31em} \Box $ \begin{theorem}[\cite{Suz}] If $H$ is compact, there exist isomorphisms $$ H({\Omega}_H ^{*} (NG)) \cong H({\Omega}^{*} (NG \rtimes NH)) \cong H^{*} (B(G \rtimes H)).$$ Here ${\Omega}_H ^{*} (NG)$ means the total complex in subsection 2.2 and ${\Omega}^{*} (NG \rtimes NH)$ means the total complex of the triple complex.$\hspace{12em} \Box $ \end{theorem} \section{Main theorem} \begin{theorem} For any fixed $q$, $$H^{p+q}_{\delta}(\Omega^q(NG \rtimes NH)) \cong H^{p}_c(G \rtimes H ;S^q{\mathcal{G}}^* \times S^q{\mathcal{H}}^*).$$ Here $\delta:=\delta_G+\delta_H$. \end{theorem} \begin{proof} We identify $\Omega^q(NG(n) \rtimes NH(m))$ with ${\rm Inv}_{G \rtimes H} [ \Omega^0(PG(n)) \times \Lambda^q \Sigma \mathcal{G}^*(n) \times \Omega^0(PH(m)) \times \Lambda^q \Sigma \mathcal{H}^*(m) ]$. Before we deal with the cohomology $H_{\delta}^*({\rm Inv}_{G \rtimes H}[k\{ \Omega^0(PG) \times \Lambda^q \Sigma \mathcal{G}^* \times \Omega^0(PH) \times \Lambda^q \Sigma \mathcal{H}^* \}])$, we observe the total complex of the double complex $$\mathfrak{P}^q_{\delta_G}G \times \mathfrak{P}^q_{\delta_H}H=k\{\Omega^0(PG(*)) \times \Lambda^q \Sigma \mathcal{G}^*(*)\} \times k\{\Omega^0(PH(*)) \times \Lambda^q \Sigma \mathcal{H}^*(*)\}.$$ From Lemma 3.1, we obtain: $$H_{\delta}^n({\mathfrak P} ^qG \times \mathfrak{P}^q H)\ \cong \begin{cases} S^q{\mathcal G}^* \times S^q{\mathcal H}^*& (n=q)\\ 0 & {\rm otherwise}. \end{cases}$$\\ Since the total complex $$k_{\delta}({\mathfrak P} ^qG \times \mathfrak{P}^q H)(0) \xrightarrow{{\delta}_0} k_{\delta}({\mathfrak P} ^qG \times \mathfrak{P}^q H)(1) \xrightarrow{{\delta}_1} \cdots$$ is continuously injective, we obtain the following continuously injective resolution of $S^q{\mathcal G}^* \times S^q{\mathcal H}^*$. $$ S^q{\mathcal G}^* \times S^q{\mathcal H}^* (={\rm Ker}{\delta}_q/{\rm Im}{\delta}_{q-1})\xrightarrow{{\delta}_{q}} k_{\delta}({\mathfrak P} ^qG \times \mathfrak{P}^q H)(q+1)/{\rm Im}{\delta}_{q}$$ $$\xrightarrow{{\delta}_{q+1}} k_{\delta}({\mathfrak P} ^qG \times \mathfrak{P}^q H)(q+2)\xrightarrow{{\delta}_{q+2}} \cdots ({\rm exact}).$$ Therefore $H^{p}_c(G \rtimes H;S^q{\mathcal{G}}^* \times S^q{\mathcal{H}}^*)$ is equal to the $p$-th cohomology of the complex below. $$ {\rm Inv}_{G \rtimes H} (S^q{\mathcal{G}}^* \times S^q{\mathcal{H}}^*) \xrightarrow{{\delta}_{q}} {\rm Inv}_{G \rtimes H} [k_{\delta}({\mathfrak P} ^qG \times \mathfrak{P}^q H)(q+1)/{\rm Im}{\delta}_{q}]$$ $$ \xrightarrow{{\delta}_{q+1}} {\rm Inv}_{G \rtimes H} [k_{\delta}({\mathfrak P} ^qG \times \mathfrak{P}^q H)(q+2)] \xrightarrow{{\delta}_{q+2}} \cdots$$ So we obtain the following isomorphisms.
$$ H^{p}_c(G \rtimes H ;S^q{\mathcal{G}}^* \times S^q{\mathcal{H}}^*) \cong H^{p+q}_{\delta}({\rm Inv}_{ G \rtimes H} [k_{\delta}({\mathfrak P} ^qG \times \mathfrak{P}^q H)]) $$ $$\cong H^{p+q}_{\delta}(\Omega^q(NG \rtimes NH)).$$ \end{proof} \begin{corollary} If $G$ is compact, $$H^{p}_{\delta}(\Omega^q(NG \rtimes NG)) \cong \begin{cases} {\rm Inv}_{G \rtimes G}(S^q\mathcal{G}^* \times S^q\mathcal{G}^*) & (p=q)\\ 0 & {\rm otherwise.} \end{cases}$$ \end{corollary} $\hspace{31em} \Box $
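As a simple illustration (our own sanity check, reading the product $S^q\mathcal{G}^* \times S^q\mathcal{G}^*$ as a direct sum of vector spaces), take $G = U(1)$. Then $\mathcal{G} \cong \mathbb{R}$, the adjoint and conjugation actions are trivial, and $S^q\mathcal{G}^* \cong \mathbb{R}$ for every $q$, so the corollary gives $$H^{p}_{\delta}(\Omega^q(NU(1) \rtimes NU(1))) \cong \begin{cases} \mathbb{R} \oplus \mathbb{R} & (p=q)\\ 0 & {\rm otherwise,} \end{cases}$$ one copy of $\mathbb{R}$ for each factor. In the same way, Corollary 3.3 yields ${\rm Inv}_{U(1)}S^q\mathcal{G}^* \cong \mathbb{R}$ in each bidegree $(q,q)$, which is consistent with $H^*(BU(1);\mathbb{R}) \cong \mathbb{R}[c]$, $\deg c = 2$.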
\section{Introduction} With the CKM prediction of large {\bf CP}~asymmetries in $B$ decays \cite{CS,BS80} like $B_d \to \psi K_S$ confirmed \cite{BELLEBABAR}, the main task of $B$ factories of all stripes is to look for `New Physics', i.e. dynamics beyond the Standard Model (SM). This goal is being pursued by studying $B$ decays of ever greater rarity. We want to stress that $\tau$ decays likewise deserve extensive efforts, for at least three reasons: \begin{itemize} \item No {\bf CP}~violation has been observed yet in leptodynamics. Finding it there would represent a qualitative step forward. \item There are intriguing scenarios where baryogenesis is driven by leptogenesis as the primary effect \cite{LEPTO}. {\bf CP}~violation is then required in leptodynamics. This is the main justification for undertaking Herculean efforts to find {\bf CP}~violation in neutrino oscillations. Searching for {\bf CP}~asymmetries in $\tau$ decays provides another of the few meaningful avenues towards that goal. \item Like for $B$ mesons, studies of $\tau$ decays very likely provide different and presumably complementary perspectives on the anticipated New Physics connected with the electroweak phase transition. \end{itemize} An optimal environment for studying $\tau$ decays is provided by $e^+e^- \to \tau ^+\tau ^-$. It offers a high rate relative to all other final states in a `clean' and well-understood environment that allows searching for SM-forbidden modes like $\tau ^{\pm} \to l^{\pm}\gamma$, $\mu^{\pm}l^+l^-$ with $l =e,\mu$. Maybe even more important is another unique opportunity such $\tau$ factories offer, whether they are of the $\tau$-charm, $B$ factory or Giga-Z variety: they enable searches for novel {\bf CP}~asymmetries. Since the $\tau$ pair is produced with its spins aligned, one can use the decay of one $\tau$ to `tag' the spin of the other $\tau$ and thus probe for spin-dependent {\bf CP}~asymmetries {\em without} needing polarized beams. In this short note we want to point out that, contrary to a widespread perception, known dynamics generate {\bf CP}~asymmetries in $\tau$ decays: the well-measured {\bf CP}~asymmetry in $K_L \to \pi^{\mp}l^{\pm}\nu$ produces a difference in $\Gamma (\tau^+ \to \overline \nu K_L\pi^+)$ vs. $\Gamma (\tau^- \to \nu K_L\pi^-)$, where $K_L$ is defined as the neutral kaon decaying on a time scale $\sim {\cal O}(\Gamma_L^{-1})$, and -- assuming {\bf CPT}~symmetry -- the same asymmetry also in $\Gamma (\tau^+ \to \overline \nu K_S\pi^+)$ vs. $\Gamma (\tau^- \to \nu K_S\pi^-)$ with the $K_S$ defined as the neutral kaon decaying on the much shorter time scale $\sim {\cal O}(\Gamma_S^{-1})$. We explain how the apparent conflict with {\bf CPT}~invariance, which enforces equal $\tau ^+$ and $\tau ^-$ lifetimes, is resolved. Such {\bf CP}~asymmetries, which of course are absent in $\Gamma (\tau^+ \to \overline \nu K^+\pi^0)$ vs. $\Gamma (\tau^- \to \nu K^-\pi^0)$, have to be taken into account; they also provide a powerful calibration when searching for manifestations of New Physics through {\bf CP}~studies. \section{SM {\bf CP}~violation in $\tau$ decays} The SM predicts for the transition amplitudes \begin{equation} T(\tau^-\to \overline K^0 \pi^- \nu )=T(\tau^+\to K^0 \pi^+ \overline \nu) \; , \label{K0K0BAR} \end{equation} since there is no weak phase and the strong phase has to be the same. Yet the observed kaons are the mass and not the flavour eigenstates, i.e., $K_S$ and $K_L$ rather than $K^0$ and $\overline K^0$.
Ignoring {\bf CP}~violation in $\Delta S\neq 0$ dynamics, one has $\langle K_L|K_S\rangle =0$, and the $K_L$ and $K_S$ are unambiguously distinguished by their decay modes in addition to their vastly different lifetimes: $K_S \to 2 \pi$, $K_L\not\to 2\pi$, $K_L\to 3\pi$. Then one has \begin{eqnarray} \Gamma (\tau ^- \to \nu K_S \pi^-) &=& \Gamma (\tau ^- \to \nu K_L \pi^-) = \frac{1}{2}\Gamma (\tau ^- \to \nu \overline K^0 \pi^-) \\ \Gamma (\tau ^+ \to \overline \nu K_S \pi^+) &=& \Gamma (\tau ^+\to \overline \nu K_L \pi^+) = \frac{1}{2}\Gamma (\tau ^+ \to \overline \nu K^0 \pi^+) \end{eqnarray} and thus no {\bf CP}~asymmetry due to Eq.(\ref{K0K0BAR}). The situation becomes considerably more complex and intriguing once {\bf CP}~violation in $\Delta S=2$ transitions is included. (We can safely ignore {\em direct} {\bf CP}~violation for our purposes here.) Imposing {\bf CPT}~invariance we can write \begin{eqnarray} |K_S\rangle&=&p|K^0\rangle +q|\overline K^0\rangle \nonumber \\ |K_L\rangle &=&p|K^0\rangle -q|\overline K^0\rangle \label{KLKSDEFCPT} \end{eqnarray} with $|p|^2 + |q|^2 = 1$. We then have \begin{equation} \langle K_L|K_S\rangle = |p|^2 - |q|^2 \simeq 2 {\rm Re}\epsilon_K \simeq (3.27 \pm 0.12)\times 10^{-3} \neq 0 \end{equation} as deduced from \begin{equation} \frac{\Gamma(K_L\to \pi^- l^+\nu)-\Gamma(K_L\to \pi^+ l^- \overline \nu)} {\Gamma(K_L\to \pi^- l^+\nu)+\Gamma(K_L\to \pi^+ l^- \overline \nu)} = |p|^2 - |q|^2 \end{equation} The mass eigenstates thus are no longer orthogonal to each other, and both $K_S \to 2\pi$ and $K_L\to 2\pi$ can occur. I.e., the $2\pi$ final state by itself no longer distinguishes strictly between $K_S$ and $K_L$. Yet the difference in lifetimes still provides a discriminator: consider the decay rate evolution for $\tau \to \nu \pi [\pi^+\pi^-]_K$ as a function of $t_K$, the (proper) time of the kaon decay. For {\em short} decay times -- $t_K \sim {\cal O}(\Gamma_S^{-1})$ -- we have for all practical purposes only $K_S \to 2\pi$ decays and find \begin{equation} \frac{\Gamma(\tau^+\to [\pi^+\pi^-]_{"K_S"} \pi^+ \overline \nu)- \Gamma(\tau^-\to [\pi^+\pi^-]_{"K_S"} \pi^- \nu)} {\Gamma(\tau^+\to [\pi^+\pi^-]_{"K_S"}\pi^+ \overline \nu)+ \Gamma(\tau^-\to [\pi^+\pi^-]_{"K_S"} \pi^- \nu)}= |p|^2-|q|^2\simeq (3.27 \pm 0.12)\times 10^{-3} \; . \label{KS} \end{equation} For {\em long} decay times -- $t_K \sim {\cal O}(\Gamma_L^{-1})$ -- we have again for all practical purposes only $K_L \to 2\pi$ and find \begin{equation} \frac{\Gamma(\tau^+\to [\pi^+\pi^-]_{"K_L"} \pi^+ \overline \nu)- \Gamma(\tau^-\to [\pi^+\pi^-]_{"K_L"} \pi^- \nu)} {\Gamma(\tau^+\to [\pi^+\pi^-]_{"K_L"}\pi^+ \overline \nu)+ \Gamma(\tau^-\to [\pi^+\pi^-]_{"K_L"} \pi^- \nu)} = |p|^2 - |q|^2 \label{KL} \end{equation} Strictly speaking it does not even matter which forces generate $|q| \neq |p|$, whether it is due to SM dynamics or not, as long as $\tau$ decays {\em themselves} are described by the SM. Measuring the asymmetry of Eq.(\ref{KL}) seems hardly feasible, since the $K_L$ acts basically like a second neutrino; yet it raises an intriguing question: with the asymmetries in Eqs.(\ref{KS},\ref{KL}) having the same sign, how is the equality of the $\tau^+$ and $\tau^-$ lifetimes restored, as required by {\bf CPT}~invariance, which we have explicitly invoked? To answer this question, let us recall how we might actually measure the asymmetries of Eqs.(\ref{KS},\ref{KL}).
These asymmetries are obtained by studying the elapsed time between the $\tau$ decay and the time at which $"\pi\pi"$ is formed. The first asymmetry is obtained by looking at events with a short time difference, whereas the second asymmetry is obtained by looking at decays with a large time difference. The {\bf CPT}~constraint applies only to the total decay rate, where we include events at all times of decay. Because $\langle K_L|K_S\rangle \neq 0$, the decay rate evolution for $\tau \to \nu \pi [f]_K$, where $f$ is an arbitrary final state, now contains three terms: in addition to the two contributions listed above with a time dependence $\propto e^{-\Gamma_St_K}$ and $\propto e^{-\Gamma_Lt_K}$, respectively, we have an interference term $e^{-\frac{1}{2}(\Gamma_S + \Gamma_L)t_K}$ most relevant for intermediate times $\Gamma_S^{-1} \ll t_K \ll \Gamma_L^{-1}$. \footnote{It was this interference term in $K^0 (t) \to \pi^+\pi^-$ and $\overline K^0(t) \to \pi^+\pi^-$, which established originally that the Fitch-Cronin observation of $K_L \to \pi^+\pi^-$ could not be reconciled with {\bf CP}~symmetry by suggesting that they had actually observed $K_L \to \pi^+\pi^-U$ with $U$ denoting a hitherto unknown neutral particle with odd {\bf CP}~parity that had escaped detection.} Note that because of the interference term, observing only the $\pi\pi$ final state does not allow us to understand the {\bf CPT}~constraint. Measuring all three terms for all $f$ and integrating over all $t_K$ -- possible in principle, though maybe not in practice -- recovers the full information on the production of $\overline K^0$ and $K^0$ with the relation of Eq.(\ref{K0K0BAR}). To be more explicit: one has to track the full decay rate evolution into a general state $f$ when the initial state was a $K^0$ -- $\Gamma (K^0(t_K)\to f)$ -- versus a $\overline K^0$ -- $\Gamma (\overline K^0(t_K)\to \overline f)$. The most general expression reads \begin{eqnarray} \nonumber \Gamma (K^0(t_K)\to f) =&& \frac{1}{2|p|^2} \left[|T(K_S \to f)|^2 e^{-\Gamma _St_K} + |T(K_L \to f)|^2 e^{-\Gamma _Lt_K} \right. \\ &+& \left.2e^{-\frac{1}{2}(\Gamma _S + \Gamma _L)t_K} {\rm Re}(e^{i\Delta M_Kt_K}T(K_S \to f)T(K_L \to f)^*) \right] \label{GENEXP1} \\ \nonumber \Gamma (\overline K^0(t_K)\to \overline f) =&& \frac{1}{2|q|^2} \left[|T(K_S \to \overline f)|^2 e^{-\Gamma _St_K} + |T(K_L \to \overline f)|^2 e^{-\Gamma _Lt_K} \right. \\ &-& \left.2e^{-\frac{1}{2}(\Gamma _S + \Gamma _L)t_K} {\rm Re}(e^{i\Delta M_Kt_K}T(K_S \to \overline f)T(K_L \to \overline f)^*) \right] \label{GENEXP2} \end{eqnarray} For short times of decay the first term, which describes $K_S$ decays, dominates and Eq.(\ref{KS}) applies; for very long times the second term dominates, producing the same {\bf CP}~asymmetry as stated in Eq.(\ref{KL}). Yet for the intermediate range in times of decay the third term, reflecting $K_S-K_L$ interference, becomes important.
By rewriting the {\em interference} term in terms of $K^0$ and $\overline K^0$, integrating over $t_K$ and finally summing over all possible final states $f$ and $\overline f$, we have \begin{eqnarray} \nonumber &&\frac{1}{|p|^2}\sum_f\int dt_Ke^{-\frac{1}{2}(\Gamma _S + \Gamma _L)t_K} [e^{i\Delta M_Kt_K} T(K_S \to f)T(K_L \to f)^*]\\ \nonumber &=& \frac{1}{\frac{\Gamma_L+\Gamma_S}{2}-i\Delta M} [2(|p|^2-|q|^2)\Gamma_{11}+i4 {\rm Im}(qp^*\Gamma_{12})]\\ &=& 2(|p|^2-|q|^2)+\frac{2i}{\frac{\Gamma_L+\Gamma_S}{2}-i\Delta M}[2\Delta M~{\rm Re}\epsilon -\Delta\Gamma~{\rm Im}\epsilon] \label{int1} \end{eqnarray} where we have used relations valid for this problem to first order in the {\bf CP}~violating parameters: $\Gamma_{11}=\frac{\Gamma_L+\Gamma_S}{2}$, $p=\frac{1}{\sqrt{2}}(1+\epsilon)$, $q=\frac{1}{\sqrt{2}}(1-\epsilon)$, $\Delta \Gamma=2\Gamma_{12}$. Finally, using ${\rm arg}~\epsilon={\rm arctan}\left(\frac{2\Delta M}{\Delta\Gamma}\right)$ we find that the square bracket in the last line of Eq.(\ref{int1}) vanishes; i.e. \begin{equation} \frac{1}{|p|^2}\sum_f\int dt_Ke^{-\frac{1}{2}(\Gamma _S + \Gamma _L)t_K} [e^{i\Delta M_Kt_K} T(K_S \to f)T(K_L \to f)^*]=2(|p|^2-|q|^2) \label{int2} \end{equation} Using Eq.(\ref{int2}) it is simple to show that the interference term indeed restores the constraints from {\bf CPT}~symmetry: \begin{equation} \sum_f\int dt_K \Gamma(\tau^+\to [f]_{"K^0(t_K)"} \pi^+ \overline \nu) =\sum_{\overline f}\int dt_K \Gamma(\tau^-\to [\overline f]_{"\overline K^0(t_K)"} \pi^- \nu). \end{equation} While this is as it must be, it is still instructive to see how it comes about. In talking about the time of decay $t_K$ we were referring to the proper time of the neutral kaon decay. In a real experiment one has two times of decay, namely that of the $\tau$ lepton and that of the kaon. The explicit formulae have been given for the even more involved case of $D^0 \to K_SX$, allowing even for $D^0 - \overline D^0$ oscillations to take place \cite{AZI98}. Yet for the experimentally most accessible case involving $K_S$ decays at short values of $t_K$ there is no practical need for the full machinery.
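As a quick numerical illustration of the cancellation behind Eq.(\ref{int2}), the following minimal Python sketch checks that the square bracket $2\Delta M~{\rm Re}\epsilon -\Delta\Gamma~{\rm Im}\epsilon$ vanishes once ${\rm arg}~\epsilon={\rm arctan}(2\Delta M/\Delta\Gamma)$; the kaon parameters are PDG-like values inserted by us, and the convention $\Delta\Gamma = \Gamma_S - \Gamma_L$ is our assumption.
\begin{verbatim}
import numpy as np

# Check that 2*DeltaM*Re(eps) - DeltaGamma*Im(eps) in Eq. (int1)
# vanishes for arg(eps) = arctan(2*DeltaM/DeltaGamma).
# PDG-like kaon parameters, inserted here as assumptions:
Gamma_S = 1.0 / 0.8954e-10        # 1/tau_S  [s^-1]
Gamma_L = 1.0 / 5.116e-8          # 1/tau_L  [s^-1]
DeltaM  = 0.5293e10               # K_L - K_S mass difference [hbar s^-1]
DeltaGamma = Gamma_S - Gamma_L    # sign convention assumed here

phi = np.arctan(2.0 * DeltaM / DeltaGamma)   # superweak phase, ~43.5 deg
print(f"arg(eps) = {np.degrees(phi):.2f} deg")

abs_eps = 2.23e-3                 # |eps_K|, PDG-like
eps = abs_eps * np.exp(1j * phi)
bracket = 2.0 * DeltaM * eps.real - DeltaGamma * eps.imag
print(f"bracket / (2*DeltaM*|eps|) = {bracket / (2*DeltaM*abs_eps):.2e}")
# -> vanishes up to rounding, so the integrated interference term
#    equals 2(|p|^2 - |q|^2) and the CPT sum rule is respected.
\end{verbatim}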
\section{{\bf CPT}~violation} While we are not suggesting that {\bf CPT}~violation is likely to surface in $\tau$ decays, it is not inappropriate to address this issue. It has been searched for extensively in semileptonic $K$ decays; thus it is convenient to employ the notation used there. Without imposing {\bf CPT}~invariance, Eq.(\ref{KLKSDEFCPT}) is generalized to \begin{eqnarray} |K_S\rangle&=&p_S|K^0\rangle +q_S|\overline K^0\rangle \nonumber \\ |K_L\rangle &=&p_L|K^0\rangle -q_L|\overline K^0\rangle \label{KLKSDEFNOCPT} \end{eqnarray} with \cite{bs} \begin{eqnarray} p_S=N_S\cos\theta/2,&~~~~&q_S=N_Se^{i\phi}\sin\theta/2 \nonumber\\ p_L=N_L\sin\theta/2,&~~~~&q_L=N_Le^{i\phi}\cos\theta/2 \end{eqnarray} where $\phi$ and $\theta$ are both {\em complex} numbers constrained by the discrete symmetries as follows: \begin{eqnarray} {\bf CPT}~ \; \; {\rm or} \; \; {\bf CP}~{\rm invariance} &\Longrightarrow& \; \; {\rm cos}\theta = 0 \\ {\bf CP}~ \; \; {\rm or} \; \; {\bf T}~{\rm invariance} &\Longrightarrow& \; \; \phi = 0 \end{eqnarray} The normalization constants $N_S$ and $N_L$ are given by: \begin{equation} N_S= \frac{1}{\sqrt{\left|{\rm cos}\frac{\theta}{2} \right|^2 + \left|e^{i\phi}{\rm sin}\frac{\theta}{2} \right|^2 }} \; , \; N_L = \frac{1}{\sqrt{\left|{\rm sin}\frac{\theta}{2} \right|^2 + \left|e^{i\phi}{\rm cos}\frac{\theta}{2} \right|^2 }} \end{equation} If {\bf CPT}~symmetry is violated, $\cos\theta\ne 0$ and $\Im \phi\ne 0$, and one finds \begin{eqnarray} \frac{\Gamma(\tau^+\to "K_S" \pi^+ \overline \nu)-\Gamma(\tau^-\to "K_S" \pi^- \nu)} {\Gamma(\tau^+\to "K_S" \pi^+ \overline \nu)+\Gamma(\tau^-\to "K_S" \pi^- \nu)}&=&\Im\phi+{\rm Re}\cos\theta \nonumber \\ \frac{\Gamma(K_L\to \pi^- l^+\nu)-\Gamma(K_L\to \pi^+ l^- \overline \nu)} {\Gamma(K_L\to \pi^- l^+\nu)+\Gamma(K_L\to \pi^+ l^- \overline \nu)}&=&\Im\phi-{\rm Re}\cos\theta \label{CPTTEST} \end{eqnarray} where, as before, $"K_S"$ is understood as the {\em short}-time component in $K\to \pi^+\pi^-$; we have also assumed the $\Delta S=\Delta Q$ selection rule for these decay amplitudes. We look forward to new information on ${\rm Re}\cos\theta$ from $\tau$ decays. To be consistently heretical, one could also entertain the idea of the $\Delta S=\Delta Q$ rule being violated, possibly by different amounts, in $K$ and $\tau$ decays. The relevant expressions are quite straightforward and can be derived using techniques similar to those described, for example, in Ref.~\cite{bs}. We will not write them down here, since we feel there is even less space for heresy in $\Delta S=1$ than in $\Delta S=2$ dynamics. \section{Decays of other particles} The power of $K_{L,S}$ to discriminate matter against antimatter affects the decays of other particles as well. In $B_d/\overline B_d \to \psi K_S$ its effect is covered up by the huge {\bf CP}~violation in $\Delta B\neq 0$ dynamics. It is of relevance when SM forces generate no or only small {\bf CP}~violation, as in $D^{\pm} \to K_S\pi^{\pm}$ \cite{YAMA} or in $D^{\pm} \to l^{\pm}\nu K_S$. \section{Conclusion} As stated before, the asymmetries expressed in Eqs.(\ref{KL},\ref{KS}) have to be there, since they are caused by a well established effect, namely that the $K_{L,S}$ are ever so slightly, yet definitely, sensitive to the matter-antimatter distinction. In principle it does not even matter whether the SM can reproduce the observed size of $\epsilon_K$. In that sense observing this asymmetry does not teach us anything we do not already know. This, however, is an incomplete evaluation of the situation. For {\bf CP}~asymmetries in the channels $\tau \to \nu K + {\rm pions}$ are natural portals for the emergence of New Physics.
For the final state is sufficiently complex to allow for {\bf CP}~odd observables also in distributions beyond fully integrated widths; secondly, these modes should be particularly sensitive to non-SM charged Higgs exchanges \cite{KUHN}. Obviously it is then essential to note that known physics produces a reliably predicted asymmetry in the full width, but not in final state distributions, for some channels. There are some more subtle points as well: it is actually most useful experimentally when not all modes are predicted to exhibit a null effect within the SM. For if one does not observe the effect predicted by Eqs.(\ref{KL},\ref{KS}), then there has to be New Physics which (partially) cancels the effect from known physics -- or one does not understand one's experiment with the required accuracy. The small expected {\bf CP}~asymmetry discussed above thus provides a most helpful calibration. Finally it behooves us to allow for the admittedly exotic possibility of {\bf CPT}~invariance being violated in general and in the dynamics of the $K^0 - \bar K^0$ complex in particular. Eq.(\ref{CPTTEST}) provides a novel test for it. \vspace{0.5cm} {\bf Acknowledgments:} One of us (I.B.) gratefully acknowledges the hospitality extended to him by the Laboratoire de l'Acc\' el\' erateur Lin\' eaire, while this work was done. We thank our colleagues J. Rosner and Y. Grossman for drawing our attention to the problem of how equal lifetimes arise and Ya. Azimov for providing us with references to his work. This work was supported by the NSF under grant PHY03-55098 and by JSPS under grant C-17540248.
\section{The sample} A sample of 823 stars with abundances of several elements (Fe, O, Mg, Ti, Si, Na, Ni, Al) was compiled from several papers (Refs. 1 to 10) after checking the lack of significant differences between their results. The velocities (U,V,W) and orbital parameters were computed for 640 Hipparcos stars having $\frac{\sigma_{\pi}}{\pi}<0.25$, to make a large database combining kinematics and detailed abundances. Ages of 442 stars were retrieved from Nordstr\"{o}m et al. (2004). \begin{figure}[h!] \begin{center} \includegraphics[width=.30\textwidth,angle=90]{girardF1.ps} \includegraphics[width=.30\textwidth,angle=90]{girardF2.ps} \end{center} \caption[]{Left : [Ca/Fe] vs [Fe/H] for the thin (empty triangles) and the thick (filled triangles) disk stars. Right : [$\alpha$/Fe] vs [Fe/H].} \label{gir:F1} \end{figure} In order to investigate the chemical and age properties of the thin and the thick disks separately, we have performed the deconvolution of their velocity distributions. We show that about 25\% of the sample has kinematics typical of the thick disk, adopting for its parameters V$_{lag}$ = $-51 \mathrm{km\,s}^{-1}$ and $(\sigma_U, \sigma_V, \sigma_W)=(63, 39, 39) \, \mathrm{km\,s}^{-1}$. Stars having a probability higher than 80\% of belonging to the thin or the thick disk were selected. The plots in Fig.~1 show nicely the separation between the thin and the thick disks. The thick disk is $\alpha$-enhanced as compared to the thin disk, but the decreasing trends are parallel. In the metallicity overlap, [$\alpha$/Fe] of the thick disk exceeds that of the thin disk by 0.08 dex. No clear vertical gradient of abundance in the thick disk is seen in Fig.~2. When only high precision ages (relative error $<$ 15\%) are considered, a transition between the ages of the thin and the thick disk stars at 10 Gyr is observed (Fig.~2). \begin{figure}[h!] \begin{center} \includegraphics[width=.30\textwidth,angle=90]{girardF3.ps} \includegraphics[width=.30\textwidth,angle=90]{girardF4.ps} \end{center} \caption[]{Left : Zmax vs [$\alpha$/Fe] for the thick disk stars. Right : Age distribution of the thin and thick disks stars.} \label{gir:F2} \end{figure} \\ {\large {\bf Conclusion.}} Thanks to our large sample, the statistics are improved and the separation between the two disks is quantified. It is now clear that the thin and the thick disks are chemically well separated. We found a transition in the age distribution of the thin disk and the thick disk stars at 10 Gyr but no clear vertical gradient in the thick disk. These results constrain the formation scenarios of the Milky Way's disks.
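A minimal sketch of the kinematic membership assignment described above follows (Gaussian velocity ellipsoids); the thick disk parameters are those quoted in the text, while the thin disk ellipsoid and the population fractions are illustrative assumptions of ours.
\begin{verbatim}
import numpy as np

def gaussian_velocity_pdf(U, V, W, sU, sV, sW, V_lag=0.0):
    """Density of (U, V, W) for a population modeled as a triaxial
    Gaussian with dispersions (sU, sV, sW) and asymmetric drift V_lag."""
    norm = 1.0 / ((2 * np.pi) ** 1.5 * sU * sV * sW)
    arg = (U / sU) ** 2 + ((V - V_lag) / sV) ** 2 + (W / sW) ** 2
    return norm * np.exp(-0.5 * arg)

# Thick-disk parameters quoted in the text; thin-disk values and the
# local normalization fractions are placeholders for illustration.
P_thick = lambda U, V, W: gaussian_velocity_pdf(U, V, W, 63., 39., 39.,
                                                V_lag=-51.)
P_thin = lambda U, V, W: gaussian_velocity_pdf(U, V, W, 35., 20., 16.,
                                               V_lag=-15.)
f_thick, f_thin = 0.25, 0.75   # rough fractions (the text finds ~25%)

def thick_membership(U, V, W):
    num = f_thick * P_thick(U, V, W)
    return num / (num + f_thin * P_thin(U, V, W))

# A star is assigned to the thick disk if its probability exceeds 0.8.
print(thick_membership(-80., -70., 50.))  # hot kinematics -> high prob
print(thick_membership(-10., -5., 5.))    # cold kinematics -> low prob
\end{verbatim}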
\section{Introduction} During the last decades, decentralized control of networked multi-agent systems has attracted significant attention due to the great variety of its applications, including multi-robot systems, transportation, multi-point surveillance as well as biological systems \cite{jadbabaie2003coordination,olfati2007consensus,couzin2005effective,modares2017optimal,bechlioulis2016decentralized,verginis2019adaptive,ni2020adaptive,zhang2012adaptive,hu2014adaptive}. In such systems, each agent calculates its own actions based on local information, as modeled by a connectivity graph, without relying on any central control unit. Although many works on distributed cooperative control consider known and simple dynamic models, there exist many practical engineering systems that cannot be modeled accurately and are affected by unknown exogenous disturbances. Thus, the design of control algorithms that are robust and adaptable to such uncertainties and disturbances is important. For multi-agent systems, ensuring robustness is particularly challenging due to the lack of global information and the interacting dynamics of the individual agents. A promising step towards the control of systems with uncertain dynamics is the use of data obtained a priori from system runs. However, engineering systems often undergo purposeful modifications (e.g., substitution of a motor or link in a robotic arm or exposure to new working environments) or suffer gradual faults (e.g., mechanical degradation), which might change the systems' dynamics or operating conditions. Therefore, one cannot rely on the aforementioned data to provably guarantee the successful control of the system. On the other hand, the exact incorporation of these changes in the dynamic model, and consequently, the design of new model-based algorithms, can be a challenging procedure. Hence, the goal in such cases is to exploit the data obtained a priori and construct intelligent online policies that achieve a user-defined task while adapting to the aforementioned changes. This paper addresses the distributed coordination of networked multi-agent systems governed by unknown nonlinear dynamics. We design a control algorithm that draws a novel connection between distributed learning with neural-network-based representations and adaptive feedback control, and consists of the following steps. Firstly, it trains a number of neural networks, one for each agent, to approximate controllers for the agents that accomplish the given formation task. The data used to train the neural networks consist of pairs of states and control actions of the agents that are gathered from runs of the multi-agent system. Secondly, it uses an online adaptive feedback control policy that guarantees accomplishment of the given formation task. Both steps can be executed in a distributed manner in the sense that each agent uses only local information, as modeled by a connectivity graph. \section{Control Algorithm} \label{sec:PF} Consider a networked multi-agent group comprised of a leader, indexed by $i=0$, and $N$ followers, with $\mathcal{N}\coloneqq\{1,\dots,N\}$. The leading agent acts as an exosystem that generates a desired reference trajectory for the multi-agent group.
The followers, which have to be controlled, evolve according to the $2$nd-order dynamics \begin{subequations} \label{eq:dynamics} \begin{align} \dot{x}_{i,1}(t) &= x_{i,2}(t) \\ \dot{x}_{i,2}(t) &= f_i(x_i(t),t) + g_i({x}_i(t),t)u_i(t) \end{align} \end{subequations} where ${x}_i \coloneqq [x_{i,1}^\top, x_{i,2}^\top]^\top \in \mathbb{R}^{2n}$ is the $i$th agent's state, assumed available for measurement by agent $i$, $f_i:\mathbb{R}^{2n}\times[0,\infty) \to \mathbb{R}^n$, $g_i:\mathbb{R}^{2n}\times[0,\infty) \to \mathbb{R}^{n\times n}$ are unknown functions modeling the agent's dynamics, and $u_i$ is the $i$th agent's control input. The functions $f_i(\cdot)$ and $g_i(\cdot)$ are assumed to be locally Lipschitz in ${x}_i$ over $\mathbb{R}^{2n}$ for each fixed $t\geq 0$, and uniformly bounded in $t$ over $[t_0,\infty)$ for each fixed ${x}_i\in\mathbb{R}^{2n}$, for all $i\in\mathcal{N}$. Further, we assume that the matrices $g_i$ are positive definite, for all $i\in\mathcal{N}$. We use an undirected graph $\mathcal{G} \coloneqq (\mathcal{N},\mathcal{E})$ to model the communication among the agents, with $\mathcal{N}$ being the index set of the agents. The set of neighbors of agent $i$ is denoted by $\mathcal{N}_i \coloneqq \{j\in\mathcal{N}:(i,j)\in\mathcal{E}\}$. We assume that $\mathcal{G}$ is connected, i.e., there exists a communication path between any two agents. The state/command variables of the leading agent (indexed by $0$) are denoted by $x_{0,1}$, $x_{0,2}$ $\in\mathbb{R}^n$ and obey the $2$nd-order dynamics $\dot{x}_{0,1}(t) = x_{0,2}(t)$, $\dot{x}_{0,2}(t) = u_{0}(t)$ for a smooth and bounded $u_0:[0,\infty) \to \mathbb{R}^n$. However, the state of the leader is only provided to a subgroup of the $N$ agents. In particular, the access of the follower agents to the leader's state is modeled by a diagonal matrix $\mathcal{B} \coloneqq \textup{diag}\{b_1,\dots,b_N\} \in \mathbb{R}^{N\times N}$; if $b_i = 1$, then the $i$th agent has access to the leader's state, whereas it does not if $b_i = 0$, for $i\in\mathcal{N}$. Thus, we may also define the augmented graph as $\bar{\mathcal{G}} \coloneqq (\mathcal{N}\cup\{0\}, \bar{\mathcal{E}})$, where $\bar{\mathcal{E}} \coloneqq \mathcal{E} \cup \{ (i,0) : b_i = 1 \}$. The goal of this work is to design a distributed control algorithm, where each agent has access only to its neighbors' information, to achieve a pre-specified geometric formation of the agents in $\mathbb{R}^n$. More specifically, consider for each agent $i\in\mathcal{N}$ the constants $c_{ij}$, $j\in \{0\} \cup \mathcal{N}_i$, prescribing a desired offset that agent $i$ aims to achieve with respect to the leader ($j=0$) and its neighbors ($j\in\mathcal{N}_i$). That is, each agent $i\in\mathcal{N}$ aims at achieving $x_{i,1} = x_{j,1} - c_{ij}$, for all $j\in\mathcal{N}_i$, and, if $b_i=1$, $x_{i,1} = x_{0,1} - c_{i0}$. In other words, we aim to minimize the errors \begin{align*} e_{i,1} \coloneqq \sum_{j\in\mathcal{N}_i} (x_{i,1} - x_{j,1} + c_{ij}) + b_i(x_{i,1} - x_{0,1} + c_{i0}), \ \ \ i\in\mathcal{N}. \end{align*} We now describe the control algorithm. We assume the existence of data gathered from a finite set of $T$ trajectories $\mathcal{J}$ generated by a priori runs of the multi-agent system.
More specifically, we consider that $\mathcal{J}$ is decomposed as $\mathcal{J} = (\mathcal{J}_1,\dots,\mathcal{J}_N)$, where $\mathcal{J}_i$ is the set of trajectories $$\mathcal{J}_i = \left\{\bar{x}^k_i(t), \{\bar{x}^j\}_{j\in\mathcal{N}^k_i},u^k_i \left(\bar{x}^k_i(t), \{\bar{x}^j\}_{j\in\mathcal{N}^k_i},t \right) \right\}_{t\in \mathbb{T}_i}$$ of agent $i$, where $\mathbb{T}_i$ is a finite set of time instants, $\bar{x}^k_i\in\mathbb{R}^{2n}$ is the state trajectory of agent $i$ for trajectory $k$, $\mathcal{N}^k_i$ are the neighbors of agent $i$ in trajectory $k$, with $\{\bar{x}^j\}_{j\in\mathcal{N}^k_i}$ being their respective state trajectories, and $u^k_i(\bar{x}^k_i(t), \{\bar{x}^j\}_{j\in\mathcal{N}^k_i},t) \in \mathbb{R}^n$ is the control input trajectory of agent $i$. Each agent $i\in\mathcal{N}$ uses the data to train a neural network in order to approximate a controller that accomplishes the formation task. More specifically, each agent uses the tuples $\{\bar{x}^k_i(t), \{\bar{x}^j\}_{j\in\mathcal{N}^k_i}\}_{t\in\mathbb{T}_i}$ as input to a neural network, and $u^k_i \big(\bar{x}^k_i(t), \{\bar{x}^j\}_{j\in\mathcal{N}^k_i},t \big)_{t\in\mathbb{T}_i}$ as the respective output targets, for all $T$ trajectories. For the inputs corresponding to agents that are not neighbors of agent $i$ in a trajectory $k$, we disable the respective neurons. For a given $\bar{x} \in \mathbb{R}^{2n}$, we denote by $u_{i,nn}(\bar{x})$ the output of the neural network of agent $i \in \mathcal{N}$. We now design a distributed, adaptive feedback control policy to accomplish the formation task. Consider the adaptation variables $\hat{d}_{i,1}$ and $\hat{d}_{i,2}$ for each agent $i\in\mathcal{N}$, corresponding to upper bounds of the unknown dynamic terms $f_i$ and $g_i$. Consider the augmented errors for each agent $e_{i,2} \coloneqq \dot{e}_{i,1} + k_{i,1}e_{i,1}$, where $k_{i,1}$ are positive constants, for all $i\in\mathcal{N}$. We design the distributed control policy as \begin{subequations} \label{eq:control and adapt} \begin{align} u_i(\bar{x},\hat{d}_{i,1},\hat{d}_{i,2}) = u_{i,nn}(\bar{x}) -(k_{i,2} + \hat{d}_{i,1})e_{i,2} - \hat{d}_{i,2} \hat{e}_{i,2} \end{align} where $k_{i,2}$ are positive constants, and $\hat{e}_{i,2}$ are defined as $\hat{e}_{i,2} \coloneqq \frac{e_{i,2}}{\|e_{i,2}\|^2}$ if $e_{i,2} \neq 0$ and $\hat{e}_{i,2} \coloneqq 0$ otherwise, for all $i\in\mathcal{N}$. The adaptation variables $\hat{d}_{i,1}$, $\hat{d}_{i,2}$ are updated as \begin{align*} \label{eq:adaptation law} \dot{\hat{d}}_{i,1} \coloneqq \mu_{i,1}\|e_{i,2}\|^2, \hspace{5mm} \dot{\hat{d}}_{i,2} \coloneqq \mu_{i,2}\|e_{i,2}\|, \end{align*} \end{subequations} where $\mu_{i,1}$, $\mu_{i,2}$ are positive constants, for all $i\in\mathcal{N}$. \begin{figure} \centering \includegraphics[width=.35\textwidth]{AS_errors.eps} \caption{Evolution of the error signals $\|e_{i,1}(t)\|+\|\dot{e}_{i,1}(t)\|$ for $i\in\{1,\dots,5\}$, and $t\in[0,55]$, for the numerical experiments.} \label{fig:AS errors} \end{figure} \begin{figure} \centering \includegraphics[width=.3\textwidth]{ext_plot.png} \caption{The convergence of the followers to the desired formation around the leader, which follows a pre-specified trajectory (continuous blue line), in the $x$-$y$ plane.
} \label{fig:ext_plot} \end{figure} \section{Numerical Experiments} \label{sec:exps} We consider $N=5$ follower aerial vehicles in $\mathbb{R}^3$ with dynamics of the form \eqref{eq:dynamics}, with the communication graph modeled by the edge set $\bar{\mathcal{E}}$ $=$ $\{$ $(1,2)$, $(2,3)$, $(3,4)$, $(4,5)$, $(1,0)$, $(3,0)$, $(5,0)$ $\}$. The leader's task is to track a reference time-varying trajectory profile $x_0(t)$. The formation constants $c_{ij}$ are chosen randomly in $(-1,1)$, for $(i,j)\in\bar{\mathcal{E}}$. We generate data from $100$ trajectories that correspond to different $f_i$, ${g}_i$, and initial conditions, and we train $5$ neural networks, one for each agent. We test the control policy \eqref{eq:control and adapt} and obtain the results depicted in Fig.~\ref{fig:AS errors}, which shows the evolution of the error signals $\|e_{i,1}(t)\| + \|\dot{e}_{i,1}(t)\|$ for $i\in\{1,\dots,5\}$. One concludes that the multi-agent system converges successfully to the pre-specified formation, since the error signals converge to zero. \bibliographystyle{IEEEtran}
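For concreteness, a minimal per-agent sketch of the control and adaptation updates in Eq.~\eqref{eq:control and adapt} is given below; the network stub, the gains and the Euler discretization are illustrative assumptions rather than the exact implementation used in the experiments.
\begin{verbatim}
import numpy as np

def agent_control_step(x_i, neighbor_states, c, b_i, leader_state,
                       d1, d2, u_nn, k1=1.0, k2=2.0,
                       mu1=0.5, mu2=0.5, dt=1e-3):
    """One control/adaptation step for agent i, following Eq. (2).

    x_i: (2, n) array holding [x_{i,1}; x_{i,2}];
    neighbor_states: dict j -> (2, n) arrays of neighbor states;
    c: dict j -> formation offsets c_{ij} (key 0 for the leader);
    u_nn: trained-network stub mapping local states to R^n (assumed)."""
    # Formation error e_{i,1} and its derivative (x_{i,2} terms).
    e1 = sum(x_i[0] - xj[0] + c[j] for j, xj in neighbor_states.items())
    de1 = sum(x_i[1] - xj[1] for j, xj in neighbor_states.items())
    if b_i:
        e1 += x_i[0] - leader_state[0] + c[0]
        de1 += x_i[1] - leader_state[1]
    e2 = de1 + k1 * e1                       # augmented error
    n2 = np.linalg.norm(e2)
    e2_hat = e2 / n2**2 if n2 > 0 else np.zeros_like(e2)

    u = u_nn(x_i, neighbor_states) - (k2 + d1) * e2 - d2 * e2_hat
    # Euler step for the adaptation laws.
    d1 += mu1 * n2**2 * dt
    d2 += mu2 * n2 * dt
    return u, d1, d2
\end{verbatim}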
\section{Introduction} Overall, there are three major reasons for considering a discrete model of spacetime. The first reason is the employment of numerical simulations which often operate with some kind of spacetime lattice in order to find approximate solutions of the Einstein equations. Among other things, these have been used for extensive investigations of black holes, from the two-body systems which are prominent sources of gravitational waves and have recently attracted great attention due to the rise of interferometry \cite{Buonanno2014} to various configurations of three-body systems \cite{Lousto2008, Galaviz2010} and much more \cite{Sperhake2013, Okawa2014, Fendt2019}. Numerical computations based on lattices also play an important role in cosmology. Here, the link to black holes is present in the approach of \textit{black hole lattices} \cite{Bentivegna2018, Gregoris2020} allowing one to build large-scale cosmological models from sets of discrete matter sources. The second reason for going discrete is to study the expected effect of quantum gravity, which is commonly presumed to introduce some nonzero minimal length-scale, probably of Planckian order \cite{Garay1994, Hossenfelder2013, Padmanabhan2015, Hooft2016}. In one way or another, this aspect is present in the majority of approaches to quantum gravity like string theory \cite{Szabo2002}, loop quantum gravity \cite{Gambini2011}, causal set theory \cite{Henson2010}, and others. Meanwhile, it is argued that the (built-in or effective) minimal length-scale of spacetime would have deep theoretical and phenomenological implications within areas like quantum field theory, high energy physics, black hole physics and cosmology \cite{Mead1966}. This motivates a large volume of research based on various effective models of discrete spacetime inspired by quantum gravity, which attempt to study the (potentially observable) effects of the minimal length-scale. In the context of black holes, these may include the backreaction on gravity \cite{Maziashvili2013}, bounds on the creation of very small black holes \cite{Ali2012}, serious consequences for certain physical processes like proton decay \cite{Bambi2008} or effects on the Hawking radiation \cite{Corley1998}, black hole entropy \cite{Mejrhit2019}, information loss \cite{Lowe2015} and the central singularity \cite{Spallucci2011}. There is also a noteworthy line of research exploring the analogy between quantum black holes and the hydrogen atom \cite{Bekenstein1999, Corda2020}. The third reason for establishing discrete models of spacetime is their direct employment in certain approaches to quantum gravity. They are central to a handful of significant non-perturbative path-integral approaches like causal sets \cite{Henson2010, Surya2019}, quantum Regge calculus \cite{Hamber2009, Barrett2018} or causal dynamical triangulations \cite{Loll2019}, just to name a few. The main advantage of the discrete system over the continuous one is the relative simplicity of its phase space: it only takes a finite number of degrees of freedom to describe a given compact region of spacetime, which opens the door to the standard quantum-mechanical construction of the Hilbert space and the representation of the canonical commutation relations. It also allows for the employment of some kind of \textit{sum over histories}.
The respective models of spacetime used in these approaches can be very different, as illustrated by the examples mentioned above, and relating them to their continuous analogue is not always straightforward: they become important subjects of research themselves. Besides studies of their general properties, one can also encounter papers that provide discrete analogues of known continuous solutions of the Einstein equations. They often feature black hole spacetimes, which have been studied within causal sets \cite{Dou2003, Rideout2006, He2008, Asato2019, Surya2020}, Regge calculus \cite{Wong1971, Khatsymovsky2020} or causal dynamical triangulations \cite{Dittrich2006}. They will be briefly revisited in the next two sections. Besides pure gravity, the discrete spacetime models listed in the previous paragraph can be useful for modeling quantum fields on curved backgrounds. Of course, it is expected that these should be ultimately coupled to gravity, giving rise to a full unified quantum theory. As of now though, such a theory is far out of reach. Nevertheless, establishing quantum field theory on a discrete fixed background is very much possible as well as relevant, since it provides a viable discrete analogy of the quantum field theory on classical curved spacetime. The problem has been addressed in many works, most often for Regge lattices \cite{Sorkin1975, Hamber1993, Hamber2009, McDonald2010a, Paunkovic2016} but more recently also for causal sets \cite{Sverdlov2008, Johnston2008, Johnston2010, DableHeath2020}. This allows one to study quantum fields non-perturbatively in a number of interesting cosmological or phenomenological scenarios. The aim of the present letter is to advocate for an application to the case of black holes, which are---as we have illustrated---of great interest for a number of reasons. In the last section, we shall outline one particular framework which can be used to one's advantage. \section{Causal Sets} A causal set \cite{Surya2019} is a partially ordered locally finite set, i.e., a set $ C $ with a relation $ \preceq $ satisfying acyclicity ($ x \preceq y $ and $ y \preceq x ~ \Rightarrow ~ x = y $ for all $ x,y \in C $), transitivity ($ x \preceq y $ and $ y \preceq z ~ \Rightarrow ~ x \preceq z $ for all $ x,y,z \in C $) and local finiteness (for every $ x, y \in C $, the order interval $ I(x,y) = \{ w \in C ~ \vert ~ x \preceq w \preceq y, ~ x \neq w \neq y \} $ between $ x $ and $ y $ is finite). The elements of $ C $ represent building blocks of the discrete spacetime and the relation $ \preceq $ encodes their causal structure. The volume element needed to fix the residual conformal freedom is determined by counting the elements of $ C $ inside a given spacetime region. A causal set can be generated from a continuous manifold by a process called \textit{sprinkling}, in which one selects points of the manifold uniformly at random (the average number of selected points in a region is made proportional to the region's volume) and imposes the partial ordering induced by the causal order of the points. In this way, one can relatively easily build a causal set model of any classical solution, including black hole spacetimes. For the Schwarzschild solution, this was done in \cite{He2008}. The resulting set has the expected causal structure of the event horizon (no causal links pointing from the interior to the exterior). In the vicinity of the central singularity, it is less ordered due to the narrowing lightcones.
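To illustrate the sprinkling construction, a minimal Python sketch for a causal diamond of $1+1$ Minkowski space follows; the density and the region are arbitrary choices of ours. The last step also extracts the links (irreducible relations), which reappear below in the discussion of entanglement entropy.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=0)

# Poisson sprinkling into the causal diamond |t| + |x| <= 1 of 1+1
# Minkowski space; the density (expected number of points) is arbitrary.
n_target = rng.poisson(lam=200)
pts = []
while len(pts) < n_target:
    t, x = rng.uniform(-1.0, 1.0, size=2)
    if abs(t) + abs(x) <= 1.0:
        pts.append((t, x))
pts = np.array(pts)

# Causal matrix R[i, j] = True iff point i strictly precedes point j.
dt = pts[:, 0][None, :] - pts[:, 0][:, None]
dx = pts[:, 1][None, :] - pts[:, 1][:, None]
R = (dt >= np.abs(dx)) & (dt > 0)

# Links: relations not implied by transitivity (no k with i < k < j).
implied = (R.astype(int) @ R.astype(int)) > 0
links = R & ~implied
print("elements:", len(pts), "relations:", R.sum(),
      "links:", links.sum())
\end{verbatim}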
Since local finiteness only restricts the number of elements inside the order interval, the causal set as defined above may include infinite or even uncountable subsets. For instance, it may contain \textit{singular antichains}, i.e., subsets which are uncountable and totally unordered. In \cite{Asato2019}, it is argued that these may correspond to singularities of the continuous theory. Based on this observation, the author suggests the following definition of a causal set black hole: for a singular antichain $ S $, one defines the associated black hole as $ B_{S} = \{ x \in C ~ \vert ~ p^{+}(x) \cap S \neq \varnothing \text{ for all } p^{+}(x) \} $ where $ p^{+}(x) $ is an inextensible future-oriented causal path beginning at $ x $. In other words, if every future-oriented causal path beginning at $ x $ intersects the antichain $ S $, then $ x $ lies inside the black hole. Put differently, one could say that $ B_{S} = D^{-}(S) $ is the past domain of dependence of $ S $ \cite{Joshi1993}. It appears that singular antichains cannot actually be obtained by sprinkling, so their role as causal set singularities is limited; however, the latter definition can be used for an arbitrary set $ S $ representing the singularity. The partial ordering of a causal set $ C $ is fully determined by its unique collection of \textit{links}, i.e., the relations between couples of distinct elements $ x, y \in C $ ($ x \neq y $) such that $ x \preceq y $ and $ I(x,y) = \varnothing $. Given all the links of $ C $, its causal order is easily induced via transitivity. Links can therefore be viewed as representing the irreducible relations. In \cite{Dou2003}, the number of links between two regions of a causal set was used as a measure of their mutual entanglement, determining the so-called \textit{entanglement entropy}. It was then argued that the black hole entropy can be related to the number of links spanning across the event horizon of a causal set black hole. For several different cases, the entropy was indirectly shown to be proportional to the horizon area. As noted in the introduction, there are works addressing the issue of inhabiting causal sets with quantum fields \cite{Sverdlov2008, Johnston2008, Johnston2010, DableHeath2020}. However, it will presumably take more time before we can see some real-world applications like the study of quantum field dynamics in the proximity of a causal set black hole. Causal sets are very economical in the sense that they contain a very small---if not minimal---amount of information needed to provide an approximate discrete representation of the continuous Lorentzian manifold. An unpleasant consequence of this principle is that the (approximate) reconstruction of geometry from the causal set is a nontrivial task \cite{Eichhorn2019}. For this reason, the causal set is usually not the model of first choice when it comes to practical computations within discrete spacetimes. The most common alternative and one of its remarkable spin-offs are described in the next section. \section{Regge Triangulations} In 1961, it was suggested by Regge \cite{Regge1961} that a continuous differentiable manifold of dimension $ d $ could be approximated by a piecewise-flat one by gluing flat $ d $-dimensional blocks along their faces. The designated building blocks of the piecewise-flat space are \textit{$ d $-simplices}.
An $ n $-simplex $ \sigma^{n} $ in $ \mathbb{R}^{d} $ ($ d \geq n $) is the convex span of $ n+1 $ affinely independent points in $ \mathbb{R}^{d} $ called \textit{vertices} \cite{Ambjorn2005}. (In the Lorentzian case, just replace $ \mathbb{R}^{d} $ with the Minkowski space $ \mathbb{M}^{d} $.) A 0-simplex is a point, a 1-simplex is a line segment, a 2-simplex is a triangle, a 3-simplex is a tetrahedron, and so on. Every $ n $-simplex contains $ \binom{n+1}{k+1} $ $ k $-simplices in its boundary; in particular, it has $ \frac{n(n+1)}{2} $ edges ($ k = 1 $). The values of the edge lengths uniquely specify the geometry. The simplex spanned by a subset of vertices of $ \sigma^{n} $ is a \textit{subsimplex} of $ \sigma^{n} $. A $ (d-1) $-dimensional subsimplex of a $ d $-simplex in $ \mathbb{R}^{d} $ is called a \textit{face}, a $ (d-2) $-dimensional subsimplex is called a \textit{hinge}. Regge calculus soon turned out to be a powerful tool for studying discretized manifolds and their geometrical properties. It comes with well known expressions for volumes and curvatures corresponding to their continuous analogues \cite{Cheeger1984}, which allows for the introduction of the Einstein-Hilbert action and the discrete version of the Einstein equations \cite{Hamber2009}. The clear analogy between piecewise linear and continuous manifolds makes Regge triangulations well suited for practical computations. For instance, they can be applied to obtain discrete versions of exact black hole solutions like the Schwarzschild or Reissner-Nordstr\"{o}m geometries \cite{Wong1971, Khatsymovsky2020}. Over the years, Regge calculus has led to several established non-perturbative approaches to quantum gravity \cite{Oriti2009}. This is the case of \textit{causal dynamical triangulations} \cite{Loll2019}, which implement two further assumptions about the triangulation. First, it is assumed that all the vertices belong to individual non-intersecting time-slices labeled by a discrete time parameter and that the slice topology is fixed (this is motivated by a causality argument). Second, the triangulation is assumed to be composed of standardized building blocks, which is helpful for fixing gauge freedom \cite{Romer1986}. In $ d $ dimensions, these building blocks are $ d $-simplices whose spacelike edges have squared edge length $ a^{2} > 0 $ and whose timelike edges have squared edge length $ - \alpha a^{2} $, where $ \alpha $ is a positive real constant \cite{Ambjoern2013}. Edges between vertices which belong to the same time-slice are always spacelike; edges between vertices which inhabit neighboring slices are always timelike. Note that there are no null edges. All the time-slices are equidistant, as if marked by a uniformly ticking (proper) time parameter. The standardized building blocks come in several types. In dimension $ d = 2 $ there are only two types of triangles: (1,2), which has one vertex on the earlier time-slice and the remaining two on the later one, and (2,1), whose configuration is the opposite. In dimension $ d = 3 $ the triangulation is made of three types of tetrahedra, namely (1,3), (2,2) and (3,1). Analogously, for $ d = 4 $ the triangulation consists of 4-simplices which come in four types. In general, there are $ d $ types. Because the building blocks are standardized, the geometry of the resulting simplicial spacetime depends only on the occurrence of these types.
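Since the edge lengths uniquely specify the geometry, derived quantities can be computed directly from them. As a small illustration (ours, restricted to the Euclidean case; the Lorentzian case requires signed squared lengths), the volume of an $ n $-simplex follows from the standard Cayley--Menger determinant:

\begin{verbatim}
# Sketch: Euclidean volume of an n-simplex from its squared
# edge lengths via the Cayley-Menger determinant.
import numpy as np
from math import factorial

def simplex_volume(sq):
    # sq: (n+1) x (n+1) symmetric matrix of squared distances
    # between the vertices (zero diagonal)
    n = sq.shape[0] - 1
    cm = np.ones((n + 2, n + 2))
    cm[0, 0] = 0.0
    cm[1:, 1:] = sq
    vol2 = (-1) ** (n + 1) / (2 ** n * factorial(n) ** 2) \
           * np.linalg.det(cm)
    return np.sqrt(vol2)

# equilateral triangle with unit edges: area = sqrt(3)/4
print(simplex_volume(np.ones((3, 3)) - np.eye(3)))  # ~0.433
\end{verbatim}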
The preceding discussion brings us to the work \cite{Dittrich2006} in which the authors present a causal dynamical triangulation corresponding to a black hole spacetime, in particular the Kottler solution with the line element \begin{equation}\label{kottler} ds^{2} = - \left( 1-\frac{2M}{r}-\frac{r^{2}}{L^{2}} \right) dt^{2} + \left( 1-\frac{2M}{r}-\frac{r^{2}}{L^{2}} \right)^{-1} dr^{2} + r^{2} d\Omega^{2} \end{equation} which was chosen because it conforms to the assumptions of the theory. They also build a horizon finder based on counting distinct types of building blocks, and introduce the concept of \textit{Lorentzian triangulations of product type}, whose utility goes beyond the application at hand. The causal dynamical triangulation example is worth attention especially because it is considerably simplified by the use of standardized $ d $-simplices. This allows for measuring various geometrical quantities (e.g. volume or curvature) by means of counting certain structures present within the triangulation---a feature reminiscent of causal sets. The strategy of assembling complex geometries from a handful of building blocks can be helpful even outside the strict scope of causal dynamical triangulations; for example, upon relaxing some of the assumptions and taking advantage of the existing tools in order to study phenomenological or other questions. We have seen that in connection to black holes, such questions are abundant. Thanks to their direct geometrical interpretation, Regge lattices can be enhanced with matter or gauge fields without great difficulty. The interested reader may find various works providing prescriptions for the lattice field action \cite{Sorkin1975, Hamber2009, McDonald2010a}. For example, the discrete analogue of the classical scalar field action \begin{equation}\label{scont} S = \frac{1}{2} \int_{\Omega} \left( \nabla_a \varphi \nabla^{a} \varphi + m^{2} \varphi^{2} \right) \sqrt{\vert g \vert} ~ d^{d}x \end{equation} on a region $ \Omega $ of a $ d $-dimensional manifold $ (\mathcal{M},g) $ takes the lattice form \begin{equation}\label{slatt} S = \frac{1}{2} \sum_{\text{edges } ij} \frac{\mathcal{V}_{ij}}{\ell_{ij}^{2}} ~ (\varphi_{i}-\varphi_{j})^{2} + \frac{1}{2} \sum_{\text{vertices } i} V_{i} ~ m^{2} \varphi_{i}^{2} \end{equation} Note that the field is defined on the vertices of the lattice. Here, $ \ell_{ij}^{2} $ is the squared \textit{proper edge length} and $ \mathcal{V}_{ij}, V_{i} $ stand for the \textit{dual edge} and \textit{vertex volumes}, respectively. The sums run over edges and vertices belonging to the lattice region corresponding to $ \Omega $. In case of a lattice composed of standardized $ d $-simplices, the proper edge length is given by two constants (one for spacelike edges, another for timelike edges) and the dual volumes are computed easily by counting distinct types of neighboring simplices. Some authors have addressed the problem of coupling matter fields to quantum gravity \cite{Hamber1993, Paunkovic2016}; however, it is quite demanding both in theory and practice. In fact, even if we keep the geometry fixed, performing a consistent analysis of the field dynamics is not as simple as one could expect. Before we conclude this letter, we devote the last section to a brief outline of one particular framework serving this purpose.
\section{Discrete Evolution and Matter Fields} If one views the Regge triangulation as a lattice instead of as a piecewise-linear Lorentzian manifold, one obtains a system with a discrete notion of time, whose dynamics can be described by \textit{discrete canonical evolution} \cite{Dittrich2011, Dittrich2012, Dittrich2013}. It comes in two versions: a global one, which applies only if the lattice admits a foliation into non-intersecting time-slices (in very much the same way it is assumed within causal dynamical triangulations), and a local one, which applies to more general configurations. For simplicity, we shall discuss the global version. It starts with the lattice action \begin{equation}\label{action} S = \sum_{n=0}^{t-1} S_{n+1}(x_{n},x_{n+1}) \end{equation} composed of individual time-step contributions $ S_{n+1}(x_{n},x_{n+1}) $ and passes to the canonical picture upon defining momenta and obtaining a specific form of the equations of motion. Remarkably, the model allows for changing the number of degrees of freedom (i.e., the dimension of the configuration space $ \mathcal{Q}_{n} \ni x_{n} $) along the evolution, e.g. when the lattice expands or shrinks from one time-slice to the next. This irregularity gives rise to \textit{constraints} as well as \textit{free parameters} which need to be taken into account and have significant impact on the overall dynamics \cite{Dittrich2013}. The analogous formalism for quantum systems was introduced in \cite{Hoehn2014a, Hoehn2014b} and subsequently applied to systems with quadratic action, which are known to possess linear equations of motion and can therefore be treated most easily \cite{Hoehn2014}. Discrete quantum evolution for systems with quadratic action was further investigated in \cite{Kaninsky2020}, where it was argued that it is most naturally described by a non-unitary evolution map. The non-unitarity has its roots in the irregularity of the classical system, and calls for regularization (and occasionally renormalization) of final states. The theory comes with one-step propagators of the form \begin{equation}\label{prop} _{\mathtt{c}} \langle \beta_{n+1} \vert \mathbb{U}_{n+1} \vert \gamma_{n} \rangle_{\mathtt{c}} = \ell(V_{2}^{T} \beta_{n+1}) ~ \abs{\det \Sigma_{r} }^{1/2} ~ (2\pi)^{-q/2} ~ e^{iS_{n+1}(\gamma_{n},\beta_{n+1})} \end{equation} where $ \ell(V_{2}^{T} \beta_{n+1}) $ is a regularization term and $ \abs{\det \Sigma_{r} }^{1/2} $ is a constant factor. The main part of the propagator is formed by the standard complex exponential featuring the one-step action contribution $ S_{n+1}(\gamma_{n},\beta_{n+1}) $ familiar from \eqref{action}. Upon the composition of multiple time-steps, the propagators \eqref{prop} make up the path integral. Although the formalism of discrete canonical evolution was originally introduced in order to describe the dynamics of the lattice---that is, the geometry---itself \cite{Dittrich2011}, it is suitable for various other systems: for instance, it can be easily applied to the case of a scalar field on a fixed Lorentzian Regge lattice, as shown in \cite{Kaninsky2020}. What is more, if one refrains from introducing higher-order interaction terms, one can keep the action quadratic and benefit from a rather straightforward employment of the linear formalism. This way, it is possible to study the dynamical response of various quantum fields to a chosen geometry, very much like in quantum field theory on curved spacetime. To the author's knowledge, this has not been done before.
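To indicate how simple the quadratic case is in practice, consider a one-step action of the generic form $ S_{n+1}(x_{n},x_{n+1}) = \frac{1}{2} x_{n}^{T} A x_{n} + \frac{1}{2} x_{n+1}^{T} B x_{n+1} + x_{n}^{T} C x_{n+1} $. The following Python sketch (our own illustration; the matrix names and the least-squares treatment of the rectangular case are illustrative choices, not the exact construction of the cited works) performs one classical evolution step via the discrete Legendre transforms $ p_{n} = -\partial S_{n+1}/\partial x_{n} $ and $ p_{n+1} = \partial S_{n+1}/\partial x_{n+1} $:

\begin{verbatim}
# Sketch: one step of global discrete canonical evolution
# for a quadratic one-step action.
import numpy as np

def evolve(x_n, p_n, A, B, C):
    # solve C x_m = -(p_n + A x_n) for the new configuration;
    # when the dimension changes, C is rectangular: a nonzero
    # residual signals constraints on the data, a nontrivial
    # null space signals free parameters (the irregularities
    # behind the non-unitarity discussed in the text)
    rhs = -(p_n + A @ x_n)
    x_m, res, rank, _ = np.linalg.lstsq(C, rhs, rcond=None)
    p_m = B @ x_m + C.T @ x_n   # post-momentum
    return x_m, p_m
\end{verbatim}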
Within this letter, we would like to advocate for the application to a triangulated black hole spacetime. Note that such an application could greatly benefit from the model of Dittrich and Loll \cite{Dittrich2006}, which has two advantages: the standardized simplices make the lattice structure (and consequently the action) quite simple, and the assumption that the lattice is foliated into separate time-steps allows one to use the global version of discrete canonical evolution. On the other hand, upon limiting the assumption of global foliation to a selected region, one can build similar models for exact spacetimes other than the Kottler solution \eqref{kottler}. It can be expected that the non-unitary nature of the discrete quantum evolution will play a role in the triangulated black hole spacetime, which may result in information loss. Consider the central singularity, which disposes of all the information carried by the field once the field reaches it. In discrete spacetimes, such behavior is not limited to the spacetime boundary: information loss may occur at essentially any point of the evolution, since it depends on the numbers of neighboring vertices and their connectivity \cite{Hoehn2014, Kaninsky2020}. The idea is illustrated in Fig. \ref{fig:s}. The non-unitarity can be merely an artifact of discretization, but it can also be physical (like in the case of a spacetime singularity). This distinction has some implications when it comes to the interpretation of final states \cite{Kaninsky2020}. The formalism is not free from complications, but it is flexible and suited for an immediate implementation. \begin{figure}[H] \centering \begin{tikzpicture}[scale=1] \tikzset{ vertex/.style={ shape=circle,fill=lightgray!100,minimum size=2mm,inner sep=0.2mm, label={[fill=none,label distance=1mm]90:#1} }, edge/.style={ draw,-,color=lightgray!100,line width=0.3mm }, edget/.style={ draw,dashed,color=lightgray!100,line width=0.3mm }, } \coordinate (cia) at (-2,0); \coordinate (c1) at (-1,0); \coordinate (c2) at (0,0); \coordinate (c3) at (1,0); \coordinate (cib) at (2,0); \coordinate (cic) at (-1.5,0.866); \coordinate (c4) at (-0.5,0.866); \coordinate (c5) at (0.5,0.866); \coordinate (c6) at (1.5,0.866); \coordinate (cid) at (2.5,0.866); \coordinate (cie) at (-2,1.732); \coordinate (c7) at (-1,1.732); \coordinate (c8) at (0,1.732); \coordinate (c9) at (1,1.732); \coordinate (cif) at (2,1.732); \coordinate (cig) at (-1.5,2.598); \coordinate (c10) at (-0.5,2.598); \coordinate (c11) at (0.5,2.598); \coordinate (c12) at (1.5,2.598); \coordinate (cih) at (2.5,2.598); \draw[edge] (c1) -- (c2) --(c3) -- (cib); \draw[edget] (c1) -- (c4) --(c2) -- (c5) -- (c3) -- (c6) -- (cib); \draw[edge] (c4) -- (c5) -- (c6); \draw[edget] (c4) -- (c8) -- (c5) -- (c9) -- (c6); \draw[edge] (c8) -- (c9); \draw[edget] (c8) -- (c11) -- (c9); \node[vertex] at (c1) {}; \node[vertex] at (c2) {}; \node[vertex] at (c3) {}; \node[vertex] at (cib) {}; \node[vertex] at (c4) {}; \node[vertex] at (c5) {}; \node[vertex] at (c6) {}; \node[vertex] at (c8) {}; \node[vertex] at (c9) {}; \node[vertex] at (c11) {}; \node[] at (3.5,0) {$ n = 0 $}; \node[] at (3.5,0.866) {$ n = 1 $}; \node[] at (3.5,1.732) {$ n = 2 $}; \node[] at (3.5,2.598) {$ n = 3 $}; \end{tikzpicture} \vspace{0 mm} \caption{A diagrammatic depiction of a shrinking lattice ending in a singularity. Spacelike edges are drawn in solid line, timelike edges are drawn in dashed line. Every time-step has one vertex fewer than the preceding one, which will result in a gradual information loss.
Ultimately, the remaining information is disposed of when it reaches the singularity at $ n = 3 $, which represents a spacelike boundary.} \label{fig:s} \vspace{4 mm} \end{figure} \section{Conclusion} Black holes are incredibly interesting gravitational phenomena attracting constant attention of researchers from various fields. This still holds true in the context of discrete spacetime. We have identified three main directions featuring discrete black hole models: numerical relativity and cosmology, effective models investigating the phenomenological implications of the minimal length-scale, and discrete spacetime models in service of quantum gravity. We kept our focus on the latter family, which we find particularly remarkable, and looked closer at three of its successful representatives: causal sets, Regge triangulations, and causal dynamical triangulations. We found that all of them have, in one way or another, been applied to black holes; the most important examples were briefly discussed. With slight hyperbole, one could say that if a discrete spacetime model can describe a black hole, it is worth attention. In the last section, we outlined a way towards inhabiting the discrete spacetime with quantum fields. We have limited our discussion to triangulations because they are typically easier to work with than causal sets. It is suggested that the quantum field on a fixed Lorentzian lattice could be described within the framework of discrete canonical evolution, especially if its action is quadratic. The application to black holes is not only possible but also desirable, since it can directly address a number of questions concerning black hole phenomenology, either in connection to the minimal length-scale or simply as a way to facilitate a numerical simulation. We therefore encourage any interested researcher to discretize his or her own favorite black hole solution and provide it with some quantum matter: the tools are out there. \section*{Acknowledgments} This work was supported by Charles University Grant Agency [Project No. 906419]. \bibliographystyle{unsrt} \renewcommand{\bibname}{Bibliography}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} By accurately locating the important body joints from 2D images, human pose estimation plays an essential role in computer vision. It has wide applications in intelligent surveillance, video-based action recognition, and human-computer interaction. However, human pose estimation from a 2D image is a well-known challenging problem -- too many degrees of freedom are introduced by the large variability of the human pose, the different visual appearances of the human body and joints, different angles of camera view, and possible occlusions of body parts and joints. Most of the previous works on human pose estimation are based on the two-layer part-based model \cite{Felzenszwalb2005,Tian2012,Wang2013,Eichner2013,Duan2012,Sun2011,Pishchulin2013,Andriluka2009,Pishchulin2012,Johnson2010,Sapp2010,Dantone2013,Singh2010,Wang2008,Sapp2013}. The first layer focuses on local (body) part appearance and the second layer imposes the contextual relations between local parts. One popular part-based approach is pictorial structures \cite{Felzenszwalb2005}, which capture the pairwise geometric relations between adjacent parts using a tree model. However, these pose estimation methods using part-based models are usually sensitive to noise, and the graphical model lacks the expressiveness to model complex human poses \cite{Dantone2013}. Furthermore, most of these methods search for each local part independently and the local appearance may not be sufficiently discriminative for identifying each local part reliably. \begin{figure} \begin{centering} \includegraphics[scale=0.2]{fig/overview_v1} \par\end{centering} \caption{An illustration of the proposed method based on DS-CNN. (a) Input image and generated image patches. (b) DS-CNN input on an image patch (containing a local part -- ankle). (c) DS-CNN input on full body and holistic view of the local part in the full body. (d) DS-CNN for learning. (e) DS-CNN output on joint detection. (f) DS-CNN output on joint localization.\label{fig:overview}} \vspace{-0.0in} \end{figure} Recently, deep neural network architectures, specifically deep convolutional neural networks (CNNs), have shown outstanding performance in many computer vision tasks. Due to CNNs' large learning capacity and robustness to variations, there is a natural rise of interest in directly learning high-level representations of human poses without using hand-crafted low-level features or even graphical models. Toshev et al. \cite{Toshev2014} present such a holistic-style pose estimation method named DeepPose using DNN-based joint regressors. This method also uses a two-layer architecture: the first layer resolves the ambiguity between body parts (e.g. left and right legs) in a holistic way and provides an initial pose estimation, and the second layer refines the joint locations in a local neighborhood around the initial estimation. As shown by the experiments in \cite{Toshev2014}, DeepPose achieves better performance on two widely used datasets, FLIC and LSP, than several recently developed human pose estimation methods. However, DeepPose does not consider local part appearance in its initial pose estimation. As a result, it has difficulty in estimating complex human poses, even using the CNN architecture. In this paper, we propose a dual-source CNN (DS-CNN) based method for human pose estimation, as illustrated in Fig. \ref{fig:overview}. This proposed method integrates both the local part appearance in image patches and the holistic view of each local part for more accurate human pose estimation.
Following the region-CNN (R-CNN) that was developed for object detection \cite{Girshick2014}, the proposed DS-CNN takes a set of category-independent object proposals detected from the input image for training. Compared to the sliding windows or the full image, which are used as the input in many previous human pose estimation methods, object proposals can capture the local body parts with better semantic meanings in multiple scales \cite{Girshick2014,Zhang2014}. In this paper, we extend the original single-source R-CNN to a dual-source model (DS-CNN) by including the full body and the holistic view of the local parts as a separate input, which provides a holistic view for human pose estimation. By taking both the local part object proposals and the full body as inputs in the training stage, the proposed DS-CNN performs a unified learning to achieve both joint detection, which determines whether an object proposal contains a body joint, and joint localization, which finds the exact location of the joint in the object proposal. In the testing stage, we use multi-scale sliding windows to provide local part information in order to avoid the performance degradation resulting from the uneven distribution of object proposals. Based on the DS-CNN outputs, we combine the joint detection results from all the sliding windows to construct a heatmap that reflects the joint location likelihood at each pixel, and compute a weighted average of the joint localization results at the high-likelihood regions of the heatmap to achieve the final estimation of each joint location. In the experiments, we test the proposed method on two widely used datasets and compare its performance to several recently reported human pose estimation methods, including DeepPose. The results show the effectiveness of the proposed method, which combines the local appearance and the holistic view. \section{Related Work} \textbf{Part-based models for human pose estimation.} In part-based models, the human body is represented by a collection of physiologically inspired parts assembled through a deformable configuration. Following the pictorial-structure model \cite{Fischler1973,Felzenszwalb2005}, a variety of part-based methods have been developed for human pose estimation \cite{Tian2012,Wang2013,Eichner2013,Duan2012,Sun2011,Pishchulin2013,Andriluka2009,Pishchulin2012,Johnson2010,Sapp2010,Dantone2013,Singh2010,Wang2008,Sapp2013}. While many early methods build appearance models for each local part independently, recent works \cite{Andriluka2009,Eichner2009,Eichner2013,Dantone2013,Johnson2010,Sapp2013} attempt to design strong body part detectors by capturing the contextual relations between body parts. Johnson and Everingham \cite{Johnson2010} partition the pose space into a set of pose clusters and then apply nonlinear classifiers to learn pose-specific part appearance. In \cite{Dantone2013}, independent regressors are trained for each joint and the results from these regressors are combined to estimate the likelihood of each joint at each pixel of the image. Based on the appearance models built for each part, these methods usually leverage tree-structured graphical models to further impose the pairwise geometric constraints between parts \cite{Tian2012,Wang2013,Andriluka2009,Pishchulin2013,Yang2011}. Due to their limited expressiveness \cite{Toshev2014}, the tree-structured graphical models often suffer from limb ambiguity, which affects the accuracy of human pose estimation.
There have been several works that focus on designing richer graphical models to overcome the limitations of tree-structured graphical models. For example, in \cite{Johnson2010}, mixtures of pictorial structure models are learned to capture the \textquoteleft{}multi-modal\textquoteright{} appearance of each body part. Yang and Ramanan \cite{Yang2011} introduce a flexible mixture-of-parts model to capture contextual co-occurrence relations between parts. In \cite{Tian2012}, a hierarchical structure is incorporated to model high-order spatial relations among parts. Loopy models \cite{Duan2012,Jiang2008,Tian2010,Ren2005} allow the inclusion of additional part constraints, but require approximate inference. In the later experiments, we include several of the above-mentioned part-based methods for performance comparison. \textbf{Deep convolutional neural network (CNN)} \textbf{in computer vision.} As a popular deep learning approach, the CNN \cite{LeCun1990} attempts to learn multiple levels of representation and abstraction and then uses them to model complex non-linear relations. It has been shown to be a useful tool in many computer vision applications. For example, it has demonstrated impressive performance for image classification \cite{Jarrett2009,LeCun2004,Lee2009,Krizhevsky2012}. More recently, CNN architectures have been successfully applied to object localization and detection \cite{Szegedy2013,Girshick2014,Sermanet2014}. In \cite{Sermanet2014}, a single shared CNN named `Overfeat' is used to simultaneously classify, locate and detect objects from an image by examining every sliding window. In this paper, we also integrate joint detection and localization using a single DS-CNN. But our problem is much more challenging than object detection -- we need to find precise locations of a set of joints for human pose estimation. Girshick et al. \cite{Girshick2014} apply high-capacity R-CNNs to bottom-up object proposals \cite{Uijlings2013} for object localization and segmentation. This achieves a 30\% performance improvement on PASCAL VOC 2012 over the previous state of the art. Zhang et al. \cite{Zhang2014} adapt the R-CNN \cite{Girshick2014} to part localization and verify that the use of object proposals instead of sliding windows in the CNN can help localize smaller parts. Based on this, the R-CNN is shown to be effective for fine-grained category detection. However, this method does not consider the complex relations between different parts \cite{Zhang2014} and is not applicable to human pose estimation. \textbf{CNN for human pose estimation.} In \cite{Toshev2014}, a cascade of CNN-based joint regressors is applied to reason about pose in a holistic manner; the developed method was named `DeepPose'. The DeepPose networks take the full image as the input and output the final human pose without using any explicit graphical model or part detectors. In \cite{Jain2014}, Jain et al. introduce a CNN-based architecture and a learning technique that learns low-level features and a higher-level weak spatial model. Following \cite{Jain2014}, Tompson et al. show that the inclusion of an MRF-based graphical model into the CNN-based part detector can substantially increase the human pose estimation performance. Different from DeepPose and Tompson et al. \cite{Tompson2014}, the proposed method takes both the object proposals and the full body as the input for training, instead of using sliding-window patches, to capture the local body parts with better semantic meanings in multiple scales.
\section{Problem Description and Notations} In this paper, we adopt the following notations. A human pose can be represented by a set of human joints $\mathbf{J}=\left\{ \mathbf{j}_{i}\right\} _{i=1}^{L}\in\mathbb{R}^{2L\times1}$, where $\mathbf{j}_{i}=\left(x_{i},y_{i}\right)^{T}$ denotes the 2D coordinate of the joint $i$ and $L$ is the number of human joints. In this paper, we are interested in estimating the 2D joint locations $\mathbf{J}$ from a single image $I$. Since our detection and regression are applied to a set of image patches, in the form of rectangular bounding boxes, detected in $I$, it is necessary to convert absolute joint coordinates in image $I$ to relative joint coordinates in an image patch. Furthermore, we introduce a normalization to make the locations invariant to the sizes of different image patches. Specifically, given an image patch $\mathbf{p}$, the location of $\mathbf{p}$ is represented by a 4-element vector $\mathbf{p}=\left(w\left(\mathbf{p}\right),h\left(\mathbf{p}\right),\mathbf{c}\left(\mathbf{p}\right)\right)^{T}$, where $w\left(\mathbf{p}\right)$ and $h\left(\mathbf{p}\right)$ are the width and height of $\mathbf{p}$, and $\mathbf{c}\left(\mathbf{p}\right)=\left(x_{c}\left(\mathbf{p}\right),y_{c}\left(\mathbf{p}\right)\right)^{T}$ is the center of $\mathbf{p}$. Then the normalized coordinate of joint $\mathbf{j}_{i}$ relative to $\mathbf{p}$ can be denoted as \begin{align} \mathbf{j}_{i}\left(\mathbf{p}\right) & =\left(x_{i}\left(\mathbf{p}\right),y_{i}\left(\mathbf{p}\right)\right)^{T}\nonumber \\ & =\left(\frac{x_{i}-x_{c}\left(\mathbf{p}\right)}{w\left(\mathbf{p}\right)},\frac{y_{i}-y_{c}\left(\mathbf{p}\right)}{h\left(\mathbf{p}\right)}\right)^{T}.\label{eq:cal_relative_coordinates} \end{align} Furthermore, the visibility of all the joints in $\mathbf{p}$ is denoted as $\mathbf{V}\left(\mathbf{p}\right)=\left\{ v_{i}\left(\mathbf{p}\right)\right\} _{i=1}^{L}\in\mathbb{R}^{L\times1}$, where \begin{equation} v_{i}\left(\mathbf{p}\right)=\begin{cases} 1, & \left|x_{i}\left(\mathbf{p}\right)\right|\leq0.5\text{ and }\left|y_{i}\left(\mathbf{p}\right)\right|\leq0.5\\ 0, & \text{otherwise}. \end{cases}\label{eq:joint_vis} \end{equation} If $v_{i}\left(\mathbf{p}\right)=1$, it indicates that the joint $i$ is visible in $\mathbf{p}$, i.e., it is located inside the patch $\mathbf{p}$. Conversely, if $v_{i}\left(\mathbf{p}\right)=0$, it indicates that the joint $i$ is invisible in $\mathbf{p}$, i.e., it is located outside of $\mathbf{p}$. \begin{figure} \begin{centering} \includegraphics[scale=0.12]{fig/part_proposal_no_border} \par\end{centering} \caption{Extended part and body patches containing (a) right ankle, (b) left ankle, (c) right wrist, and (d) left wrist from the LSP training dataset. For each local part, the part patches are shown on the left while the corresponding body patches are shown on the right. \label{fig:Extended-part-and-global-proposal}} \vspace{-0.1in} \end{figure} \section{Model Inputs\label{sec:Model-Inputs}} As described earlier, to combine the local part appearance and the holistic view of each part, the proposed DS-CNN takes two inputs for training and testing -- image patches and the full body. To make it clearer, we call the former input the \textit{part patches}, denoted as $\mathbf{p}_{p}$, and the latter the \textit{body patches}, denoted as $\mathbf{p}_{b}$. So the dual-source input is $\mathbf{p}_{p,b}=\left(\mathbf{p}_{p},\mathbf{p}_{b}\right)$.
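The notation just introduced is straightforward to implement. As a small illustration (ours; the array shapes and names are assumptions), the normalized coordinates of Eq. (\ref{eq:cal_relative_coordinates}) and the visibility flags of Eq. (\ref{eq:joint_vis}) can be computed as:

\begin{verbatim}
# Sketch of Eqs. (1)-(2): normalize joint coordinates to a
# patch and derive the visibility flags.
import numpy as np

def normalize_joints(joints, patch):
    # joints: (L, 2) absolute (x, y) image coordinates
    # patch: (w, h, xc, yc), width/height/center of the patch
    w, h, xc, yc = patch
    rel = (joints - np.array([xc, yc])) / np.array([w, h])
    vis = (np.abs(rel) <= 0.5).all(axis=1).astype(int)
    return rel, vis
\end{verbatim}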
Randomly selected samples of these two kinds of inputs are shown in Fig. \ref{fig:Extended-part-and-global-proposal}, where for each local part, the part patches are shown on the left while the corresponding body patches are shown on the right. From these samples, we can see that it is difficult to distinguish the left and right wrists, or some wrists and legs, based only on the local appearance in the part patches. As we will see later, we use object proposals detected from an image for training, and object proposals usually have different sizes and different aspect ratios. The CNN requires the input to be of a fixed dimension. In \cite{Girshick2014}, all the object proposals are non-uniformly scaled to a fixed-size square, which may alter the original aspect ratios. This may complicate the CNN training by artificially introducing unrealistic patterns into training samples. In particular, in our model we are only interested in the body joint that is closest to the center of a part patch (this will be elaborated in detail later). If the part patch is non-uniformly scaled, the joint of interest may change after the aspect ratio is altered. Thus, in this paper we keep the aspect ratio of image patches unchanged when unifying their sizes. Specifically, we extend the short side of the image patch to include additional rows or columns to make it a square. This extension is conducted in such a way that the center of each image patch remains unchanged. After the extension, we can perform uniform scaling to make each patch a fixed-size square. This extension may not be preferred in object detection \cite{Girshick2014} since it includes undesired background information. However, in our problem this extension just includes more contextual information around the joint of interest. This will not introduce much negative effect on the part detection. The only minor effect is a subtle reduction of the resolution of each patch (after the uniform scaling). \textbf{Part Patches }In the training stage, we construct part patches in two steps. 1) Run an algorithm to construct a set of category-independent object proposals. Any existing object proposal algorithm can be used here; in our experiments, we use the algorithm developed in \cite{Zitnick2014}. 2) Select a subset of the constructed proposals as the part patches. We consider two factors for Step 2). First, we only select object proposals with a size in a certain range as part patches. If the size of an object proposal is too large, it may cover multiple body parts and its appearance lacks sufficient resolution (after the above-mentioned uniform scaling) for joint detection and localization. On the contrary, if the size of an object proposal is too small, its appearance may not provide sufficient features. To address this issue, we only select the object proposals $\mathbf{p}_{p}$ with an area in a specified range as part patches, i.e., \begin{equation} \mu_{1}d^{2}\left(\mathbf{J}\right)\leq w\left(\mathbf{p}_{p}\right)\cdot h\left(\mathbf{p}_{p}\right)\leq\mu_{2}d^{2}\left(\mathbf{J}\right)\label{eq:part_proposal_upper_bound} \end{equation} where $d\left(\mathbf{J}\right)$ is the distance between two opposing joints on the human torso \cite{Toshev2014}, and $\mu_{1}$ and $\mu_{2}$ are two coefficients ($\mu_{1}<\mu_{2}$) that define the lower bound and the upper bound for selecting an object proposal as a part patch. Second, from the training perspective, we want all the body joints to be covered by a sufficient number of part patches.
In the ideal case, we expect the selected part patches to cover all the joints in a balanced way -- all the joints are covered by similar numbers of part patches. We empirically examine this issue, and the results are shown in Fig. \ref{fig:Joint-histogram} -- on both the FLIC and LSP datasets, this simple part-patch selection algorithm provides quite balanced coverage of all the joints. In this figure, the x-axis indicates the label of different joints -- only upper-body joints are shown for the FLIC dataset while all 14 body joints are shown for the LSP dataset. The y-axis indicates the average number of part patches that cover the specified joint in each image. Here we count a part patch as covering a joint if this joint is visible in (i.e., located inside) this patch and this joint is the closest joint to the center of this patch. At each joint, we show three part-patch coverage numbers in three different colors. From left to right, they correspond to three different $\mu_{2}$ values of 1.0, 1.5 and 2.0, respectively. In this empirical study, we always set $\mu_{1}=0.1$. In the testing stage, part patches are selected from multi-scale sliding windows (this will be justified in Section \ref{sec:Experiments}). \begin{figure} \begin{centering} \includegraphics[scale=0.19]{fig/train_joint_hist_edgebox} \par\end{centering} \caption{The average number of part patches that cover each joint in (a) FLIC and (b) LSP datasets. Three colors indicate the results obtained by selecting different $\mu_{2}$ values of 1.0, 1.5 and 2.0, respectively. \label{fig:Joint-histogram}} \vspace{-0.1in} \end{figure} \begin{figure*} \begin{centering} \includegraphics[scale=0.56]{fig/network_architecture_patch} \par\end{centering} \caption{The structure of DS-CNN. \label{fig:network_architecture}} \vspace{-0.1in} \end{figure*} \textbf{Body Patches} Similarly, in the training stage we construct body patches by selecting a subset of object proposals from the same pool of object proposals detected from the image. The only requirement is that the selected body patch should cover the whole body or all the joints, i.e., \begin{equation} \sum_{i=1}^{L}v_{i}\left(\mathbf{p}_{b}\right)=L.\label{eq:body_proposal_condition} \end{equation} In the testing stage, the body patch can be generated by using a human detector. For the experiments in this paper, each testing image only contains one person and we simply take the whole testing image as the body patch. For DS-CNN, each training sample is made up of a part patch $\mathbf{p}_{p}$, a body patch $\mathbf{p}_{b}$, and the binary mask that specifies the location of $\mathbf{p}_{p}$ in $\mathbf{p}_{b}$, as shown in Fig. \ref{fig:Extended-part-and-global-proposal}, where both $\mathbf{p}_{p}$ and $\mathbf{p}_{b}$ are extended and normalized to a fixed-size square image. For the part patch $\mathbf{p}_{p}$, we directly take the RGB values at all the pixels as the first source of input to DS-CNN. For the body patch $\mathbf{p}_{b}$, we take the binary mask as an additional alpha channel and concatenate the RGB values of $\mathbf{p}_{b}$ and the alpha values as the second source of input to DS-CNN. Given that we extend and normalize all the patches to an $N\times N$ square, the first source of input is of dimension $3N^{2}$ and the second source of input is of dimension $4N^{2}$. In the training stage, based on the constructed part patches and body patches, we randomly select one from each as a training sample.
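A minimal sketch of this input construction is given below (our illustration: the helper names, the use of OpenCV, and $N=227$ are assumptions; image-boundary handling and the cropping of part patches to the body patch described next are omitted for brevity):

\begin{verbatim}
# Sketch: build the dual-source input (3-channel part patch,
# 4-channel body patch with a binary-mask alpha channel).
import numpy as np
import cv2

def to_square(box):
    # box = (x0, y0, w, h); extend the short side symmetrically
    # so the patch becomes a square with an unchanged center
    x0, y0, w, h = box
    s = max(w, h)
    return (x0 - (s - w) // 2, y0 - (s - h) // 2, s, s)

def make_dual_input(img, part_box, body_box, N=227):
    px, py, sp, _ = to_square(part_box)
    bx, by, sb, _ = to_square(body_box)
    part = cv2.resize(img[py:py+sp, px:px+sp], (N, N))
    body = cv2.resize(img[by:by+sb, bx:bx+sb], (N, N))
    mask = np.zeros((N, N, 1), dtype=img.dtype)  # alpha channel
    r = N / float(sb)  # scale factor of the body patch
    mask[int((py-by)*r):int((py-by+sp)*r),
         int((px-bx)*r):int((px-bx+sp)*r)] = 1
    return part, np.concatenate([body, mask], axis=2)
\end{verbatim}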
For both training and testing, if the selected part patch is not fully contained in the selected body patch, we crop the part patch by removing the portion located outside the body patch before constructing the training or testing sample. \section{Multi-Task Learning\label{sec:Multi-Task-Learning}} We combine two tasks in a single DS-CNN -- joint detection, which determines whether a part patch contains a body joint, and joint localization, which finds the exact location of the joint in the part patch. Each task is associated with a loss function. \textbf{Joint detection} For joint detection, we label a patch-pair $\mathbf{p}_{p,b}$ with joint $i^{*}$, where \begin{equation} i^{*}\left(\mathbf{p}_{p}\right)=\begin{cases} \text{arg }\underset{1\leq i\leq L}{\text{min}}\parallel\mathbf{j}_{i}\left(\mathbf{p}_{p}\right)\parallel^{2} & \text{if }\sum_{k=1}^{L}v_{k}\left(\mathbf{p}_{p}\right)>0\\ 0 & \text{otherwise}, \end{cases}\label{eq:part_proposal_label} \end{equation} and this is taken as the ground truth for training. Let the DS-CNN output for joint detection be $\left(\ell_{0}\left(\mathbf{p}_{p,b}\right),\ell_{1}\left(\mathbf{p}_{p,b}\right),\ldots,\ell_{L}\left(\mathbf{p}_{p,b}\right)\right)^{T}$, where $\ell_{0}$ indicates the likelihood that no body joint is visible in $\mathbf{p}_{p}$ and $\ell_{i},i=1,\ldots,L$ represents the likelihood that joint $i$ is visible in $\mathbf{p}_{p}$ and is the closest joint to the center of $\mathbf{p}_{p}$. We use a softmax classifier where the loss function is \begin{equation} C_{d}\left(\mathbf{p}_{p,b}\right)=-\sum_{i=0}^{L}1\left(i^{*}\left(\mathbf{p}_{p}\right)=i\right)\text{log}\left(\ell_{i}\left(\mathbf{p}_{p,b}\right)\right),\label{eq:part_proposal_joint_location} \end{equation} where $1\left(\cdot\right)$ is the indicator function. \textbf{Joint localization} Joint localization is formulated as a regression problem. In DS-CNN training, the ground-truth joint location for a patch-pair $\mathbf{p}_{p,b}$ is $\mathbf{j}_{i^{*}\left(\mathbf{p}_{p}\right)}\left(\mathbf{p}_{p}\right)=\left(x_{i^{*}\left(\mathbf{p}_{p}\right)}\left(\mathbf{p}_{p}\right),y_{i^{*}\left(\mathbf{p}_{p}\right)}\left(\mathbf{p}_{p}\right)\right)^{T}$, where $i^{*}\left(\mathbf{p}_{p}\right)$ is defined in Eq. (\ref{eq:part_proposal_label}). Let the DS-CNN output on joint localization be $\left\{ \mathbf{z}_{i}\left(\mathbf{p}_{p,b}\right)\right\} _{i=1}^{L}\in\mathbb{R}^{2L\times1}$, where $\mathbf{z}_{i}\left(\mathbf{p}_{p,b}\right)=\left(\widehat{x}_{i},\widehat{y}_{i}\right)^{T}$ denotes the predicted location of the $i$-th joint in $\mathbf{p}_{p}$. We use the mean squared error as the loss function, \begin{equation} C_{r}\left(\mathbf{p}_{p,b}\right)=\begin{cases} \parallel\mathbf{z}_{i^{*}\left(\mathbf{p}_{p}\right)}\left(\mathbf{p}_{p,b}\right)-\mathbf{j}_{i^{*}\left(\mathbf{p}_{p}\right)}\left(\mathbf{p}_{p}\right)\parallel^{2} & \text{if }i^{*}>0\\ 0 & \text{otherwise}. \end{cases}\label{eq:regession_cost} \end{equation} Combining the joint detection and joint localization, the loss function for DS-CNN is \begin{align} C & =\sum_{\mathbf{p}_{p,b}}\left\{ \lambda_{d}C_{d}\left(\mathbf{p}_{p,b}\right)+C_{r}\left(\mathbf{p}_{p,b}\right)\right\} ,\label{eq:total_cost} \end{align} where the summation is over all the training samples (patch pairs) and $\lambda_{d}>0$ is a factor that balances the two loss functions.
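For clarity, the combined loss of Eqs. (\ref{eq:part_proposal_joint_location})--(\ref{eq:total_cost}) can be written out as follows (a NumPy sketch of the objective only; the actual training in this paper is done in Caffe, and the variable names are ours):

\begin{verbatim}
# Sketch of the DS-CNN training objective for one batch.
import numpy as np

def ds_cnn_loss(probs, preds, labels, targets, lam_d=4.0):
    # probs:   (n, L+1) softmax outputs, column 0 = "no joint"
    # preds:   (n, 2L) regressed joint locations
    # labels:  (n,) ground-truth class i* from Eq. (5)
    # targets: (n, 2) normalized location of joint i*
    n = probs.shape[0]
    # Eq. (6): softmax cross-entropy for joint detection
    c_d = -np.log(probs[np.arange(n), labels])
    # Eq. (7): squared error on the labeled joint; skipped
    # when no joint is visible (label 0)
    c_r = np.zeros(n)
    vis = labels > 0
    j = labels[vis] - 1
    d = preds[vis].reshape(-1, probs.shape[1] - 1, 2)
    d = d[np.arange(len(j)), j] - targets[vis]
    c_r[vis] = (d ** 2).sum(axis=1)
    return (lam_d * c_d + c_r).sum()   # Eq. (8)
\end{verbatim}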
\section{DS-CNN Structure} The structure of the proposed DS-CNN is based on the CNN described in \cite{Krizhevsky2012}, which is made up of five convolutional layers, three fully-connected layers, and a final 1000-way softmax, in sequence. The convolutional layers 1, 2 and 5 are followed by max pooling. In the proposed DS-CNN, we include two separate sequences of convolutional layers. As shown in Fig. \ref{fig:network_architecture}, one sequence of 5 convolutional layers takes the part-patch input as defined in Section \ref{sec:Model-Inputs} and extracts the features from the local appearance. The other sequence of 5 convolutional layers takes the body-patch input and extracts the holistic features of each part. The outputs from these two sequences of convolutional layers are concatenated together and then fed to a sequence of three fully-connected layers. We replace the final 1000-way softmax by an $\left(L+1\right)$-way softmax and a regressor for joint detection and joint localization, respectively. In DS-CNN, all the convolutional layers and the fully-connected layers are shared by both the joint detection and the joint localization. In Fig. \ref{fig:network_architecture}, the $i$-th convolutional layer (together with its following pooling layer, if any) is labeled ${\tt C_{i}}$ and the fully-connected layers are labeled ${\tt F_{i}}$, where $i$ is the index of the layer. The size of a convolutional layer is described as $\text{depth}@\text{width}\times\text{height}$, where depth is the number of convolutional filters, and width and height denote the spatial dimensions. \section{Human Pose Estimation \label{sec:Human-Pose-Estimation}} Given a testing image, we construct a set of patch-pairs using multi-scale sliding windows as discussed in Section \ref{sec:Model-Inputs}. We then run the trained DS-CNN on each patch-pair $\mathbf{p}_{p,b}$ to obtain both joint detection and localization results. In this section, we propose an algorithm for estimating the final human pose on the testing image by combining the joint detection and localization results from all the patch pairs. First, we construct a heatmap $H_{i}$ for each joint $i$ -- the heatmap is of the same size as the original image and $H_{i}(\mathbf{x})$, the heatmap value at a pixel $\mathbf{x}$, reflects the likelihood that joint $i$ is located at $\mathbf{x}$. Specifically, for each patch-pair $\mathbf{p}_{p,b}$, we uniformly allocate its joint-detection likelihood to all the pixels in $\mathbf{p}_{p}$, i.e., \begin{equation} h_{i}\left(\mathbf{x},\mathbf{p}_{p,b}\right)=\begin{cases} \dfrac{\ell_{i}\left(\mathbf{p}_{p,b}\right)}{w\left(\mathbf{p}_{p}\right)\cdot h\left(\mathbf{p}_{p}\right)}, & \text{if }\mathbf{x}\in\mathbf{p}_{p}\text{ and }\ell_{i}\left(\mathbf{p}_{p,b}\right)>\ell_{j}\left(\mathbf{p}_{p,b}\right),\forall j\neq i,\\ 0, & \text{otherwise}. \end{cases}\label{eq:vote_of_a_proposal} \end{equation} We then sum up the allocated joint-detection likelihood over all the patch-pairs in a testing image as \begin{equation} H_{i}\left(\mathbf{x}\right)=\sum_{\mathbf{p}_{p,b}}h_{i}\left(\mathbf{x},\mathbf{p}_{p,b}\right).\label{eq:accumulated_votes} \end{equation} Figure \ref{fig:network_architecture} shows an example of the heatmap for the left wrist. We can see that, by incorporating the body patches, the constructed heatmap resolves the limb ambiguity.
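A compact sketch of this heatmap construction (ours; the shapes are assumptions, and following Eq. (\ref{eq:vote_of_a_proposal}) only the winning joint of each window receives votes) reads:

\begin{verbatim}
# Sketch of Eqs. (9)-(10): accumulate per-window detection
# likelihoods into per-joint heatmaps.
import numpy as np

def build_heatmaps(windows, likelihoods, H, W, L):
    # windows:     list of (x0, y0, w, h) part patches
    # likelihoods: (num_windows, L+1) softmax outputs,
    #              column 0 = "no joint"
    heat = np.zeros((L, H, W))
    for (x0, y0, w, h), l in zip(windows, likelihoods):
        i = l[1:].argmax()   # joint with the largest likelihood
        # Eq. (9): spread the likelihood uniformly over the
        # window; Eq. (10): sum the votes over all windows
        heat[i, y0:y0+h, x0:x0+w] += l[i + 1] / (w * h)
    return heat
\end{verbatim}

The joint locations are then obtained from these heatmaps together with the per-window regression outputs, as described next.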
However, while the heatmap provides a rough estimation of the joint location, it is insufficient for accurately localizing the body joints. To find the accurate location of a joint, we take the DS-CNN joint-localization outputs from a selected subset of patch-pairs in which the joint is visible with high likelihood. We then take the weighted average of these selected outputs as the final location of the joint. More specifically, we only select patch pairs $\mathbf{p}_{p,b}$ that satisfy the following conditions when finding the location of joint $i$ in the testing image. \begin{enumerate} \item The likelihood that no body joint is visible in $\mathbf{p}_{p}$ is smaller than the likelihood that joint $i$ is visible in the part patch, i.e., \begin{equation} \ell_{0}\left(\mathbf{p}_{p,b}\right)<\ell_{i}\left(\mathbf{p}_{p,b}\right).\label{eq:rule1} \end{equation} \item The likelihood that joint $i$ is visible in $\mathbf{p}_{p}$ should be among the $k$ largest ones over all $L$ joints. In the special case $k=1$, this condition requires $\ell_{i}\left(\mathbf{p}_{p,b}\right)>\ell_{j}\left(\mathbf{p}_{p,b}\right),\forall j\neq i$. \item The maximum heatmap value (for joint $i$) in $\mathbf{p}_{p}$ is close to the maximum heatmap value over the body patch (the full testing image in our experiments). Specifically, let \begin{equation} H_{i}^{p}=\underset{\mathbf{x}\in\mathbf{p}_{p}}{\text{max }}H_{i}\left(\mathbf{x}\right),\label{eq:rule2} \end{equation} and \begin{equation} H_{i}^{b}=\underset{\mathbf{x}\in\mathbf{p}_{b}}{\text{max }}H_{i}\left(\mathbf{x}\right).\label{eq:rule3} \end{equation} We require \begin{equation} H_{i}^{p}>\lambda_{h}H_{i}^{b},\label{eq:rule4} \end{equation} where $\lambda_{h}$ is a scaling factor between $0$ and $1$. In our experiments, we set $\lambda_{h}=0.9$. \end{enumerate} Let $\mathbf{P}^{i}$ be the set of the selected patch-pairs that satisfy these three conditions. We estimate the location of joint $i$ by \begin{equation} \mathbf{j}'_{i}=\frac{\sum_{\mathbf{p}_{p,b}\in\mathbf{P}^{i}}\mathbf{z}'_{i}\left(\mathbf{p}_{p,b}\right)\ell_{i}\left(\mathbf{p}_{p,b}\right)}{\sum_{\mathbf{p}_{p,b}\in\mathbf{P}^{i}}\ell_{i}\left(\mathbf{p}_{p,b}\right)},\label{eq:rule5} \end{equation} where $\mathbf{z}'_{i}\left(\mathbf{p}_{p,b}\right)$ is the DS-CNN estimated joint-$i$ location in the coordinates of the body patch $\mathbf{p}_{b}$. As mentioned earlier, in our experiments each testing image only contains one person and we simply take the whole image as the body patch $\mathbf{p}_{b}$, so $\mathbf{z}'_{i}\left(\mathbf{p}_{p,b}\right)$ can be derived from the DS-CNN joint localization output $\mathbf{z}_{i}\left(\mathbf{p}_{p,b}\right)$ by applying the inverse transform of Eq. (\ref{eq:cal_relative_coordinates}). \section{Experiments \label{sec:Experiments}} In this paper, we evaluate the proposed method on the Leeds Sports Pose (LSP) dataset \cite{Johnson2010}, the extended LSP dataset \cite{Johnson2011}, and the Frames Labeled in Cinema (FLIC) dataset \cite{Sapp2013}. LSP and its extension contain 11,000 training and 1,000 testing images of sports people gathered from Flickr, with 14 full-body joints annotated. These images are challenging because of the different appearances and strong articulation. The images in the LSP dataset have been scaled so that the most prominent person is about 150 pixels high. The FLIC dataset contains 3,987 training and 1,016 testing images from Hollywood movies with 10 upper-body joints annotated.
The images in the FLIC dataset contain people with diverse poses and appearances and are biased towards front-facing poses. \begin{table} \begin{centering} {\scriptsize{} \begin{tabular}{|c|>{\centering}p{0.35cm}|>{\centering}p{0.35cm}|>{\centering}p{0.35cm}|>{\centering}p{0.35cm}|>{\centering}p{0.4cm}|>{\centering}p{0.4cm}|>{\centering}p{0.4cm}|} \hline {\scriptsize{}Method} & \multicolumn{2}{c|}{{\scriptsize{}Arm}} & \multicolumn{2}{c|}{{\scriptsize{}Leg}} & \multirow{2}{0.4cm}{\centering{}{\scriptsize{}Torso}} & \multirow{2}{0.4cm}{\centering{}{\scriptsize{}Head}} & \multirow{2}{0.4cm}{\centering{}{\scriptsize{}Avg.}}\tabularnewline \cline{2-5} & \centering{}{\tiny{}Upper} & \centering{}{\tiny{}Lower} & \centering{}{\tiny{}Upper} & \centering{}{\tiny{}Lower} & & & \tabularnewline \hline \hline {\scriptsize{}DS-CNN} & \centering{}\textbf{\scriptsize{}0.80} & \centering{}\textbf{\scriptsize{}0.63} & \centering{}\textbf{\scriptsize{}0.90} & \centering{}\textbf{\scriptsize{}0.88} & \centering{}\textbf{\scriptsize{}0.98} & \centering{}{\scriptsize{}0.85} & \centering{}\textbf{\scriptsize{}0.84}\tabularnewline \hline \hline {\scriptsize{}DeepPose \cite{Toshev2014}} & \centering{}{\scriptsize{}0.56} & \centering{}{\scriptsize{}0.38} & \centering{}{\scriptsize{}0.77} & \centering{}{\scriptsize{}0.71} & \centering{}{\scriptsize{}-} & \centering{}{\scriptsize{}-} & \centering{}{\scriptsize{}-}\tabularnewline \hline {\scriptsize{}Dantone et al. \cite{Dantone2013}} & \centering{}{\scriptsize{}0.45} & \centering{}{\scriptsize{}0.25} & \centering{}{\scriptsize{}0.68} & \centering{}{\scriptsize{}0.61} & \centering{}{\scriptsize{}0.82} & \centering{}{\scriptsize{}0.79} & \centering{}{\scriptsize{}0.60}\tabularnewline \hline {\scriptsize{}Tian et al. \cite{Tian2012}} & \centering{}{\scriptsize{}0.52} & \centering{}{\scriptsize{}0.33} & \centering{}{\scriptsize{}0.70} & \centering{}{\scriptsize{}0.60} & \centering{}{\scriptsize{}0.96} & \centering{}\textbf{\scriptsize{}0.88} & \centering{}{\scriptsize{}0.66}\tabularnewline \hline {\scriptsize{}Johnson et al. \cite{Johnson2011}} & \centering{}{\scriptsize{}0.54} & \centering{}{\scriptsize{}0.38} & \centering{}{\scriptsize{}0.75} & \centering{}{\scriptsize{}0.67} & \centering{}{\scriptsize{}0.88} & \centering{}{\scriptsize{}0.75} & \centering{}{\scriptsize{}0.66}\tabularnewline \hline {\scriptsize{}Wang et al. \cite{Wang2013}} & \centering{}{\scriptsize{}0.49} & \centering{}{\scriptsize{}0.32} & \centering{}{\scriptsize{}0.74} & \centering{}{\scriptsize{}0.70} & \centering{}{\scriptsize{}0.92} & \centering{}{\scriptsize{}0.86} & \centering{}{\scriptsize{}0.67}\tabularnewline \hline {\scriptsize{}Pishchulin et al. \cite{Pishchulin2013}} & \centering{}{\scriptsize{}0.54} & \centering{}{\scriptsize{}0.34} & \centering{}{\scriptsize{}0.76} & \centering{}{\scriptsize{}0.68} & \centering{}{\scriptsize{}0.88} & \centering{}{\scriptsize{}0.78} & \centering{}{\scriptsize{}0.66}\tabularnewline \hline {\scriptsize{}Pishchulin et al. \cite{Pishchulin2013a}} & \centering{}{\scriptsize{}0.62} & \centering{}{\scriptsize{}0.45} & \centering{}{\scriptsize{}0.79} & \centering{}{\scriptsize{}0.73} & \centering{}{\scriptsize{}0.89} & \centering{}{\scriptsize{}0.86} & \centering{}{\scriptsize{}0.72}\tabularnewline \hline \end{tabular} \par\end{centering}{\scriptsize \par} \centering{} \vspace{0.1in} \caption{PCP comparison on LSP. Note that DS-CNN, DeepPose \cite{Toshev2014} and Johnson et al.
\cite{Johnson2011} are trained with both the LSP dataset and its extension, while the other methods use only LSP.\label{tab:PCP-comparison}} \end{table} \begin{figure} \begin{centering} \includegraphics[width=0.23\textwidth]{fig/FLIC_Elbows_with_tompson}\includegraphics[width=0.23\textwidth]{fig/FLIC_Wrists_with_tompson}\caption{PDJ comparison on FLIC. \label{fig:PDJs_on_FLIC}} \par\end{centering} \begin{centering} \includegraphics[width=0.23\textwidth]{fig/LSP_Arms_with_tompson}\includegraphics[width=0.23\textwidth]{fig/LSP_Legs_with_tompson} \par\end{centering} \begin{centering} \caption{PDJ comparison on LSP. \label{fig:PDJs_on_LSP}} \par\end{centering} \centering{} \vspace{-0.2in} \end{figure} Most LSP images only contain a single person. While each image in FLIC may contain multiple people, similar to \cite{Toshev2014}, a standard preprocessing of body detection has been conducted to extract individual persons. As in previous works, we take the subimages of these detected individual persons as training and testing samples. This way, the training and testing data only contain \emph{a single person} and, as mentioned earlier, in the testing stage we simply take the whole image (for the FLIC dataset, this means a whole subimage for an individual person) as the body patch. It has been verified that, in the training stage, the use of object proposals can help train better CNNs for object detection and part localization \cite{Girshick2014,Zhang2014}. However, in the testing stage, object proposals detected on an image may be unevenly distributed. As a result, an image region covered by dense low-likelihood object proposals may undesirably show higher values in the resulting heatmap than a region covered by sparser high-likelihood object proposals. To avoid this issue, in our experiments we use multi-scale sliding windows (with sizes of $0.5d\left(\mathbf{J}\right)$ and $d\left(\mathbf{J}\right)$, stride 2) to provide part patches in the testing stage. To compare with previous works, we evaluate the performance of human pose estimation using two popular metrics: Percentage of Correct Parts (PCP) \cite{Eichner2012} and Percentage of Detected Joints (PDJ) \cite{Sapp2013,Toshev2014}. PCP measures the rate of correct limb detection -- a limb is detected if the distances between the detected limb joints and the true limb joints are no more than half of the limb length. Since PCP penalizes short limbs, PDJ is introduced to measure the detection rate of joints, where a joint is considered to be detected if the distance between the detected joint and the true joint is less than a fraction of the torso diameter $d\left(\mathbf{J}\right)$ described in Section \ref{sec:Model-Inputs}. For PDJ, we can obtain different detection rates by varying this fraction and thus generate a PDJ curve in terms of the normalized distance to the true joint \cite{Toshev2014}. The parameters that need to be set in the proposed method are \begin{enumerate} \item the lower-bound coefficient $\mu_{1}$ and the upper-bound coefficient $\mu_{2}$ in Eq. (\ref{eq:part_proposal_upper_bound}); \item the balance factor $\lambda_{d}$ in the loss function in Eq. (\ref{eq:total_cost}); \item $k$ and $\lambda_{h}$, which are used for selecting patch-pairs for joint localization in Section \ref{sec:Human-Pose-Estimation}. \end{enumerate} In our experiments, we set $\mu_{1}=0.1$, $\mu_{2}=1.0$, $\lambda_{d}=4$, $k=3$, and $\lambda_{h}=0.9$. In this paper, we use the open-source CNN library Caffe \cite{Jia2014} for implementing DS-CNN.
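For concreteness, the PDJ metric described above can be computed as follows (a minimal sketch; the array shapes are our assumptions):

\begin{verbatim}
# Sketch: Percentage of Detected Joints (PDJ).  A joint is
# detected when the prediction lies within a fraction t of
# the torso diameter d(J) from the ground truth.
import numpy as np

def pdj(pred, gt, torso_diam, t):
    # pred, gt: (num_images, L, 2); torso_diam: (num_images,)
    dist = np.linalg.norm(pred - gt, axis=2)
    detected = dist <= t * torso_diam[:, None]
    return detected.mean(axis=0)   # per-joint detection rate
\end{verbatim}

Sweeping $t$ over a range of values yields the PDJ curves reported below.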
We fine-tune a CNN pretrained on ImageNet \cite{Krizhevsky2012} to train the proposed DS-CNN. Following \cite{Girshick2014}, the learning rate is initialized to a tenth of the initial ImageNet learning rate and is decreased by a factor of ten after a fixed number of iterations. \begin{figure} \begin{centering} \includegraphics[scale=0.12]{fig/feat_visulization_4} \par\end{centering} \caption{Visualization of the features extracted by layer ${\tt F}_{7}$ in DS-CNN. \label{fig:Visualization-of-feat} } \vspace{-0.0in} \end{figure} We first evaluate our method on the LSP dataset. The PCP of the proposed method, DeepPose and six other comparison methods for the head, torso, and four limbs (upper/lower arms and upper/lower legs) is shown in Table \ref{tab:PCP-comparison}. Except for `head', our method outperforms all the comparison methods, including DeepPose, on all body parts. The improvement on average PCP is over 15\% against the best results obtained by the comparison methods. \begin{table*} \begin{centering} \begin{tabular}{|c|>{\centering}p{0.7cm}|c|c|c|c|c|c|c|c||c|c|c|c|c|c|c|} \hline {\footnotesize{}LSP} & {\scriptsize{}ankle} & {\scriptsize{}knee} & {\scriptsize{}hip} & {\scriptsize{}wrist} & {\scriptsize{}elbow} & {\scriptsize{}shoulder} & {\scriptsize{}neck} & {\scriptsize{}head} & {\scriptsize{}mAP} & {\scriptsize{}FLIC} & {\scriptsize{}hip} & {\scriptsize{}wrist} & {\scriptsize{}elbow} & {\scriptsize{}shoulder} & {\scriptsize{}head} & {\scriptsize{}mAP}\tabularnewline \hline \hline {\footnotesize{}$\mathbf{p}_{p}$} & {\footnotesize{}35.7} & {\footnotesize{}25.5} & {\footnotesize{}27.3} & {\footnotesize{}20.7} & {\footnotesize{}17.1} & {\footnotesize{}35.0} & {\footnotesize{}47.9} & {\footnotesize{}70.3} & {\footnotesize{}31.5} & {\footnotesize{}$\mathbf{p}_{p}$} & {\footnotesize{}61.2} & {\footnotesize{}56.0} & {\footnotesize{}71.2} & {\footnotesize{}88.8} & {\footnotesize{}93.8} & {\footnotesize{}72.0}\tabularnewline \hline {\footnotesize{}$\mathbf{p}_{b}$} & {\footnotesize{}39.7} & {\footnotesize{}39.6} & {\footnotesize{}37.5} & {\footnotesize{}21.3} & {\footnotesize{}29.3} & {\footnotesize{}40.7} & {\footnotesize{}44.4} & {\footnotesize{}70.4} & {\footnotesize{}37.9} & {\footnotesize{}$\mathbf{p}_{b}$} & {\footnotesize{}72.8} & {\footnotesize{}59.3} & {\footnotesize{}77.7} & {\footnotesize{}91.0} & {\footnotesize{}94.0} & {\footnotesize{}77.2}\tabularnewline \hline {\footnotesize{}$\mathbf{p}_{p,b}$} & \textbf{\footnotesize{}44.6} & \textbf{\footnotesize{}41.9} & \textbf{\footnotesize{}41.8} & \textbf{\footnotesize{}30.4} & \textbf{\footnotesize{}34.2} & \textbf{\footnotesize{}48.7} & \textbf{\footnotesize{}58.9} & \textbf{\footnotesize{}79.6} & \textbf{\footnotesize{}44.4} & {\footnotesize{}$\mathbf{p}_{p,b}$} & \textbf{\footnotesize{}74.3} & \textbf{\footnotesize{}68.1} & \textbf{\footnotesize{}82.0} & \textbf{\footnotesize{}93.5} & \textbf{\footnotesize{}96.4} & \textbf{\footnotesize{}81.4}\tabularnewline \hline \end{tabular} \par\end{centering} \vspace{0.1in} \caption{Average precision (\%) of joint detection on the LSP and FLIC testing datasets when the CNN takes different types of patches as input. \label{tab:Detection-average-precision}} \end{table*} \begin{figure*} \begin{centering} \includegraphics[scale=1.2]{fig/pose_estimation_on_real_images} \par\end{centering} \caption{Human pose estimation on sample images from the FLIC and LSP testing datasets.
\label{fig:Human-pose-estimation-examples}} \vspace{-0.1in} \end{figure*} Figure \ref{fig:PDJs_on_FLIC} shows the PDJ curves of the proposed method and seven comparison methods at the elbows and wrists on the FLIC dataset \cite{Toshev2014,Sapp2013,Jain2014,Eichner2013,Sapp2010,Tompson2014,Yang2011}. We can see that the proposed method outperforms all the comparison methods except for Tompson et al. Tompson et al.'s PDJ performance is higher than the proposed method's when the normalized distance to the true joint (for brevity, the normalized distance) is less than a threshold $t$, but a little lower than the proposed method's when the normalized distance is larger than $t$. As shown in Fig. \ref{fig:PDJs_on_FLIC}, the value of $t$ is 0.15 and 0.18 for elbows and wrists, respectively. As a further note, Tompson et al. \cite{Tompson2014} combine an MRF-based graphical model with CNN-based part detection, which shows that the inclusion of a graphical model can substantially improve the PDJ performance. In this paper, we focus on developing a new CNN-based method to detect local parts without using any high-level graphical models. We believe the PDJ performance of the proposed method can be further improved if we combine it with a graphical model as in Tompson et al. The performance comparison on the LSP dataset using the PDJ metric is shown in Fig. \ref{fig:PDJs_on_LSP}. Similar to the PDJ comparison on FLIC, the PDJ of the proposed method is better than all the comparison methods except for Tompson et al. When compared with Tompson et al., the proposed method performs substantially better when the normalized distance is large and performs worse when the normalized distance is small. One observation is that the PDJ gain of the proposed method over Tompson et al. at large normalized distance on LSP is more significant than the same gain on FLIC. We also conduct an experiment to verify the effectiveness of using dual sources of inputs: $\mathbf{p}_{p}$ and $\mathbf{p}_{b}$. In this experiment, we compute the average precision (AP) of joint detection when taking either 1) only part patches $\mathbf{p}_{p}$, 2) only body patches $\mathbf{p}_{b}$, or 3) the proposed patch pairs $\mathbf{p}_{p,b}$ as the input to the CNN. The results are shown in Table \ref{tab:Detection-average-precision}. On both the LSP and FLIC testing datasets, the use of the dual-source patch-pairs achieves better AP at all joints, and the best mAP, the average AP over all the joints. Note that the body patch $\mathbf{p}_{b}$ in this paper actually includes part-patch information, in the form of a binary mask as discussed in Section \ref{sec:Model-Inputs}. That is why the use of only $\mathbf{p}_{b}$ can lead to significantly better AP than the use of only $\mathbf{p}_{p}$ on both the LSP and FLIC testing datasets. However, the binary mask is usually of very low resolution because we normalize the body patch to a fixed dimension. As a result, we still need to combine $\mathbf{p}_{p}$ and $\mathbf{p}_{b}$ and construct a dual-source CNN for pose estimation. Following \cite{Girshick2014,Ouyang2014}, we visualize the patterns extracted by DS-CNN. We compute the activations of each hidden node in layer ${\tt F}_{7}$ on a set of patch-pairs, and Figure \ref{fig:Visualization-of-feat} shows several patch pairs with the largest activations in the first node of ${\tt F}_{7}$. We can see that this node fires for two pose patterns -- the bent right elbow and the right hip.
For each pattern, the corresponding full-body poses also show high similarity because of the inclusion of both part and body patches in DS-CNN. Finally, sample human pose estimation results on the FLIC and LSP testing datasets are shown in Fig. \ref{fig:Human-pose-estimation-examples}. In general, upper-body poses in FLIC are usually front-facing, while LSP contains many complex full-body poses. As a result, human pose estimation on LSP is less accurate than that on FLIC. By including holistic views in addition to part patches, the proposed method can estimate the human pose even if some joints are occluded, as shown in Fig. \ref{fig:Human-pose-estimation-examples}. \section{Conclusion} In this paper, we developed a new human pose estimation method based on a dual-source convolutional neural network (DS-CNN), which takes two kinds of patches -- part patches and body patches -- as inputs to combine both local and contextual information for more reliable pose estimation. In addition, the output of DS-CNN is designed for both joint detection and joint localization, which are combined for estimating the human pose. By testing on the FLIC and LSP datasets, we found that the proposed method produces superior performance over several existing methods. When compared with Tompson et al. \cite{Tompson2014}, the proposed method performs better when the normalized distance is large and worse when the normalized distance is small. The proposed method is implemented using the open-source CNN library Caffe and therefore has good extensibility. \\ \\ \\ \textbf{Acknowledgement}: This work was supported in part by AFOSR FA9550-11-1-0327 and NSF IIS-1017199. \begin{small} \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Person re-identification aims at spotting the same person across non-overlapping camera views, which can be applied to crime suspect recognition, target customer identification and other scenarios. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{procedure.jpg} \end{center} \caption{An illustration of the proposed method's procedure. We first initialize the CNN with the labeled samples (1). The initialized CNN is then used to embed the unlabeled samples into the feature space, and we assign each unlabeled sample the label of its closest labeled sample (2). Then we use the adaptive relative distance sampling strategy to select reliable ones and add them to the training set (3). We retrain the CNN using the new training set (1). We iterate the whole procedure until all unlabeled samples have joined the training set.} \label{fig:long} \label{fig:onecol} \end{figure} As a key technology in intelligent video surveillance, person re-identification has received great attention from researchers, and many excellent results have emerged. Especially since the introduction of deep learning methods, feature extraction and metric learning have been integrated together, so that more discriminative feature representations and distance metrics can be learned, far exceeding the traditional two-stage methods of hand-crafted features followed by separate distance metric learning. \begin{figure*}[t] \begin{center} \includegraphics[width=0.9\linewidth]{network.JPG} \end{center} \caption{The framework of the Part Attention Model. We first feed T frames of a tracklet into the backbone and get T feature maps. The upper global branch first exerts global average pooling (GAP) and then dimension reduction. After temporal fusion, we get the global feature. Four parts cut vertically from the feature map are fed into four local branches, after which we get four local features.} \label{fig:long} \label{fig:onecol} \end{figure*} Although person re-identification has made great progress, there are still some problems, mainly in the following two aspects. First, existing deep learning methods are dominated by fully supervised networks that rely on a large amount of annotated data. In practical applications, the real situation of the scene is complex and changes with time; obtaining a large amount of annotated data suitable for the application scene is very expensive and impractical, which limits the application of existing supervised algorithms. Second, existing algorithms have achieved good results on a limited set of general datasets, but when applied to actual scenes they are subject to a series of complicating factors such as occlusion, illumination changes, pedestrian pose changes, viewing angle changes, and even camera model changes, whose impact greatly reduces the recognition accuracy. Therefore, the existing research is still far from the level of actual application. Relying on a large amount of manually labeled data is expensive and impractical in practical applications, but a large number of unlabeled samples can be obtained from person detection and tracking in surveillance video. Few-example learning methods only need a small amount of labeled data; by correctly estimating labels for the unlabeled samples and using them to train the network, they can overcome the expense and long time consumption of manual labeling in practical applications.
Therefore, the few-example person re-identification method has great research and practical value. As illustrated in Figure 1, we iteratively estimate labels for unlabeled samples, incorporate them into the training set, and train a more robust network. Our contributions can be summarized as follows: \begin{itemize} \item A multi-branch network PAM that jointly learns local and global features is proposed. PAM has high accuracy, few parameters, and fast convergence, which makes it suitable for few-example person re-identification. \item We propose the static relative distance sampling (SRD) strategy based on the relative distance between classes, which surpasses GPS on small-scale datasets. To address the problem that SRD cannot use all unlabeled samples, we propose the adaptive relative distance sampling (ARD) strategy. \item In one-example experiments, PAM+GPS reaches 86.9\% rank-1 accuracy on DukeMTMC-VID and 47.26\% mAP on MARS. PAM+ARD reaches 89.78\%, 56.13\%, and 89.17\% rank-1 accuracy on PRID2011, iLIDS-VID, and DukeMTMC-VID respectively, and 45.36\% mAP on MARS, which exceeds the state of the art by a large margin. \end{itemize} \section{Related work} In recent years, approaches based on deep neural networks have dominated the research on person re-identification. These approaches combine feature representation learning and distance metric learning in an end-to-end architecture. \textbf{Supervised Person Re-ID.} Dong Yi et al. \cite{yi_deep_2014} introduce the siamese network to person re-identification and use cosine similarity to measure the similarity loss. Although the siamese network has many excellent properties, training can easily fail because the cross-entropy loss is very sensitive to small changes in the feature vector. In order to solve this problem, S. Ding et al. \cite{ding_deep_2015} use triplets composed of positive and negative sample pairs as the input of the siamese network; by learning from pairs of positive and negative samples, the feature embeddings become more discriminative. At the same time, the siamese network and a common classification network are combined in their work, using both the softmax loss and an improved triplet loss. Zhao et al. \cite{zhao_person_2017} introduce saliency information into person re-identification, learning the saliency of pedestrians and matching the salient regions of different pedestrians. W. Li et al. [8] propose a multi-branch network, where the upstream single branch learns the global feature of pedestrians and the downstream branches learn multiple local features; meanwhile, the interaction between hard and soft attention, over both channel and spatial dimensions, is exploited. The network proposed by S. Li et al. \cite{li_harmonious_2018} can learn multiple spatial partial attention models, and adopts a diversity regularization term to ensure that the multiple partial attention models focus on different areas of the body. The temporal attention mechanism employed enables the network to learn the features of the face, torso, and other parts of the body from the best-conditioned frames in the sequence. \textbf{Few-example person Re-ID.} In general, there are two main types of few-example methods. One is to establish a good model with only a small amount of labeled data. These methods may use siamese networks, matching networks, meta learning, and transfer learning. The other is to train a good network by estimating labels for the unlabeled data and then augmenting the training set with them. In the first type, Koch et al.
\cite{koch_siamese_nodate} propose a framework for solving the problem of one-shot classification. They first build a fully convolutional siamese network based on a verification loss, and then use this network to calculate the similarity between the image to be identified and the labeled samples. The image is then recognized as a sample of the category to which the most similar labeled sample belongs. Vinyals et al. \cite{vinyals_matching_2016} propose the matching network. During the training process, some samples are selected to form a support set and the remaining samples are used as training images. They construct different encoders for the support set and the training images. The classifier's output is a weighted sum of the predicted values between the support set and the training images. During the test process, one-shot samples are used as the support set to predict the category of new images. Rahimpour et al. \cite{rahimpour_attention-based_2018} use meta-learning methods to learn multiple similar tasks, and build two encoders for the gallery and probe, respectively. Based on these encoders, they obtain each gallery image's embedding according to the characteristics of the remaining gallery images, and each probe image's embedding according to the characteristics of the gallery images. In this way they obtain a more discriminative feature representation. In the second type, Ye et al. \cite{ye_dynamic_2017} establish a graph for each camera. They view the labeled samples as the nodes of the graph, and the distances between video sequence features as the edges. Unlabeled samples are mapped into the different graphs (namely, their labels are estimated) so as to minimize the objective function. The graphs are updated dynamically; labels are continually estimated and models trained until the algorithm converges. Liu et al. \cite{liu_stepwise_2017} first initialize the model with labeled samples. Then they calculate the k nearest neighbors of the probe within the gallery, remove the suspect samples, and add the remaining samples to the training set. The procedure is iterated until the algorithm converges. Wu et al. \cite{wu_exploit_2018} first initialize a CNN with labeled data, and then linearly incorporate pseudo-label samples into the training set according to their distance to labeled samples. The CNN is then retrained with the new training set. Finally, all unlabeled samples have estimated labels and are added into the training set, and a validation set is used to select the best model. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{false.JPG} \end{center} \caption{A false identification example of absolute distance sampling. The absolute distance of the first row, which belongs to different persons, is smaller than that of the second row, which belongs to the same person.} \label{fig:long} \label{fig:onecol} \end{figure} \section{Method} \subsection{Framework Overview} The framework of our approach is shown in Figure 1. We first initialize the CNN with the labeled samples. The CNN here is the PAM network (Section 3.2). Then we use the trained CNN to estimate the labels of the unlabeled samples based on the distance between labeled and unlabeled samples: the label of the labeled sample closest to the unlabeled sample in the feature space is used as its estimated label. Unlabeled samples together with their estimated labels form pseudo-label samples. Then we use the sampling strategy ARD (Section 3.3) to select the correctly estimated ones and incorporate them into the training set; a sketch of the label-estimation step is given below.
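To make the nearest-neighbor label estimation concrete, the following is a minimal sketch (our illustration, not the authors' released code), assuming that labeled and unlabeled features have already been extracted by the trained CNN and are stored as NumPy arrays:

\begin{verbatim}
import numpy as np

def estimate_labels(feats_l, labels_l, feats_u):
    """Assign each unlabeled sample the label of its nearest labeled
    sample in the feature space (Euclidean distance).
    feats_l: (N, d) labeled features; labels_l: (N,) label array;
    feats_u: (M, d) unlabeled features."""
    pseudo = []
    for f in feats_u:
        d = np.linalg.norm(feats_l - f, axis=1)   # distance to every labeled sample
        j = int(np.argmin(d))                     # index of the nearest labeled sample
        pseudo.append((labels_l[j], d[j]))        # estimated label and its distance
    return pseudo
\end{verbatim}

The stored distance can serve as a confidence score for the subsequent sampling step.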
After that, the enlarged training set is used to retrain the CNN. We iterate this process until all unlabeled samples have been estimated and added to the training set. Since the training set is enlarged continuously during the training iterations, we can progressively learn a more stable model. When the algorithm converges, we obtain the most robust model, trained with all samples. \begin{algorithm}[htb] \caption{PAM+ARD} \label{alg:Framwork} \begin{algorithmic}[1] \REQUIRE ~~\\ Labeled data $L\_data=\left\{\left(x_{1}, y_{1}\right),\ldots,\left(x_{N}, y_{N}\right)\right\}$\\ Unlabeled data $U\_data=\left\{\tilde{x}_{1}, \tilde{x}_{2}, \ldots, \tilde{x}_{M}\right\}$\\ CNN model $\Phi\left(\bullet ; \theta_{0}\right)$ \\ \ENSURE ~~\\ Best CNN model $\Phi\left(\bullet ; \theta^{*}\right)$ \\ \STATE Initialize CNN model $\Phi\left(\bullet ; \theta_{0}\right)$ with $L\_data$ \label{code:fram:Initialize} \STATE Estimate labels: $U\_data$ $\rightarrow\left\{\left(\tilde{x}_{1}, \tilde{y}_{1}\right),\ldots,\left(\tilde{x}_{M}, \tilde{y}_{M}\right)\right\}$ \label{code:fram:Estimate} \STATE $k=k_{0}$ \STATE \textbf{do} \STATE $~~k=k+0.1$ \STATE ~~Sample $D_{\text{intra}}<k \times D_{\text{inter}}$ from $U\_data$ $\rightarrow P\_data$ \STATE \textbf{while} $|P\_data|<0.15 \times |L\_data|$ \STATE $k_{s}=k$ \STATE \textbf{for} $k=k_{s} \rightarrow 1.0$ \textbf{do} \STATE ~~\textbf{while} $\left|P\_data_{k,t}\right|-\left|P\_data_{k,t-1}\right| \geq (1.2-k) \times \left(\left|P\_data_{k,1}\right|-\left|P\_data_{k,0}\right|\right)$ ~\textbf{do} \STATE ~~~~$Train\_data = \left\{L\_data, P\_data\right\}$ \STATE ~~~~Re-train the CNN model $\Phi\left(\cdot ; \theta_{0}\right)$ with $Train\_data$ \STATE ~~~~Estimate labels: $U\_data$ $\rightarrow\left\{\left(\tilde{x}_{1}, \tilde{y}_{1}\right),\ldots,\left(\tilde{x}_{M}, \tilde{y}_{M}\right)\right\}$ \STATE ~~~~Sample $D_{\text{intra}}<k \times D_{\text{inter}}$ from $U\_data$ $\rightarrow P\_data_{k,t}$ \STATE ~~\textbf{end while} \STATE \textbf{end for} \RETURN $\Phi\left(\bullet ; \theta^{*}\right)$ \end{algorithmic} \end{algorithm} \subsection{Part Attention Model} In this paper, the few-example approach of iteratively estimating labels is adopted. If the model is too complex and the number of parameters is too large, each iteration takes too long; if the model performs poorly, the label estimation accuracy is low and the training set fills with mislabeled samples, which leads to poor algorithm performance. Therefore, the network must be simple in structure and high in accuracy to meet these needs. Most of the existing fully supervised network models with good performance are complex, with a large number of parameters and slow convergence, and cannot be applied in this paper; on the other hand, the performance of networks with few parameters cannot meet the demand. To solve this problem, this paper proposes a multi-branch network for the joint learning of local and global features. The network structure is shown in Figure 2. T frames are randomly extracted from a tracklet and input into the network.
After the backbone, a feature map is obtained for each frame. The feature map is sent as a whole to the global branch, where a feature matrix is obtained through global pooling and dimension reduction; the global feature is then obtained by average pooling in the time domain. The feature map is also sliced vertically into p local feature maps; each local feature map is globally pooled and dimensionally reduced to obtain p feature matrices, which are then averaged in the time domain to obtain p local features. The backbone uses ResNet50 pre-trained on ImageNet \cite{krizhevsky_imagenet_2012}. The global branch is the original IDE network [18], and the local branch uses a 1:2:2:1 slice of the feature map output by the last pooling layer of ResNet50. The global feature and each local feature each train a classifier. Given a tracklet, we extract the global feature and the 4 local features of each frame, and concatenate the global feature and the four local features as the feature representation of the entire tracklet. \subsection{Adaptive Relative Distance Sampling Strategy} The task of the sampling strategy is to select, as far as possible, all the correctly estimated pseudo-label samples to join the training set. If the pseudo-label samples sampled after each iteration are correct but few in number, adding them to the training set and training the CNN again brings very limited performance improvement; the next round of label estimation will only be slightly better than the last, the number of correct labels that can be selected will grow very slowly, and although performance keeps increasing, the training takes too long. On the other hand, if the number of pseudo-label samples selected in each iteration is large but many of them are incorrect, adding them to the training set will decrease the network performance and defeat the purpose of increasing the number of training samples. Therefore, the sampling strategy should select a large number of pseudo-label samples with high accuracy. Wu et al. [15] adopted a linearly increasing sampling strategy. After each iteration, the nearest-neighbor-based label estimation strategy is used to estimate labels for the unlabeled samples; all unlabeled samples are then sorted from small to large by the L2 distance in the feature space to the nearest labeled sample, which represents the credibility of the label estimate, and the easiest $p\cdot t$ fraction of them is sampled (where $t$ is the number of iterations so far and $p$ is a tunable parameter). This achieves good performance on the MARS and DukeMTMC-VID datasets. However, there are some problems with this sampling method. For a limited number of unlabeled samples, label estimation is difficult. Each iteration adds a fixed number of samples compared to the last, so the initial iterations include only simple samples that are easy to estimate, wasting the network's estimation ability, while the last few iterations add too many difficult samples whose estimates are wrong, overdrawing the network's capability. In order to estimate and sample more robustly, the authors set the parameter p to 0.05 in their experimental setup, requiring a total of 20 iterations, so the training time is too long. Moreover, the selection of the p value is a very difficult problem.
A smaller p value can bring better performance, but the iteration time is too long; a larger p causes severe performance degradation due to the addition of too many false samples. The few-example setting means that there are only a small number of samples when the network is initialized, so the network tends to learn simple and direct distinctions between samples, such as the color of the clothes, and ignores other higher-level distinguishing information. If sampling depends only on the absolute distance between samples in the feature space, a pair of samples whose shallow appearance is similar but which actually belong to different classes (the first row in Figure 3) can have a smaller distance than a pair whose shallow appearance is less similar but which actually belong to the same class (the second row in Figure 3), and the former will be preferentially added to the training set. The result is catastrophic: any unlabeled sample with similar surface information (e.g., wearing a yellow T-shirt) will be estimated and added to the training set due to its smaller absolute distance, forming malignant positive feedback that damages the discriminating power of the network. This is also the root cause of the poor performance of the PAM+GPS algorithm in the one-example setting on the small-scale datasets PRID2011 and iLIDS-VID. \subsubsection{Static Relative Distance Sampling} We propose Static Relative Distance Sampling (SRD) based on the distance relationship between classes. Let $D_{intra}$ denote the distance in the feature space between an unlabeled sample and its nearest labeled sample, and let $D_{inter}$ denote the minimum distance between that labeled sample and the remaining labeled samples whose labels differ (i.e., that do not belong to the same class). When $D_{intra}<k\times D_{inter}$ is satisfied, the unlabeled sample and its estimated label are added to the training set. We iteratively train the network, estimate labels, and sample to extend the training set. When the difference between the current number of sampled pseudo-labels and the previous number is less than a threshold b (a manually set hyperparameter, e.g., 0.01 or 0.03), the algorithm is considered converged and stops iterating. The rationale is that the CNN can learn a good feature extractor and distance metric, so that after embedding samples into the feature space it can distinguish between similar and dissimilar samples. If the estimated label of an unlabeled sample is correct, the distance between the unlabeled sample and the nearest labeled sample is an intra-class distance, whereas the distances between that labeled sample and other labeled samples of different classes are inter-class distances, the smallest of which is the minimum inter-class distance. When the intra-class distance is less than the minimum inter-class distance, we have reason to believe that the estimated label is correct, so we add the sample to the training set. In order to improve the accuracy, we multiply the minimum inter-class distance by a constant k less than 1 to expand the training set more carefully. Obviously, the smaller the k, the higher the probability that an added sample has the correct label. SRD static sampling can achieve good performance, but there are two problems: first, although convergence can be reached after iterating a few steps, at convergence not all unlabeled samples have been used.
Second, as the number of iterations increases, the number of samples added in the last few iterations grows very little, yet these iterations take up a lot of training time. We hope that the algorithm can make use of all the unlabeled samples and add as many unlabeled samples as possible in each iteration, so as to avoid too many iterations and reduce meaningless learning. Based on the static sampling strategy SRD, we propose Adaptive Relative Distance Sampling (ARD) based on the distance between classes. The core idea is to find an appropriate initial k value through a probe mechanism, and then perform SRD; when the relative increase in the number of samples in the current iteration is less than a certain threshold, k is automatically increased and SRD is performed under the new k value, so that more samples are added in the next iteration. This continues until $k>1$, by which point all unlabeled samples have been used. The adaptive sampling strategy (ARD) specifically includes a k-probe mechanism and an adaptive k-value increasing mechanism, which are explained in detail below. \textbf{k-probe mechanism.} The goal of the k-probe mechanism is to find a suitable k value at which to start sampling. Since k is increased after the sampling under the current value converges, it is suitable to start from a small k. At the same time, if the initial k is too small, there may be no qualified pseudo-label samples, or the number of added pseudo-label samples may be too small, producing meaningless iterations that waste the predictive power of the network. Therefore, the following k-probe mechanism is proposed: the CNN estimates labels for the unlabeled samples after being initialized with the labeled samples; we then try to sample with k = 0.6, 0.7, 0.8, 0.9, 1.0 in sequence, and stop at the first k for which the number of sampled pseudo-label samples is large enough (greater than $0.15\times|L\_data|$ in Algorithm 1). \textbf{Adaptively increasing k value mechanism.} For each k, we record the difference $k_{margin0}$ between the numbers of samples of the first two iterations at that k value. If the difference between the current iteration's sample number and the previous sample number is less than $(1.2-k)\times k_{margin0}$, then $k=k+0.1$, and the network continues training with the increased k value. When k exceeds 1, the training is terminated and the algorithm converges. The threshold $(1.2-k)\times k_{margin0}$ adopts a dynamic setting: different thresholds are set for different k, and a small k sets a larger threshold. The function of the threshold at each k value is to determine whether the SRD sampling under the current k value has converged; when the increase in the number of samples is less than the threshold, the value of k is increased to start the next SRD, achieving the purpose of adaptive sampling. $k_{margin0}$ is determined by the difference between the numbers of samples of the first two iterations at the current k value, and differs for different k. This threshold-setting method is a key innovation of this paper and is the core of the Adaptive Sampling Strategy (ARD).
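Putting the pieces together, the following is a minimal sketch of the SRD criterion and the adaptive increase of k as we read the description above; it is an illustration rather than the authors' implementation, and \texttt{retrain} and \texttt{embed} are hypothetical placeholders for CNN training and feature extraction:

\begin{verbatim}
import numpy as np

def srd_select(feats_l, labels_l, feats_u, k):
    """SRD: keep an unlabeled sample when D_intra < k * D_inter.
    labels_l must be a NumPy array for boolean indexing below."""
    keep = []
    for i, f in enumerate(feats_u):
        d = np.linalg.norm(feats_l - f, axis=1)
        j = int(np.argmin(d))                   # nearest labeled sample
        d_intra = d[j]
        others = labels_l != labels_l[j]        # labeled samples of other classes
        d_inter = np.linalg.norm(feats_l[others] - feats_l[j], axis=1).min()
        if d_intra < k * d_inter:
            keep.append((i, labels_l[j]))
    return keep

def ard(retrain, embed, X_l, y_l, X_u, k=0.6, max_steps=50):
    """Adaptive loop: raise k by 0.1 whenever the growth of the selection
    falls below (1.2 - k) times the growth of the first two iterations
    at the current k (the quantity k_margin0 in the text)."""
    history, model = [], None
    for _ in range(max_steps):
        if k > 1.0:
            break
        model = retrain()                       # retrain on labeled + selected data
        fl, fu = embed(model, X_l), embed(model, X_u)
        history.append(len(srd_select(fl, y_l, fu, k)))
        if len(history) >= 3:
            margin0 = history[1] - history[0]   # k_margin0 at the current k
            if history[-1] - history[-2] < (1.2 - k) * margin0:
                k, history = k + 0.1, []        # adaptively increase k
    return model
\end{verbatim}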
\section{Experiments} \begin{table*}[] \begin{center} \begin{tabular}{l|ll|ll|ll|ll} \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{PRID2011} & \multicolumn{2}{c|}{iLIDS-VID} & \multicolumn{2}{c|}{DukeMTMC} & \multicolumn{2}{c}{MARS} \\ \cline{2-9} & \multicolumn{1}{c}{Rank-1} & \multicolumn{1}{c|}{mAP} & \multicolumn{1}{c}{Rank-1} & \multicolumn{1}{c|}{mAP} & \multicolumn{1}{c}{Rank-1} & \multicolumn{1}{c|}{mAP} & \multicolumn{1}{c}{Rank-1} & \multicolumn{1}{c}{mAP} \\ \hline EUG (CVPR18) & 59.6 & 65.3 & -- & -- & 72.79 & 63.23 & 62.67 & 42.45 \\ DGM (ICCV17) & 82.4 & -- & 37.1 & -- & 44.36 & 33.62 & 36.81 & 16.87 \\ Stepwise (ICCV17) & 84.27 & 87.64 & -- & -- & 56.26 & 46.76 & 41 & 19.65 \\ BUC\cite{Lin2019ABC} (AAAI19) & -- & -- & -- & -- & 69.2 & 61.9 & 61.1 & 38.0 \\ TAUDL\cite{ferrari_beyond_2018} (ECCV18) & 49.4 & -- & 26.7 & -- & -- & -- & 43.8 & 29.1 \\ PAM+SRD & 85.51 & 87.60 & 39.80 & 45.59 & 83.33 & 77.13 & 56.82 & 38.71 \\ PAM+ARD & 89.78 & 94 & 56.13 & 61.14 & 89.17 & 85.16 & 61.57 & 45.36 \\ \hline \end{tabular} \end{center} \caption{Comparison with the state-of-the-art methods on iLIDS-VID, PRID2011, DukeMTMC-VID, and MARS. All the methods are conducted under the one-example setting except BUC. Although DGM, Stepwise, and TAUDL claim that their methods are unsupervised, they strictly belong to one-example methods.} \end{table*} \subsection{Datasets and Settings} \textbf{The iLIDS-VID dataset.} The iLIDS-VID dataset \cite{fleet_person_2014} contains 300 different pedestrians shot by two non-overlapping cameras, with a total of 600 tracklets (each pedestrian has 2 tracklets). Each tracklet has a length of 23--192 frames and an average length of 73 frames. Because this dataset was taken in the multi-camera network of an airport arrival hall, the clothing similarity, illumination, and viewing angle vary greatly, making it more challenging. \textbf{The PRID2011 dataset.} The PRID2011 dataset \cite{heyden_person_2011} contains 934 pedestrians shot by two still cameras with different perspectives, including a total of 1134 tracklets. Camera 1 captured 385 tracklets from 385 pedestrians, camera 2 captured 749 tracklets from 749 pedestrians, and only the first 200 pedestrians appear in both cameras. Each tracklet has a length of 5--675 frames and an average length of 100 frames. \textbf{The DukeMTMC dataset.} The DukeMTMC-VID dataset \cite{wu_exploit_2018} is a large-scale video person re-ID dataset derived from the DukeMTMC dataset. It was shot by multiple cameras with non-overlapping fields of view; tracklets are constructed by extracting 12 frames per second from the continuous video and are manually labeled. There are 1404 pedestrians, each with tracklets from at least two different cameras, and 408 distractor pedestrians, each with one tracklet from a single camera, for a total of 4832 tracklets. In this dataset, 2196 tracklets of 702 pedestrians are used for training, and 2636 tracklets of the remaining 702 pedestrians plus the 408 distractor pedestrians are used for testing. \textbf{The MARS dataset.} The MARS dataset \cite{noauthor_mars:_nodate} is the largest dataset for video person re-identification and is extended from the Market1501 dataset. It was shot on a college campus by six near-synchronized cameras, containing tracklets of 1261 pedestrians along with 3248 distractor tracklets (erroneous detection or tracking sequences).
It is divided into a training set of 625 pedestrians and a test set of 636 pedestrians. Each pedestrian has an average of 13 tracklets and 816 frames, and each pedestrian has at least two tracklets taken by different cameras. Another significant feature of the MARS dataset compared with the above datasets is that it uses algorithmic annotation rather than manual annotation: pedestrian bounding boxes are detected and tracked using the Deformable Part Model (DPM) [19] and the multi-target tracking algorithm GMMCP tracker [20]. \textbf{Experiment Setting.} For one-example experiments, we use the same protocol as [21]. In each dataset, we randomly choose one tracklet in camera 1 for each identity as initialization. If there is no tracklet recorded by camera 1 for an identity, we randomly select one tracklet from the next camera to make sure each identity has one video tracklet for initialization. For few-example experiments, we use the MARS dataset: 20\%, 40\%, and 60\% of the samples are randomly selected from the training set as the initial labeled data, and the remaining samples of the training set are stripped of their labels and used as unlabeled samples. \textbf{Implementation Details.} We train 70 epochs on the iLIDS-VID and PRID2011 datasets, using stochastic gradient descent (SGD) with momentum; the momentum is set to 0.5, the initial learning rate is set to 0.1 and is reduced to 0.01 after 55 epochs. We train 50 epochs on the DukeMTMC-VID and MARS datasets, also using SGD with momentum; the momentum is set to 0.5, the initial learning rate is set to 0.1, and the learning rate is reduced to 0.01 after 40 epochs. The weight decay is set to 5e-4. We use random cropping, random flipping, and random erasing for data augmentation: random horizontal flips with a probability of 0.5, and random erasing with an area ratio in the range [0.02, 0.2] and an aspect ratio in [0.3, 3.3], filled with pixel values [0.0, 0.0, 0.0]. The parameters of conv1, layer1 and layer2 of ResNet50 are fixed, and the learning rate of the rest of ResNet50 is set to 0.1 times the global learning rate. The parameter of the loss function is set to 0.1, and the value of K is the number of categories in the training stage (that is, the number of identities of the training set). \subsection{Comparison with the State-of-the-Art Methods} From the experimental results on the four datasets, both the static sampling SRD and the adaptive sampling ARD have good performance. Especially on the small-scale datasets PRID2011 and iLIDS-VID, the one-example performance exceeds the state of the art and is better than PAM+GPS. Because the sampling is based on the relative distance between classes, more metric information is used, which overcomes the problems of the uniformly incremental GPS sampling discussed in Section 3.3, so a significant performance boost is obtained on PRID2011 and iLIDS-VID. The static sampling SRD algorithm is not as good as the GPS algorithm on the DukeMTMC and MARS datasets, mainly because GPS, with a smaller initial number $n_0$ and a smaller sampling-increase factor, can estimate more correct labels over more iterations. However, when k in the SRD algorithm is set too small, the algorithm falls into a local optimum on the large datasets, and only a small number of unlabeled samples are added.
An excessively large k causes too many erroneous samples to be added in the initial iterations, and the situation gradually deteriorates as the number of iterations increases, which limits the performance improvement. With the adaptive incremental sampling ARD, the performance on the DukeMTMC and MARS datasets is significantly improved, which proves the effectiveness of the adaptive incremental sampling strategy compared with the purely static sampling strategy, especially on large-scale datasets. Adaptive sampling ARD exceeds the previous algorithms on all four datasets: rank-1 is 89.78\% and 56.13\% on PRID2011 and iLIDS-VID, respectively, and mAP is 85.16\% and 45.36\% on DukeMTMC-VID and MARS, respectively. Although on the MARS dataset it is slightly inferior to the PAM+GPS algorithm, its performance on the small-scale datasets is better and the number of iterations is lower, so the overall performance is the best. \subsection{Few-example experiment} The results of the few-example settings on the MARS dataset using PAM and the adaptive sampling algorithm (ARD) are shown in Table 2. \begin{table}[] \begin{center} \begin{tabular}{l|l|l|l|l} \hline \multicolumn{1}{c|}{No.} & \multicolumn{1}{c|}{Method} & \multicolumn{1}{c|}{Type} & \multicolumn{2}{c}{MARS} \\ \cline{4-5} & & & R-1 & mAP \\ \hline 1 & AMOC+EpicFlow\cite{liu_video-based_2016} & Super. & 68.3 & 52.9 \\ 2 & QAN\cite{liu_quality_2017} & Super. & 73.7 & 51.7 \\ 3 & PAM+ARD & One-ex. & 61.57 & 45.36 \\ 4 & PAM+ARD & Semi.(20\%) & 68.38 & 52.61 \\ 5 & PAM+ARD & Semi.(40\%) & 74.29 & 60.31 \\ 6 & PAM+ARD & Semi.(60\%) & 77.98 & 65.74 \\ \hline \end{tabular} \end{center} \caption{Comparison between the few-example performance of our method and some supervised methods. The percentage in the bracket indicates the ratio of labeled samples.} \end{table} As can be seen from Table 2, the PAM+ARD algorithm achieves 52.61\% mAP when using 20\% of the labeled data, which is better than the fully supervised algorithm QAN (51.7\%) and slightly inferior to the fully supervised algorithm AMOC+EpicFlow (52.9\%). This shows that our method can match the performance of fully supervised algorithms while using only 20\% of the labeled data, which further proves the excellent performance of the algorithm. Although the few-example setting requires more manual labeling than the one-example setting, the performance can be greatly improved. \subsection{Ablation Studies} We performed a series of ablation experiments with the PAM+ARD algorithm on the DukeMTMC-VID dataset under the one-example setting to verify the contribution of each part of the algorithm. \begin{table}[] \begin{center} \begin{tabular}{l|lllll} \hline \multirow{2}{*}{Methods} & \multicolumn{5}{c}{DukeMTMC-VID} \\ \cline{2-6} & \multicolumn{1}{l|}{R-1} & \multicolumn{1}{l|}{R-5} & \multicolumn{1}{l|}{R-10} & \multicolumn{1}{l|}{R-20} & mAP \\ \hline IDE+ARD & \multicolumn{1}{l|}{x} & \multicolumn{1}{l|}{x} & \multicolumn{1}{l|}{x} & \multicolumn{1}{l|}{x} & x \\ PAM+EUG(p=0.05) & \multicolumn{1}{l|}{x} & \multicolumn{1}{l|}{x} & \multicolumn{1}{l|}{x} & \multicolumn{1}{l|}{x} & x \\ PAM+ARD & \multicolumn{1}{l|}{89.2} & \multicolumn{1}{l|}{96.7} & \multicolumn{1}{l|}{97.9} & \multicolumn{1}{l|}{98.3} & 85.2 \\ \hline \end{tabular} \end{center} \caption{The ablation studies of our method. In the first experiment, we replace the CNN with the common IDE network.
In the second experiment, we replace the sampling strategy with the linear growth sampling strategy proposed in \cite{wu_exploit_2018}.} \end{table} \textbf{Part attention model.} Table 3 shows the ablation experiment for the Part Attention Model (PAM). The algorithm uses the adaptive incremental sampling strategy, and the CNN is either IDE or PAM. The results show that PAM achieves a 2.7\% rank-1 improvement on the DukeMTMC-VID dataset, which demonstrates the effectiveness of the Part Attention Model. \textbf{Sampling strategy.} Table 3 also shows the ablation experiment for the adaptive incremental sampling strategy (ARD). The algorithm uses the IDE network, and the sampling strategy is either the linearly increasing sampling of EUG (p=0.05) or the adaptive incremental sampling (ARD). The results show that ARD achieves a 2.7\% rank-1 improvement on the DukeMTMC-VID dataset, which demonstrates the effectiveness of adaptively increasing the sampling. \textbf{Dynamic coefficient for the threshold.} In order to verify the effectiveness of multiplying $k_{margin0}$ of different k values by the dynamic coefficient $(1.2-k)$ to set the threshold, we designed a group of comparative experiments: one group multiplies $k_{margin0}$ by the dynamic coefficient $(1.2-k)$, and the other multiplies $k_{margin0}$ by the fixed coefficient 0.3. The experimental results are shown in Table 4. It can be seen that, compared with the fixed coefficient 0.3, multiplying $k_{margin0}$ by the dynamic coefficient $(1.2-k)$ significantly reduces the number of iterations (from 15 to 12) and improves rank-1 and mAP by 2.13\% and 2.85\%, respectively. It can also be seen from Table 4 that, compared with SRD, ARD reduces the number of iterations and significantly improves the performance, which proves the superiority of the ARD algorithm. \begin{table}[] \begin{center} \begin{tabular}{l|l|lll} \hline No. & Method & \multicolumn{3}{c}{DukeMTMC-VID} \\ \cline{3-5} & & \multicolumn{1}{l|}{Total Steps} & \multicolumn{1}{l|}{R-1} & mAP \\ \hline 1 & IDE+EUG & \multicolumn{1}{l|}{20} & \multicolumn{1}{l|}{72.79} & 63.23 \\ 2 & PAM+SRD & \multicolumn{1}{l|}{16} & \multicolumn{1}{l|}{83.33} & 77.13 \\ 3 & PAM+ARD(0.3) & \multicolumn{1}{l|}{15} & \multicolumn{1}{l|}{87.04} & 82.31 \\ 4 & PAM+ARD(1.2-k) & \multicolumn{1}{l|}{12} & \multicolumn{1}{l|}{89.17} & 85.16 \\ \hline \end{tabular} \end{center} \caption{The ablation study to verify the effectiveness of the dynamic coefficient for ARD. We replace the dynamic coefficient with the static number 0.3.} \end{table} \subsection{Analysis and Visualization.} We visualize how the accuracy and the number of samples of the adaptive sampling ARD change with the number of iterations on the DukeMTMC-VID dataset, as shown in Figure 4. It can be clearly seen that the increasing trend of the number of samples is gentle only when k=1.0, and shows a stable rising trend when k=0.7, 0.8 and 0.9; therefore, the learning in each iteration is meaningful. The mAP curve shows an obvious rise in the form of four steps, with each value of k corresponding to one step, and the accuracy keeps increasing over the iterations. When the algorithm finally converges, the accuracy is essentially the highest, indicating that the model of the last iteration performs best and there is no need to select the optimal model through an additional validation set.
Figure 5 shows how the absolute numbers of sampled pseudo-labels in the dynamic-coefficient and static-coefficient experiments change with the iteration steps. It can be seen more clearly that multiplying $k_{margin0}$ of different k values by the dynamic coefficient $(1.2-k)$ makes the growth of the sampling number more stable and faster, thus reducing the total number of iterations and improving the learning efficiency of the algorithm. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{v1.JPG} \end{center} \caption{The increasing trend of sampling numbers and mAP along with iteration steps on DukeMTMC. We can clearly see the four-step growth with different k.} \label{fig:long} \label{fig:onecol} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{v2.JPG} \end{center} \caption{Sampling numbers of the dynamic coefficient and the static coefficient increase at different paces along with iteration steps on DukeMTMC. It can be seen that the dynamic coefficient grows faster.} \label{fig:long} \label{fig:onecol} \end{figure} \section{Conclusion} Since unlabeled person tracklets are cheap and easy to obtain, data-driven deep models can achieve promising results with label estimation for few-example person re-identification. The challenge is how to estimate labels correctly and select the reliable ones to enlarge the training set. In this paper, we propose PAM, a lightweight and fast-converging network that is suitable for few-example person re-ID. We also propose an adaptive sampling strategy to select the most reliable pseudo-label samples and gradually learn a more robust model. Our approach surpasses the state-of-the-art method by 5.5, 19.0, and 16.4 points (absolute) in rank-1 accuracy on PRID2011, iLIDS-VID, and DukeMTMC-VID, and by 2.9 points (absolute) in mAP on MARS. The proposed approach is very efficient and accurate for few-example person re-identification. {\small \bibliographystyle{ieee_fullname}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{INTRODUCTION} Human-robot interaction has become a popular research topic which can be integrated into and revolutionize almost every aspect of our lives. Similar to human-human interaction, there are many ways for humans to express their intentions or emotions in human-robot interaction, which can be classified into two categories: verbal and nonverbal communication \cite{aly2013model}. Nonverbal communication further includes facial expressions \cite{wu2019weight}, gestures \cite{chang2019improved}, proxemics \cite{patompak2019learning}, and eye gazes \cite{saran2018human}. Verbal communication has the advantage of simplicity, convenience, and clearness. Nonverbal communication, however, is an essential interaction mode for scenarios where verbal communication is not available, such as noisy environments and long-distance interaction \cite{barattini2012proposed}. Even when verbal communication is available, nonverbal communication can also be used as a considerable augmentation of verbal communication, which makes the interaction more lively. A socially intelligent robot typically has the capacity of understanding human intentions through nonverbal communication to improve the effectiveness, efficiency, and human-friendliness of human-robot interaction. In particular, hand gestures serve as a natural and intuitive way to assist interaction between human and robot. Therefore, recognition of hand gestures plays a key role in a variety of human-robot applications. There are mainly two types of hand gesture recognition. For the first type, glove-based hand gesture recognition, hand gestures are recorded using a data glove, and the position of each finger joint can be obtained accordingly. For the other type, vision-based hand gesture recognition, hand gestures are captured using cameras. Compared with data gloves, camera-based capture systems are much cheaper and easier to use. Besides, wearing gloves causes difficulty for some hand operations, such as clenching fists. In this paper, we will focus on vision-based hand gesture recognition. In pattern recognition, sparse representation has shown its great power in compressing and processing high-dimensional data. Specifically, we assume that the object to be recognized can be sparsely represented as a linear combination of atoms in a redundant dictionary, which implies that a large portion of the coefficients are zeros. In order to find this sparse representation, we can resort to sparsity-based regularization techniques. For example, $\ell_1$-regularization has been applied to dictionary-based action recognition \cite{qiu2011sparse} and its local version has been proposed for gesture recognition \cite{he2019gesture}. Sparsity-based methods typically improve the interpretability and compressibility of data, which makes detection discriminative and avoids over-fitting. Recently, the nonconvex $\ell_{1-2}$ regularization has been shown to promote higher sparsity and achieve better performance than its $\ell_1$ counterpart in image reconstruction \cite{yin2015minimization,li2016s} and in logistic regression \cite{qin20191}. In light of this, we propose a novel $\ell_{1-2}$-regularized hand gesture recognition model, which is then solved by applying the alternating direction method of multipliers (ADMM). Each resultant subproblem has a closed-form solution, which leads to implementation efficiency.
Note that the proposed sparsity-based model is not a trivial generalization of that in \cite{miao2016gesture}, which considers an inequality-constrained $\ell_1$-minimization problem different from our proposed model. In addition, we will take advantage of various features, including binary segmented images, histograms of oriented gradients (HOG) \cite{MERL_TR94-03} and local binary patterns (LBP) \cite{ojala1996comparative}. HOG uses the distribution of intensity gradients along various orientations to describe local object appearance and shape within an image. By contrast, LBP exploits local binary patterns over an image. In pattern recognition, HOG and LBP have been shown to be effective and robust feature descriptors for object detection \cite{ghorbani2015hog}. To verify the effectiveness of the proposed method, we test two sets of hand gesture images in binary or gray scales. Performance of the method in terms of recognition rate and running time under various settings of training samples is reported. We also discuss parameter selection, the cell size in HOG and LBP, and the identification metric, and compare $\ell_1$ with $\ell_{1-2}$. The rest of this paper is organized as follows. In Section \ref{sec:SR}, we provide a brief introduction to sparse representation based models. In Section \ref{sec:method}, we propose a novel hand gesture recognition algorithm based on the nonconvex $\ell_{1-2}$ regularization. Numerical experiments on two realistic data sets of hand gestures and the results are discussed in Section \ref{sec:exp}. Finally, conclusions of this research and future works are presented in Section \ref{sec:con}. \section{SPARSE REPRESENTATION BASED MODELS}\label{sec:SR} Throughout the paper, we use boldface lowercase letters to denote vectors and boldface uppercase letters to denote matrices. The $\ell_p$-norm of a vector $\vx\in\R^n$ is defined as $\norm{\vx}_p=(\sum_{i=1}^n|x_i|^p)^{1/p}$ for $p\in\mathbb{N}$. Assume that a test vector $\vb\in\R^{m}$ can be sparsely represented as a linear combination of columns in a dictionary $\Phi$, i.e., there exists $\vx\in\R^n$ with small $\norm{\vx}_0$ such that $\vb=\Phi\vx$. Here $\norm{\vx}_0$ is the number of nonzero components of $\vx$ and describes the sparsity of the vector $\vx$. Since the dictionary is redundant, the image size is much smaller than the number of images in the dictionary, i.e., $m\ll n$, which results in infinitely many solutions to the linear system $\Phi\vx=\vb$. To guarantee a unique solution, we consider the $\ell_0$ minimization problem of the form \[ \min_{\vx\in\R^n}\norm{\vx}_0\quad\st\quad \Phi\vx=\vb. \] However, this problem is NP-hard, and it can be relaxed to the convex $\ell_1$ minimization \[ \min_{\vx\in\R^n}\norm{\vx}_1\quad\st\quad \Phi\vx=\vb. \] To further enforce sparsity on the solution, we consider the $\ell_{1-2}$ minimization \begin{equation}\label{eqn1} \min_{\vx\in\R^n}\norm{\vx}_{1-2}\quad \st\quad \Phi\vx=\vb, \end{equation} where $\norm{\vx}_{1-2}=\norm{\vx}_1-\beta\norm{\vx}_2$. It has been empirically shown that the choice of $\beta$ does not make a significant impact on the performance. Thus we fix $\beta=1$ to reduce the number of tuning parameters throughout the paper. Note that $\ell_{1-2}$ is not a vector norm in $\R^n$ since the triangle inequality and positive definiteness cannot be guaranteed. Connections and comparisons between the $\ell_{1-2}$ regularization and its $\ell_1$ counterpart can be found in \cite{li2016s,qin20191}.
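To see why minimizing $\norm{\cdot}_{1-2}$ promotes sparsity, recall that $\norm{\vx}_1\geq\norm{\vx}_2$ for all $\vx$, with equality exactly when $\vx$ has at most one nonzero entry. For instance, the 1-sparse vector $\vx=(1,0,0)^T$ gives $\norm{\vx}_{1-2}=1-1=0$, whereas the dense vector $\vx=\frac{1}{\sqrt{3}}(1,1,1)^T$ of the same $\ell_2$-norm gives $\norm{\vx}_{1-2}=\sqrt{3}-1\approx 0.73$; among vectors of equal energy, the sparsest ones attain the smallest $\ell_{1-2}$ value.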
\section{PROPOSED METHOD}\label{sec:method} Recognition of hand gestures plays an important role in human-robot interaction. In particular, vision-based recognition methods aim to identify the gesture pattern from a dictionary (also known as a library) of images that is most similar to the test image. Each hand image in the dictionary is called an \emph{atom}. \subsection{Dictionary Construction} There are many types of dictionaries that can be used for gesture recognition, where each atom can characterize one or multiple features of an image. One simple example is to use binary segmented images as atom images, which separate the hand from the background. However, once we reshape each image as a column vector, a sparse representation over such atoms may not be sufficient to describe the geometric information of the image. To further take local geometry into consideration, we can create a dictionary of HOG or LBP features. In the continuous setting, each image can be considered as a function $f:\Omega\to\mathbb{R}$ where $\Omega\subseteq\mathbb{R}^2$ is a closed set with Dirichlet type of boundaries, e.g., $\Omega=[a,b]\times[c,d]$. Suppose that $f$ can be sparsely represented by the set of atoms $\{v_1,\ldots,v_n\}$ where each atom image $v_i:\Omega\to\R$, meaning that coefficients $c_1,\ldots,c_n$ exist with $f(x,y)=\sum_{i=1}^nc_iv_i(x,y)$ and the number of nonzero coefficients $c_i$ is small. If we restrict the domain of this function to a grid, then the sparse representation of a discrete image in terms of discrete atoms still holds locally, which implies that the sparse representation in the feature space can still be preserved. In this work, we adopt three types of dictionaries using binary or gray-scale segmented images, HOG and LBP features. \subsection{$\ell_{1-2}$-Regularized Recognition Method} Starting from this section, we will consider discrete images, i.e., each image is treated as a matrix. Given $n$ atoms of size $\sqrt{m}\times\sqrt{m}$, we reshape each image as a column vector by column-wise stacking and then concatenate them to form a dictionary $D=[\vd_1,\ldots,\vd_n]\in\mathbb{R}^{m\times n}$. Similarly, the test image $\vb$ is reshaped as a column vector. Suppose there are $L$ classes in the dictionary $D$ corresponding to $L$ gestures, i.e., the dictionary $D$ can be partitioned as $D=\bigcup_{i=1}^L D_i$ with each $D_i\in\R^{m\times n_i}$ such that $D_i$ and $D_j$ have disjoint columns for $i\neq j$ and $n_1+\ldots+n_L=n$. Without loss of generality, let $\vA$ be one such sub-dictionary $D_i$. If the partition is not available, we can apply fast data clustering algorithms such as $k$-means. Next, we intend to find a sparse representation of the test data $\vb$ with respect to the dictionary $\vA$, i.e., to find $\vx$ with the smallest number of nonzero elements such that $\vA\vx=\vb$. First, we normalize the columns of $\vA$ so that every column has a unit $\ell_2$-norm. Then we consider the $\ell_{1-2}$-regularized sparse recovery model \begin{equation}\label{eqn2} \min_{\vx}\frac12\norm{\vA\vx-\vb}_2^2+\lambda\norm{\vx}_{1-2}. \end{equation} Here $\lambda>0$ is a regularization parameter. Different from the linearly constrained model \eqref{eqn1}, the unconstrained model \eqref{eqn2} takes the presence of noise into account. By a change of variable, \eqref{eqn2} can be written as \[ \min_{\vx,\vy}\frac12\norm{\vA\vx-\vb}_2^2+\lambda\norm{\vy}_{1-2}\quad\st\quad \vx=\vy.
\]
To solve this minimization problem, we define the augmented Lagrangian in scaled form,
\[ \cL=\frac12\norm{\vA\vx-\vb}_2^2+\lambda\norm{\vy}_{1-2}+\frac{\rho}2\norm{\vx-\vy+\widehat{\vy}}_2^2, \]
where $\widehat{\vy}$ is the scaled dual variable and $\rho>0$ is a tuning parameter which controls the convergence speed. Following the framework of ADMM, we alternate the minimization of $\cL$ with respect to $\vx$ and $\vy$, respectively. Notice that the subproblem $\argmin_{\vx}\cL$ is a least-squares problem which can be solved via its normal equation. Hence we obtain the following updating scheme
\begin{equation}\label{eqn:admm} \left\{ \begin{aligned} \vx&\leftarrow (\vA^T\vA+\rho I)^{-1}(\vA^T\vb+\rho(\vy-\widehat{\vy}));\\ \vy&\leftarrow \prox_{\theta\norm{\cdot}_{1-2}}(\vx+\widehat{\vy});\\ \widehat{\vy}&\leftarrow \vx-\vy+\widehat{\vy}, \end{aligned} \right. \end{equation}
where $\theta=\frac{\lambda}{\rho}$. To accelerate the computation, we apply the Cholesky factorization $\vA^T\vA+\rho I=\vL\vL^T$, with $\vL$ a lower triangular matrix, and thereby the update of $\vx$ becomes
\begin{equation}\label{eqn:x_update} \vx\leftarrow \vL^{-T}(\vL^{-1}(\vA^T\vb+\rho\vy-\rho\widehat{\vy})), \end{equation}
where $\vL^{-1}$ is the inverse of the matrix $\vL$, and $\vL^{-T}$ is the transpose of $\vL^{-1}$. Moreover, the proximal operator of a function $f$ is defined as $\prox_{f}(\vv)=\argmin_{\vu}\frac12\norm{\vu-\vv}_2^2+f(\vu)$. Note that the proximal operator of $\ell_{1-2}$ can be expressed as \cite{qin20191}:
\begin{equation}\label{eqn:y_update} \prox_{\theta\norm{\cdot}_{1-2}}(\vx+\widehat{\vy})=\vz+\frac{\theta\vz}{\norm{\vz}_2}, \end{equation}
where $\vz=\shrink(\vx+\widehat{\vy},\theta)$ and the shrinkage operator is defined componentwise
\[ [\shrink(\vu,\mu)]_i=\sign(u_i)\max\{|u_i|-\mu,0\} \]
for $i=1,2,\ldots,n$. Here $u_i$ is the $i$-th component of the vector $\vu$. The algorithm terminates if either the relative change of two consecutive estimates of $\vx$ falls below the preassigned tolerance, i.e.,
\begin{equation}\label{eqn:stop} \norm{\vx^{(j+1)}-\vx^{(j)}}_2/\norm{\vx^{(j)}}_2<tol \end{equation}
where $\vx^{(j)}$ is the estimate of $\vx$ after $j$ iterations, or the maximal number of allowed iterations is reached. From this step, we get the optimal coefficient vector $\vx^*$ of $\vb$ with respect to the dictionary $\vA$. Furthermore, we let $\vx_i^*$ be the solution to \eqref{eqn2} when $\vA=D_i$. To identify the most similar gesture class, we adopt two types of identification metrics for classification. One metric uses the $\ell_2$-norm residual for each class given by
\begin{equation}\label{eqn:ri1} r_i(\vb)=\norm{\vb- D_i\vx_i^*}_2,\quad i=1,\ldots,L, \end{equation}
where $\vx_i^*$ is obtained in the previous step with $\vA=D_i$. Alternatively, we compare the cosine similarity between $\vb$ and $D_i\vx_i^*$ and define the identification metric as
\begin{equation}\label{eqn:ri2} r_i(\vb)=1-\cos(\vb,D_i\vx_i^*),\quad i=1,\ldots,L. \end{equation}
Here the cosine similarity is defined as $\cos(\vu,\vv)=\frac{\langle \vu,\vv\rangle}{\norm{\vu}_2\norm{\vv}_2}$, where $\langle \cdot,\cdot\rangle$ is the dot product of two vectors. Finally, we predict the class that $\vb$ belongs to by finding the minimum identification metric
\begin{equation}\label{eqn:id} identity(\vb):=\argmin_{1\leq i\leq L}r_i(\vb). \end{equation}
Other similarity metrics could be used to define $r_i(\vb)$, though they may require more computational time.
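As a concrete reference, the following Python sketch implements the ADMM iteration \eqref{eqn:admm}, the Cholesky-based update \eqref{eqn:x_update}, the proximal step \eqref{eqn:y_update} and the stopping rule \eqref{eqn:stop}. It is our minimal sketch, not the code used for the reported experiments; the dictionary \texttt{A} is assumed to have unit-norm columns.
\begin{verbatim}
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def shrink(u, mu):
    # componentwise soft-thresholding
    return np.sign(u) * np.maximum(np.abs(u) - mu, 0.0)

def prox_l12(v, theta):
    # proximal operator of theta*(||.||_1 - ||.||_2)
    z = shrink(v, theta)
    nz = np.linalg.norm(z)
    return z + theta * z / nz if nz > 0 else z

def l12_admm(A, b, lam=1.0, rho=1000.0, n_in=20, tol=1e-4):
    n = A.shape[1]
    x = np.zeros(n); y = np.zeros(n); yhat = np.zeros(n)
    # Cholesky factor of A^T A + rho*I, computed once and reused
    C = cho_factor(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_in):
        x_old = x
        x = cho_solve(C, Atb + rho * (y - yhat))  # x-update
        y = prox_l12(x + yhat, lam / rho)         # y-update
        yhat = x - y + yhat                       # dual update
        if np.linalg.norm(x - x_old) < tol * np.linalg.norm(x_old):
            break
    return x
\end{verbatim}
The class label of a test vector is then obtained by calling this routine once per sub-dictionary $D_i$ and taking the argmin of \eqref{eqn:ri1} or \eqref{eqn:ri2}, as summarized in Algorithm~\ref{alg} below.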
The entire algorithm is summarized in Algorithm~\ref{alg}, which can be extended to recognize multiple test data points in parallel.
\begin{algorithm}
\caption{Nonconvex $\ell_{1-2}$-Regularized Recognition}\label{alg}
\begin{algorithmic}
\State\textbf{Inputs}: a dictionary with partition $D=\cup_{i=1}^LD_i$, test data $\vb$, parameters $\lambda,\rho>0$, maximum number of inner loops $N_{in}$, tolerance $tol$ for the stopping criterion
\State\textbf{Outputs}: class label of $\vb$
\For{$i=1,2,\ldots,L$}
\State Initialize $\vx=\vo$, $\vy=\vo$, $\widehat{\vy}=\vo$
\For{$j=1,2,\ldots,N_{in}$}
\State Update $\vx$ via \eqref{eqn:x_update}
\State Update $\vy=\prox_{\theta\norm{\cdot}_{1-2}}(\vx+\widehat{\vy})$ via \eqref{eqn:y_update}
\State $\widehat{\vy}\leftarrow \vx-\vy+\widehat{\vy}$
\State {Exit the inner loop if \eqref{eqn:stop} is met.}
\EndFor
\State $\vx_i^*=\vx$
\State Calculate $r_i(\vb)$ via \eqref{eqn:ri1} or \eqref{eqn:ri2}.
\EndFor
\State Find the class label of $\vb$ via \eqref{eqn:id}.
\end{algorithmic}
\end{algorithm}
\section{NUMERICAL EXPERIMENTS}\label{sec:exp}
In this section, we test the proposed Algorithm~\ref{alg} on one binary and one gray-scale data set of hand gesture images. To quantify the performance, we use the recognition rate, defined as the fraction of the test set whose labels are correctly recognized. To make the comparison fair, we randomly select the test and atom images from the data set and report the average performance over 50 trials by default. Three types of feature vectors are used: (1) raw feature vectors, generated by reshaping each image as a vector via column-wise stacking; (2) reshaped HOG feature vectors with cell size $k\times k$; (3) reshaped LBP feature vectors with cell size $k\times k$. Both HOG and LBP extractions are implemented in Matlab. By default, the parameters of Algorithm~\ref{alg} are set as $\rho=1000,\lambda=1$. The maximum number of inner loops is set as 20 and the tolerance in \eqref{eqn:stop} is $tol=10^{-4}$. The cell size for both HOG and LBP is set as $k=8$. Note that even with the same cell size, HOG and LBP features do not have the same dimension. All experiments were run in Matlab R2019a on a desktop computer with an Intel i9-9960X CPU, 64GB RAM and dual Nvidia Quadro RTX5000 GPUs under Windows 10 Pro.
\subsection{Experiment 1: Binary Data}
The first data set is downloaded from \cite{shivamgoyal_2020}. Specifically, there are three hand gestures in the database: fist, open-hand, and two-finger, with 2003, 2010 and 2005 images, respectively. Each image is binary and of size $150\times 150$. We randomly select 10 images from each gesture class as test images and select $Nt$ images from the rest as the atoms of the test dictionary. Fig.~\ref{fig1} displays one sample image for each type of gesture.
\begin{figure}[h]
\centering\setlength{\tabcolsep}{1pt}
\begin{tabular}{ccc}
\includegraphics[width=.16\textwidth]{fist_img1307}&
\includegraphics[width=.16\textwidth]{openhand_img182}&
\includegraphics[width=.16\textwidth]{twofingers_img292}
\end{tabular}
\caption{Sample images in a binary dictionary. From left to right: fist, open hand, two fingers.}\label{fig1}
\vspace{-12pt}
\end{figure}
We set the number of atoms in the test dictionary as $Nt=100, 150, 200, 250$, respectively. The recognition rates for all cases are shown in Table~\ref{tab1}. One can see that HOG features yield the best performance.
Meanwhile, since each image is piecewise constant with few texture-like patterns, LBP performs the worst, which agrees with the fact that LBP features favor texture patterns \cite{alhindi2018comparing}. When the number of atoms is larger than 300, the proposed method achieves almost perfect recognition. A comparison of the average running time for each case is shown in Fig.~\ref{fig:exp1_time}. With the fixed cell size $8\times 8$, the dimension of each HOG feature is 10404 while that of each LBP feature is 19116, which explains why LBP takes the most running time.
\begin{table}[h]
\centering
\begin{tabular}{ccccc}
\hline \hline
Feature $\backslash$ Atom No. & 100 & 150 & 200 & 250 \\
\hline
raw & 0.8060 & 0.8453 & 0.8800 & 0.8953 \\
HOG & 0.9080 & 0.9393 & 0.9520 & 0.9667 \\
LBP & 0.7373 & 0.7633 & 0.8053 & 0.8460 \\
\hline\hline
\end{tabular}
\caption{Recognition rates on a binary dictionary.}\label{tab1}
\vspace{-12pt}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=.34\textwidth]{exp1_time}
\vspace{-8pt}
\caption{Running time comparison for a binary dictionary.}\label{fig:exp1_time}
\vspace{-10pt}
\end{figure}
\subsection{Experiment 2: Gray-Scale Data}\label{subsec:exp2}
In the second experiment, we download the HGM-4 multi-cameras dataset \cite{hoang2020hgm} from \url{https://data.mendeley.com/datasets/jzy8zngkbg/1}. In particular, we choose five classes of images corresponding to the hand gestures that express the five letters: A, B, C, D and W. Each class of the original database has 40 atom images, each of size $160\times 90$. To ensure a sparse representation of atoms from the dictionary, we generate 67 additional images corresponding to those five gestures using a Logitech RGB webcam of resolution $1280\times 720$. A simple interface is developed to allow a user to classify and record gestures on their own using this webcam. All gestures are performed against a whiteboard backdrop to reduce noise and help normalize the dataset. One set of such high-resolution images is shown in Fig.~\ref{fig2}. All newly generated images are resized to $160\times 90$. Thus far, we have a dataset with five classes, whose numbers of images are 54, 52, 54, 54 and 53, respectively. Furthermore, we expand the dataset by applying four types of image rotations to each image in Matlab: clockwise/counterclockwise rotation by one/two degrees. Image rotation is illustrated in Fig.~\ref{fig:rot}. Notice that rotation could introduce zero-padding boundary artifacts for large angles. Next we randomly select 10 test images from each class, and randomly select $Nt$ atom images from the rest of the class to form a test dictionary. We select $Nt=50,100,150,200$. A collection of five gray-scale test images is shown in Fig.~\ref{fig3}. In Table~\ref{tab2}, we list recognition rates for various numbers of atoms in each class of the dictionary using the various types of features. One can see that HOG performs best most of the time and has a great advantage for small training sets. The raw gray-scale feature performs slightly better than LBP in this case due to the limited texture-like patterns. If the number of atoms is larger than 250, then all three features yield almost perfect recognition. The running time for each test case is illustrated in Fig.~\ref{fig:exp2_time}. With the fixed cell size $8\times 8$, the HOG feature vector has the smallest dimension among the three feature types, which explains why HOG takes the least running time and yields the highest efficiency on average.
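The feature dimensions quoted here, and in Table~\ref{tab3} below, follow directly from the cell geometry: with 9-orientation HOG over overlapping $2\times 2$ blocks of cells, and a 59-bin uniform LBP histogram per cell. As a cross-check, the Python sketch below reproduces these dimension counts using the open-source scikit-image HOG extractor and a direct cell count for LBP; this is our illustration with placeholder (all-zero) images, and the numerical feature values of different implementations will of course differ.
\begin{verbatim}
import numpy as np
from skimage.feature import hog

def hog_dim(img, k=8):
    # 9 orientations, k-by-k cells, 2x2 cells per block
    return hog(img, orientations=9, pixels_per_cell=(k, k),
               cells_per_block=(2, 2)).size

def lbp_dim(img, k=8):
    # 59 uniform LBP codes (8 neighbors) histogrammed per k-by-k cell
    cells = (img.shape[0] // k) * (img.shape[1] // k)
    return cells * 59

for shape in [(150, 150), (160, 90)]:
    img = np.zeros(shape)
    print(shape, hog_dim(img), lbp_dim(img))
# prints 10404 / 19116 for 150x150 and 6840 / 12980 for 160x90,
# matching the dimensions reported in this paper
\end{verbatim}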
\begin{figure*}[h]
\centering\setlength{\tabcolsep}{2pt}
\begin{tabular}{ccccc}
\includegraphics[width=0.19\textwidth]{A}&
\includegraphics[width=0.19\textwidth]{B}&
\includegraphics[width=0.19\textwidth]{C}&
\includegraphics[width=0.19\textwidth]{D}&
\includegraphics[width=0.19\textwidth]{W}\\
(a) A & (b) B & (c) C & (d) D & (e) W
\end{tabular}
\vspace{-4pt}
\caption{Sample gesture images. The gestures from left to right correspond to the letters: A, B, C, D and W.}\label{fig2}
\end{figure*}
\begin{figure*}[h]
\centering\setlength{\tabcolsep}{2pt}
\begin{tabular}{ccccc}
\includegraphics[width=0.19\textwidth]{exp2_1}&
\includegraphics[width=0.19\textwidth]{exp2_2}&
\includegraphics[width=0.19\textwidth]{exp2_3}&
\includegraphics[width=0.19\textwidth]{exp2_4}&
\includegraphics[width=0.19\textwidth]{exp2_5}\\
(a) A & (b) B & (c) C & (d) D & (e) W
\end{tabular}
\vspace{-4pt}
\caption{Sample raw images in the gray-scale dictionary. The gestures from left to right correspond to the letters: A, B, C, D and W.}\label{fig3}
\end{figure*}
\begin{figure*}[h]
\centering\setlength{\tabcolsep}{2pt}
\begin{tabular}{ccccc}
\includegraphics[width=0.19\textwidth]{exp2_rot0}&
\includegraphics[width=0.19\textwidth]{exp2_rot1}&
\includegraphics[width=0.19\textwidth]{exp2_rot1n}&
\includegraphics[width=0.19\textwidth]{exp2_rot2}&
\includegraphics[width=0.19\textwidth]{exp2_rot2n}
\end{tabular}
\vspace{-4pt}
\caption{Image rotation. From left to right: original image, rotated by $1^\circ$, $-1^\circ$, $2^\circ$ and $-2^\circ$. Positive angles correspond to counterclockwise rotation and negative angles correspond to clockwise rotation.}\label{fig:rot}
\end{figure*}
\begin{table}[h]
\centering
\begin{tabular}{ccccc}
\hline \hline
Feature $\backslash$ Atom No. & 50 & 100 & 150 & 200 \\
\hline
raw & 0.7124 & 0.9100 & 0.9756 & 0.9956 \\
HOG & 0.7360 & 0.9140 & 0.9832 & 0.9972 \\
LBP & 0.7140 & 0.8940 & 0.9664 & 0.9936 \\
\hline\hline
\end{tabular}
\caption{Recognition rates on a gray-scale dictionary.}
\label{tab2}
\vspace{-12pt}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=.34\textwidth]{exp2_time}
\vspace{-8pt}
\caption{Running time comparison for a gray-scale dictionary.}\label{fig:exp2_time}
\vspace{-10pt}
\end{figure}
\subsection{Discussions}
In this section, we discuss the selection of parameters, the cell size in HOG/LBP feature extraction and the identification criteria. In addition, we remark on the comparison of $\ell_1$ and $\ell_{1-2}$ in our method.
\paragraph*{Parameter Selection}
In Algorithm~\ref{alg}, $\lambda$ is a regularization parameter which controls the balance between the data fidelity and the sparsity. The larger the parameter $\lambda$ is, the higher the sparsity enforced on the desired vector $\vx$, but at the cost of a larger residual error. In other words, if the test image is very similar to one atom in the test dictionary, then we could choose a large value for $\lambda$. In the $\vx$-update \eqref{eqn:x_update}, $\rho$ can be set to a large value to strongly penalize violations of the constraint $\vx=\vy$, so that the objective function decays fast; the number of inner loops can then be set to a small integer. Further, if the background is not removed, then the recognition could be sensitive to the parameter selection. As one illustrative example, Fig.~\ref{fig:exp3_d1} has the ground truth gesture ``D'', which can be mistakenly recognized as ``C'' with HOG/LBP features of cell size $16\times 16$ unless we choose the parameters $\lambda=\rho=1$ and $N_{in}=100$.
Here we use the HGM-4 database as in Section~\ref{subsec:exp2} with 200 images in each class. In this case, we can either preprocess the test image by removing the background or tune the parameters carefully.
\begin{figure}[H]
\centering
\includegraphics[width=.2\textwidth]{exp3_d1}
\vspace{-6pt}
\caption{One example that is sensitive to parameter selection.}\label{fig:exp3_d1}
\vspace{-6pt}
\end{figure}
\paragraph*{Cell Size in HOG and LBP}
The cell size in HOG and LBP feature extraction influences the running time of Algorithm~\ref{alg} and its performance on parameter-sensitive test images, e.g., Fig.~\ref{fig:exp3_d1}. There is a trade-off between computational cost and recognition accuracy. Large cell sizes yield low-dimensional features and thereby ease the computational burden, which however may render the local description inaccurate and reduce the recognition rate. Dimensions of the various types of features used in our experiments are listed in Table~\ref{tab3}. A cell size $k\times k$ with $k$ in the range 8$\sim$28 works for both HOG and LBP in most situations.
\begin{table}[H]
\centering
\begin{tabular}{ccccc}
\hline\hline
$k$ & 8 & 12 & 16 & 20\\
\hline
\multicolumn{5}{c}{image size $150\times 150$}\\
\hline
HOG & 10404 & 4356 & 2304 & 1296\\
LBP & 19116 & 8496 & 4779 & 2891\\
\hline
\multicolumn{5}{c}{image size $160\times 90$}\\
\hline
HOG & 6840 & 2592 & 1296 & 756\\
LBP & 12980 & 5369 & 2950 & 1888\\
\hline\hline
\end{tabular}
\caption{Dimensions of various feature vectors.}\label{tab3}
\vspace{-12pt}
\end{table}
\paragraph*{Identification Metric}
Two types of identification metrics are introduced in this paper: the $\ell_2$-norm based residual \eqref{eqn:ri1} and the cosine similarity based metric \eqref{eqn:ri2}. According to our numerical experiments, these two metrics achieve almost the same recognition performance. However, it is worth noting that \eqref{eqn:ri1} may result in a very large number while \eqref{eqn:ri2} always lies in the bounded interval $[0,2]$. To avoid numerical instability issues, \eqref{eqn:ri2} could be the preferred choice.
\paragraph*{Comparison of $\ell_1$ and $\ell_{1-2}$ regularizations}
The $\ell_{1-2}$-regularization $\norm{\cdot}_1-\beta\norm{\cdot}_2$ reduces to the $\ell_1$-regularization when $\beta=0$, and it is also related to the iterative reweighted $\ell_1$ (IRL1) \cite{candes2008enhancing,guo2021novel} through a special choice of weighting scheme. Our extensive experiments have shown that the $\ell_{1-2}$ regularization performs slightly better than $\ell_1$ in the same algorithmic framework, especially with LBP features. For instance, Table~\ref{tab4} shows the recognition rates for Algorithm~\ref{alg} with $\ell_1$ and $\ell_{1-2}$ regularizations and LBP features using the same data and parameter setting as in Section~\ref{subsec:exp2}. This phenomenon can be understood from the fact that both regularizations can lead to solutions with the same sparsity level, in which case the choice of regularizer does not significantly impact the recognition. Nevertheless, $\ell_{1-2}$ converges to a local minimizer faster than $\ell_1$ and is thus more efficient when only a small amount of training data is available.
\begin{table}[h]
\centering
\begin{tabular}{ccccc}\hline\hline
Atom No.
& 50 & 100 & 150 & 200 \\ \hline
$\ell_1$ & 0.6620 & 0.8940 & 0.9640 & 0.9920\\
$\ell_{1-2}$ & 0.6640 & 0.8980 & 0.9640 & 0.9940\\
\hline\hline
\end{tabular}
\caption{Recognition rate comparison for $\ell_1$ and $\ell_{1-2}$.}\label{tab4}
\vspace{-12pt}
\end{table}
\section{CONCLUSIONS AND FUTURE WORK}\label{sec:con}
Vision-based hand gesture recognition has been widely used in many human-robot interaction applications. When only a limited number of training samples is available, it becomes challenging to accurately detect the class of a given hand gesture image. In this paper, we propose a novel hand gesture recognition approach based on the nonconvex $\ell_{1-2}$ regularization to improve the performance. Compared to the $\ell_1$-regularization, the $\ell_{1-2}$-regularization, in the form of a difference of two vector norms, can further promote sparsity, which can enhance the prediction accuracy and/or achieve fast convergence to a local minimizer. To solve the $\ell_{1-2}$-regularized recognition model, we apply the ADMM framework, which leads to two subproblems at each iteration. One subproblem is a least-squares problem with a closed-form solution obtained by solving its normal equation. The other reduces to the proximal operator of the $\ell_{1-2}$ regularizer, which can be expressed via the shrinkage operator. To make the proposed method robust, we consider three types of features: raw images in either binary or gray scale, HOG and LBP features. Numerical experiments on two data sets with various settings have shown the effectiveness of the proposed method. In future work, we will explore hybrid types of features obtained by concatenating multiple features, such as a fusion of HOG and LBP, and extend this framework to other related recognition problems, e.g., arm gesture recognition.
\addtolength{\textheight}{-12cm}
\section*{ACKNOWLEDGMENTS}
The research of Qin is supported by the NSF grant DMS-1941197 and the research of Ashley and Xie is supported by the Woodrow W. Everett, Jr. SCEEE Development Fund in cooperation with the Southeastern Association of Electrical Engineering Department Heads.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro}
Radiation reaction (RR) continues to attract attention in classical and quantum electrodynamics, both experimentally~\cite{Cole:2017zca,Poder:2018ifi} and theoretically~\cite{Burton:2014wsa,Blackburn:2019rfv,Gonoskov:2021hwf}, with a particular focus on intense laser fields where RR forces compare to or dominate the Lorentz force~\cite{danson2019petawatt,Abramowicz:2021zja,Meuren:2020nbw}. RR in strong fields is also relevant in gravitational physics, first clearly observed in the Hulse-Taylor binary pulsar~\cite{Taylor:1979zz}, and studied theoretically in, e.g., Refs.~\cite{Damour:2020tta,DiVecchia:2021bdo,Herrmann:2021tct,Bjerrum-Bohr:2021vuf}.
Recently many authors have applied resummation~\cite{Torgrimsson:2020mto,Torgrimsson:2020wlz,Karbstein:2019wmj,Mironov:2020gbi,Heinzl:2021mji,Torgrimsson:2021wcj,Ekman:2021eqc,Torgrimsson:2021zob} and resurgence and transseries concepts~\cite{Taya:2020dco,Taya:2021dcz,Dunne:2021acr} in classical and quantum electrodynamics in strong backgrounds. (For introductions to and reviews of these concepts, see Refs.~\cite{Bender:1999,Marino:2012zq,Dorigoni:2014hea,Dunne:2015eaa,Aniceto:2018bis,Costin:2019xql,Costin:2020hwg}.) As a prominent example, all-orders resummed results~\cite{Mironov:2020gbi,Heinzl:2021mji} have been vital to progress on the Ritus-Narozhny conjecture~\cite{PhysRevD.21.1176,Fedotov:2016afw} of the breakdown of Furry picture perturbation theory.
In this paper we use the Lorentz-Abraham-Dirac (LAD) equation of motion for radiation reaction~\cite{abraham1905,lorentz1909,dirac1938classical} as a ``test-bed'' for transseries analysis. It is a natural choice of simple setting in which to explore transseries structures, essentially because we know both that they must be present and what their physical interpretation is: they are the `unwanted' features of the LAD equation, pre-acceleration and runaway solutions, which are explicitly non-perturbative in $\tau_0$, the time-scale of radiation reaction. Indeed these are not seen in perturbative approaches, including reductive procedures which lead to, e.g., the Landau-Lifshitz~\cite{LandauLifshitzII} (LL) equation, at any order~\cite{Ekman:2021eqc}. We will see that the time-dependent nature of our problem means that even though the physics is quite simple, the formal structure can still be rich.
We extend our previous work~\cite{Ekman:2021eqc}, which iterated `reduction of order' \emph{ad infinitum} in a constant crossed field (CCF) to obtain the all-orders (in $\tau_0$) equation of motion $\LL\infty$, by showing that this procedure eliminates non-perturbative transseries structure at the level of the equation of motion. We also show that the same holds in a circularly polarised monochromatic plane wave. The elimination of non-perturbative terms is, however, dependent on an ``initial condition'' matching to the Lorentz force at vanishing field. Other ``initial conditions'' keep non-perturbative terms and lead to runaway solutions of the order-reduced equation of motion. We then consider inserting a hard cutoff into a constant field; this is the simplest time-dependence which allows us to unambiguously investigate pre-acceleration and its transseries structure.
This paper is organised as follows. We begin in \cref{sec:LLINF} by reviewing reduction of order as applied to the LAD equation, and $\LL\infty$. We show that non-perturbative contributions to $\LL\infty$ are large as the coupling goes to zero, and lead to runaway solutions if kept.
Next, in \cref{sec:step}, we solve the two equations of motion in a step field profile, finding, at the level of solutions to LAD, instanton terms that are precisely the pre-accelerating and runaway solutions. We conclude in \cref{sec:concs}.
\section{\texorpdfstring{$\LL\infty$}{LL∞}: reduction of order and transseries}
\label{sec:LLINF}
\subsection{Conventions and notations}
We will consider the momentum $p^\mu$ of a particle of charge $e$ and mass $m$ in a constant crossed field (CCF) given by
\begin{equation} f_{\mu\nu} := \frac{e}{m} F_{\mu\nu} = \ensuremath{\mathcal{E}} m \, n_{[\mu} \epsilon_{\nu]} \end{equation}
where $\ensuremath{\mathcal{E}}$ is the dimensionless field strength, $n_\mu$ is lightlike and $\epsilon^2 = -1$ with $n \cdot \epsilon = 0$. As we will only be concerned with one species of particle we henceforth use units where $m = 1$, although we will restore $m$ in places for clarity. We will use lightfront coordinates $p^{\scriptscriptstyle \pm} = p^0 \pm p^z, p^{\scriptscriptstyle \perp} = (p^\mathfrak{1}, p^\mathfrak{2})$, with the $z$-axis aligned such that $p^{\scriptscriptstyle +} = n \cdot p$.
The Lorentz-Abraham-Dirac (LAD) equation reads, using an overdot for the derivative with respect to proper time,
\begin{equation} \label{eq:LAD} \dot{p}_\mu = f_{\mu\nu}p^\nu + \tau_0 P_{\mu\nu} \ddot{p}^\nu \end{equation}
where $P_{\mu\nu} = g_{\mu\nu} - p_\mu p_\nu$ is the projector orthogonal to $p_\mu$ and
\begin{equation} \tau_0 := \frac{2\alpha}{3m} \end{equation}
is the typical time-scale for radiation reaction, $\alpha$ being the fine-structure constant; for an electron $\tau_0 \approx \SI{6.2e-24}{s}$. The interaction is characterised by an energy parameter,
\begin{equation} \delta^2 = \tau_0^2 p_\mu f^{\mu\nu} f_{\nu\rho} p^\rho = ( \tau_0 \ensuremath{\mathcal{E}} p^{\scriptscriptstyle +})^2 \, . \end{equation}
When working at the level of the solution, the initial value $\delta_0$ will play the role of a coupling. Note that $\delta = \tau_0 \chi$ where $\chi$ is the quantum non-linearity parameter~\cite{DiPiazza:2011tq,Gonoskov:2021hwf}; they are related like the classical electron radius and the Compton length.
\subsection{Reduction of order and \texorpdfstring{$\LL\infty$}{LL∞}}
The Landau-Lifshitz (LL) equation~\cite{LandauLifshitzII} is obtained from Lorentz-Abraham-Dirac by reduction of order: we apply $\ensuremath{\mathrm{d}}/\ensuremath{\mathrm{d}} \tau$ to both sides of~\cref{eq:LAD}, substitute for $\dot{p}$ according to~\cref{eq:LAD} itself, and discard terms of order $\tau_0^2$. This yields, in general,
\begin{equation} \label{eq:LL1} \dot{p}_\mu = f_{\mu\nu} p^\nu + \tau_0 \left[ (P f^2)_{\mu\nu} p^\nu + p^\rho \partial_\rho f_{\mu\nu} p^\nu \right] \, , \end{equation}
although the final, gradient, term of course vanishes for a CCF. The reduction of order procedure as just described reduces the order \emph{in time}, but the procedure can be iterated any number of times to any order \emph{in $\tau_0$}~\cite{PiazzaExact,Ekman:2021vwg}. We will therefore refer to the first iteration~\cref{eq:LL1} as $\LL1$. If reduction of order is iterated \emph{ad infinitum}, i.e., to all orders in $\tau_0$, it yields the equation of motion $\LL\infty$,
\begin{equation} \label{eq:LL-infinity} \dot{p}^\mu = \mathcal{A}(\delta)f^{\mu\nu}p_\nu + \tau_0 \mathcal{B}(\delta)(Pf^2)^{\mu\nu}p_\nu \, , \end{equation}
as discussed in a previous paper~\cite{Ekman:2021eqc}.
Here the functions $\mathcal{A}$ and $\mathcal{B}$ are solutions of the ODEs
\begin{equation} \label{eq:fixed-point} \left\{ \begin{array}{rcl} \delta^3 \ensuremath{\mathcal{B}} \frac{\ensuremath{\mathrm{d}} \ensuremath{\mathcal{A}}}{\ensuremath{\mathrm{d}} \delta} &=& 1 - \ensuremath{\mathcal{A}} - 2 \delta^2 \ensuremath{\mathcal{A}} \ensuremath{\mathcal{B}} \\ \delta^3 \ensuremath{\mathcal{B}} \frac{\ensuremath{\mathrm{d}} \ensuremath{\mathcal{B}}}{\ensuremath{\mathrm{d}} \delta} &=& - \ensuremath{\mathcal{B}} - 2 \delta^2 \ensuremath{\mathcal{B}}^2 + \ensuremath{\mathcal{A}}^2 \\ \end{array} \right. \, , \end{equation}
and the initial conditions that recover first-order Landau-Lifshitz are
\begin{equation} \label{eq:A-B-ic} \ensuremath{\mathcal{A}}(0) = \ensuremath{\mathcal{B}}(0) = 1 \, . \end{equation}
The functions $\ensuremath{\mathcal{A}}, \ensuremath{\mathcal{B}}$ encode how the RR force varies with energy, vaguely analogous to a running coupling. We emphasise here that when $\ensuremath{\mathcal{A}}, \ensuremath{\mathcal{B}}$ verify~\cref{eq:fixed-point}, the solution of $\LL\infty$ is a solution of LAD. Explicitly, differentiating~\cref{eq:LL-infinity} we obtain
\begin{widetext}
\begin{equation} \ddot{p}_\mu = \frac{\ensuremath{\mathrm{d}} \ensuremath{\mathcal{A}}}{\ensuremath{\mathrm{d}} \delta} \frac{\ensuremath{\mathrm{d}} \delta}{\ensuremath{\mathrm{d}} \tau} f_{\mu\nu} p^\nu + \ensuremath{\mathcal{A}} f_{\mu\nu} \dot{p}^\nu + \tau_0 \left[ \frac{\ensuremath{\mathrm{d}} \ensuremath{\mathcal{B}}}{\ensuremath{\mathrm{d}} \delta} \frac{\ensuremath{\mathrm{d}} \delta}{\ensuremath{\mathrm{d}} \tau} (P f^2)_{\mu\nu} p^\nu + \ensuremath{\mathcal{B}} \left( (P f^2)_{\mu\nu} \dot{p}^\nu - \dot{p}_\mu (p^{\scriptscriptstyle +} \ensuremath{\mathcal{E}})^2 - p_\mu (\dot{p} f^2 p) \right) \right] \, . \end{equation}
Now dotting $n^\mu$ into LAD, it reads
\begin{equation} \label{eq:LL-inf-derivation} n \cdot \dot{p} = \tau_0 ( n \cdot \ddot{p} - p^{\scriptscriptstyle +} p\cdot \ddot{p} ) = - \tau_0^2 \left[ \frac{\ensuremath{\mathrm{d}} \ensuremath{\mathcal{B}}}{\ensuremath{\mathrm{d}} \delta} \frac{\ensuremath{\mathrm{d}} \delta}{\ensuremath{\mathrm{d}} \tau} (p^{\scriptscriptstyle +})^3 \ensuremath{\mathcal{E}}^2 + 2 \ensuremath{\mathcal{B}} p^{\scriptscriptstyle +} (\dot{p} f^2 p) + \ensuremath{\mathcal{B}} (n \cdot \dot{p}) (p^{\scriptscriptstyle +} \ensuremath{\mathcal{E}})^2 \right] - \tau_0 \ensuremath{\mathcal{A}} p^{\scriptscriptstyle +} (p f \dot p) - \tau_0^2 p^{\scriptscriptstyle +} \ensuremath{\mathcal{B}} (\dot{p} f^2 p) \, . \end{equation}
It follows from~\cref{eq:LL-infinity} that $p f \dot{p} = \ensuremath{\mathcal{A}} (p^{\scriptscriptstyle +} \ensuremath{\mathcal{E}})^2$ and $ \dot{p} f^2 p = -\tau_0 \ensuremath{\mathcal{B}} (p^{\scriptscriptstyle +} \ensuremath{\mathcal{E}})^4$; we also have $ \frac{\ensuremath{\mathrm{d}} \delta}{\ensuremath{\mathrm{d}} \tau} = \tau_0 \dot{p}^{\scriptscriptstyle +} \ensuremath{\mathcal{E}} = -\tau_0^2 \ensuremath{\mathcal{B}} (p^{\scriptscriptstyle +} \ensuremath{\mathcal{E}})^3$.
Substituting these into the RHS of~\cref{eq:LL-inf-derivation}, writing out the LHS according to~\cref{eq:LL-infinity}, and dividing by $\tau_0 (p^{\scriptscriptstyle +})^3 \ensuremath{\mathcal{E}}^2$, it becomes
\begin{equation} -\ensuremath{\mathcal{B}} = (\tau_0 p^{\scriptscriptstyle +} \ensuremath{\mathcal{E}})^3 \ensuremath{\mathcal{B}} \frac{\ensuremath{\mathrm{d}} \ensuremath{\mathcal{B}}}{\ensuremath{\mathrm{d}} \delta} + 2 (\tau_0 p^{\scriptscriptstyle +} \ensuremath{\mathcal{E}})^2 \ensuremath{\mathcal{B}}^2 - \ensuremath{\mathcal{A}}^2 \, , \end{equation}
which is one of the ODEs~\cref{eq:fixed-point}. Hence the ${}^{\scriptscriptstyle +}$ component of LAD will be satisfied if~\cref{eq:LL-infinity} holds, with $\ensuremath{\mathcal{B}}$ a solution to~\cref{eq:fixed-point}. A similar calculation shows that the transverse components of LAD will be satisfied if~\cref{eq:LL-infinity} holds and $\ensuremath{\mathcal{A}}$ is a solution to~\cref{eq:fixed-point}. The remaining component is fixed by the mass-shell condition.
\end{widetext}
As $\LL\infty$ is obtained from reduction of order in a small parameter, it is essentially a resummed perturbative expansion. It is therefore entirely possible that the procedure could miss non-perturbatively small terms in the expansion parameter. Here we investigate the possible presence of such terms.
That is, \cref{eq:fixed-point,eq:A-B-ic} can be solved as perturbative series $\ensuremath{\mathcal{A}} \sim 1 - 2 \delta^2 + \ldots, \ensuremath{\mathcal{B}} \sim 1 - 6 \delta^2 + \ldots$. Although divergent, these series can be resummed with the Borel-Pad\'e (see, e.g., Ref.~\cite[Ch.~8]{Bender:1999}) or ``educated match''~\cite{Alvarez:2017sza} methods. Resummations using only a handful of coefficients~\cite{Ekman:2021eqc} agree with a numerical solution of~\cref{eq:fixed-point,eq:A-B-ic}, but there could be solutions of a more general transseries form,
\begin{equation}\label{eq:trans-ansatz} \begin{Bmatrix} \ensuremath{\mathcal{A}} \\ \ensuremath{\mathcal{B}} \end{Bmatrix} \sim \sum_{k, \ell \ge 0} \begin{Bmatrix} A_{k, \ell} \\ B_{k, \ell} \end{Bmatrix} \delta^{2k} e^{-\ell \kappa / \delta^\lambda} \, , \end{equation}
for some $\kappa, \lambda$ to be determined, which are not found by perturbative expansion or numerics. We use $\sim$ rather than equality here and treat, for now, the expansion~\cref{eq:trans-ansatz} formally; the space of such transseries is closed under algebraic operations and differentiation.
To determine the parameters $\kappa, \lambda$ we linearise around $(\ensuremath{\mathcal{A}}, \ensuremath{\mathcal{B}}) = (1,1)$ and $\delta = 0$; the general solution of the linearisation is
\begin{subequations} \label{eq:gen-sol} \begin{align} \ensuremath{\mathcal{A}} & = 1 - 2 \delta^2 + c_1 \frac{1}{\delta^2} e^{1/2\delta^2} + \ordo{\delta^3} \\ \ensuremath{\mathcal{B}} & = 1 - 6 \delta^2 + c_1 \frac{1}{\delta^4} e^{1/2\delta^2} + c_2 \frac{1}{\delta^2} e^{1/2\delta^2} + \ordo{\delta^3} \end{align} \end{subequations}
for arbitrary constants $c_1, c_2$. We see that there are indeed non-perturbative terms depending exponentially on $1/\delta^2$, but these are \emph{large} for real $\delta$. The only solution finite as $\delta \searrow 0$ has $c_1 = c_2 = 0$, and hence lacks a non-perturbative part (its perturbative expansion is, we stress, divergent and must be resummed, though). We return to this at the end of this Section.
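The perturbative coefficients entering these expansions are straightforward to generate by inserting truncated power series into \cref{eq:fixed-point} and matching powers of $\delta$; the coefficient equations are linear at each order because the derivative terms only contribute at higher order. A minimal sympy sketch (ours, for illustration) is:
\begin{verbatim}
import sympy as sp

d = sp.symbols('delta')
N = 6
a, b = [sp.Integer(1)], [sp.Integer(1)]   # A(0) = B(0) = 1
for k in range(1, N + 1):
    ak, bk = sp.symbols(f'a{k} b{k}')
    A = sum(c*d**(2*i) for i, c in enumerate(a)) + ak*d**(2*k)
    B = sum(c*d**(2*i) for i, c in enumerate(b)) + bk*d**(2*k)
    # residuals of the two ODEs in eq. (fixed-point)
    r1 = d**3*B*sp.diff(A, d) - (1 - A - 2*d**2*A*B)
    r2 = d**3*B*sp.diff(B, d) - (-B - 2*d**2*B**2 + A**2)
    sol = sp.solve([sp.expand(r1).coeff(d, 2*k),
                    sp.expand(r2).coeff(d, 2*k)], [ak, bk])
    a.append(sol[ak]); b.append(sol[bk])

print(a[1], b[1])   # -2 and -6, as in the expansions above; at large
                    # k the coefficients grow like (-2)^k k!
\end{verbatim}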
We can strengthen our argument through the interpretation of $\kappa = -1/2$ as the location of the convergence-limiting singularity in the complex Borel plane. Borel singularities, and the overall transseries structure, are intimately related to the large-order growth of the perturbative coefficients~\cite{Borinsky:2021hnd}. In our case this can be determined to be, to leading order,
\begin{equation} A_{k}, B_{k} \sim (-2)^k k! \end{equation}
by computing many coefficients using the recursion relations in Ref.~\cite{Ekman:2021eqc}. We compute a normalised Borel transform
\begin{equation} \label{eq:Borel} \operatorname{Borel}[\ensuremath{\mathcal{A}}](t) = \sum_k \frac{A_{k}}{2^k k!} t^k \, . \end{equation}
The transform cancels the factorial growth of the $A_k$, producing a series with a finite radius of convergence, which can be analytically continued. With this normalisation we expect the leading singularity to appear at $t = -1$. The convergence-limiting singularity of the analytical continuation can now be probed using Pad\'e approximants. Pad\'e can struggle to identify multiple branch cuts, as it must accumulate poles along a cut to approximate it. This difficulty can be circumvented with a conformal map~\cite{Kleinert:2001ax,le2012large,Costin:2019xql,Costin:2020hwg}, making it also possible to identify singularities beyond the leading one~\cite{Borinsky:2021hnd,Dunne:2021acr} and to increase the accuracy of resummations~\cite{Florio:2019hzn,Torgrimsson:2020wlz,Costin:2021bay}. Even without conformal mapping, though, there is a clear accumulation of Borel-Pad\'e poles along the ray $t \le -1$, seen in \cref{fig:poles}.
\begin{figure}[hbt]
\centering
\includegraphics[width=\linewidth]{Figures/naive-poles.pdf}
\caption{ Borel-Pad\'e poles accumulating along the negative real axis, indicating the presence of a branch cut. }
\label{fig:poles}
\end{figure}
A fairly large number of terms is needed to see the structure in \cref{fig:poles}. The reason for this is that while
\begin{equation} \frac{B_k}{k B_{k-1}} \xrightarrow{k \to \infty} \frac{1}{\kappa} = -2 \, , \end{equation}
there are slowly decaying subleading corrections. Even after applying high-order Richardson extrapolation~\cite[Ch.~8.1]{Bender:1999}, the slow convergence persists. Empirically, this is because the subleading large-order behaviour of the coefficients is \emph{logarithmic},
\begin{equation} \frac{B_k}{k B_{k-1}} \approx -2 \left[ 1 + \frac{\Lambda}{k} (\log k)^2 + \mathcal{O}\left( (\log k)^2/k^2 \right) \right] \, , \end{equation}
and so not eliminated by standard Richardson extrapolation. Modifying Richardson extrapolation to account for logarithmic corrections (see Ref.~\cite{Borinsky:2021hnd} and \cref{app:richardson}), the convergence is improved significantly, as shown in \cref{fig:richardson}.
\begin{figure}[htp]
\centering
\includegraphics[width=\linewidth]{Figures/Richardson.pdf}
\caption{ Slow convergence of $\theta_k = -\frac{B_k}{k B_{k-1}} - 2$ as $k \to \infty$, even applying order $8$ Richardson extrapolation ($R_8$), due to subleading logarithmic corrections. The modified extrapolations $R^{(2,3)}_K$ are accurate up to corrections of order $(\log k)^{(1,2)}\, k^{-K}$. }
\label{fig:richardson}
\end{figure}
Instead of a Pad\'e approximant, we can use a hypergeometric approximant in the Borel plane~\cite{Mera:2018qte}. With perturbative data up to order $N = 2M + 1$ a hypergeometric $_{M+1}F_M(\cdots, \cdots ; t/\hat{\kappa}_M )$ can be fitted; it has a built-in branch cut at $\hat{\kappa}_M$.
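The pole accumulation of \cref{fig:poles} is easy to reproduce schematically. In the sketch below (ours, an illustration only) we use model coefficients with the observed leading growth $(-2)^k k!$, dressed with a $1/(k+1)$ tail so that the normalised Borel sum \cref{eq:Borel} is $-\log(1+t)/t$, a function with a genuine branch point at $t=-1$; the Pad\'e poles of the truncated Borel series then line up on the ray $t \le -1$.
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.interpolate import pade

K = 16
B = [(-2.0)**k * factorial(k) / (k + 1) for k in range(K)]
borel = [B[k] / (2.0**k * factorial(k)) for k in range(K)]
# borel[k] = (-1)^k/(k+1): Taylor coefficients of -log(1+t)/t

p, q = pade(borel, K // 2)   # Pade approximant in the Borel plane
print(np.sort(q.r.real))     # poles are real and <= -1, marking the cut
\end{verbatim}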
\Cref{fig:hypergeometric} shows an example $_{2}F_1$ approximant for $\operatorname{Borel}[B](t)$, and \cref{fig:hypergeometric-cut} shows how $\hat{\kappa}_M$ converges to $\kappa = - \frac{1}{2}$.
\begin{figure}[hbt]
\centering
\includegraphics[width=\linewidth]{Figures/Hypergeometric-B}
\caption{ Hypergeometric $_{2}F_1$ approximant to $\operatorname{Borel}[B](t)$. The built-in branch cut along the negative real axis is evident as a discontinuity in the colouring. At this low order the estimate for the branch point is not very accurate, but this improves at higher order, cf.~\cref{fig:hypergeometric-cut}. }
\label{fig:hypergeometric}
\end{figure}
\begin{figure}[hbt]
\centering
\includegraphics[width=\linewidth]{Figures/Hypergeometric-cut}
\caption{ Estimates of the branch point using a hypergeometric approximant based on perturbative coefficients up to order $2M + 1$. }
\label{fig:hypergeometric-cut}
\end{figure}
We now return to the question of the sign of $\kappa$. While having exponentially \emph{large} terms seems to be against the spirit of perturbation theory, in a purely formal treatment there is no ``wrong sign'' for $\kappa$, which may even be complex. An instructive example (discussed in detail in Ref.~\cite[Sec.~2]{Marino:2012zq}) is the Airy functions, which have the expansions
\begin{equation} 2 \Ai(z), \Bi(z) \sim \frac{z^{-1/4}}{\sqrt{\pi}} e^{\mp \frac{2}{3} z^{3/2} } \left(1 + \ordo{z^{-3/2}} \right) \end{equation}
as $z \to + \infty$ along the real axis. The exponentially large $\Bi$ is a valid solution to the Airy equation; it just does not match the boundary condition $f(+\infty) = 0$. As $z \to -\infty$ both $\Ai$ and $\Bi$ become oscillatory, corresponding to an imaginary $\kappa$. This is the eponymous phenomenon first studied by Stokes~\cite{Stokes1851,Stokes1864} in precisely the context of the Airy functions. (For a physical example with imaginary $\kappa$, see Ref.~\cite{Dunne:2021acr}.)
For LAD the initial acceleration is to be specified, while for $\LL\infty$ it is determined by the initial momentum and $\ensuremath{\mathcal{A}}, \ensuremath{\mathcal{B}}$ at $\delta_0 = \tau_0 \ensuremath{\mathcal{E}} p_0^{\scriptscriptstyle +}$. Only the ODEs~\cref{eq:fixed-point} need to hold for a solution of $\LL\infty$ to be a solution to LAD; hence the choice of initial condition for the ODEs~\cref{eq:fixed-point} determines which, among all solutions of LAD with a given initial momentum, is picked out by $\LL\infty$. By dotting $n^\mu$ into and squaring~\cref{eq:LL-infinity}, respectively, we find that $\LL\infty$ implies
\begin{equation} \dot{p}^{\scriptscriptstyle +} = - \frac{\delta^2}{\tau_0} \, p^{\scriptscriptstyle +} \ensuremath{\mathcal{B}}(\delta) \end{equation}
and
\begin{equation} \label{eq:pdot-sq} \tau_0^2 \dot{p}^2 = -\ensuremath{\mathcal{A}}(\delta)^2 \delta^2 - \ensuremath{\mathcal{B}}(\delta)^2 \delta^4 \, . \end{equation}
With the Lorentz initial condition~\cref{eq:A-B-ic} the resummed perturbative $\ensuremath{\mathcal{A}}_\text{pert}, \ensuremath{\mathcal{B}}_\text{pert}$ are positive and approach $1$ smoothly as $\delta \to 0$. This means that $\delta \to 0$ and thence $\dot{p}^2 \to 0$ as $\tau \to \infty$. The solution of $\LL\infty$ is therefore the \emph{physical, non-runaway, solution} of LAD, shown in \cref{fig:LAD-LLinf-step}.
In other words, $\LL\infty$ with the Lorentz initial condition~\cref{eq:A-B-ic} determines the \emph{critical acceleration}
\begin{equation} \label{eq:crit-acc} \dot{p}^\mu_\text{crit} := \ensuremath{\mathcal{A}}_\text{pert}(\delta_0) f^{\mu}{}_{\nu} p^\nu_{0} + \ensuremath{\mathcal{B}}_\text{pert}(\delta_0) (Pf^2)^{\mu}{}_{\nu} p^\nu_{0} \end{equation}
that, for a given field strength and initial momentum, leads to the physical solution of LAD.
As the purely perturbative solution of~\cref{eq:fixed-point} leads to the physical solution to LAD, the remaining, non-perturbative, solutions must lead to the runaways. This is clear from~\cref{eq:pdot-sq} at least when $\ensuremath{\mathcal{B}} \xrightarrow{\delta \to 0} +\infty$, but we will present a concrete example. We cannot find non-perturbative solutions with an initial condition at $\delta = 0$, but we can equally well set the initial condition at $\delta_0 = \tau_0 \ensuremath{\mathcal{E}} p^{\scriptscriptstyle +}_0$. Again using the Airy functions to illustrate, with the boundary condition $f(+\infty) = 0$ we discard $\operatorname{Bi}$, but setting a condition at finite argument retains it. The solution of~\cref{eq:fixed-point} satisfying
\begin{subequations} \label{eq:non-pert-ic} \begin{align} \ensuremath{\mathcal{A}}(\tau_0 \ensuremath{\mathcal{E}} p^{\scriptscriptstyle +}_0) & = \ensuremath{\mathcal{A}}_\text{pert}(\tau_0 \ensuremath{\mathcal{E}} p^{\scriptscriptstyle +}_0) \\ \ensuremath{\mathcal{B}}(\tau_0 \ensuremath{\mathcal{E}} p^{\scriptscriptstyle +}_0) & = \ensuremath{\mathcal{B}}_\text{pert}(\tau_0 \ensuremath{\mathcal{E}} p^{\scriptscriptstyle +}_0) + \frac{\tau_0}{\delta^2_0} \varepsilon \, . \end{align} \end{subequations}
will give us a solution to LAD with an initial longitudinal acceleration differing from the critical one by $\varepsilon$; we expect this solution to be a runaway. The general solution~\cref{eq:gen-sol} with $c_1 = 0, c_2 = \tau_0 \varepsilon e^{-1/2\delta_0^2}$ verifies the initial condition~\cref{eq:non-pert-ic}. This is only the leading term at first non-perturbative order, but it will be sufficient. This gives us for the longitudinal acceleration
\begin{equation} \begin{split} \frac{\ensuremath{\mathrm{d}} p^{\scriptscriptstyle +}}{\ensuremath{\mathrm{d}} x^{\scriptscriptstyle +}} = - \frac{\delta^2}{\tau_0} \ensuremath{\mathcal{B}}(\delta) = & -\frac{\delta^2}{\tau_0} \Big[ 1 - 6 \delta^2 + \ldots \\ & + \frac{\tau_0 \varepsilon}{\delta^2} \exp \big( \frac{1}{2\delta^2} - \frac{1}{2\delta_0^2} \big) \Big] \, . \end{split} \end{equation}
After a short time $\delta(x^{\scriptscriptstyle +}) \approx \delta_0 + x^{\scriptscriptstyle +} \ensuremath{\mathcal{E}} \tau_0 \frac{\ensuremath{\mathrm{d}} p^{\scriptscriptstyle +}}{\ensuremath{\mathrm{d}} x^{\scriptscriptstyle +}}$, and using this to expand we have
\begin{equation} \label{eq:runaway-acc} \frac{\ensuremath{\mathrm{d}} p^{\scriptscriptstyle +}}{\ensuremath{\mathrm{d}} x^{\scriptscriptstyle +}} = - \frac{\delta^2_0}{\tau_0} \Big( \ensuremath{\mathcal{B}}_\text{pert}(\delta_0) + \frac{\tau_0 \varepsilon}{\delta^2_0} e^\frac{x^{\scriptscriptstyle +}}{\tau_0 p^{\scriptscriptstyle +}_0} + \ldots \Big) \end{equation}
omitting some inessential terms. Clearly the second term inside the brackets is a runaway over a proper time $\tau_0$, and it is only seen because we included non-perturbative terms in $\ensuremath{\mathcal{B}}$.
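The exponential departure in \cref{eq:runaway-acc} is easy to see numerically. The following Python sketch (ours, with hypothetical parameter values) evaluates the bracket in \cref{eq:runaway-acc} and shows that, no matter how small $\varepsilon$ is, the instanton term overtakes $\ensuremath{\mathcal{B}}_\text{pert}$ after a few multiples of $\tau_0 p_0^{\scriptscriptstyle +}$ in lightfront time.
\begin{verbatim}
import numpy as np

tau0, p0, d0 = 1.0, 1.0, 0.1   # units with tau0 = 1; hypothetical values
eps = 1e-6                     # tiny offset from the critical acceleration

B_pert = 1.0 - 6.0 * d0**2     # two-term truncation of B_pert
for xp in np.linspace(0.0, 15.0, 4) * tau0 * p0:
    inst = (tau0 * eps / d0**2) * np.exp(xp / (tau0 * p0))
    print(f"x+ = {xp:5.1f}: B_pert = {B_pert:.3f}, instanton = {inst:.3g}")
# the instanton term grows like e^{x+/(tau0 p0+)} and dominates well
# before x+ = 15 tau0 p0+, however small eps was chosen
\end{verbatim}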
The form of~\cref{eq:LL-infinity} is fully determined by there being only two possible tensor structures and one scalar invariant ($\delta$) in the CCF geometry. Another highly restricted geometry is a circularly polarised monochromatic plane wave, and it is possible to derive equations similar to~\cref{eq:fixed-point}, and hence $\LL\infty$, also in that case. It can be studied with the Borel plane methods we have applied to the CCF in this Section; as the details are very similar, we defer them to \cref{app:monochromatic}. In either case, we find that reduction of order eliminates non-perturbative terms at the level of the \emph{equation of motion} when a physical boundary condition, matching to the Lorentz force at vanishing field intensity, is imposed. We therefore now turn to how non-perturbative pre-accelerating and runaway solutions arise at the level of \emph{solutions} to LAD.
\section{LAD and \texorpdfstring{$\LL\infty$}{LL∞} in a crossed step field}
\label{sec:step}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.48\textwidth]{Figures/LAD-LLinf/up.pdf}
\includegraphics[width=0.48\textwidth]{Figures/LAD-LLinf/u1.pdf}
\caption{ Longitudinal and transverse momentum components across a step. $\LL\infty$ is seen to agree with the physical solution of LAD after the step, while the latter exhibits pre-acceleration. The pre-acceleration occurs over a few $\tau_0$ worth of proper time $\sim x^{\scriptscriptstyle +}/p^{\scriptscriptstyle +}$. }
\label{fig:LAD-LLinf-step}
\end{figure}
We will now consider LAD and $\LL\infty$ in a field with a step profile, i.e.,
\begin{equation} a'_\mu = \mathcal{E} \theta(x^{\scriptscriptstyle +}) \epsilon_\mu \, . \end{equation}
That the field is off for an interval of time will allow an unambiguous identification of pre-acceleration. For LAD we are faced with the problem of matching solutions before and after the step. The integro-differential form of LAD~\cite{Rohrlich1961,Plass:1961zz,PhysRevE.88.033203}, however, shows that the acceleration is continuous across a step, and so we should use the critical acceleration~\cref{eq:crit-acc}.
\subsection{ Exact Solution to Free LAD }
Before the step, with the field turned off, all the equations of motion can be solved exactly. For $\LL1$ and $\LL\infty$ the solution is just uniform motion, while for LAD we make an Ansatz in terms of proper time $\tau$ and the rapidity $\zeta$,
\begin{equation} \label{eq:LAD-Ansatz} p^\mu(\tau) = \cosh(\zeta(\tau)) p^\mu_0 + \sinh(\zeta(\tau)) \frac{\dot{p}_0^\mu}{\sqrt{-\dot{p}_0^2}} \, , \end{equation}
where the subscript $0$ indicates values at $\tau = 0$. LAD then implies an initial-value problem for $\zeta$,
\begin{equation} \label{eq:zeta-ivp} \tau_0 \ddot{\zeta} = \dot{\zeta} \qquad \zeta(0) = 0 \quad \dot{\zeta}(0) = \sqrt{-\dot{p}_0^2} \, , \end{equation}
with solution
\begin{equation} \label{eq:zeta-sol} \zeta = \tau_0 \sqrt{-\dot{p}_0^2} \left( e^{\tau/\tau_0} - 1 \right) \, . \end{equation}
We see that the pre-step solution is pre-accelerating unless $\dot{p}_0^\mu = 0$. Viewed forwards in time this solution generalises the well-known non-relativistic runaway in that the exponential runaway is in the rapidity, rather than in the velocity.
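This covariant solution is easy to verify numerically. On the mass shell $p\cdot\ddot p = -\dot p^2$, so free LAD closes into the first-order system $\dot p^\mu = q^\mu$, $\dot q^\mu = q^\mu/\tau_0 - p^\mu\,(q\cdot q)$; the Python sketch below (our numerical check, with arbitrary on-shell initial data) integrates this system and compares it with \cref{eq:LAD-Ansatz,eq:zeta-sol}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus metric
dot = lambda a, b: a @ eta @ b

tau0 = 1.0                                # units where tau0 = 1
p0 = np.array([np.sqrt(1.25), 0.0, 0.0, 0.5])  # p0^2 = 1, on shell
q0 = np.array([0.0, 0.1, 0.0, 0.0])       # initial acceleration, p0.q0 = 0

def rhs(tau, y):
    p, q = y[:4], y[4:]
    # free LAD with p.pddot = -q.q eliminated
    return np.concatenate([q, q / tau0 - p * dot(q, q)])

sol = solve_ivp(rhs, (0.0, 3.0 * tau0), np.concatenate([p0, q0]),
                rtol=1e-10, atol=1e-12)

s = np.sqrt(-dot(q0, q0))                 # sqrt(-pdot_0^2)
zeta = tau0 * s * (np.exp(sol.t / tau0) - 1.0)
p_exact = np.cosh(zeta)[:, None] * p0 + np.sinh(zeta)[:, None] * (q0 / s)
print(np.abs(sol.y[:4].T - p_exact).max())  # small (~1e-8): forms agree
\end{verbatim}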
To the best of our knowledge, the covariant solution matched to the initial conditions, \cref{eq:LAD-Ansatz,eq:zeta-sol}, has not previously appeared in the literature%
~\footnote{ An expression involving $\sinh$ appears in Ref.~\cite{Plass:1961zz} but the approach there is not explicitly covariant and only seeks to prove that the non-runaway solution is uniform motion. }. The solution has the form of a transseries in $\tau_0$ with, expanding the hyperbolic functions, non-perturbative instanton terms of all orders. For $\tau > 0$ these become \emph{large} as $\tau_0 \to 0$, corresponding to faster runaways; for $\tau < 0$ they become \emph{small} in this limit, corresponding to the pre-acceleration occurring in a ``boundary layer'' of width $\approx \tau_0$. Note, though, that the solution is analytic in the proper time $\tau$: the pre-factor of each $e^{\ell \tau/\tau_0}$ term is some power series in $\tau_0$, cf.~\cref{eq:crit-acc}. In fact $\tau$ appears only as $\tau/\tau_0$, and after a change of variables $(\tau, \tau_0) \to (\tilde{\tau}, \tau_0) = (\tau/\tau_0, \tau_0)$ the solution is analytic in both variables. This can be traced to the fact that in free LAD, or equivalently~\cref{eq:zeta-ivp}, the only scale is $\tau_0$, which can be eliminated by rescaling. There is then no coupling in which to do perturbation theory, but the equation can be solved as a power series in the rescaled time; $\tau_0$ re-enters when substituting for the initial condition~\cref{eq:crit-acc}. This is a simple demonstration that the character of a transseries in \emph{two} variables can change dramatically under a non-linear change of variables. Such non-linear transformations can in effect perform partial resummations in one of the variables, a point previously discussed in the contexts of a unitary matrix model~\cite{Ahmed:2017lhl} and radiation reaction~\cite{Torgrimsson:2021zob}. As a consequence there can be subtleties in how (e.g., in which order) limits are taken; we will return to this shortly. (See also Refs.~\cite{Podszus:2018hnz,Ilderton:2019kqp} for another example in strong-field physics where the manner of taking a limit matters.)
The above discussion has been in terms of proper time only, while the rest of this paper uses lightfront time. We therefore conclude this subsection with a short discussion of the solution of free LAD in lightfront parametrisation. In lightfront time the equation for the rapidity retains factors of $\cosh \zeta, \sinh \zeta$ and cannot be solved analytically. Alternatively we can obtain the lightfront time by quadrature,
\begin{equation} \label{eq:xplus-implicit} x^{\scriptscriptstyle +}(\tau) = \int_0^\tau \ensuremath{\mathrm{d}} \sigma \, p^{\scriptscriptstyle +}(\sigma) \, . \end{equation}
While this integral does have an analytic expression in terms of $\Ei(\cdot)$, it gives only an implicit relation for $\tau(x^{\scriptscriptstyle +})$. It can, though, be inverted order by order to NLO in $x^{\scriptscriptstyle +}/\tau_0$ to find
\begin{equation} \tau/\tau_0 = \frac{x^{\scriptscriptstyle +}}{\tau_0 p^{\scriptscriptstyle +}_0} - \frac{\tau_0 \dot{p}_0^{\scriptscriptstyle +}}{2 (p^{\scriptscriptstyle +}_0)^3} (x^{\scriptscriptstyle +}/\tau_0)^2 + \ordo{(x^{\scriptscriptstyle +}/\tau_0)^3} \, . \end{equation}
Inserting this back into~\cref{eq:LAD-Ansatz,eq:zeta-sol} yields another example of a non-linear transformation strongly modifying the two-variable transseries structure.
A convenient formulation of LAD in a CCF was provided in Ref.~\cite{Torgrimsson:2021zob}; in slightly different notation it reads
\begin{subequations} \label{eq:torgrimsson} \begin{align} g' & = \delta \left[ \partial_u ( g g' ) + g^2 P \right] \\ h' & = 1 + \delta \left[ \partial_u ( g h' ) + g h P \right] \label{eq:torgrimsson-b} \\ P & = (g')^2 - (h')^2 + 2 g' \partial_u \frac{1 + h^2 - g^2}{2 g} \notag \, . \end{align} \end{subequations}
Here $g$ and $h$ are normalised longitudinal and transverse components respectively,
\begin{subequations} \begin{align} g & := p^{\scriptscriptstyle +}/p_0^{\scriptscriptstyle +} \\ h & := g p^\mathfrak{1}_0 - p^\mathfrak{1} \, , \end{align} \end{subequations}
the prime is a derivative with respect to the normalised lightfront time $u := \ensuremath{\mathcal{E}} x^{\scriptscriptstyle +}$, and $\delta^2 = \tau_0^2 p_0 f^2 p_0$, i.e., we drop the subscript on $\delta_0$ from the previous Section. Ref.~\cite{Torgrimsson:2021zob} solves these equations iteratively by noting that if $g, h$ have series expansions in $\delta$, with the coefficients being functions of time,
\begin{equation} \label{eq:gh-ansatz} \begin{Bmatrix} g \\ h \end{Bmatrix} \sim \sum_n \delta^n \begin{Bmatrix} g_n(u) \\ h_n(u) \end{Bmatrix} \end{equation}
then the order $n$ terms of the RHS are determined by terms of strictly lower order, so $g_n, h_n$ can be found iteratively by simple integration. The zeroth order starting point is $g_0 = 1, h_0 = u$, corresponding to the Lorentz force. The coefficients are polynomials in $u$, with the first few being as follows:
\begin{widetext}
\begin{subequations} \label{eq:gh-sol} \label{eq:torgrimsson-sol-perturb} \begin{align} g(u) & = 1 - u \delta + u^2 \delta^2 + (6u - u^3) \delta^3 + (-18 u^2 + u^4) \delta^4 + \ordo{\delta^5} \\ h(u) & = u - \frac{1}{2} u^2 \delta + \left(-2u + \frac{u^3}{2}\right) \delta^2 + \left(6u^2 - \frac{u^4}{2}\right) \delta^3 + \left(20 u - \frac{41 u^3}{3} + \frac{u^5}{2} \right) \delta^4 + \ordo{\delta^5} \, . \end{align} \end{subequations}
\end{widetext}
Notably $g'(0), h'(0)$ have precisely the same perturbative expansion as one would find using $\LL\infty$ for $\dot{p}^\mu_0$. The solution~\cref{eq:torgrimsson-sol-perturb} also illustrates the care needed in taking limits of formal, divergent expansions. At each order in $\delta$ the leading behaviour in $u$ of $g$ is $(-u\delta)^n$; this part of the series has a finite radius of convergence and can be resummed into $1/(1 + u \delta)$, which is the exact solution of $\LL1$ \cite{Heintzmann:1972mn,PiazzaExact}. This has a single pole in the complex plane \footnote{ The pole indicates that there is a minimum lightfront time in the past at which the particle was at the speed of light, cf.~Refs.~\cite{Tomaras:2000ag,Woodard:2001hi,Ekman:2021vwg} } and its Borel transform ($e^{-ut}$) is everywhere analytic. For any fixed $u$ the linear term $\sim n! u$ will always win over $u^n$, though, meaning that the $u \to \infty$ limit must be taken \emph{inside} the sum in~\cref{eq:gh-ansatz}. If this iterative method is applied to free LAD (which corresponds to striking the $1$ on the RHS of~\cref{eq:torgrimsson-b}), only the ``trivial'' solution of uniform motion is found. It is to be expected that solutions are lost, as the method is only sensitive to initial conditions for the momentum, not the acceleration.
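This iterative integration is straightforward to automate. The following sympy sketch (ours, for illustration) performs a fixed-point iteration on \cref{eq:torgrimsson}, truncating in $\delta$ and integrating from the Lorentz-force starting point at each sweep; every sweep gains one order in $\delta$, and the output reproduces \cref{eq:gh-sol}.
\begin{verbatim}
import sympy as sp

u, d = sp.symbols('u delta')
N = 5                      # truncation order in delta
G, H = sp.Integer(1), u    # Lorentz force: g0 = 1, h0 = u

for _ in range(N):
    Gu, Hu = sp.diff(G, u), sp.diff(H, u)
    P = Gu**2 - Hu**2 + 2*Gu*sp.diff((1 + H**2 - G**2)/(2*G), u)
    rg = sp.series(d*(sp.diff(G*Gu, u) + G**2*P), d, 0, N).removeO()
    rh = sp.series(1 + d*(sp.diff(G*Hu, u) + G*H*P), d, 0, N).removeO()
    # g' = rg with g(0) = 1; h' = rh with h(0) = 0
    G = 1 + sp.integrate(sp.expand(rg), (u, 0, u))
    H = sp.integrate(sp.expand(rh), (u, 0, u))

print(sp.expand(G))  # 1 - u*delta + u**2*delta**2 + (6u - u**3)*delta**3 ...
print(sp.expand(H))  # u - (u**2/2)*delta + (-2u + u**3/2)*delta**2 ...
\end{verbatim}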
In either case, the generated perturbative solution is the physical solution (but must be resummed), and we must introduce non-perturbative transseries terms to capture pre-acceleration and runaways. To find all solutions, including pre-accelerating and runaway solutions, instead of a simple series in $\delta$ we should therefore use a transseries Ansatz,
\begin{equation} \label{eq:trans-series-std} g(u) \sim \sum_{n, \ell} \delta^n e^{\ell u/\delta} g_{n,\ell}(u) \, . \end{equation}
We will refer to terms with $\ell \ge 1$ as \emph{instanton} terms by analogy with quantum theory~\cite{Lipatov:1977hj}, even though their origin is different. Note again that the coefficients are functions of time; as in the previous subsection this is an expansion in two variables. The operator $\sim \delta \frac{\ensuremath{\mathrm{d}}^2}{\ensuremath{\mathrm{d}} u^2}$ on the RHS of~\cref{eq:torgrimsson} lowers by $1$ the degree in $\delta$ of any term with $\ell \ge 1$. Thus we no longer have that $g'_{n, \ell}$ is determined by simply integrating lower-order coefficients, but rather by coupled first-order ODEs. The expansion~\cref{eq:trans-series-std} is also lacking in that the initial conditions for the $g_{n,\ell}$ are grossly underdetermined, as we only have
\begin{equation} \label{eq:instanton-ic-mom} \begin{split} g(0) & = 1 \sim \sum_n \delta^n \sum_\ell g_{n, \ell}(0) \\ h(0) & = 0 \sim \sum_n \delta^n \sum_\ell h_{n, \ell}(0) \end{split} \end{equation}
and similar initial conditions for the acceleration,
\begin{equation} \label{eq:instanton-ic-acc} g'(0) \sim \sum_n \delta^{n-1} \sum_\ell g_{n-1, \ell}'(0) + \ell g_{n, \ell}(0) \, . \end{equation}
Since the zeroth and first derivatives of $g, h$ at $0$ determine all higher derivatives at $0$ through LAD~\cref{eq:torgrimsson}, there is in principle an infinite hierarchy of constraints resolving the underdetermination. However, each rung of the ladder involves instanton terms of all orders, so we cannot proceed iteratively. (The system is not ``triangular'', so to speak.) We are thus unable to determine iteratively, in a fully self-consistent manner, the precise transseries form of a specified runaway or pre-accelerating solution. We can however truncate the system to one-instanton terms and assume that their initial amplitudes are $\ordo{\varepsilon}$, which will be accurate to $\ordo{\varepsilon^2 e^{2u/\delta}}$.
To make contact with the preceding section we will look for a solution such that
\begin{subequations} \begin{align} g'(0) & = -\frac{\varepsilon}{\delta} - \delta \ensuremath{\mathcal{B}}_\text{pert}(\delta) \\ h'(0) & = \ensuremath{\mathcal{A}}_\text{pert}(\delta) \, . \end{align} \end{subequations}
This again corresponds to a runaway with initial acceleration $\varepsilon$ different from the critical one. (This is for concreteness only; the ODEs for the instanton coefficients are linear and other initial conditions pose no greater problems.) At $n = 0, \ell = 0$, we have $ g_{0,0} = 1 + \varepsilon, h_{0,0} = u$, keeping an integration constant that was implicitly dropped in the perturbative solution in order to account for instantonic contributions to the initial momentum. This implies order $\varepsilon$ corrections to the following perturbative terms, beginning with $g_{1,0} = 2\varepsilon - u, h_{1,0} = - \varepsilon + \frac{u^2}{2}(\varepsilon - 1)$.
The $n = 0, \ell = 1$ components~\cite{Torgrimsson:2021zob} verify
\begin{widetext}
\begin{equation} \label{eq:instanton-ode} \frac{\ensuremath{\mathrm{d}}}{\ensuremath{\mathrm{d}} u} \begin{pmatrix} g_{0,1} \\ h_{0,1} \end{pmatrix} = \begin{pmatrix} \tilde{g}_{1,0} + u & -2 \\ 1 + 2u^2 & \tilde{g}_{1,0} - 3u \end{pmatrix} \begin{pmatrix} g_{0, 1} \\ h_{0, 1} \end{pmatrix} \implies \begin{pmatrix} g_{0,1}(u) \\ h_{0,1}(u) \end{pmatrix} = \varepsilon e^{u^2/2} \begin{pmatrix} \cos 2u \\ u \cos 2u - \sin 2u \end{pmatrix} \, . \end{equation}
\end{widetext}
For any initial acceleration other than the critical one the instanton coefficients $g_{0,1}, h_{0, 1}$ grow superexponentially, i.e., we have a runaway solution. This procedure can in principle be iterated to any instanton order and any order in $\delta$, although expanding the RHS of~\cref{eq:torgrimsson} becomes progressively costlier. At order $\delta^n e^{\ell u/\delta}$ the instanton coefficients take the form $ \varepsilon^\ell \operatorname{Re} \left[ P_{n,\ell} (u) e^{\ell (u^2/2 - 2 i u) } \right] $ for some complex polynomial $P_{n,\ell} $ of degree $n$. We have calculated $P_{n,1}$ up to $n = 16$, for which the constant terms and leading coefficients grow factorially and exponentially, respectively. Hence, just like the perturbative series, the instanton series must also be resummed for small $u$, but is convergent when the limit $u \to \infty$ is taken inside the sum.
The Gaussian form can be understood as the instanton coefficients reconstructing the non-trivial dependence $\tau(x^{\scriptscriptstyle +})$. For the free solution,
\begin{equation} \label{eq:xp-tau} \frac{\tau}{\tau_0} \approx \frac{x^{\scriptscriptstyle +}}{\tau_0 p^{\scriptscriptstyle +}_0} - \frac{(x^{\scriptscriptstyle +})^2 \dot{p}^{\scriptscriptstyle +}_0}{2 \tau_0 (p_0^{\scriptscriptstyle +})^3} = \frac{u}{\delta} + \frac{u^2}{2} + \ordo{u^3 \delta} \end{equation}
when the initial acceleration is (close to) the critical one. Because the quadratic term is independent of $\delta$ it appears separately at each order, and the modification to the exponent can be read off directly. The next term in the exponent, going like $u^3 \delta$, cannot be identified at a single order in $\delta$, but would appear in an explicit resummation.
\section{Conclusions}
\label{sec:concs}
We have used the Lorentz-Abraham-Dirac (LAD) equation for radiation reaction (RR) as a ``laboratory'' setting in which to probe non-perturbative physics using transseries methods. Our choice of LAD for this purpose is motivated both by the large current interest in radiation reaction~%
\cite{Burton:2014wsa,Blackburn:2019rfv,Gonoskov:2021hwf,Damour:2020tta,DiVecchia:2021bdo,Herrmann:2021tct,Bjerrum-Bohr:2021vuf,Torgrimsson:2021wcj,Torgrimsson:2021zob,Ekman:2021eqc,Heinzl:2021mji}, and by LAD featuring known, non-perturbative physics: pre-acceleration and runaway solutions. It is also a time-dependent problem, allowing us to study double expansions (in a time and a coupling), while most applications have looked at expansions in a coupling only~\cite{Florio:2019hzn,Torgrimsson:2020wlz,Mironov:2020gbi,Heinzl:2021mji,Dunne:2021acr,Borinsky:2021hnd}. (But see Refs.~\cite{Ahmed:2017lhl,Torgrimsson:2021wcj,Torgrimsson:2021zob}.)
Extending our previous work on reduction of order and RR~\cite{Ekman:2021eqc}, we have shown that the non-perturbative runaway solutions are eliminated by reduction of order only when an essentially perturbative initial condition is applied.
We illustrate this with the toy model (similar examples are found in several textbooks, e.g.~\cite[Ch.~7]{Bender:1999}; a numerical check is sketched at the end of this section) \begin{equation} \label{eq:toy-model} z = 1 - \varepsilon z^2 \end{equation} for a small parameter $\varepsilon$. The two solutions to this equation are \begin{equation} \begin{split} z_\pm & = \frac{1}{2\varepsilon} (-1 \pm \sqrt{1 + 4 \varepsilon} ) \\ & = \begin{cases} \phantom{-\frac{1}{\varepsilon} - \;\,} 1 - \varepsilon + 2 \varepsilon^2 - 5 \varepsilon^3 \cdots \\ -\frac{1}{\varepsilon} - 1 + \varepsilon - 2 \varepsilon^2 + 5 \varepsilon^3 \cdots \end{cases} \, . \end{split} \end{equation} If reduction of order is initiated with $z_0 = 1 + O(\varepsilon)$, only the purely perturbative solution $z_+$ is seen. If, on the other hand, an Ansatz $z_0 = c_1/\varepsilon + c_2 + O(\varepsilon)$ including a possible non-perturbative term is made, one finds two branches $c_{i,+} = (0,1)$ and $c_{i,-} = (-1,-1)$. These generate $z_+$ and $z_-$, respectively. We see that it is not reduction of order itself that eliminates non-perturbative terms, but reduction of order combined with an initial condition on the purely perturbative branch. When non-perturbative terms are large, as is the case for the toy model~\cref{eq:toy-model} and LAD, this is the only branch smoothly connected to vanishing expansion parameter. Thus we had to set an initial condition~\cref{eq:non-pert-ic} at non-zero expansion parameter to keep non-perturbative runaway solutions with reduction of order. We then considered the transseries structure of \emph{solutions} to LAD. We showed that, to generate a solution of LAD with a given initial (or final, for pre-accelerating solutions) acceleration, instanton terms \emph{of all orders} must in general be kept, and their initial (final) coefficients must be chosen consistently with LAD to the desired accuracy. The one exception to this is when the initial acceleration leads to the physical, non-runaway solution: then all instanton terms vanish, and the solution is entirely perturbative. As time-dependent quantities, solutions to LAD exemplify that expansions in two variables can display strikingly different behaviour in different regions of the variable plane and limits~\cite{Ahmed:2017lhl}, and under non-linear transformations. First, the solution to free LAD contains non-perturbative terms of all instanton orders in one set of variables but not in another. Secondly, in a field, both the perturbative series and the instanton series are divergent and must be resummed at small times, but convergent for large times. Understanding the singularity structure of the Borel transform of a series is important for efficiently resumming it~\cite{Mera:2018qte,Costin:2019xql,Costin:2021bay}. The series~\cref{eq:gh-sol} is difficult to resum at large, finite, times because it ``looks'' convergent, with an analytic Borel transform, whereas Pad\'e has poles. Ref.~\cite{Torgrimsson:2021zob} found that a non-linear transformation effectively performed a partial resummation in one variable, leading to an expansion divergent at all times, and therefore well-suited to Borel-Pad\'e resummation. We take this and our results as a strong indication that a more thorough understanding of multi-variable divergent expansions, Borel transforms, and transseries would be highly useful to guide resummations in time-dependent problems and other expansions in multiple parameters.
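The two branches of the toy model~\cref{eq:toy-model} can be checked with a few lines of computer algebra. The following is a minimal sketch in Python using \texttt{sympy} (the script and its variable names are ours, for illustration only): expanding the exact roots reproduces the two series displayed above, and imposing the equation order by order on the Ansatz $z_0 = c_1/\varepsilon + c_2 + O(\varepsilon)$ recovers the two branches $c_{i,\pm}$.
\begin{verbatim}
import sympy as sp

eps = sp.symbols('eps', positive=True)

# Exact roots of the toy model z = 1 - eps*z**2
zp = (-1 + sp.sqrt(1 + 4*eps)) / (2*eps)
zm = (-1 - sp.sqrt(1 + 4*eps)) / (2*eps)
print(sp.series(zp, eps, 0, 4))  # 1 - eps + 2*eps**2 - 5*eps**3 + O(eps**4)
print(sp.series(zm, eps, 0, 4))  # -1/eps - 1 + eps - 2*eps**2 + 5*eps**3 + ...

# Ansatz with a possible non-perturbative term: z0 = c1/eps + c2
c1, c2 = sp.symbols('c1 c2')
residual = sp.expand(c1/eps + c2 + eps*(c1/eps + c2)**2 - 1)
eqs = [residual.coeff(eps, -1),   # order 1/eps: c1 + c1**2 = 0
       residual.coeff(eps, 0)]    # order 1:     c2 + 2*c1*c2 - 1 = 0
print(sp.solve(eqs, [c1, c2]))    # the two branches (0, 1) and (-1, -1)
\end{verbatim}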
\begin{acknowledgments} \emph{ We thank Tom Heinzl, Anton Ilderton, and Greger Torgrimsson for useful discussions and comments on this manuscript. The author was supported by the Leverhulme Trust, grant RPG-2019-148. } \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec-intro} Nonequilibrium work relations concern the behavior of dynamical systems that are driven out of equilibrium by nonequilibrium forces. In contrast to linear response theory~\cite{kubo1966,MARCONI2008}, where systems are required to be close to equilibrium, nonequilibrium work relations are a set of equalities which hold for general systems far away from equilibrium. The most remarkable ones include Jarzynski's equality~\cite{jarzynski1997-master,jarzynski1997} and Crooks's fluctuation theorem~\cite{crooks1999}. In particular, Jarzynski's equality relates free energy differences to the work that is applied to the system in order to drive the system from one state to another within a finite period of time. Since its first report in $1997$~\cite{jarzynski1997-master,jarzynski1997}, a considerable amount of research has been done, both numerically and experimentally, to study the computation of free energy differences by driving the system out of equilibrium using nonequilibrium forces~\cite{bias-error-2003,optimum-bias-2008,optimal-estimator-minh-2009,path_sampling_zuckerman2004,compare_free_energy_methods_2006}. In recent years, inspired by the work~\cite{generalized-jarzynski-feedback-2010}, there has also been growing interest in generalizing both Jarzynski's equality and fluctuation theorems to nonequilibrium systems under discrete feedback controls~\cite{fluct-thm-2012,generalized-fluct-thm-feedback,detailed-fluct-thm-repeated-discrete-feedback-2010,nonequ-feedback-control-sagawa-2012}. Although Jarzynski's equality ensures that free energy differences can be calculated by pulling the system with any control force (protocol) and that the transition can be completed within any finite time, the efficiency of Monte Carlo estimators for free energy computation based on Jarzynski's equality depends crucially on the control protocol, and therefore careful design is needed. Various techniques, such as importance sampling in trajectory space~\cite{path_sampling_zuckerman2004,optimum-bias-2008}, the use of both forward and reversed trajectories~\cite{crooks-path-ensemble-pre2000,compare_free_energy_methods_2006,optimal-estimator-minh-2009,escorted-simulation2011}, interacting particle system techniques~\cite{Rousset-stoltz-ips-2006}, and the escorted free energy simulation method~\cite{pre-escort-fe-simulation2008,escorted-simulation2011}, have been proposed in order to improve the efficiency of Monte Carlo estimators. Meanwhile, we note that several recent works have considered optimal control protocols which minimize either the average work or the average heat~\cite{optimal-protocol2008, optimal-finite-time-seifert-2007,extracting-work-feedback-2011,optimal-protocols-transport-2011}. However, it is important to point out that, although these protocols are optimal in a certain sense and are physically interesting, they do not necessarily provide the optimal Monte Carlo estimators in the sense of smallest variance. Readers are referred to~\cite{bias-error-2003,biased-sampling-dellago-2005,Jarzynskia2008,dellago-hummer-2014, compare_free_energy_methods_2006} for detailed discussions of related issues. In the aforementioned literature, free energy is often defined as a function of physical parameters, e.g., temperature, volume or pressure, which characterize the macroscopic state of the physical system. This is termed the alchemical transition case in~\cite{tony-free-energy-compuation}.
Free energy also plays an important role in the study of model reduction of complex (molecular) systems along a given reaction coordinate or collective variables. In this context, free energy is often defined as a function of a reaction coordinate which in turn depends on the state of the system. Calculating free energy differences along a given reaction coordinate has attracted considerable attention in the study of molecular systems~\cite{peptide_cmd,enhanced_sampling2014,eric_recent_techniques,basic_ingredients_free_energy,tony-free-energy-compuation}. Similar to the alchemical transition case, Jarzynski-like equalities and their applications in free energy calculation have been considered in~\cite{LELIEVRE2007, Tony-constrained-langevin2012}. Motivated by the development of nonequilibrium work relations and their potential applications, the goal of the current work is to understand these results from a mathematical point of view, and to study variance reduction approaches, such as importance sampling, in Monte Carlo methods for free energy calculation based on Jarzynski's equality. In the alchemical transition case, we provide mathematical proofs of both Jarzynski's equality and fluctuation theorems in a general setting based on the theory of stochastic differential equations, making them more accessible to readers in the mathematical community (we refer to the previous study~\cite{Ge2008} for a mathematical proof of Jarzynski's equality). It is worth emphasizing that the nonequilibrium diffusion processes in our setting are allowed to be irreversible and can have multiplicative noise. Furthermore, Jarzynski's equality is generalized to allow noisy control protocols. This generalization may be useful for studying systems in experiments~\cite{Hummer-szabo-2001}, since the implementation of control protocols through physical devices is typically imprecise to some extent. As an advantage of our mathematical approach, it allows us to elucidate the connection between the thermodynamic integration identity and Jarzynski's equality, which are usually considered two distinct identities involving free energy differences. Such a connection is indeed known in the physics community~\cite{Crooks1998}, but we believe it is helpful to present its mathematical derivation. In the reaction coordinate case, we prove a fluctuation theorem and derive a Jarzynski-like equality based on the fluctuation theorem. These results complement the previous mathematical studies in~\cite{LELIEVRE2007,Tony-constrained-langevin2012}. In both the alchemical transition case and the reaction coordinate case, following our previous studies~\cite{ce_paper2014,Hartmann2017-ptrf,Hartmann2016-Nonlinearity}, we investigate variance reduction approaches in order to compute free energy differences using Monte Carlo methods based on Jarzynski's equality. The paper is organized as follows. In Section~\ref{sec-alchemical}, we study Jarzynski's equality and the fluctuation theorem in the alchemical transition case. In particular, the case when the control protocol is noisy will be considered. The information-theoretic formulation of Jarzynski's equality, the importance sampling method, as well as the cross-entropy method will be discussed in the context of free energy calculation. In Section~\ref{sec-coordinate}, we study the Jarzynski-like equality and the fluctuation theorem in the reaction coordinate case.
Information-theoretic formulations and variance reduction approaches will be discussed following reasoning similar to that in Section~\ref{sec-alchemical}. Two simple numerical examples are studied in detail in Section~\ref{sec-examples} to illustrate the numerical issues of Monte Carlo estimators for free energy calculation as well as the variance reduction ideas proposed in this work. In Appendix~\ref{app-1}, two asymptotic regimes of nonequilibrium processes (fast mixing and slow driving) and, in particular, connections between Jarzynski's equality and the thermodynamic integration identity will be discussed. Appendix~\ref{app-2} records the thermodynamic integration identity in the reaction coordinate case. Appendix~\ref{app-3} contains an alternative proof of the fluctuation theorem (Theorem~\ref{thm-fluct-relation}) in the alchemical transition case. The proof of the fluctuation theorem in the reaction coordinate case (Theorem~\ref{thm-fluct-relation-coordinate}) is given in Appendix~\ref{app-4}. \section{Jarzynski's equality and fluctuation theorem: alchemical transition case} \label{sec-alchemical} In this section, we study Jarzynski's equality and the fluctuation theorem in the alchemical transition case. In Subsection~\ref{sub-sec-setup}, we introduce the dynamical systems which will be studied in this section and fix notations. Jarzynski's equality and the fluctuation theorem will be studied from Subsection~\ref{sub-sec-jarzynski} to Subsection~\ref{subsec-fluct-thm}. Finally, the information-theoretic formulation of Jarzynski's equality and the cross-entropy method will be discussed in Subsection~\ref{subsec-is} and Subsection~\ref{subsec-ce}, respectively. \subsection{Mathematical setup} \label{sub-sec-setup} Consider the stochastic process $x(s) \in \mathbb{R}^n$ which satisfies the stochastic differential equation (SDE) \begin{align} \begin{split} d x(s) & = b(x(s), \lambda(s))\, ds + \sqrt{2\beta^{-1}} \sigma(x(s), \lambda(s)) \,dw^{(1)}(s)\,,\quad s \ge 0\,, \end{split} \label{dynamics-1} \end{align} where $\beta>0$ is a constant and $w^{(1)}(s)$ is a $d_1$-dimensional Brownian motion with $d_1 \ge n$. Both the drift vector $b : \mathbb{R}^n \times \mathbb{R}^m \rightarrow \mathbb{R}^n$ and the matrix $\sigma : \mathbb{R}^n \times \mathbb{R}^m \rightarrow \mathbb{R}^{n \times d_1}$ are smooth functions depending on the \textit{control protocol} $\lambda(s) \in \mathbb{R}^m$, which we assume is governed by \begin{align} d\lambda(s) = f(\lambda(s), s)\,ds + \sqrt{2\epsilon}\, \alpha(\lambda(s), s)\,dw^{(2)}(s)\,. \label{lambda-dynamics} \end{align} In the above, $\epsilon \ge 0$ is related to the intensity of the noise, $\lambda(0)\in\mathbb{R}^m$ is fixed, $f : \mathbb{R}^m \times \mathbb{R}^+ \rightarrow \mathbb{R}^m$, $\alpha : \mathbb{R}^m \times \mathbb{R}^+ \rightarrow \mathbb{R}^{m \times d_2}$ are smooth functions, and $w^{(2)}(s)$ is a $d_2$-dimensional Brownian motion independent of $w^{(1)}(s)$. Notice that in equation (\ref{lambda-dynamics}), the functions $f, \alpha$ are assumed to be independent of $x(s)$, and therefore the control protocol $\lambda(s)$ is of feedback form with respect to itself but does not depend on the system state $x(s)$. More generally, in Subsection~\ref{subsec-fluct-thm}, we will also consider the case when the control protocol is of feedback form with respect to both processes $x(s)$ and $\lambda(s)$, i.e., \begin{align} d\lambda(s) = f(x(s), \lambda(s), s)\, ds + \sqrt{2\epsilon}\, \alpha(x(s), \lambda(s), s)\, dw^{(2)}(s)\,.
\label{lambda-dynamics-full} \end{align} In both cases (\ref{lambda-dynamics}) and (\ref{lambda-dynamics-full}), the infinitesimal generator of the dynamics $\lambda(s)$ for fixed $x(s)$ is given by \begin{align} \mathcal{L}_2 = f \cdot \nabla_\lambda + \epsilon\,(\alpha\alpha^T):\nabla^2_\lambda\,, \label{l-lambda} \end{align} where $\nabla_\lambda$ denotes the gradient operator with respect to the variable $\lambda \in \mathbb{R}^m$ and \begin{align*} (\alpha\alpha^T):\nabla^2_\lambda \phi \coloneqq \sum_{1 \le i,j \le m} (\alpha\alpha^T)_{ij} \frac{\partial^2\phi}{\partial\lambda_i\partial\lambda_j} \,, \end{align*} for a smooth function $\phi$ of variable $\lambda \in \mathbb{R}^m$. For fixed parameter $\lambda\in \mathbb{R}^m$, the dynamics (\ref{dynamics-1}) reads \begin{align} d x(s) & = b(x(s), \lambda)\, ds + \sqrt{2\beta^{-1}} \sigma(x(s), \lambda)\, dw^{(1)}(s)\,,\quad s \ge 0\,, \label{dynamics-1-fixed-lambda} \end{align} and its infinitesimal generator is \begin{align} \mathcal{L}_{1} = b(\cdot, \lambda) \cdot \nabla + \frac{1}{\beta} a(\cdot, \lambda) : \nabla^2\,, \label{l-1} \end{align} where the matrix $a=\sigma\sigma^T$ and $\nabla$ denotes the gradient operator with respect to $x \in \mathbb{R}^n$. Correspondingly, the infinitesimal generator of the joint process $(x(s), \lambda(s))$ is \begin{align} \mathcal{L} = \mathcal{L}_1 + \mathcal{L}_2\,, \label{l-x-lambda} \end{align} since the two Brownian motions $w^{(1)}(s)$, $w^{(2)}(s)$ are independent. Throughout this article, we assume that the drift and noise coefficients satisfy appropriate Lipschitz and growth conditions, such that equations (\ref{dynamics-1})-(\ref{lambda-dynamics-full}) have unique strong solutions~\cite{oksendalSDE}. For each fixed parameter $\lambda\in \mathbb{R}^m$, we further assume that the process $x(s)$ in (\ref{dynamics-1-fixed-lambda}) is ergodic and has a unique invariant measure $\mu_\lambda$ satisfying \begin{align} \mu_\lambda(dx) = \rho(x,\lambda) dx\,, \quad \int_{\mathbb{R}^n} \rho(x,\lambda) dx = 1\,. \label{invariant-mu} \end{align} Furthermore, we introduce the potential \begin{align} V(x,\lambda) = -\beta^{-1} \ln \rho(x,\lambda) + \mbox{constant}\,, \label{potential} \end{align} where the constant only depends on the parameter $\lambda$. Equivalently, we have $\rho(x,\lambda) = \frac{1}{Z(\lambda)} e^{-\beta V(x,\lambda)}$, and the normalization constant $Z(\lambda)$ is given by \begin{align} Z(\lambda) = \int_{\mathbb{R}^n} e^{-\beta V(x,\lambda)} dx \,. \label{normal-const} \end{align} The free energy of the system (\ref{dynamics-1-fixed-lambda}) for a fixed parameter $\lambda \in \mathbb{R}^m$ is defined as \begin{align} F(\lambda) = -\beta^{-1}\ln Z(\lambda)\,. \label{free-energy} \end{align} To proceed, we follow the previous study~\cite{effective_dyn_2017} and introduce the quantity \begin{align} J_i(x,\lambda) = b_i - \frac{1}{\beta \rho} \sum_{j=1}^n \frac{\partial (a_{ij} \rho)}{\partial x_j} \,, \quad 1 \le i \le n\,. \label{flux-def} \end{align} Note that both here and in the following, $J_i$, $b_i$ denote the $i$th component of the vectors $J,\,b$, respectively. Also, the dependence of the functions on the variables $x$ and $\lambda$ will be omitted when no ambiguities arise. 
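Before proceeding, we record a minimal simulation sketch of the pair (\ref{dynamics-1})--(\ref{lambda-dynamics}) (in Python; the concrete model $n = m = d_1 = d_2 = 1$, $b(x,\lambda) = -(x-\lambda)$, $\sigma = \alpha = 1$, $f \equiv 1$, and all numerical parameters are illustrative assumptions, not part of the general setup). For these choices the invariant measure (\ref{invariant-mu}) is Gaussian and the potential (\ref{potential}) is $V(x,\lambda) = (x-\lambda)^2/2$. An Euler--Maruyama discretization reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
beta, eps = 1.0, 0.01      # inverse temperature; protocol noise intensity
T, dt = 1.0, 1e-3
nsteps = int(T / dt)

# Illustrative choices: b(x, lam) = -(x - lam), sigma = 1, f = 1, alpha = 1.
x, lam = rng.normal(0.0, 1.0 / np.sqrt(beta)), 0.0   # x(0) ~ mu_{lam(0)}
for _ in range(nsteps):
    dw1 = rng.normal(0.0, np.sqrt(dt))   # increment of w^(1)
    dw2 = rng.normal(0.0, np.sqrt(dt))   # increment of w^(2), independent
    x = x - (x - lam) * dt + np.sqrt(2.0 / beta) * dw1   # x-dynamics
    lam = lam + 1.0 * dt + np.sqrt(2.0 * eps) * dw2      # noisy protocol
\end{verbatim}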
Since the probability measure $\mu_\lambda$ in (\ref{invariant-mu}) is the invariant measure of the dynamics (\ref{dynamics-1-fixed-lambda}), we can verify that \begin{align} \mbox{div} \Big(J(x, \lambda) e^{-\beta V(x, \lambda)}\Big) \equiv 0\,, \quad \rho-\mbox{a.e.}\hspace{0.2cm} x \in \mathbb{R}^n\,, \label{div-j-zero} \end{align} for every $\lambda \in \mathbb{R}^m$. Thus, (\ref{dynamics-1}) can be written as \begin{align} d x_i(s) & = J_i ds + \frac{1}{\beta \rho} \sum_{j=1}^n \frac{\partial(a_{ij} \rho)}{\partial x_j} ds + \sqrt{2\beta^{-1}} \sum_{j=1}^{d_1}\sigma_{ij}\,dw^{(1)}_j(s)\,, \quad 1 \le i \le n\,, \label{dynamics-1-q} \end{align} or, in vector form, \begin{align} d x(s) & = \Big(J - a \nabla V + \frac{1}{\beta} \nabla \cdot a\Big)\,ds + \sqrt{2\beta^{-1}} \sigma\,dw^{(1)}(s)\,, \label{dynamics-1-q-vector} \end{align} where $\nabla \cdot a$ denotes the vector in $\mathbb{R}^n$ with components \begin{align} (\nabla \cdot a)_i = \sum_{j=1}^n \frac{\partial a_{ij}}{\partial x_j}\,, \quad 1 \le i \le n\,. \label{nabla-dot} \end{align} Finally, we introduce two physical quantities which are associated with the trajectories of the stochastic processes $x(s),\lambda(s)$ and will be relevant for our subsequent study. For each trajectory $x(s)$, $\lambda(s)$ of the dynamics (\ref{dynamics-1}), (\ref{lambda-dynamics-full}) on the time interval $[t_1,t_2] \subseteq [0,T]$, the \textit{change of internal energy} and the \textit{work} done to the system are defined as \begin{align} \begin{split} \Delta \mathcal{U}_{(t_1,\,t_2)} =& V\big(x(t_2), \lambda(t_2)\big) - V\big(x(t_1), \lambda(t_1)\big)\,,\\ W_{(t_1,\,t_2)} =& \int_{t_1}^{t_2} \nabla_{\lambda} V(x(s), \lambda(s)) \circ d\lambda(s) \,, \end{split} \label{u-q-w} \end{align} respectively. Note that, in (\ref{u-q-w}), the notation `$\circ$' indicates that Stratonovich integration has been used.
Using the relation between Stratonovich integration and Ito integration, we can verify the alternative expression \begin{align} \begin{split} W_{(t_1,\,t_2)} = & \int_{t_1}^{t_2} \Big(\nabla_{\lambda} V \cdot f + \epsilon\, \alpha\alpha^T:\nabla^2_\lambda V\Big)\big(x(s), \lambda(s),s\big)\, ds \\ & + \sqrt{2\epsilon} \int_{t_1}^{t_2} \big(\alpha^T \nabla_{\lambda} V\big) \big(x(s), \lambda(s),s\big)\cdot dw^{(2)}(s)\,, \end{split} \label{q-w-1} \end{align} where Ito integration has been used. In the following, we will omit the subscripts and adopt the notation $W=W_{(t_1,t_2)}$ when we consider the time interval $[t_1,t_2] = [0,T]$. Similarly, $W(t)$ will be used to denote the work $W_{(0,t)}$ for $t \in [0,T]$. \subsection{Jarzynski's equality under noisy control protocol} \label{sub-sec-jarzynski} Jarzynski's equality can be derived using different approaches~\cite{Jarzynskia2008}. In this subsection, we provide a simple argument to obtain the (generalized) Jarzynski equality, where the nonequilibrium process $x(s)$ can be irreversible for fixed parameter $\lambda$, the diffusion coefficient $\sigma$ in the equation (\ref{dynamics-1}) of $x(s)$ can be position dependent (multiplicative noise), and the control protocol $\lambda(s)$ can be stochastic ($\epsilon > 0$). The proof has some similarities with the one in~\cite{Hummer-szabo-2001} using the Feynman-Kac formula. As an advantage, our method allows us to elucidate the connection between thermodynamic integration and Jarzynski's equality by analyzing the related PDEs. See Remark~\ref{rmk-1} and Appendix~\ref{app-1} for more details. We first introduce the quantity \begin{align} \begin{split} g(x, \lambda, t) =& \mathbf{E}_{x,\lambda,t} \Big(\varphi(x(T), \lambda(T))\,e^{-\beta W_{(t,T)}} \Big)\\ =& \mathbf{E}_{x,\lambda,t} \Big[\varphi(x(T), \lambda(T))\,e^{-\beta \int_t^T \nabla_{\lambda} V\big(x(u), \lambda(u)\big) \circ\, d\lambda(u)}\Big]\,, \end{split} \label{g-def} \end{align} for fixed $x \in \mathbb{R}^n$, $\lambda \in \mathbb{R}^m$ and $0 \le t \le T$, where $\varphi : \mathbb{R}^n \times \mathbb{R}^m \rightarrow \mathbb{R}$ is a bounded and continuous test function, and $\mathbf{E}_{x,\lambda,t}$ denotes the conditional expectation with respect to the path ensemble of the dynamics (\ref{dynamics-1}), (\ref{lambda-dynamics-full}) starting from $x(t) = x$ and $\lambda(t) = \lambda$ at time $t$. The following lemma is a direct application of the Feynman-Kac formula~\cite{oksendalSDE}, and we provide its proof for completeness. \begin{lemma} Consider the dynamics $x(s), \lambda(s)$ given in (\ref{dynamics-1}), (\ref{lambda-dynamics-full}). The function $g$ defined in (\ref{g-def}) satisfies the equation \begin{align} \begin{split} &\partial_t g + \mathcal{L}_{1} g + \mathcal{L}_2 g -2\epsilon\beta \big(\alpha^T\nabla_\lambda V\big) \cdot \big(\alpha^T \nabla_\lambda g\big) + \Big(\epsilon \beta^2 |\alpha^T\nabla_{\lambda} V|^2 - \beta \mathcal{L}_2V \Big) g = 0\,, \quad 0 \le t < T\,,\\ &g(\cdot,\cdot, T) = \varphi \,, \end{split} \label{g-pde} \end{align} where $\mathcal{L}_1$ is the operator defined in (\ref{l-1}), which is the infinitesimal generator of the dynamics (\ref{dynamics-1}) for $x(s)$ when $\lambda \in \mathbb{R}^m$ is fixed, and $\mathcal{L}_2$ is the operator defined in (\ref{l-lambda}) for the process $\lambda(s)$ when $x \in \mathbb{R}^n$ is fixed.
\label{lemma-g} \end{lemma} \begin{proof} Using the tower property of the conditional expectation, we have \begin{align} \begin{split} g(x,\lambda, t) =\, &\mathbf{E}_{x,\lambda,t}\Big[\varphi(x(T), \lambda(T))\,e^{-\beta \int_t^T \nabla_{\lambda} V\big(x(u), \lambda(u)\big) \circ\, d\lambda(u)}\Big] \\ =\, &\mathbf{E}_{x,\lambda,t}\Big[e^{-\beta \int_t^{s} \nabla_{\lambda} V\big(x(u), \lambda(u)\big) \circ\, d\lambda(u)} g(x(s), \lambda(s), s) \Big]\,, \end{split} \label{lemma-g-1} \end{align} for all $s \in [t,T]$. Let us define $Y(s) = e^{-\beta \int_t^{s} \nabla_{\lambda} V\big(x(u), \lambda(u)\big) \circ\, d\lambda(u)}$. Changing Stratonovich integration into Ito integration as in (\ref{q-w-1}) and applying Ito's formula to the process $Y(s)$, we get \begin{align*} dY(s) = Y(s) \Big[-\beta\mathcal{L}_2V\, ds + \epsilon \beta^2\big|\alpha^T \nabla_\lambda V\big|^2\, ds - \sqrt{2\epsilon}\beta \big(\alpha^T \nabla_\lambda V\big) \cdot \,dw^{(2)}(s)\Big]\,. \end{align*} In a similar way, applying Ito's formula to $g(x(s), \lambda(s), s)$ gives \begin{align*} dg = \big(\partial_t g + \mathcal{L}_1 g + \mathcal{L}_2 g\big) ds + \sqrt{2\beta^{-1}} \big(\sigma^T\nabla g\big) \cdot dw^{(1)}(s) + \sqrt{2\epsilon} \big(\alpha^T\nabla_\lambda g\big)\cdot \,dw^{(2)}(s)\,. \end{align*} Note that, here and in the following, we drop the dependence of the functions on the states $x(s)$, $\lambda(s)$ and the time $s$ in order to simplify notation. Applying Ito's formula to the product $Y(s) g(x(s), \lambda(s), s)$, we obtain \begin{align} \begin{split} & e^{-\beta \int_t^{s} \nabla_{\lambda} V\big(x(u), \lambda(u)\big) \circ\, d\lambda(u)} g(x(s), \lambda(s), s) \\ =\,& g(x,\lambda, t) + \int_t^{s} Y(u) \Big(-\beta\mathcal{L}_2 V + \epsilon \beta^2|\alpha^T \nabla_\lambda V|^2 \Big) g(x(u), \lambda(u), u)\, du \\ & + \int_t^{s} Y(u) \big(\partial_t g + \mathcal{L}_1 g + \mathcal{L}_2 g\big)\,du - 2\epsilon \beta \int_t^{s} Y(u) \big(\alpha^T\nabla_\lambda V\big)\cdot \big(\alpha^T\nabla_\lambda g\big)\,du + M(s)\,, \end{split} \label{lemma-yg} \end{align} where $M(s)$ is a (local) martingale. Taking expectations in (\ref{lemma-yg}) and using (\ref{lemma-g-1}), we get \begin{align*} \mathbf{E}_{x,\lambda,t}\bigg[ &-\beta\int_t^{s} Y(u) (\mathcal{L}_2 V) g\, du + \epsilon\beta^2 \int_t^s Y(u) |\alpha^T\nabla_\lambda V|^2 g\, du \\ & + \int_t^{s} Y(u) \big(\partial_t g + \mathcal{L}_1 g + \mathcal{L}_2 g\big)\,du - 2\epsilon \beta \int_t^{s} Y(u) \big(\alpha^T\nabla_\lambda V\big)\cdot \big(\alpha^T\nabla_\lambda g\big)\,du \bigg] = 0 \,. \end{align*} Notice that $Y(t)=1$, $x(t) = x$ and $\lambda(t) = \lambda$ at time $t$. Dividing the last equation by $(s-t)$ and letting $s \rightarrow t+$, we obtain (\ref{g-pde}), which concludes the proof. \end{proof} Now we can prove the Jarzynski equality as stated below. \begin{theorem}[Generalized Jarzynski equality] Let $x(s)$ and $\lambda(s)$ be given by (\ref{dynamics-1}) and (\ref{lambda-dynamics}), respectively.
Then, for any bounded smooth test function $\varphi : \mathbb{R}^n \times \mathbb{R}^m \rightarrow \mathbb{R}$, we have \begin{align} \mathbf{E}_{\lambda(0),0}\Big[\varphi(x(t), \lambda(t))\,e^{-\beta W(t)}\,\Big] = \mathbf{E}_{\lambda(0),0}\Big[ e^{-\beta \big(F(\lambda(t)) - F(\lambda(0))\big)} \mathbf{E}_{\mu_{\lambda(t)}} \varphi(\cdot, \lambda(t))\Big] \,, \label{generalized-jarzynski-varphi} \end{align} where $F(\cdot)$ is the free energy in (\ref{free-energy}) and $W(t)=W_{(0,t)}$ is the work defined in (\ref{u-q-w}) on the time interval $[0,t]$. $\mathbf{E}_{\mu_{\lambda(t)}}$ denotes the expectation with respect to the probability measure $\mu_{\lambda(t)}$ on $\mathbb{R}^n$, and $\mathbf{E}_{\lambda(0), 0}$ denotes the conditional expectation over the realizations of $x(s)$ and $\lambda(s)$, starting from fixed $\lambda(0)\in \mathbb{R}^m$ and the initial distribution $x(0) \sim \mu_{\lambda(0)}$. In particular, choosing $\varphi\equiv 1$, we have \begin{align} \mathbf{E}_{\lambda(0), 0}\Big[e^{-\beta W(t)}\Big] = \mathbf{E}_{\lambda(0),0} \Big[e^{-\beta \big(F(\lambda(t)) - F(\lambda(0))\big)}\Big]\,. \label{generalized-jarzynski} \end{align} \label{thm-1} \end{theorem} \begin{proof} It suffices to prove the equality (\ref{generalized-jarzynski-varphi}) for $t=T$. From the definitions of the function $g$ in (\ref{g-def}) and the function $Z(\lambda)$ in (\ref{normal-const}), it is easy to see that (\ref{generalized-jarzynski-varphi}) is equivalent to \begin{align} \int_{\mathbb{R}^n} g(x, \lambda(0), 0) e^{-\beta V(x, \lambda(0))} dx = \mathbf{E}_{\lambda(0),0}\bigg[\int_{\mathbb{R}^n} g(x,\lambda(T), T)\, e^{-\beta V(x, \lambda(T))} \, dx\bigg]\,. \label{thm-1-eqn-1} \end{align} Noticing that the process $\lambda(s)$ in (\ref{lambda-dynamics}) is independent of $x(s)$ and motivated by the form of (\ref{thm-1-eqn-1}), we consider the quantity $\int_{\mathbb{R}^n} e^{-\beta V(x,\lambda(s))} g(x,\lambda(s), s) dx$ as a function of time $s$. Applying Ito's formula, we compute \begin{align} \begin{split} & d\bigg[\int_{\mathbb{R}^n} e^{-\beta V(x,\lambda(s))} g(x,\lambda(s), s) dx \bigg] \\ = & \bigg[\int_{\mathbb{R}^n} e^{-\beta V(x,\lambda(s))} \Big( \partial_t g\,+ \mathcal{L}_2 g + \big( \epsilon \beta^2 |\alpha^T\nabla_{\lambda} V|^2 - \beta \mathcal{L}_2V \big) g - 2\epsilon\beta \big(\alpha^T\nabla_\lambda V\big) \cdot \big(\alpha^T\nabla_\lambda g\big) \Big)\, dx\bigg] ds \\ & + \sqrt{2\epsilon} \bigg[\int_{\mathbb{R}^n} e^{-\beta V(x,\lambda(s))} \alpha^T\big(\nabla_\lambda g - \beta \nabla_\lambda V\, g\big) dx\,\bigg]\cdot \,dw^{(2)}(s)\,, \end{split} \end{align} where the functions under the integral above are evaluated at $(x,\lambda(s), s)$. Since the function $g$ satisfies the equation (\ref{g-pde}) in Lemma~\ref{lemma-g}, we find \begin{align} \begin{split} & d\bigg[\int_{\mathbb{R}^n} e^{-\beta V(x,\lambda(s))} g(x,\lambda(s), s) dx \bigg] \\ =& - \bigg[\int_{\mathbb{R}^n}e^{-\beta V(x,\lambda(s))} \mathcal{L}_1 g \, dx\bigg]\,ds + \sqrt{2\epsilon} \bigg[\int_{\mathbb{R}^n} e^{-\beta V(x,\lambda(s))} \alpha^T\big(\nabla_\lambda g - \beta \nabla_\lambda V\, g\big) dx\,\bigg]\cdot \,dw^{(2)}(s)\,. \end{split} \label{int-identity} \end{align} Recalling that $\mu_{\lambda}$ in (\ref{invariant-mu}) and $\mathcal{L}_1$ are the invariant measure and the infinitesimal generator of dynamics (\ref{dynamics-1-fixed-lambda}), we have $\mathcal{L}^*_{1} \big(e^{-\beta V(x,\lambda)}\big) = 0$, where $\mathcal{L}^*_{1}$ is the formal $L^2$ adjoint of $\mathcal{L}_{1}$.
Integrating by parts, we conclude that the first term on the right hand side of equation (\ref{int-identity}) vanishes and therefore \begin{align*} d\bigg[\int_{\mathbb{R}^n} e^{-\beta V(x,\lambda(s))} g(x,\lambda(s), s) dx\bigg] = \sqrt{2\epsilon} \bigg[\int_{\mathbb{R}^n} e^{-\beta V(x,\lambda(s))} \alpha^T\big(\nabla_\lambda g - \beta \nabla_\lambda V\, g\big) dx\,\bigg]\cdot \,dw^{(2)}(s)\,. \end{align*} Taking expectation and noticing that $g(\cdot, \cdot, T) \equiv \varphi$, we obtain (\ref{thm-1-eqn-1}) and the equality (\ref{generalized-jarzynski-varphi}) readily follows. \end{proof} \begin{remark} \begin{enumerate} \item While Lemma~\ref{lemma-g} holds in both cases when the control protocol $\lambda(s)$ satisfies either dynamics (\ref{lambda-dynamics}) or dynamics (\ref{lambda-dynamics-full}), a close examination reveals that the proof of Theorem~\ref{thm-1} above is valid only when the process $\lambda(s)$ is independent of the process $x(s)$, i.e., when $\lambda(s)$ satisfies dynamics (\ref{lambda-dynamics}). \item When $\epsilon = 0$, the control protocol is deterministic and the work becomes \begin{align} W(t) = \int_0^{t} \nabla_{\lambda} V\big(x(s), \lambda(s)\big)\cdot \dot{\lambda}(s)\,ds = \int_0^{t} \nabla_{\lambda} V\big(x(s), \lambda(s)\big)\cdot f(\lambda(s),s)\, ds \,. \label{w-eps-0} \end{align} In this case, we recover the standard Jarzynski equality~\cite{jarzynski1997-master,jarzynski1997,Jarzynskia2008}, since (\ref{generalized-jarzynski}) becomes \begin{align} \mathbf{E}_{\lambda(0), 0}\Big[e^{-\beta W(t)}\Big] = e^{-\beta \Delta F(t)} \,, \label{jarzynski-1} \end{align} where \begin{align} \Delta F(t) = F\big(\lambda(t)\big) - F\big(\lambda(0)\big) \label{delta-f} \end{align} is the free energy difference and the conditional expectation is taken with respect to dynamics (\ref{dynamics-1}), starting from the equilibrium distribution $\mu_{\lambda(0)}$. A simple numerical illustration of (\ref{jarzynski-1}) is sketched below. \item Besides Jarzynski's equality, the thermodynamic integration identity is another well-known representation of the free energy that can be used to calculate free energy differences~\cite{frenkel2001understanding}. Based on the argument in this subsection, in Appendix~\ref{app-1} we will derive the thermodynamic integration identity from Jarzynski's equality, thereby providing a connection between these two methods. \end{enumerate} \label{rmk-1} \end{remark} In~\cite{pre-escort-fe-simulation2008}, the authors proposed the escorted free energy calculation method based on an identity for dynamics involving an extra force term. In the following, we briefly discuss this identity and provide a proof of it using the same argument as in Theorem~\ref{thm-1}. Let us consider the dynamics \begin{align} \begin{split} d\bar{x}(s) & = b(\bar{x}(s), \lambda(s))\, ds + u(\bar{x}(s), \lambda(s))\,ds +\sqrt{2\beta^{-1}} \sigma(\bar{x}(s), \lambda(s)) \,dw^{(1)}(s)\,,\quad s \ge 0\,, \end{split} \label{dynamics-1-escorted} \end{align} where $u: \mathbb{R}^{n}\times \mathbb{R}^{m} \rightarrow \mathbb{R}^{n}$ is a smooth vector field with compact support and $\lambda(s)$ satisfies (\ref{lambda-dynamics}). We define the modified work \begin{align} \overline{W}_{(t_1, t_2)} = \int_{t_1}^{t_2} \nabla_\lambda V(\bar{x}(s), \lambda(s)) \circ d\lambda(s) + \int_{t_1}^{t_2} \Big(u\cdot \nabla V - \frac{1}{\beta} \nabla\cdot u\Big)(\bar{x}(s),\lambda(s))\,ds \,, \label{work-w-escorted} \end{align} for $0 \le t_1 \le t_2 \le T$.
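Before stating the identity satisfied by the escorted dynamics, we record the numerical illustration of (\ref{jarzynski-1}) promised in Remark~\ref{rmk-1} (a minimal sketch in Python; the harmonic model and all parameter values are illustrative assumptions, not part of the general setting). For $V(x,\lambda) = (x-\lambda)^2/2$ with $\lambda(s) = vs$, the normalization constant $Z(\lambda)$ in (\ref{normal-const}) is independent of $\lambda$, so $\Delta F(t) \equiv 0$; the exponential average in (\ref{jarzynski-1}) should therefore be close to $1$, while Jensen's inequality forces the average work to be non-negative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
beta, v, T, dt = 1.0, 1.0, 1.0, 1e-3
nsteps, npaths = int(T / dt), 20000

# Illustrative model: V(x, lam) = (x - lam)**2 / 2, lam(s) = v*s, Delta F = 0
x = rng.normal(0.0, 1.0 / np.sqrt(beta), size=npaths)  # x(0) ~ mu_{lam(0)}
W = np.zeros(npaths)
lam = 0.0
for _ in range(nsteps):
    W += -(x - lam) * v * dt          # work increment, deterministic protocol
    x += -(x - lam) * dt + np.sqrt(2.0 * dt / beta) * rng.normal(size=npaths)
    lam += v * dt

print(np.mean(W))                                   # > 0: mean dissipated work
print(-np.log(np.mean(np.exp(-beta * W))) / beta)   # close to Delta F = 0
\end{verbatim}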
\begin{corollary} Let $\bar{x}(s)$ and $\lambda(s)$ be given by (\ref{dynamics-1-escorted}) and (\ref{lambda-dynamics}), respectively. Then, for any bounded smooth test function $\varphi : \mathbb{R}^n \times \mathbb{R}^m \rightarrow \mathbb{R}$, we have \begin{align} \overline{\mathbf{E}}_{\lambda(0),0}\Big[\varphi(\bar{x}(t), \lambda(t))\,e^{-\beta \overline{W}(t)}\,\Big] = \overline{\mathbf{E}}_{\lambda(0),0}\Big[ e^{-\beta \big(F(\lambda(t)) - F(\lambda(0))\big)} \mathbf{E}_{\mu_{\lambda(t)}} \varphi(\cdot, \lambda(t))\Big] \,, \label{jarzynski-varphi-escorted} \end{align} $\forall~0 \le t \le T$, where $F(\cdot)$ is the free energy in (\ref{free-energy}) and $\overline{W}(t)=\overline{W}_{(0,t)}$ is the modified work in (\ref{work-w-escorted}). $\mathbf{E}_{\mu_{\lambda(t)}}$ denotes the expectation with respect to the probability measure $\mu_{\lambda(t)}$ on $\mathbb{R}^n$, while $\overline{\mathbf{E}}_{\lambda(0), 0}$ denotes the conditional expectation over the realizations of $\bar{x}(s)$ and $\lambda(s)$, starting from fixed $\lambda(0)\in \mathbb{R}^m$ and the initial distribution $\bar{x}(0) \sim \mu_{\lambda(0)}$. In particular, choosing $\varphi\equiv 1$, we have \begin{align} \overline{\mathbf{E}}_{\lambda(0), 0}\Big[e^{-\beta \overline{W}(t)}\Big] = \overline{\mathbf{E}}_{\lambda(0),0} \Big[e^{-\beta \big(F(\lambda(t)) - F(\lambda(0))\big)}\Big]\,. \label{jarzynski-escorted} \end{align} \label{corollary-1-escorted} \end{corollary} \begin{proof} We only sketch the proof since it is similar to that of Theorem~\ref{thm-1}. Similar to (\ref{g-def}), we introduce the function \begin{align} \begin{split} g(x, \lambda, t) =& \overline{\mathbf{E}}_{x,\lambda,t} \Big(\varphi(\bar{x}(T), \lambda(T))\,e^{-\beta \overline{W}_{(t,T)}} \Big)\,, \end{split} \label{g-def-escorted} \end{align} where $\bar{x}(t) = x\in \mathbb{R}^n$, $\lambda(t) = \lambda\in \mathbb{R}^m$ and $t\in[0,T]$. Using the same argument as in Lemma~\ref{lemma-g}, we can verify that $g$ satisfies the PDE \begin{align} \begin{split} &\partial_t g + \mathcal{L}_{1} g + \mathcal{L}_2 g + u\cdot\nabla g -2\epsilon\beta \big(\alpha^T\nabla_\lambda V\big) \cdot \big(\alpha^T \nabla_\lambda g\big)\\ &+ \Big(\epsilon \beta^2 |\alpha^T\nabla_{\lambda} V|^2 - \beta \mathcal{L}_2V -\beta u\cdot \nabla V + \nabla\cdot u\Big) g = 0\,, \quad 0 \le t < T\,, \end{split} \label{g-pde-escorted} \end{align} with the terminal condition $g(\cdot,\cdot, T) = \varphi$. Applying Ito's formula as we did in Theorem~\ref{thm-1}, we obtain \begin{align} \begin{split} & d\bigg[\int_{\mathbb{R}^n} e^{-\beta V(x,\lambda(s))} g(x,\lambda(s), s)\,dx \bigg] \\ =& - \bigg[\int_{\mathbb{R}^n}e^{-\beta V(x,\lambda(s))} \Big(\mathcal{L}_1 g + u\cdot \nabla g - (\beta u\cdot\nabla V)g + (\nabla\cdot u) g\Big) \, dx\bigg]\,ds\\ & + \sqrt{2\epsilon} \bigg[\int_{\mathbb{R}^n} e^{-\beta V(x,\lambda(s))} \alpha^T\big(\nabla_\lambda g - \beta \nabla_\lambda V\, g\big) dx\,\bigg]\cdot \,dw^{(2)}(s)\,. \end{split} \label{int-identity-escorted} \end{align} Since $u$ is smooth and has compact support, the first term on the right hand side above vanishes upon integration by parts. The identity (\ref{jarzynski-varphi-escorted}) is then obtained following the same argument as in the proof of Theorem~\ref{thm-1}. \end{proof}
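As a worked illustration of Corollary~\ref{corollary-1-escorted}, consider again the illustrative harmonic model $V(x,\lambda) = (x-\lambda)^2/2$ with $\lambda(s) = vs$ and $\epsilon = 0$, and take the constant escort field $u \equiv v$ (a constant field is not compactly supported, but for this Gaussian model the integration by parts in the proof is still justified). Since $\nabla_\lambda V = -(x-\lambda)$, $\nabla V = x-\lambda$ and $\nabla \cdot u = 0$, the modified work (\ref{work-w-escorted}) vanishes identically along every path,
\begin{align*}
\overline{W}_{(0,T)} = \int_0^T \Big[-\big(\bar{x}(s)-\lambda(s)\big)\,v + v\,\big(\bar{x}(s)-\lambda(s)\big)\Big]\, ds = 0\,,
\end{align*}
so the escorted estimator based on (\ref{jarzynski-escorted}) returns $e^{-\beta \Delta F(T)} = 1$ with zero variance, whereas the estimator based on (\ref{jarzynski-1}) in the sketch above fluctuates from path to path. This illustrates how a well-chosen escorting field can drastically improve the efficiency of Monte Carlo estimators.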
\subsection{Fluctuation theorem} \label{subsec-fluct-thm} In this subsection, we study the fluctuation theorem in the alchemical transition case. Note that the main result below (Theorem~\ref{thm-fluct-relation}) has been obtained in~\cite{Chetrite2008}, where a comprehensive analysis as well as several concrete examples have been presented. The main purpose of this subsection is to provide a concise mathematical derivation which leads directly to Theorem~\ref{thm-fluct-relation}. A different proof, similar to but shorter than the argument in~\cite{Chetrite2008}, can be found in Appendix~\ref{app-3}. First of all, we introduce the ``reversed'' dynamics, which is closely related to the dynamics $x(s)$ in (\ref{dynamics-1}), or its vector form (\ref{dynamics-1-q-vector}). Notice that different reversals of stochastic dynamics have been studied in the literature in both the mathematics and physics communities. We refer to~\cite{haussmann1986,Chetrite2008} and the references therein. In our case, we consider the dynamics $x^R(s)$ on the time interval $s \in [0,T]$, which is governed by \begin{align} d x^R(s) & = \Big(-J-a\nabla V + \frac{1}{\beta}\nabla \cdot a\Big)\big(x^R(s), \lambda^R(s)\big) ds + \sqrt{2\beta^{-1}} \sigma\big(x^R(s), \lambda^R(s)\big)\,dw^{(1)}(s)\,, \label{dynamics-1-reversed} \end{align} where $\lambda^R(s)$ is the control protocol satisfying the SDE \begin{align} \begin{split} d\lambda^R(s) =& -f\big(x^R(s), \lambda^R(s), T-s\big)\, ds + 2\epsilon \big(\nabla_\lambda \cdot (\alpha\alpha^T)\big) \big(x^R(s), \lambda^R(s), T-s\big)\,ds \\ &+ \sqrt{2\epsilon}\,\alpha\big(x^R(s), \lambda^R(s),T-s\big)\, dw^{(2)}(s)\,. \end{split} \label{lambda-dynamics-inverse-full} \end{align} Compared to dynamics (\ref{lambda-dynamics-full}), we note that there is an extra term $\nabla_\lambda \cdot (\alpha\alpha^T)$ in (\ref{lambda-dynamics-inverse-full}).
The infinitesimal generator of the system (\ref{dynamics-1-reversed}) and (\ref{lambda-dynamics-inverse-full}) is given by \begin{align} \begin{split} \mathcal{L}^R = & \Big(-J-a\nabla V + \frac{1}{\beta} \nabla\cdot a\Big) \cdot \nabla + \frac{1}{\beta} a : \nabla^2 + \Big(2\epsilon \nabla_\lambda \cdot (\alpha\alpha^T) - f\Big) \cdot \nabla_\lambda + \epsilon\,\alpha\alpha^T :\nabla^2_\lambda\,\\ =& \mathcal{L}^R_1 + \mathcal{L}^R_2 \,, \end{split} \label{l-reversed} \end{align} where $\mathcal{L}^R_1$ is the infinitesimal generator of the dynamics (\ref{dynamics-1-reversed}) when $\lambda^R(s)$ is fixed, and similarly $\mathcal{L}^R_2$ is the infinitesimal generator of the dynamics (\ref{lambda-dynamics-inverse-full}) when $x^R(s)$ is fixed. We will also use the notation $\mathcal{L}^R_{(x, \lambda,T-t)}$ to emphasize that the functions in the operator (\ref{l-reversed}) are evaluated at $(x, \lambda,T-t)$. The following fluctuation result concerns the relation between the dynamics (\ref{dynamics-1-q-vector}), (\ref{lambda-dynamics-full}) and the reversed ones (\ref{dynamics-1-reversed}), (\ref{lambda-dynamics-inverse-full}). \begin{theorem} Let $0 \le t' < t \le T$, $x,x' \in \mathbb{R}^n$ and $\lambda, \lambda' \in \mathbb{R}^m$. For any continuous function $\eta \in C\big(\mathbb{R}^n \times \mathbb{R}^m \times [0,T]\big)$ with compact support, we have \begin{align} \begin{split} &e^{-\beta V(x',\lambda')}\, \mathbf{E}^R_{x',\lambda',t'}\bigg[\exp\bigg(\int_{t'}^t \eta\big(x^R(s), \lambda^R(s), T-s\big) ds\bigg) \delta\big(x^R(t)-x\big)\,\delta\big(\lambda^{R}(t)-\lambda\big)\bigg]\\ =&e^{-\beta V(x,\lambda)}\,\mathbf{E}_{x,\lambda,T-t}\bigg[e^{-\beta \mathcal{W}} \exp\bigg(\int_{T-t}^{T-t'} \eta\big(x(s), \lambda(s), s\big) ds\bigg) \delta\big(x(T-t')-x'\big)\delta\big(\lambda(T-t')-\lambda'\big) \bigg]\,, \end{split} \label{fluct-relation} \end{align} where \begin{align} \mathcal{W} = \int_{T-t}^{T-t'} \nabla_\lambda V\big(x(s), \lambda(s)\big) \circ d\lambda(s) - \frac{1}{\beta} \int_{T-t}^{T-t'} \Big[\mbox{div}_\lambda \big(f - \epsilon \nabla_\lambda\cdot(\alpha\alpha^T)\big)\Big] \big(x(s), \lambda(s),s\big) ds\,, \label{w-div-f} \end{align} $x^R(\cdot), \lambda^R(\cdot)$ satisfy the dynamics (\ref{dynamics-1-reversed}), (\ref{lambda-dynamics-inverse-full}), and $x(\cdot), \lambda(\cdot)$ satisfy the dynamics (\ref{dynamics-1-q-vector}), (\ref{lambda-dynamics-full}), respectively. Here, $\delta(\cdot)$ denotes the Dirac delta function (see Remark~\ref{rmk-delta} below) and $\mbox{div}_\lambda$ denotes the divergence operator with respect to $\lambda \in \mathbb{R}^m$. $\mathbf{E}^R_{x',\lambda',t'}$ is the conditional expectation with respect to the path ensemble of the dynamics (\ref{dynamics-1-reversed}), (\ref{lambda-dynamics-inverse-full}) starting from $x^R(t') = x'$ and $\lambda^R(t')=\lambda'$ at time $t'$, while $\mathbf{E}_{x,\lambda,T-t}$ is the conditional expectation with respect to the dynamics (\ref{dynamics-1-q-vector}) and (\ref{lambda-dynamics-full}). \label{thm-fluct-relation} \end{theorem} \begin{proof} We consider the quantities on both sides of the equality (\ref{fluct-relation}).
For the left hand side of (\ref{fluct-relation}), let us fix the values $(x',\lambda',t') \in \mathbb{R}^n \times \mathbb{R}^m \times [0,T]$ and define the function $u$ by \begin{align} u\big(x,\lambda,t\,;x',\lambda',t'\big) = \mathbf{E}_{x',\lambda',t'}^R\bigg[ \exp\bigg(\int_{t'}^t \eta\big(x^R(s), \lambda^R(s), T-s\big) ds\bigg) \delta\big(x^R(t)-x\big)\delta\big(\lambda^{R}(t)-\lambda\big)\bigg]\,, \label{u-exp-form} \end{align} for $(x,\lambda,t) \in \mathbb{R}^n \times \mathbb{R}^m \times [0,T]$. It is known that $u$ satisfies the PDE \begin{align} \begin{split} &\frac{\partial u}{\partial t} = \big(\mathcal{L}^R_{(x, \lambda,T-t)}\big)^* u + \eta(x,\lambda,T-t) \,u \,, \quad \forall\, (x, \lambda,t) \in \mathbb{R}^n\times \mathbb{R}^m \times (t',T] \,,\\ & u(x, \lambda,t\,;x',\lambda',t')=\delta(x-x')\,\delta(\lambda-\lambda')\,, \quad \mbox{if}~~t=t'\,, \end{split} \label{pde-u-forward} \end{align} where the operator $\mathcal{L}_{(x, \lambda,T-t)}^R$ is defined in (\ref{l-reversed}) and $\big(\mathcal{L}^R_{(x, \lambda,T-t)}\big)^*$ denotes its formal $L^2$ adjoint. Direct calculation shows that, after some cancellation, we have \begin{align} \begin{split} \big(\mathcal{L}^R_{(x, \lambda,T-t)}\big)^*\phi = & \Big[\mbox{div}(J + a\nabla V) + \mbox{div}_\lambda \Big(f -\epsilon \nabla_\lambda\cdot (\alpha\alpha^T)\Big)\Big]\phi + \Big(J + a\nabla V + \frac{1}{\beta} \nabla\cdot a\Big) \cdot \nabla \phi \\ & + \frac{1}{\beta} a : \nabla^2 \phi + f \cdot \nabla_\lambda \phi + \epsilon\,\alpha\alpha^T : \nabla^2_\lambda \phi\,, \end{split} \label{l-r-trans} \end{align} for a smooth function $\phi$. For the right hand side of (\ref{fluct-relation}), we define the function $g$ for fixed $(x', \lambda',t')$ as \begin{align*} g(x,\lambda,t) = \mathbf{E}_{x,\lambda,T-t}\bigg[&e^{-\beta \mathcal{W}} \exp\bigg(\int_{T-t}^{T-t'} \eta\big(x(s), \lambda(s), s\big) ds\bigg) \\ & \times \delta\big(x(T-t')-x'\big)\delta\big(\lambda(T-t')-\lambda'\big) \bigg]\,, \end{align*} where $\mathcal{W}$ is defined in (\ref{w-div-f}), and the dynamics $x(\cdot), \lambda(\cdot)$ satisfy the SDEs (\ref{dynamics-1-q-vector}), (\ref{lambda-dynamics-full}). Using the same argument as in Lemma~\ref{lemma-g}, we can verify that the function $g$ satisfies the PDE \begin{align} \begin{split} &\frac{\partial g}{\partial t} = \overline{\mathcal{L}}_{( x,\lambda,T-t)}\, g\,,\qquad \forall\, (x,\lambda,t) \in \mathbb{R}^n \times \mathbb{R}^m \times (t',T] \,,\\ & g(x,\lambda,t) = \delta(x-x')\delta(\lambda-\lambda') \,, \qquad \mbox{if}~~t=t'\,, \end{split} \label{g-delta} \end{align} where the operator $\overline{\mathcal{L}}_{(x,\lambda,T-t)}$ is defined as \begin{align} \begin{split} \overline{\mathcal{L}}_{(x,\lambda,T-t)}\,\phi =& \Big[\epsilon \beta^2 |\alpha^T\nabla_{\lambda} V|^2 - \beta \mathcal{L}_2V + \mbox{div}_\lambda \Big(f - \epsilon \nabla_\lambda\cdot (\alpha\alpha^T)\Big) + \eta \Big] \phi \\ & + \mathcal{L}_{1} \phi + \mathcal{L}_2 \phi -2\epsilon\beta \big(\alpha^T\nabla_\lambda V\big) \cdot \big(\alpha^T\nabla_\lambda \phi\big) \end{split} \label{bar-l} \end{align} for a smooth function $\phi$, and the functions in (\ref{bar-l}) are evaluated at $(x,\lambda,T-t)$. Motivated by the right hand side of (\ref{fluct-relation}), a key step now is to consider the function $\omega(x,\lambda,t) = e^{-\beta V(x, \lambda)} g(x,\lambda,t)$.
Recalling the relation (\ref{div-j-zero}), a direct calculation shows that \begin{align} \begin{split} e^{-\beta V} \mathcal{L}_1 g = & e^{-\beta V} \Big(J - a\nabla V + \frac{1}{\beta} \nabla \cdot a\Big) \cdot \nabla \big(e^{\beta V}\omega\big) + \frac{e^{-\beta V}}{\beta} a:\nabla^2 \big(e^{\beta V}\omega\big)\\ =& \Big(J - a\nabla V + \frac{1}{\beta} \nabla \cdot a\Big) \cdot \nabla \omega + \beta \Big[\big(J - a\nabla V + \frac{1}{\beta} \nabla \cdot a\big) \cdot \nabla V \Big] \omega \\ & + \frac{1}{\beta} a:\nabla^2 \omega + 2 (a\nabla V)\cdot \nabla \omega + \frac{e^{-\beta V}\omega}{\beta} a:\nabla^2\big(e^{\beta V}\big)\\ =& \Big[\mbox{div}(J + a\nabla V)\Big] \omega + \Big(J + a\nabla V + \frac{1}{\beta} \nabla \cdot a\Big) \cdot \nabla \omega + \frac{1}{\beta} a:\nabla^2 \omega \,,\\ e^{-\beta V} \mathcal{L}_2 g = & e^{-\beta V} \Big[f\cdot \nabla_{\lambda} (e^{\beta V} \omega) + \epsilon\,\alpha\alpha^T:\nabla^2_{\lambda} (e^{\beta V} \omega) \Big] \\ =& \mathcal{L}_2 \omega + \beta (\mathcal{L}_2 V)\omega + 2\epsilon\beta \big(\alpha^T\nabla_\lambda V\big) \cdot \big(\alpha^T\nabla_\lambda \omega\big) + \epsilon\beta^2|\alpha^T\nabla_\lambda V|^2\, \omega\,, \\ e^{-\beta V} \nabla_\lambda g = & e^{-\beta V} \nabla_\lambda \big(e^{\beta V} \omega\big) = \beta \big(\nabla_\lambda V\big) \omega + \nabla_\lambda \omega\,. \end{split} \label{e-v-l1-l2} \end{align} Combining (\ref{l-reversed}), (\ref{g-delta}), (\ref{bar-l}), (\ref{e-v-l1-l2}), we can conclude that $\omega$ satisfies the PDE \begin{align*} &\frac{\partial \omega}{\partial t} = e^{-\beta V} \overline{\mathcal{L}}_{(x,\lambda,T-t)}\,g = \big(\mathcal{L}^R_{(x,\lambda,T-t)}\big)^* \,\omega + \eta(x,\lambda, T-t)\,\omega \,,\quad \forall\,(x, \lambda,t) \in \mathbb{R}^n\times \mathbb{R}^m \times (t',T] \,,\\ &\omega(x,\lambda,t) = e^{-\beta V(x',\lambda')} \delta(x-x')\delta(\lambda-\lambda')\,,\quad \mbox{if}~~ t=t'\,. \end{align*} Comparing the latter with (\ref{pde-u-forward}), we obtain that $e^{-\beta V(x',\lambda')} u(x,\lambda,t\,;x',\lambda',t') = \omega(x,\lambda,t)$, which is equivalent to the equality (\ref{fluct-relation}). \end{proof} \begin{remark} We have adopted the Dirac delta function both in Theorem~\ref{thm-fluct-relation} and in its proof above, in order to simplify the derivations. More precisely, (\ref{fluct-relation}) should be understood in the sense of distributions, or equivalently, \begin{align} \begin{split} &\int_{\mathbb{R}^n} \int_{\mathbb{R}^m} e^{-\beta V(x',\lambda')}\, \mathbf{E}^R_{x',\lambda',t'}\bigg[\exp\bigg(\int_{t'}^t \eta\big(x^R(s), \lambda^R(s), T-s\big) ds\bigg) \varphi\big(x^R(t), \lambda^{R}(t), x', \lambda'\big) \bigg] dx' d\lambda' \\ =&\int_{\mathbb{R}^n} \int_{\mathbb{R}^m} e^{-\beta V(x,\lambda)}\,\mathbf{E}_{x,\lambda,T-t}\bigg[e^{-\beta \mathcal{W}} \exp\bigg(\int_{T-t}^{T-t'} \eta\big(x(s), \lambda(s), s\big) ds\bigg) \varphi\big(x, \lambda, x(T-t'), \lambda(T-t')\big)\bigg] dx\,d\lambda\,, \end{split} \label{fluct-relation-test-function} \end{align} for all test functions $\varphi(x,\lambda, x', \lambda')$ which are sufficiently smooth and have compact support. We emphasize that the above proof can be reformulated more rigorously, by introducing test functions and applying integration by parts. \label{rmk-delta} \end{remark} \textbf{From fluctuation theorems to Jarzynski's equality}. It is well known that Jarzynski's equality can be obtained from the fluctuation theorem~\cite{Chetrite2008}.
In the remaining part of this subsection, we consider the case when the control protocol $\lambda(s)$ satisfies the dynamics (\ref{lambda-dynamics}) and show that Theorem~\ref{thm-1} is a consequence of Theorem~\ref{thm-fluct-relation}. In this case, (\ref{lambda-dynamics-inverse-full}) governing the reversed protocol $\lambda^R(\cdot)$ simplifies to \begin{align} \begin{split} d\lambda^R(s) =& -f\big(\lambda^R(s), T-s\big)\, ds + 2\epsilon \big(\nabla_\lambda \cdot (\alpha\alpha^T)\big) \big(\lambda^R(s),T-s\big)\,ds \\ & + \sqrt{2\epsilon}\, \alpha\big(\lambda^R(s),T-s\big)\,dw^{(2)}(s)\,, \end{split} \label{lambda-dynamics-inverse} \end{align} and therefore is independent of the process $x^R(\cdot)$ in (\ref{dynamics-1-reversed}). For simplicity, we only prove the equality (\ref{generalized-jarzynski-varphi}) in Theorem~\ref{thm-1} for $t=T$. To derive it, we set $t'=0$, $t=T$ and $\eta =-\mbox{div}_\lambda \big(f-\epsilon \nabla_\lambda\cdot(\alpha\alpha^T)\big)$, which is a function independent of $x\in \mathbb{R}^n$. Multiplying both sides of the equality (\ref{fluct-relation}) by $\varphi(x',\lambda')$, integrating with respect to $x, x', \lambda'$, and recalling the definition (\ref{u-q-w}) of the work $W$, we obtain \begin{align} \begin{split} &\int_{\mathbb{R}^n} e^{-\beta V(x,\lambda)}\,\mathbf{E}_{x,\lambda,0}\Big(\varphi(x(T), \lambda(T))\, e^{-\beta W}\Big)\,dx \\ =&\int_{\mathbb{R}^n} \int_{\mathbb{R}^m} \varphi(x',\lambda')\,e^{-\beta V(x',\lambda')}\, \mathbf{E}^R_{x',\lambda',0}\bigg[\exp\bigg(\int_{0}^T \eta\big(\lambda^R(s), T-s\big) ds\bigg) \delta(\lambda^{R}(T)-\lambda)\bigg] dx' d\lambda' \,. \end{split} \label{thm-2-identity-0} \end{align} Notice that the conditional expectation on the right hand side of (\ref{thm-2-identity-0}) is actually independent of $x'$ (this is only true when the control protocol does not depend on the dynamics; see Remark~\ref{rmk-1}). We have \begin{align} \begin{split} &\int_{\mathbb{R}^n} e^{-\beta V(x,\lambda)}\,\mathbf{E}_{x,\lambda,0}\Big(\varphi(x(T), \lambda(T))\,e^{-\beta W}\Big)\, dx \\ =& \int_{\mathbb{R}^m} \big[\mathbf{E}_{\mu_{\lambda'}} \varphi(\cdot,\lambda')\big]\,Z(\lambda') \mathbf{E}^R_{\lambda',0}\bigg[\exp\bigg(\int_{0}^T \eta \big(\lambda^R(s), T-s\big) ds\bigg) \delta(\lambda^{R}(T)-\lambda)\bigg]\, d\lambda'\,, \end{split} \label{thm-2-identity-1} \end{align} where $Z(\cdot)$ is the normalization constant in (\ref{normal-const}). More generally, let us define the function \begin{align*} \psi(\lambda,t) =& \int_{\mathbb{R}^m} \big[\mathbf{E}_{\mu_{\lambda'}} \varphi(\cdot,\lambda')\big]\,Z(\lambda') \mathbf{E}^R_{\lambda',0}\bigg[\exp\bigg(\int_{0}^{T-t} \eta(\lambda^R(s), T-s) ds\bigg) \delta(\lambda^{R}(T-t)-\lambda)\bigg]\, d\lambda'\,.
\end{align*} Similarly to the function $u$ in (\ref{u-exp-form}) which satisfies the PDE (\ref{pde-u-forward}), we know that $\psi$ satisfies \begin{align} \begin{split} & \frac{\partial \psi}{\partial t} + \big(\mathcal{L}^R_2\big)^* \psi - \Big[\mbox{div}_\lambda \Big(f-\epsilon \nabla_\lambda \cdot (\alpha\alpha^T)\Big)\Big]\psi = 0 \,,\quad \forall~ (\lambda,t) \in \mathbb{R}^m \times [0, T) \,,\\ & \psi(\lambda,T) = Z(\lambda)\mathbf{E}_{\mu_{\lambda}} \varphi(\cdot,\lambda)\,\,, \end{split} \label{psi-pde} \end{align} where $\mathcal{L}^R_2 = \big(2\epsilon \nabla_\lambda\cdot (\alpha\alpha^T) -f\big) \cdot \nabla_\lambda + \epsilon\,\alpha\alpha^T:\nabla^2_\lambda$, and the functions in (\ref{psi-pde}) are evaluated at $(\lambda, t)$. Calculating $(\mathcal{L}^R_2)^*$, one can conclude that (\ref{psi-pde}) is equivalent to \begin{align} \begin{split} & \frac{\partial \psi}{\partial t} + \mathcal{L}_2 \psi = 0 \,,\quad \forall~ (\lambda,t) \in \mathbb{R}^m \times [0, T) \,,\\ & \psi(\lambda,T) = Z(\lambda)\mathbf{E}_{\mu_{\lambda}} \varphi(\cdot,\lambda)\,, \end{split} \label{psi-pde-1} \end{align} where $\mathcal{L}_2$ is the infinitesimal generator defined in (\ref{l-lambda}) for the dynamics (\ref{lambda-dynamics}), and therefore the Feynman-Kac formula implies that $$\psi(\lambda,t) = \mathbf{E}_{\lambda,t}\Big[Z\big(\lambda(T)\big)\mathbf{E}_{\mu_{\lambda(T)}} \varphi(\cdot,\lambda(T))\Big].$$ Combining this with the identity in (\ref{thm-2-identity-1}), we conclude that \begin{align*} &\int_{\mathbb{R}^n} e^{-\beta V(x,\lambda)}\,\mathbf{E}_{x,\lambda,0}\Big(\varphi(x(T), \lambda(T))\, e^{-\beta W}\Big) dx = \psi(\lambda,0) = \mathbf{E}_{\lambda,0}\big[Z\big(\lambda(T)\big)\mathbf{E}_{\mu_{\lambda(T)}} \varphi(\cdot,\lambda(T))\big]\,, \end{align*} which is equivalent to the equality (\ref{generalized-jarzynski-varphi}) in Theorem~\ref{thm-1} for $t=T$. $\hfill\square$ \vspace{0.2cm} In the above analysis, we have assumed that the control protocol $\lambda(s)$ is perturbed by noise. Let us now consider the case when $\lambda(s)$ is deterministic, i.e., when $\epsilon = 0$ in dynamics (\ref{lambda-dynamics}). In this case, we have \begin{align} \dot{\lambda}(s) = f(\lambda(s), s)\,, \quad 0 \le s \le T\,, \label{ode-control} \end{align} and $\lambda^R(s) = \lambda(T-s)$. It is well known that Crooks's relations~\cite{crooks-path-ensemble-pre2000} can be derived from the fluctuation relation~\cite{Chetrite2008,escorted-simulation2011}. In the following remark, for simplicity we will only state Crooks's relations for the escorted dynamics (\ref{dynamics-1-escorted}). Results corresponding to the original dynamics (\ref{dynamics-1}) can be recovered by choosing $u\equiv 0$. \begin{remark}[Crooks's relations for the escorted dynamics] Consider the reversed version of the escorted dynamics (\ref{dynamics-1-escorted}), which satisfies \begin{align} \begin{split} d \bar{x}^{R}(s) =& \Big(-J-a\nabla V + \frac{1}{\beta}\nabla \cdot a\Big)\big(\bar{x}^R(s), \lambda^R(s)\big)\,ds - u(\bar{x}^{R}(s), \lambda^R(s))\,ds \\ & +\sqrt{2\beta^{-1}} \sigma(\bar{x}^{R}(s), \lambda^R(s)) \,dw^{(1)}(s)\,, \quad s \ge 0\,. 
\end{split}
\label{dynamics-1-escorted-reversed}
\end{align}
By slightly modifying the proof of Theorem~\ref{thm-fluct-relation}, we can prove
\begin{align}
\begin{split}
&e^{-\beta V(x',\lambda(T))}\, \overline{\mathbf{E}}^R_{x',0}\bigg[\exp\bigg(\int_{0}^T \eta\big(\bar{x}^{R}(s), T-s\big) ds\bigg) \delta\big(\bar{x}^{R}(T)-x\big)\bigg]\\
=&e^{-\beta V(x,\lambda(0))}\,\overline{\mathbf{E}}_{x,0}\bigg[e^{-\beta \overline{W}} \exp\bigg(\int_{0}^{T} \eta\big(\bar{x}(s), s\big)\,ds\bigg) \delta\big(\bar{x}(T)-x'\big) \bigg]\,, \quad \forall~x,x' \in \mathbb{R}^n\,,
\end{split}
\label{fluct-relation-eps0}
\end{align}
where $\overline{W}=\overline{W}_{(0,T)}$ is the modified work in (\ref{work-w-escorted}) and $\eta \in C\big(\mathbb{R}^n \times [0,T]\big)$ is continuous with compact support. The notations $\overline{\mathbf{E}}_{x,0}$ and $\overline{\mathbf{E}}^R_{x',0}$ denote the ensemble averages with respect to the escorted dynamics $\bar{x}(\cdot)$ in (\ref{dynamics-1-escorted}) and its reversed counterpart $\bar{x}^{R}(\cdot)$ in (\ref{dynamics-1-escorted-reversed}) starting from a fixed state at time $s=0$, respectively. Since any (bounded) continuous function $\mathcal{G}$ on the path space can be approximated by linear combinations of functions of the form $\exp\big(\int_0^T \eta(\bar{x}(s),s)\,ds\big)$ (for instance, by discretizing $[0,T]$ into subintervals), integrating (\ref{fluct-relation-eps0}) gives
\begin{align}
\frac{\overline{\mathbf{E}}_{\lambda(0), 0} \big(e^{-\beta \overline{W}}\mathcal{G}\big)}{\overline{\mathbf{E}}^{R}_{\lambda(T), 0}\,(\mathcal{G}^{R})} = e^{-\beta \Delta F(T)}\,,
\label{crooks-eps0-1}
\end{align}
where $\mathcal{G}^R\big(x(\cdot)\big)=\mathcal{G}\big(x(T-\cdot)\big)$ for every path $x(\cdot) \in C\big([0,T], \mathbb{R}^n\big)$, and $\Delta F(T)$ is the free energy difference in (\ref{delta-f}). The notation $\overline{\mathbf{E}}_{\lambda(0), 0}$ is the path ensemble average of the forward dynamics $\bar{x}(s)$ starting from $\bar{x}(0)\sim \mu_{\lambda(0)}$, and $\overline{\mathbf{E}}^R_{\lambda(T), 0}$ is defined similarly for the reversed dynamics $\bar{x}^{R}(s)$. If we formally write $\overline{\mathcal{P}}[\bar{x}(\cdot)\,|\,\bar{x}(0)]$, $\overline{\mathcal{P}}^R[\bar{x}^{R}(\cdot)\,|\,\bar{x}^{R}(0)]$ as the probability densities on the path space for the dynamics $\bar{x}(s)$, $\bar{x}^{R}(s)$ starting from $\bar{x}(0)$ and $\bar{x}^{R}(0)$ respectively, we obtain from (\ref{crooks-eps0-1}) that
\begin{align}
\frac{\overline{\mathcal{P}}[x(\cdot)\,|\,x(0)]} {\overline{\mathcal{P}}^R[x(T-\cdot)\,|\,x(T)]} = e^{-\beta (\Delta \mathcal{U}(T)- \overline{W})}\,,\quad \forall~x(\cdot) \in C\big([0,T], \mathbb{R}^n\big)\,,
\label{micro-reversibility}
\end{align}
where $\Delta \mathcal{U}(T)$ is the change of internal energy in (\ref{u-q-w}). Furthermore, notice that for the work function $\mathcal{G}\big(x(\cdot)\big) = \overline{W}$ in (\ref{work-w-escorted}), we have
\begin{align*}
\mathcal{G}^R\big(x(\cdot)\big) = & \mathcal{G}\big(x(T-\cdot)\big)\\
=& \int_0^T \Big(\nabla_\lambda V\cdot f + u\cdot \nabla V - \frac{1}{\beta} \nabla \cdot u\Big)(x(s),\lambda(T-s), T-s)\, ds \\
=& -\int_0^T \Big(\nabla_\lambda V\cdot \dot{\lambda}^R + (-u)\cdot \nabla V - \frac{1}{\beta} \nabla \cdot (-u)\Big)(x(s),\lambda^R(s), s)\, ds \\
=& -\overline{W}^R,
\end{align*}
where $\overline{W}^R$ is the modified work of the reversed dynamics (\ref{dynamics-1-escorted-reversed}).
Therefore, (\ref{crooks-eps0-1}) implies
\begin{align}
\frac{\overline{\mathbf{E}}_{\lambda(0), 0} \big(e^{-\beta \overline{W}}\phi(\overline{W})\big)}{\overline{\mathbf{E}}^{R}_{\lambda(T), 0}\,(\phi(-\overline{W}^R))} = e^{-\beta \Delta F(T)}\,,\quad \forall~ \phi \in C_b(\mathbb{R})\,.
\label{crooks-eps0-2}
\end{align}
The reader will recognize that the identities (\ref{micro-reversibility}), (\ref{crooks-eps0-1}) and (\ref{crooks-eps0-2}) are the counterparts of the microscopic reversibility and Crooks's relations in~\cite{crooks-path-ensemble-pre2000,escorted-simulation2011} for (escorted) continuous-time Markovian processes. It was already pointed out in~\cite{crooks-path-ensemble-pre2000} that these relations (in particular the microscopic reversibility) hold for general Markov chains out of equilibrium without any reversibility assumption. The derivations above show that this is also true for the continuous-time process $\bar{x}(s)$ in (\ref{dynamics-1-escorted}) with the control protocol in (\ref{ode-control}).
\label{rmk-crook-jarzynski-eps0}
\end{remark}
\subsection{Change of measure and information-theoretic formulation}
\label{subsec-is}
In this subsection, we explore the idea of importance sampling~\cite{ce_paper2014,Hartmann2016-Nonlinearity} to study Jarzynski's equality. We focus on the case when the control protocol $\lambda(s)$ is deterministic and satisfies the ODE (\ref{ode-control}), i.e., $\epsilon = 0$ in dynamics (\ref{lambda-dynamics}). For simplicity, we also assume that the coefficient matrix $\sigma$ in dynamics (\ref{dynamics-1}) is an invertible $n\times n$ matrix. Denote by $\mathbf{P}$, $\mathbf{E}$ the probability measure and the mathematical expectation on the path space $C\big([0,T], \mathbb{R}^n\big)$ with respect to paths of the process (\ref{dynamics-1-q-vector}) starting from $x(0) \sim \mu_{\lambda(0)}$, where $\lambda(s)$ satisfies (\ref{ode-control}) with fixed $\lambda(0) \in \mathbb{R}^m$. Then Jarzynski's equality (\ref{generalized-jarzynski}) reads
\begin{align}
\mathbf{E}\Big[ e^{-\beta W}\Big] = e^{-\beta \Delta F}\,,
\label{jarzynski-repeat}
\end{align}
where $\Delta F = F\big(\lambda(T)\big)-F\big(\lambda(0)\big)$, with
\begin{align}
W=\int_0^T \nabla_\lambda V\big(x(s), \lambda(s)\big) \cdot f\big(\lambda(s), s\big) ds\,.
\label{w-repeat}
\end{align}
See Remark~\ref{rmk-1} for related discussions. Let $\overline{\mathbf{P}}$ be another probability measure on the space $C\big([0,T], \mathbb{R}^n\big)$ which is equivalent to $\mathbf{P}$ and let $\overline{\mathbf{E}}$ be the corresponding expectation. Applying a change of measure in (\ref{jarzynski-repeat}), together with Jensen's inequality, we can deduce
\begin{align}
\begin{split}
\Delta F =&\, -\beta^{-1} \ln \overline{\mathbf{E}} \Big(e^{-\beta W} \frac{d\mathbf{P}}{d\overline{\mathbf{P}}}\Big) \\
\le&\, \overline{\mathbf{E}} \Big(W + \beta^{-1} \ln \frac{d\overline{\mathbf{P}}}{d\mathbf{P}}\Big) \\
=&\, \overline{\mathbf{E}}(W) + \beta^{-1} D_{KL}\big(\overline{\mathbf{P}}\,\|\,\mathbf{P}\big)\,,
\end{split}
\label{df-w-ineq}
\end{align}
where $D_{KL}\big(\cdot\,\|\,\cdot\big)$ denotes the Kullback-Leibler divergence of two probability measures~\cite{MacKay2002-inf-theory,Bishop2006-pattern-recognition}. Notice that the inequality (\ref{df-w-ineq}) can be interpreted as a generalization of the second law of thermodynamics~\cite{callen1985thermodynamics}.
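To make the preceding formulas concrete, the following Python snippet sketches the standard (unoptimized) Monte Carlo estimation of $\Delta F$ based on (\ref{jarzynski-repeat})--(\ref{w-repeat}), in dimension one with $\sigma=1$ and protocol $\lambda(s)=s$; for concreteness it borrows the potentials of the numerical example in Subsection~\ref{subsec-ex1}, and all discretization parameters are illustrative choices rather than part of the theory.
\begin{verbatim}
import numpy as np

# Standard (non-optimized) Jarzynski estimator in 1D with sigma = 1 and
# lambda(s) = s, borrowing the potentials of Example 1 below. All
# discretization parameters are illustrative choices.
beta, T, n_steps, n_traj = 5.0, 1.0, 1000, 10000
dt = T / n_steps
rng = np.random.default_rng(0)

def dV_dlam(x):                  # dV/dlambda for V = (1-lam) V0 + lam V1
    V0 = 0.5 * (x + 1.0) ** 2
    V1 = 0.25 * (x ** 2 - 1.0) ** 2 - 0.4 * x
    return V1 - V0

def grad_V(x, lam):              # dV/dx
    return (1.0 - lam) * (x + 1.0) + lam * (x * (x ** 2 - 1.0) - 0.4)

# x(0) ~ mu_{lambda(0)}: Gaussian with mean -1 and variance 1/beta
x = -1.0 + rng.standard_normal(n_traj) / np.sqrt(beta)
W = np.zeros(n_traj)
for k in range(n_steps):
    s = k * dt
    W += dV_dlam(x) * dt         # work increment, f = dlambda/ds = 1
    x += -grad_V(x, s) * dt \
         + np.sqrt(2.0 * dt / beta) * rng.standard_normal(n_traj)

# Jarzynski's equality: Delta F = -(1/beta) * log E[exp(-beta W)]
print("Delta F estimate:", -np.log(np.mean(np.exp(-beta * W))) / beta)
\end{verbatim}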
In particular, under certain conditions on the work $W$, the equality in (\ref{df-w-ineq}) can be attained by the optimal probability measure $\mathbf{P}^*$, which is determined by
\begin{align}
\frac{d\mathbf{P}^*}{d\mathbf{P}} = e^{-\beta(W-\Delta F)}\,,\qquad \mathbf{P}^*-a.s.
\label{opt-p}
\end{align}
In other words, the optimal change of measure tilts the original path probabilities exponentially according to the differences between the work $W$ and the free energy difference $\Delta F$. In particular, the probability of paths with smaller work $W$ (compared to $\Delta F$) increases under the optimal measure. Meanwhile, the importance sampling Monte Carlo estimator for the free energy difference $\Delta F$ based on the identity
\begin{align}
\Delta F =-\beta^{-1} \ln\mathbf{E}^* \Big(e^{-\beta W} \frac{d\mathbf{P}}{\,d\mathbf{P}^*}\Big)
\label{free-energy-optimal-estimator}
\end{align}
will achieve zero variance. More generally, inspired by the last line in (\ref{df-w-ineq}), we define
\begin{align}
\Phi(\overline{\mathbf{P}}) := \overline{\mathbf{E}}(W) + \beta^{-1} D_{KL}\big(\overline{\mathbf{P}}\,\|\,\mathbf{P}\big) \,,
\label{phi-cost}
\end{align}
for a general probability measure $\overline{\mathbf{P}}$ which is equivalent to $\mathbf{P}$. Then the above discussions imply the following variational principle
\begin{align}
\begin{split}
\Delta F =& \inf_{\overline{\mathbf{P}}\sim \mathbf{P}} \Big[\overline{\mathbf{E}}(W) + \beta^{-1} D_{KL}\big(\overline{\mathbf{P}}\,\|\, \mathbf{P}\big)\Big] \\
=& \inf_{\overline{\mathbf{P}}\sim \mathbf{P}} \Phi(\overline{\mathbf{P}}) = \Phi(\mathbf{P}^*)\,,
\end{split}
\label{variation-form}
\end{align}
where `$\sim$' denotes the equivalence relation between two probability measures. In other words, the optimal probability measure $\mathbf{P}^*$ in (\ref{opt-p}) can be characterized as the minimizer of the minimization problem (\ref{variation-form}) and the corresponding minimum equals $\Delta F$. Furthermore, using (\ref{opt-p}) and (\ref{phi-cost}), we can verify the following simple relation
\begin{align}
\begin{split}
\Phi(\overline{\mathbf{P}}) =& \overline{\mathbf{E}} \bigg(W + \beta^{-1} \ln \frac{d\overline{\mathbf{P}}}{d\mathbf{P}}\bigg) \\
= & \mathbf{E}^*\bigg[\bigg(W + \beta^{-1} \ln\frac{d\overline{\mathbf{P}}}{d\mathbf{P}}\bigg) \frac{d\overline{\mathbf{P}}}{\,d\mathbf{P}^*}\bigg] \\
=& \mathbf{E}^*\bigg[\bigg(\Delta F + \beta^{-1} \ln \frac{d\mathbf{P}}{\,d\mathbf{P}^*} + \beta^{-1} \ln \frac{d\overline{\mathbf{P}}}{d\mathbf{P}}\bigg) \frac{d\overline{\mathbf{P}}}{\,d\mathbf{P}^*}\bigg] \\
=& \Delta F + \beta^{-1} \mathbf{E}^*\bigg[\bigg( \ln \frac{d\overline{\mathbf{P}}}{\,d\mathbf{P}^*}\bigg) \frac{d\overline{\mathbf{P}}}{\,d\mathbf{P}^*}\bigg] \\
=& \Delta F + \beta^{-1} D_{KL}\big(\overline{\mathbf{P}}\,\|\, \mathbf{P}^*\big)\,,
\end{split}
\label{entropy-exp}
\end{align}
for a general probability measure $\overline{\mathbf{P}}$ such that $\overline{\mathbf{P}} \sim \mathbf{P}$. It becomes apparent from the last expression in (\ref{entropy-exp}) that $\Delta F$ is the global minimum of the function $\Phi$ and is attained by the (unique) probability measure $\mathbf{P}^*$, since $D_{KL}\big(\overline{\mathbf{P}}\,\|\, \mathbf{P}^*\big)\ge 0$ and the equality is achieved if and only if $\overline{\mathbf{P}}=\mathbf{P}^*$. Furthermore, minimizing the function $\Phi$ is equivalent to minimizing the Kullback-Leibler divergence $D_{KL}\big(\cdot\,\|\, \mathbf{P}^*\big)$.
In the following, we show that the optimal change of measure $\mathbf{P}^*$ can be characterized more transparently. To this end, let $\mathbf{P}_{x,t}$, $\mathbf{E}_{x,t}$ denote the path measure and the conditional expectation of the process (\ref{dynamics-1-q-vector}) starting from a fixed state $x \in \mathbb{R}^n$ at time $t$. Notice that, by the disintegration theorem~\cite[Theorem~$5.3.1$]{ambrosio2005gradient}, we can write the path measure $\mathbf{P}$ as
$$\mathbf{P}=\int_{\mathbb{R}^n} \mathbf{P}_{x,0}\, d\mu_{\lambda(0)}(x).$$
Defining the function
\begin{align}
g(x,t) = \mathbf{E}_{x,t} \big(e^{-\beta W_{(t,T)}}\big)\,,
\label{g-fun-repeat}
\end{align}
analogously to (\ref{g-def}), Jarzynski's equality (\ref{jarzynski-repeat}) implies that
\begin{align}
\Delta F = -\beta^{-1} \ln \big(\mathbf{E}_{\mu_{\lambda(0)}} g(\cdot, 0)\big)\,.
\label{jarzynski-g}
\end{align}
Sampling an expectation value whose form is similar to (\ref{g-fun-repeat}) using the importance sampling Monte Carlo method has been studied in previous works~\cite{ip-dupuis-multiscale,ip-kostas1,ip-eric,ce_paper2014,Hartmann2017-ptrf,Hartmann2016-Nonlinearity}. In particular, we know from the Feynman-Kac formula that $g$ solves the PDE
\begin{align}
&\partial_t g + \mathcal{L}_1 g -\beta (f\cdot \nabla_\lambda V)g = 0\,,\quad g(\cdot, T) = 1\,,
\label{g-pde-repeat}
\end{align}
where $\mathcal{L}_1$ is the infinitesimal generator in (\ref{l-1}) with $\lambda=\lambda(\cdot)$ depending on time $t$. Introducing $U=-\beta^{-1}\ln g$, we find from (\ref{g-pde-repeat}) that $U$ satisfies the Hamilton-Jacobi-Bellman equation
\begin{align}
\begin{split}
&\partial_t U + \min_{c\in \mathbb{R}^n}\Big\{\mathcal{L}_1 U + \sigma c\cdot \nabla U + \frac{|c|^2}{4} + (f \cdot \nabla_\lambda V)\Big\} = 0\,,\\
& U(\cdot, T) = 0\,,
\end{split}
\end{align}
and one can show~\cite{fleming2006} that $U$ is the value function of the optimal control problem
\begin{align}
U(x,t) = \inf_{u_s} \mathbf{E}^u_{x,t} \bigg[\int_t^T \Big( \nabla_\lambda V\big(x^u(s), \lambda(s)\big) \cdot f(\lambda(s),s) + \frac{|u_s|^2}{4}\Big) ds\bigg]\,,
\end{align}
where $u_s \in \mathbb{R}^n$ is the control policy, $x^u(s)$ is the controlled process given by
\begin{align}
d x^u(s) = b(x^u(s), \lambda(s)) ds + \sigma(x^u(s), \lambda(s))u_s\,ds + \sqrt{2\beta^{-1}} \sigma(x^u(s), \lambda(s)) \,dw^{(1)}(s)\,,
\label{dynamics-1-u}
\end{align}
and $\mathbf{E}^u_{x,t}$ denotes the corresponding conditional expectation starting from $x^u(t) = x$ at time $t$. In particular, it is well known that the feedback control policy
\begin{align}
u^*_s(x) = - 2 \sigma^T(x,\lambda(s)) \nabla U(x,s) = 2 \beta^{-1} \frac{\sigma^T(x,\lambda(s))\nabla g(x,s)}{g(x,s)} \,, \quad (x, s) \in \mathbb{R}^n \times [0, T]
\label{opt-u}
\end{align}
leads to the zero-variance importance sampling Monte Carlo estimator for the path ensemble average in (\ref{g-fun-repeat})~\cite{entropy-var-free-energy2017}.
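As an illustration, the following snippet sketches how a given (generally sub-optimal) feedback control $c(x,s)$ enters both the controlled dynamics (\ref{dynamics-1-u}) and the reweighted estimator; the likelihood ratio accumulated below is the standard Girsanov weight (cf. (\ref{girsanov-ce}) in the next subsection). The control \texttt{ctrl} is an arbitrary placeholder, not the optimal policy (\ref{opt-u}), and we keep the initial distribution $\mu_{\lambda(0)}$, whereas the optimal measure would also modify it according to (\ref{opt-mu0}) below.
\begin{verbatim}
import numpy as np

# Importance sampling with an arbitrary feedback control (1D, sigma = 1).
# `ctrl` is a placeholder, NOT the optimal policy u*; the initial
# distribution is kept at mu_{lambda(0)}, so the weight is purely Girsanov.
beta, T, n_steps, n_traj = 5.0, 1.0, 1000, 10000
dt = T / n_steps
rng = np.random.default_rng(1)

def grad_V(x, lam):
    return (1.0 - lam) * (x + 1.0) + lam * (x * (x ** 2 - 1.0) - 0.4)

def dV_dlam(x):
    return (0.25 * (x ** 2 - 1.0) ** 2 - 0.4 * x) - 0.5 * (x + 1.0) ** 2

def ctrl(x, s):                  # hypothetical control c(x, s)
    return 2.0 * s * (1.0 - x)

x = -1.0 + rng.standard_normal(n_traj) / np.sqrt(beta)
W = np.zeros(n_traj)             # work along the controlled paths
logL = np.zeros(n_traj)          # log(dP / dP_controlled), Girsanov weight
for k in range(n_steps):
    s = k * dt
    c = ctrl(x, s)
    dw = np.sqrt(dt) * rng.standard_normal(n_traj)
    W += dV_dlam(x) * dt
    logL += -np.sqrt(beta / 2.0) * c * dw - 0.25 * beta * c ** 2 * dt
    x += (-grad_V(x, s) + c) * dt + np.sqrt(2.0 / beta) * dw

# reweighted estimator: Delta F = -(1/beta) log E[ e^{-beta W} dP/dP_bar ]
print("Delta F estimate:", -np.log(np.mean(np.exp(-beta * W + logL))) / beta)
\end{verbatim}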
Based on these facts and the equality (\ref{jarzynski-g}), it is not difficult to conclude that the optimal probability measure to sample the free energy $\Delta F$ in (\ref{free-energy-optimal-estimator}) is given by the disintegration expression
\begin{align}
\mathbf{P}^*= \int_{\mathbb{R}^n} \mathbf{P}_{x,0}^*\, d\mu_0^*(x) \,,
\label{opt-p-decomp}
\end{align}
where $\mu_0^*$ is the probability measure on $\mathbb{R}^n$ such that
\begin{align}
\frac{d\mu^*_0}{dx} \propto e^{-\beta V(x,\lambda(0))} g(x,0) \,,
\label{opt-mu0}
\end{align}
and $\mathbf{P}^*_{x,0}$ is the probability measure corresponding to the controlled dynamics (\ref{dynamics-1-u}) starting from $x^u(0) = x$, with $u^*_s = u^*_s(x^u(s))$ as defined in (\ref{opt-u}) for $s \in [0,T]$. In other words, the importance sampling estimator (\ref{free-energy-optimal-estimator}) for the free energy $\Delta F$ will achieve zero variance if we generate trajectories from dynamics (\ref{dynamics-1-u}) with the control $u^*_s$ starting from the initial distribution $x^u(0) \sim \mu_0^*$.
\begin{remark}
In the following, we make a comparison with other relevant directions in the literature.
\begin{enumerate}
\item (Optimal control protocol) In the importance sampling approach above, where the main purpose is to improve the numerical efficiency of free energy calculation, we assumed that the control protocol $\lambda(s)$ is fixed and the dynamics of the original nonequilibrium process is modified by adding an extra (additive) control force. In contrast to this, the problem of minimizing either the average work or the average heat by varying the control protocols has been considered in several recent works in the study of thermodynamics for small systems~\cite{optimal-protocol2008, optimal-finite-time-seifert-2007,extracting-work-feedback-2011,optimal-protocols-transport-2011}. Motivated by these studies, it may also be interesting to optimize the control protocols in order to minimize the variance of the Monte Carlo estimators. This problem is beyond the scope of the current paper but we would like to consider it in the future.
\item (Escorted free energy simulation) The idea of further adding an extra control force to the nonequilibrium processes in order to improve the efficiency of free energy calculation has also been explored in the escorted free energy simulation method~\cite{pre-escort-fe-simulation2008,escorted-simulation2011}. In this method~\cite{pre-escort-fe-simulation2008}, the authors derived the identity (\ref{jarzynski-escorted}) for the modified dynamics (\ref{dynamics-1-escorted}), and suggested applying it to compute the free energy difference $\Delta F$ by choosing the vector field $u$ in (\ref{dynamics-1-escorted}) properly (such that the ``lag'' is reduced). There also exists an optimal vector field, at least formally, such that the Monte Carlo estimator in the escorted simulation method achieves zero variance. Despite these similarities, we emphasize that the importance sampling method in this subsection and the escorted free energy simulation method rely on different identities (of the nonequilibrium processes with extra control). In other words, the change of measure identity in the first line of (\ref{df-w-ineq}) and the identity (\ref{jarzynski-escorted}) cannot be derived from one another straightforwardly.
Furthermore, unlike the escorted free energy simulation method where the initial distribution is fixed, in importance sampling one has the freedom to change the initial distribution as well. In particular, this is the case for the optimal change of measure, since $\mu_0^*$ in (\ref{opt-mu0}) is typically different from the equilibrium distribution $\mu_{\lambda(0)}$.
\item (Bidirectional sampling, Bennett's acceptance ratio method) It is known in the literature~\cite{crooks-path-ensemble-pre2000,compare_free_energy_methods_2006,optimal-estimator-minh-2009,escorted-simulation2011} that free energy estimators based on Crooks's relation (\ref{crooks-eps0-2}), using trajectories of both the forward and backward processes, perform much better than estimators based on Jarzynski's equality (\ref{jarzynski-repeat}), which only use trajectories of the forward process. The optimal choice of the function $\phi$ in (\ref{crooks-eps0-2}) is known~\cite{BENNETT1976}, given the numbers of both forward and backward trajectories. It is interesting to consider how one can apply the importance sampling idea to further improve the efficiency of estimators which use trajectories of both forward and backward processes. We leave this question for future study.
\end{enumerate}
\label{rmk-ip-and-escorted}
\end{remark}
\subsection{Cross-entropy method}
\label{subsec-ce}
From the previous subsection, we know that the probability measure $\mathbf{P}^*$ in (\ref{opt-p}), or equivalently in (\ref{opt-p-decomp}), is optimal in the sense that the importance sampling estimator (\ref{free-energy-optimal-estimator}) has zero variance. However, in practice it is often difficult to compute $\mathbf{P}^*$ or $u_s^*$. In this subsection, we briefly outline a numerical approach to sample the free energy difference $\Delta F$ using the importance sampling Monte Carlo method~\cite{ce_paper2014,ce_book}. The main idea is to approximate the optimal measure $\mathbf{P}^*$ within a family of parameterized probability measures $\big\{\mathbf{P}_{\boldsymbol{\omega}}\,|\,\boldsymbol{\omega} \in \mathbb{R}^k\big\}$, with the hope that the closer $\mathbf{P}_{\boldsymbol{\omega}}$ is to $\mathbf{P}^*$, the more efficient the importance sampling estimator will be (in the sense that its variance is small). Different from the importance sampling method studied in~\cite{path_sampling_zuckerman2004, optimum-bias-2008}, which requires Monte Carlo sampling in path space with an acceptance-rejection procedure, the method proposed below can be implemented at the SDE level. We recall that the probability measure $\mathbf{P}$ corresponds to the trajectories of processes (\ref{dynamics-1}) and (\ref{ode-control}). Now let $\bar{\mu}_0$ be a probability measure on $\mathbb{R}^n$, possibly different from $\mu_{\lambda(0)}$. Given a parameter $\boldsymbol{\omega}=(\omega_1, \omega_2, \cdots, \omega_k)^T \in \mathbb{R}^k$, we define $\mathbf{P}_{\boldsymbol{\omega}}$ as the probability measure corresponding to the trajectories of the process
\begin{align}
\begin{split}
d x(s) & = b\big(x(s), \lambda(s)\big) ds + \sigma\big(x(s), \lambda(s)\big) \Big(\sum_{l=1}^k \omega_l \phi^{(l)}\big(x(s), \lambda(s), s\big)\Big)\,ds + \sqrt{2\beta^{-1}} \sigma\big(x(s), \lambda(s)\big) \,dw(s)\,,
\end{split}
\label{dynamics-ce-1}
\end{align}
and the control protocol (\ref{ode-control}), starting from $x(0) \sim \bar{\mu}_{0}$, where $\phi^{(l)} : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^+ \rightarrow \mathbb{R}^n$, $1 \le l \le k$, are $k$ ansatz functions.
Clearly, we have $\mathbf{P}_{\boldsymbol{\omega}}=\mathbf{P}$ when $\boldsymbol{\omega}=\boldsymbol{0}\in \mathbb{R}^k$ and $\bar{\mu}_0=\mu_{\lambda(0)}$. As a special choice of ansatz functions, we can take $\phi^{(l)} =-\sigma^T\nabla V^{(l)}$, where $V^{(l)}: \mathbb{R}^n\times \mathbb{R}^m\rightarrow \mathbb{R}$, $1 \le l \le k$, are $k$ potential functions. In this case, recalling that dynamics (\ref{dynamics-1}) can be written equivalently as (\ref{dynamics-1-q-vector}), we see that dynamics (\ref{dynamics-ce-1}) becomes
\begin{align*}
\begin{split}
d x(s) & = \bigg[J - a\nabla\Big(V + \sum_{l=1}^k \omega_lV^{(l)}\Big)+ \frac{1}{\beta} \nabla\cdot a\bigg](x(s), \lambda(s))\, ds + \sqrt{2\beta^{-1}} \sigma\big(x(s), \lambda(s)\big) \,dw(s)\,,
\end{split}
\end{align*}
i.e., the probability measure $\mathbf{P}_{\bm{\omega}}$ corresponds to the dynamics under the modified potential $V + \sum\limits_{l=1}^k \omega_lV^{(l)}$. The optimal approximation of the probability measure $\mathbf{P}^*$ within the set $\big\{\mathbf{P}_{\boldsymbol{\omega}}\,|\,\boldsymbol{\omega} \in \mathbb{R}^k\big\}$ is defined as the minimizer of the minimization problem
\begin{align}
\min_{\boldsymbol{\omega} \in \mathbb{R}^k} D_{KL}\big(\mathbf{P}^*\,\|\, \mathbf{P}_{\boldsymbol{\omega}}\big)\,.
\label{mini-omega-problem}
\end{align}
Note that, compared to the minimization of the function $\Phi$ in (\ref{phi-cost}), which is equivalent to minimizing $D_{KL}(\cdot\,\|\,\mathbf{P}^*)$ by (\ref{entropy-exp}), two approximations have been introduced in (\ref{mini-omega-problem}): we have first switched the order of the two arguments in $D_{KL}(\cdot\,\|\,\cdot)$ and then confined ourselves to a parameterized subset of probability measures with fixed starting distribution $\bar{\mu}_0$. Using (\ref{opt-p}), we can write the objective function in (\ref{mini-omega-problem}) more explicitly as
\begin{align}
D_{KL}\big(\mathbf{P}^*\,\|\, \mathbf{P}_{\boldsymbol{\omega}}\big) = D_{KL}\big(\mathbf{P}^*\,\|\, \mathbf{P}\big) - e^{\beta\Delta F} \mathbf{E}\Big(e^{-\beta W} \ln\frac{d\mathbf{P}_{\boldsymbol{\omega}}}{d\mathbf{P}} \Big) \,,
\label{omega-problem-objective-explicit}
\end{align}
where the parameter $\boldsymbol{\omega}$ only appears in the second term on the right hand side of the above equality. Applying Girsanov's theorem~\cite{oksendalSDE}, we have
\begin{align}
\frac{d\mathbf{P}_{\boldsymbol{\omega}}}{d\mathbf{P}} = \frac{d\bar{\mu}_0}{d\mu_{\lambda(0)}}\big(x(0)\big) \times \exp\bigg[\frac{\beta}{2} \int_0^T \Big(\sum_{l=1}^k \omega_l\phi^{(l)}\Big)\cdot \sigma^{-1} \big(dx(s) - b\, ds\big) - \frac{\beta}{4} \int_0^T \Big|\sum_{l=1}^k \omega_l\phi^{(l)}\Big|^2\, ds\bigg]\,,
\label{girsanov-ce}
\end{align}
where the dependence of the functions $b, \sigma, \phi^{(l)}$ on $x(s), \lambda(s), s$ is omitted for simplicity. Substituting (\ref{girsanov-ce}) into equality (\ref{omega-problem-objective-explicit}), we can observe that the objective function in (\ref{mini-omega-problem}) is in fact quadratic with respect to the parameter $\boldsymbol{\omega} \in \mathbb{R}^k$.
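Anticipating the normal equations $A\bm{\omega}^*=R$ derived in the next paragraph, the following snippet sketches how this quadratic structure is exploited in practice: the coefficients are estimated as path averages over a batch of unweighted trajectories, after which a small linear system is solved for $\bm{\omega}^*$. The Gaussian ansatz functions, their centers and all numerical parameters are illustrative; we again take $\sigma=1$, $\bar{\mu}_0=\mu_{\lambda(0)}$ and the potentials of Subsection~\ref{subsec-ex1}.
\begin{verbatim}
import numpy as np

# Cross-entropy step: estimate A and R (see the normal equations below)
# from unweighted trajectories of P, then solve for omega*. 1D, sigma = 1,
# Gaussian ansatz functions with illustrative centers.
beta, T, n_steps, n_traj, kk = 5.0, 1.0, 500, 5000, 3
dt = T / n_steps
centers = np.array([-1.0, 0.0, 1.0])
rng = np.random.default_rng(2)

def grad_V(x, lam):
    return (1.0 - lam) * (x + 1.0) + lam * (x * (x ** 2 - 1.0) - 0.4)

def dV_dlam(x):
    return (0.25 * (x ** 2 - 1.0) ** 2 - 0.4 * x) - 0.5 * (x + 1.0) ** 2

def phis(x):                     # the k ansatz functions, shape (k, n_traj)
    return np.exp(-2.0 * (x[None, :] - centers[:, None]) ** 2)

x = -1.0 + rng.standard_normal(n_traj) / np.sqrt(beta)
W = np.zeros(n_traj)
S = np.zeros((kk, kk, n_traj))   # int phi_l phi_l' ds, per trajectory
I = np.zeros((kk, n_traj))       # int phi_l . sigma^{-1}(dx - b ds)
for step in range(n_steps):
    s = step * dt
    p = phis(x)
    dw = np.sqrt(dt) * rng.standard_normal(n_traj)
    W += dV_dlam(x) * dt
    S += p[:, None, :] * p[None, :, :] * dt
    I += p * np.sqrt(2.0 / beta) * dw  # under P: dx - b ds = sqrt(2/beta) dw
    x += -grad_V(x, s) * dt + np.sqrt(2.0 / beta) * dw

wgt = np.exp(-beta * W)
A = np.mean(S * wgt, axis=2)
R = np.mean(I * wgt, axis=1)
print("omega* =", np.linalg.solve(A, R))
\end{verbatim}
We now derive this linear system.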
Taking derivatives, we conclude that the minimizer of (\ref{mini-omega-problem}) is determined by the linear equation $A\boldsymbol{\omega}^* = R$, where
\begin{align}
\begin{split}
A_{ll'} = \mathbf{E} \bigg[e^{-\beta W} \int_0^T \phi^{(l)}\cdot \phi^{(l')}\, ds\bigg]\,, \quad R_{l} = \mathbf{E} \bigg[e^{-\beta W} \int_0^T \phi^{(l)} \cdot \sigma^{-1} \big(dx(s) - b\, ds\big) \bigg]\,,
\end{split}
\label{a-r-coeff}
\end{align}
for $1 \le l, l' \le k$. In practice, we can estimate entries of $A$ and $R$ in (\ref{a-r-coeff}) by simulating a relatively small number of trajectories, and compute $\bm{\omega}^*$ by solving the linear equation $A\bm{\omega}^*=R$. After this, the free energy difference $\Delta F$ can be estimated using importance sampling by simulating a large number of trajectories corresponding to $\mathbf{P}_{\bm{\omega}^*}$. Also notice that, instead of computing $A$ and $R$ using the original dynamics and solving for $\bm{\omega}^*$ directly, it is helpful to solve for $\bm{\omega}^*$ in an iterative manner, starting from a higher temperature (small $\beta$) or running a different dynamics (importance sampling). We refer readers to the previous studies~\cite{ce_book,ce_paper2014} for more algorithmic details.
\begin{remark}
More generally, instead of keeping the starting distribution $\bar{\mu}_0$ fixed, we could also optimize $\bar{\mu}_0$ within a parameterized set of probability measures on $\mathbb{R}^n$ by solving an optimization problem which is similar to (\ref{mini-omega-problem}). In this case, while the optimal parameter $\bm{\omega}^*$ can still be obtained from the same linear equation $A\bm{\omega}^*=R$, a nonlinear equation needs to be solved in order to get the optimal $\bar{\mu}_0$. We expect to develop algorithms which adaptively optimize $\bm{\omega}^*$ and $\bar{\mu}_0$ in an alternating manner. This will be considered in future work.
\label{rmk-3}
\end{remark}
\textbf{Choices of ansatz functions}. Clearly, the efficiency of the importance sampling Monte Carlo method crucially depends on the choices of ansatz functions used in the cross-entropy method. From Jarzynski's equality (\ref{jarzynski-repeat}) and the optimal change of measure (\ref{opt-p}), we can expect that an importance sampling estimator will have better performance if paths with smaller work $W$ (compared to $\Delta F$) are sampled more frequently. Accordingly, the ansatz functions used in the cross-entropy method should be chosen such that the work $W$ can be decreased by the control forces. A similar idea has been used in the previous work~\cite{Hartmann2016-Nonlinearity}, where several ways of choosing ansatz functions have been proposed. In the current situation where the work $W$ is given in (\ref{w-repeat}), we can see that $W$ will be large if the potential increases along the movement of the parameter $\lambda$. In fact, this already explains why a standard Monte Carlo simulation of fast-switching dynamics based on Jarzynski's equality is likely to have poor efficiency. To elucidate this point more clearly, we consider a special situation in which the expression of the work $W$ becomes simpler and gives us some insight into how to choose ansatz functions. Specifically, let $\lambda \in [0,1]$ and suppose that we are interested in the free energy differences corresponding to potentials $V(x,0)$ and $V(x,1)$, $x \in \mathbb{R}^n$.
A simple choice is the linear interpolation~\cite{path_sampling_zuckerman2004}
\begin{align}
V(x,\lambda) = (1-\lambda)V(x,0) + \lambda V(x,1)\,,\quad \lambda \in [0,1]\,,
\end{align}
and the control protocol $\lambda(s) = s$ on the time interval $s \in [0,1]$. In this case, the work (\ref{w-repeat}), viewed as a path functional, reduces to
\begin{align}
W = \int_0^1 \Big(V(x(s), 1) - V(x(s), 0)\Big) ds\,.
\label{work-w-special}
\end{align}
It is not difficult to see that paths simulated by a standard Monte Carlo method will typically have large work due to the fact that, starting from the Boltzmann distribution of the potential $V(x,0)$ and on the finite time interval $[0,1]$, the nonequilibrium process $x(s)$ is likely to stay within the region where the potential $V(x,1)$ is large, in particular when the low potential regions of $V(x,0)$ and $V(x,1)$ do not overlap (see \cite{jarzynski-rare2006} for more detailed discussions). Accordingly, importance sampling can improve the efficiency of the standard Monte Carlo estimator if we place ansatz functions in such a way that, after optimization using the cross-entropy method, transitions of the controlled dynamics (\ref{dynamics-ce-1}) from low energy regions of $V(x,0)$ to low energy regions of $V(x,1)$ within time $[0,1]$ become easier. A similar idea (i.e., reducing the ``lag'') has been used to guide the choice of the vector field in the escorted free energy simulation method~\cite{pre-escort-fe-simulation2008,escorted-simulation2011}. Readers are referred to Subsection~\ref{subsec-ex1} for a numerical study of the ideas discussed above.
\section{Jarzynski-like equality and fluctuation theorem: reaction coordinate case}
\label{sec-coordinate}
Different from the situation in Section~\ref{sec-alchemical}, where the free energy in (\ref{free-energy}) is defined as a function of the parameter $\lambda$ through the invariant measure $\mu_\lambda$ on $\mathbb{R}^n$, in this section we assume that a function $\xi : \mathbb{R}^n \rightarrow \mathbb{R}^d$ is given and the free energy is defined as a function of $z \in \mathbb{R}^d$ through the invariant measure $\mu_z$ on the level set $\xi^{-1}(z)$. In the literature, such a function $\xi$ is often termed a \textit{reaction coordinate function} or a \textit{collective variable}~\cite{givon2004emd,mimick_sde,effective_dynamics,blue-moon,tony-free-energy-compuation,Maragliano2006}. In this context, we point out that a Jarzynski-like equality has been obtained in the previous work~\cite{LELIEVRE2007}, and a Jarzynski-Crooks fluctuation identity has been derived for the constrained Langevin dynamics in~\cite{Tony-constrained-langevin2012}. In this section, following the analysis in Section~\ref{sec-alchemical}, we will prove a fluctuation theorem (Theorem~\ref{thm-fluct-relation-coordinate}) which is similar to Theorem~\ref{thm-fluct-relation}, and then obtain the Jarzynski-like equality (Theorem~\ref{thm-jarzynski-coordinate}) by applying the fluctuation theorem. Importance sampling and variance reduction issues will be discussed in Subsection~\ref{subsec-info-the-ce-coordinate}.
\subsection{Mathematical setup}
\label{sec-coordinate-setup}
First of all, we recall some notations as well as some results from the work~\cite{effective_dyn_2017,zhang2017} in order to introduce the problem under investigation. Let $ \xi : \mathbb{R}^n \rightarrow \mathbb{R}^d$ be a $C^2$ function with components $\xi = (\xi_1, \xi_2, \cdots, \xi_d)^T \in \mathbb{R}^d$, where $1 \le d < n$.
Given $z \in \mbox{Im}\,\xi \subseteq \mathbb{R}^d$, which is a regular value of the map $\xi$, we define the level set \begin{align} \Sigma_{z} =\xi^{-1}(z)=\Big\{ y \in \mathbb{R}^n \,\Big|\, \xi(y) = z \in \mathbb{R}^d\Big\}\,. \label{submanifold-n} \end{align} It is known from the regular value theorem~\cite{banyaga2004lectures} that $\Sigma_{z}$ is a smooth $(n-d)$-dimensional submanifold of $\mathbb{R}^n$. Let $\nu_z$ denote the surface measure on $\Sigma_{z}$ which is induced from the Euclidean metric on $\mathbb{R}^n$, and $\nabla \xi$ denote the $n \times d$ matrix whose entries are $(\nabla\xi)_{i\gamma} =\frac{\partial\xi_\gamma}{\partial y_i}$, $1 \le i \le n$, $1 \le \gamma \le d$. Given a smooth function $V : \mathbb{R}^n \rightarrow \mathbb{R}$, we consider the probability measure on the submanifold $\Sigma_z$ defined as \begin{align} d \mu_z = \frac{1}{Q(z)} e^{-\beta V} \Big[\mbox{det}\big(\nabla \xi^T \nabla \xi\big)\Big]^{-\frac{1}{2}} d\nu_z\,, \label{mu-z} \end{align} where $Q(z)$ is the normalization constant. The probability measure $\mu_z$ arises in many situations and plays an important role in the free energy calculation along a reaction coordinate~\cite{blue-moon,projection_diffusion,effective_dynamics,effective_dyn_2017,tony-free-energy-compuation,zhang2017}. The free energy for fixed $z \in \mbox{Im}\,\xi \subseteq \mathbb{R}^d$ is defined as \begin{align} \begin{split} F(z) =& -\beta^{-1} \ln Q(z) \\ =& -\beta^{-1} \ln \int_{\Sigma_z} e^{-\beta V} \Big[\mbox{det}\big(\nabla\xi^T \nabla \xi\big)\Big]^{-\frac{1}{2}} d\nu_z \\ =& -\beta^{-1} \ln \int_{\mathbb{R}^n} e^{-\beta V(y)} \delta\big(\xi(y) - z\big)\,dy \,, \end{split} \label{free-energy-coordinate} \end{align} where the last equality follows from the co-area formula~\cite{evans1991measure,krantz2008geometric}. Let $\sigma : \mathbb{R}^n \rightarrow \mathbb{R}^{n \times n}$ be an $n \times n$ matrix valued function such that the function $a(\cdot) := (\sigma\sigma^T)(\cdot)$ is uniformly elliptic on $\mathbb{R}^n$. Let $\Psi =\nabla\xi^T a \nabla \xi$ be the invertible $d\times d$ matrix whose entries are \begin{align} \Psi_{\gamma\gamma'} =(\nabla \xi_\gamma)^T a\nabla \xi_{\gamma'} \,,\quad 1 \le \gamma, \gamma' \le d\,, \label{psi-ij} \end{align} where $\nabla\xi_\gamma$ is the usual gradient of the function $\xi_\gamma$. Let $P=\mbox{id}-a\nabla\xi\Psi^{-1}\nabla\xi^T$ be the projection matrix, with entries \begin{align} P_{ij} =& \delta_{ij} - (\Psi^{-1})_{\gamma\gamma'} a_{il}\partial_l\xi_\gamma\,\partial_j\xi_{\gamma'}\,, \quad 1 \le i,j \le n\,. \label{p-ij} \end{align} Notice that in the above $\delta_{ij}$ is the Kronecker delta function and Einstein's summation convention is used here and in the following. From (\ref{p-ij}), we can directly verify that \begin{align} \begin{split} &P^2=P\,,\quad P^T\nabla \xi_\gamma = 0\,,\quad 1 \le \gamma \le d\,, \\ &(aP^T)_{ij}=(Pa)_{ij} = a_{ij} - (\Psi^{-1})_{\gamma\gamma'} (a\nabla \xi_\gamma)_i (a \nabla \xi_{\gamma'})_j\,, \quad 1\le i,j \le n\, , \end{split} \label{p-i-j} \end{align} i.e., $P$ is the orthogonal projection w.r.t. the scalar product $\langle u, v\rangle_{a^{-1}} = u^Ta^{-1} v$, for $u, v \in \mathbb{R}^n$. 
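Since the objects $\Psi$ and $P$ will be used repeatedly below, the identities (\ref{p-i-j}) can easily be checked numerically; the following snippet performs such a sanity check at a randomly chosen point, where $y$, the matrix $\sigma$ and the map $\xi$ are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

# Numerical check of the projection identities P^2 = P, P^T grad(xi) = 0
# and Pa = aP^T at a random point; y, sigma and xi are illustrative choices.
rng = np.random.default_rng(3)
n, d = 5, 2
y = rng.standard_normal(n)

sigma = np.eye(n) + 0.1 * rng.standard_normal((n, n))
a = sigma @ sigma.T                      # a = sigma sigma^T

# xi(y) = (|y|^2, y_1): grad_xi is the n x d matrix of partial derivatives
grad_xi = np.zeros((n, d))
grad_xi[:, 0] = 2.0 * y
grad_xi[0, 1] = 1.0

Psi = grad_xi.T @ a @ grad_xi            # d x d matrix, invertible here
P = np.eye(n) - a @ grad_xi @ np.linalg.solve(Psi, grad_xi.T)

print(np.allclose(P @ P, P))             # P is a projection
print(np.allclose(P.T @ grad_xi, 0.0))   # P^T annihilates grad(xi)
print(np.allclose(P @ a, a @ P.T))       # Pa = aP^T is symmetric
\end{verbatim}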
It is shown in~\cite{zhang2017} that, starting from $y(0) \in \Sigma_z$, the process
\begin{align}
\begin{split}
dy_i(s) = & -(Pa)_{ij} \frac{\partial V}{\partial y_j}\,ds + \frac{1}{\beta} \frac{\partial (Pa)_{ij}}{\partial y_j}\,ds + \sqrt{2\beta^{-1}}\, (P\sigma)_{ij}\, dw_{j}(s)\,,\quad 1\le i \le n\,,
\end{split}
\label{dynamics-submanifold}
\end{align}
where $w(s)$ is an $n$-dimensional Brownian motion, remains on the submanifold $\Sigma_z$ and has a unique invariant measure, given by $\mu_z$ in (\ref{mu-z}). In particular, denoting by $\mathcal{L}^{\perp}$ the infinitesimal generator of the process (\ref{dynamics-submanifold}), i.e.,
\begin{align}
\mathcal{L}^{\perp} = -(Pa)_{ij} \frac{\partial V}{\partial y_j}\frac{\partial}{\partial y_i} + \frac{1}{\beta} \frac{\partial (Pa)_{ij}}{\partial y_j} \frac{\partial}{\partial y_i} + \frac{1}{\beta} (Pa)_{ij} \frac{\partial^2}{\partial y_i\partial y_j}\,,
\label{generator-l-perp}
\end{align}
it is easy to verify that $\mathcal{L}^{\perp} \xi_\gamma \equiv 0$, for $1 \le \gamma \le d$.
\subsection{Fluctuation theorem}
\label{subsec-fluct-thm-coordinate}
In order to state the fluctuation theorem, we further introduce a ``controlled'' process, as well as its time-reversed counterpart, based on the process (\ref{dynamics-submanifold}). Specifically, we let $f = (f_1, f_2,\cdots, f_d)^T : \mathbb{R}^n \times [0, T] \rightarrow \mathbb{R}^d$ be a bounded smooth function and consider the process
\begin{align}
\begin{split}
dy_i(s) = & - (Pa)_{ij} \frac{\partial V}{\partial y_j}\,ds + \frac{1}{\beta} \frac{\partial (Pa)_{ij}}{\partial y_j}\,ds + (\Psi^{-1})_{\gamma\gamma'} (a\nabla\xi_\gamma)_i\,f_{\gamma'}\,ds + \sqrt{2\beta^{-1}}\, (P\sigma)_{ij}\, dw_{j}(s)\,,
\end{split}
\label{dynamics-f}
\end{align}
for $1 \le i \le n$ on the time interval $[0,T]$. The infinitesimal generator of the process (\ref{dynamics-f}) is given by
\begin{align}
\mathcal{L} = \mathcal{L}^\perp + (\Psi^{-1})_{\gamma\gamma'} (a\nabla\xi_\gamma)_if_{\gamma'} \frac{\partial}{\partial y_i} \,,
\label{l-coordinate}
\end{align}
where the operator $\mathcal{L}^\perp$ is defined in (\ref{generator-l-perp}), and a simple application of Ito's formula implies that
\begin{align}
d\xi(y(s)) = f(y(s), s)\,ds \,.
\label{xi-dt-coordinate}
\end{align}
Similarly, the time-reversed process of the dynamics (\ref{dynamics-f}) on the time interval $[0, T]$ is defined as
\begin{align}
\begin{split}
dy_i^{R}(s) = & - (Pa)_{ij} \frac{\partial V}{\partial y_j}\,ds + \frac{1}{\beta} \frac{\partial (Pa)_{ij}}{\partial y_j}\,ds - (\Psi^{-1})_{\gamma\gamma'} (a\nabla\xi_\gamma)_i\,f^-_{\gamma'}\,ds + \sqrt{2\beta^{-1}}\, (P\sigma)_{ij}\, dw_{j}(s)\,,
\end{split}
\label{dynamics-f-reversed}
\end{align}
where $1 \le i \le n$, $f^-_{\gamma'}(\cdot, s) = f_{\gamma'}(\cdot, T-s)$, and the infinitesimal generator is
\begin{align}
\mathcal{L}^R = \mathcal{L}^\perp - (\Psi^{-1})_{\gamma\gamma'} (a\nabla\xi_\gamma)_if^-_{\gamma'} \frac{\partial}{\partial y_i}\,.
\label{l-reversed-coordinate}
\end{align}
Using a similar argument as in the proof of Theorem~\ref{thm-fluct-relation}, we obtain the following fluctuation theorem, which concerns the relation between the dynamics (\ref{dynamics-f}) and the time-reversed one (\ref{dynamics-f-reversed}).
\begin{theorem}
Let $0 \le t' < t \le T$ and $y,y' \in \mathbb{R}^n$.
For any continuous function $\eta \in C\big(\mathbb{R}^n \times [0,T]\big)$ with compact support, we have
\begin{align}
\begin{split}
&e^{-\beta V(y')}\, \mathbf{E}^R_{y',t'}\bigg[\exp\bigg(\int_{t'}^t \eta(y^R(s), T-s) ds\bigg) \delta\big(y^R(t)-y\big)\,\bigg]\\
=&e^{-\beta V(y)}\,\mathbf{E}_{y,T-t}\bigg[e^{-\beta \mathcal{W}} \exp\bigg(\int_{T-t}^{T-t'} \eta(y(s), s) ds\bigg) \delta\big(y(T-t')-y'\big)\bigg]\,,
\end{split}
\label{fluct-relation-coordinate}
\end{align}
where
\begin{align}
\mathcal{W} = \int_{T-t}^{T-t'} \Big[ (\Psi^{-1})_{\gamma\gamma'} (a\nabla \xi_\gamma)_i f_{\gamma'} \frac{\partial V}{\partial y_i} - \frac{1}{\beta} \frac{\partial}{\partial y_i} \Big((\Psi^{-1})_{\gamma\gamma'} (a\nabla \xi_\gamma)_i f_{\gamma'}\Big)\Big] ds\,,
\label{w-coordinate}
\end{align}
and $y^R(\cdot)$, $y(\cdot)$ satisfy the dynamics (\ref{dynamics-f-reversed}) and (\ref{dynamics-f}), respectively. Here $\mathbf{E}^R_{y',t'}$ is the conditional expectation with respect to the path ensemble of the dynamics (\ref{dynamics-f-reversed}) starting from $y^R(t') = y'$ at time $t'$, and $\mathbf{E}_{y,T-t}$ is the conditional expectation with respect to the dynamics (\ref{dynamics-f}) starting from $y(T-t) = y$ at time $T-t$.
\label{thm-fluct-relation-coordinate}
\end{theorem}
The proof of Theorem~\ref{thm-fluct-relation-coordinate} can be found in Appendix~\ref{app-4}. As with Theorem~\ref{thm-fluct-relation}, the identity~(\ref{fluct-relation-coordinate}) should be understood in the sense of distributions. We refer to Remark~\ref{rmk-delta} for further discussions.
\subsection{Jarzynski-like equality}
\label{subsec-jarzynski-like-coordinate}
In this subsection, we assume that there is a function $\widetilde{f} = (\widetilde{f}_1, \widetilde{f}_2, \cdots, \widetilde{f}_d)^T : \mathbb{R}^d \times [0, T] \rightarrow \mathbb{R}^d$ such that
\begin{align}
f(y,s) = \widetilde{f}(\xi(y), s), \quad \forall (y, s) \in \mathbb{R}^n \times [0, T]\,.
\label{f-f-tilde}
\end{align}
Fix $t\in [0, T]$ and suppose that both the ODE
\begin{align}
\dot{\zeta}(s\,;z) = \widetilde{f}(\zeta(s\,;z), s), \quad s \in [0, t]\,,
\label{zt-ode}
\end{align}
starting from $\zeta(0\,;z)=z$, and the ODE
\begin{align}
\dot{\zeta}^R(s\,;z) = -\widetilde{f}(\zeta^R(s\,;z), T-s), \quad s \in [T-t, T]\,,
\label{zt-ode-r}
\end{align}
starting from $\zeta^R(T-t\,;z) = z$, have a unique solution for any $z \in \mathbb{R}^d$. Under this assumption, it is not difficult to conclude that
\begin{align*}
\zeta^R(s\,;\zeta(t\,;z)) = \zeta(T-s\,;z)\,,\quad \zeta(T-s\,; \zeta^R(T\,;z)) = \zeta^R(s\,;z)\,,\quad s \in [T-t,T]\,,
\end{align*}
which in turn implies that the map $\zeta^R(T\,;\cdot) : \mathbb{R}^d \rightarrow \mathbb{R}^d$ is invertible and its inverse is given by $\zeta(t\,;\cdot)$. Consider the process $y(s)$ in (\ref{dynamics-f}) on the time interval $[0,t]$, and the process $y^R(s)$ in (\ref{dynamics-f-reversed}) on the time interval $[T-t, T]$, respectively. Assume that $\xi(y(0))=z$ and $\xi(y^R(T-t)) = z'$, where $z, z'\in \mathbb{R}^d$. Similarly to (\ref{xi-dt-coordinate}), we obtain
\begin{align*}
d\xi(y(s)) = \widetilde{f}\big(\xi(y(s)), s\big)\, ds,\qquad d\xi(y^R(s)) = -\widetilde{f}\big(\xi(y^R(s)), T-s\big)\, ds\,,
\end{align*}
which imply that
\begin{align}
\xi(y(s)) = \zeta(s\,;z)\,,\quad \xi(y^R(T-s)) = \zeta^R(T-s\,;z')\,, \qquad \forall\,s \in [0,t]\,.
\label{xi-yt-coordinate}
\end{align}
Applying Theorem~\ref{thm-fluct-relation-coordinate}, we can obtain the following Jarzynski-like equality for the free energy difference in the reaction coordinate case.
\begin{theorem}[Jarzynski-like equality]
Let $y(s)$ be the dynamics in (\ref{dynamics-f}) with the function $f$ given in (\ref{f-f-tilde}), and let $z(s)$ solve the ODE (\ref{zt-ode}). For any smooth and bounded test function $\varphi : \mathbb{R}^n \rightarrow \mathbb{R}$ and $t \in [0, T]$, we have
\begin{align}
\mathbf{E}_{z(0),0} \Big[\varphi(y(t))\,e^{-\beta W(t)}\Big] = e^{-\beta \big(F(z(t)) - F(z(0))\big)} \int_{\Sigma_{z(t)}} \varphi\, d\mu_{z(t)} \,,
\label{generalized-jarzynski-coordinate-varphi}
\end{align}
where $F(\cdot)$ is the free energy in (\ref{free-energy-coordinate}) and $W(t)$ is defined as
\begin{align}
W(t) = \int_{0}^{t} \Big[ (\Psi^{-1})_{\gamma\gamma'} (a\nabla \xi_\gamma)_i \frac{\partial V}{\partial y_i} - \frac{1}{\beta} \frac{\partial}{\partial y_i} \Big((\Psi^{-1})_{\gamma\gamma'} (a\nabla \xi_\gamma)_i \Big)\Big]\, \dot{z}_{\gamma'}(s) \, ds\,.
\label{w-coordinate-jarzynski}
\end{align}
$\mathbf{E}_{z(0),0}$ denotes the conditional expectation with respect to the dynamics $y(s)$, starting from the initial distribution $y(0) \sim \mu_{z(0)}$ on $\Sigma_{z(0)}$. In particular, taking $\varphi\equiv 1$, we have
\begin{align}
\mathbf{E}_{z(0),0}\Big[e^{-\beta W(t)}\Big] = e^{-\beta \big(F(z(t)) - F(z(0))\big)} \,.
\label{generalized-jarzynski-coordinate}
\end{align}
\label{thm-jarzynski-coordinate}
\end{theorem}
\begin{proof}
Let $\mbox{div}_z$ denote the divergence operator with respect to $z \in \mathbb{R}^d$. Notice that from the definitions of $\Psi$ in (\ref{psi-ij}) and the function $f$ in (\ref{f-f-tilde}) we can compute
\begin{align*}
(\Psi^{-1})_{\gamma\gamma'} (a\nabla \xi_\gamma)_i \frac{\partial f_{\gamma'}}{\partial y_i} = (\Psi^{-1})_{\gamma\gamma'} (a\nabla \xi_\gamma)_i \frac{\partial \widetilde{f}_{\gamma'}}{\partial z_j} \frac{\partial \xi_j}{\partial y_i} = (\mbox{div}_z \widetilde{f}\,)(\xi(y), s)\,.
\end{align*}
Choosing $\eta(y,s) =-(\mbox{div}_z\widetilde{f}\,)(\xi(y),s)$ in the equality (\ref{fluct-relation-coordinate}) of Theorem~\ref{thm-fluct-relation-coordinate}, we obtain
\begin{align}
\begin{split}
&e^{-\beta V(y')}\, \mathbf{E}^R_{y',T-t}\bigg[\exp\bigg(-\int_{T-t}^T (\mbox{div}_z\widetilde{f}\,)\big(\xi(y^R(s)),T-s\big)ds\bigg)\delta\big(y^R(T)-y\big)\,\bigg]\\
=&e^{-\beta V(y)}\,\mathbf{E}_{y,0}\Big[e^{-\beta W(t)} \delta\big(y(t)-y'\big)\Big]\,.
\end{split}
\label{thm-jarzynski-coordinate-eqn1}
\end{align}
Let $\tau>0$ and multiply both sides of (\ref{thm-jarzynski-coordinate-eqn1}) by $\varphi(y')e^{-\beta\frac{|\xi(y)-z(0)|^2}{\tau}}$. Integrating with respect to $y,y'$ then yields
\begin{align}
\begin{split}
&\int_{\mathbb{R}^n} e^{-\beta \big(V(y) +\frac{|\zeta^R(T\,;\,\xi(y))-z(0)|^2}{\tau}\big)}\, \exp\bigg(-\int_{T-t}^T(\mbox{div}_z\widetilde{f}\,)\big(\zeta^R(s\,;\xi(y)),T-s\big)ds\bigg) \,\varphi(y)\, dy\\
=&\int_{\mathbb{R}^n}\,e^{-\beta \big(V(y) + \frac{|\xi(y)-z(0)|^2}{\tau}\big)}\,\mathbf{E}_{y,0}\Big[e^{-\beta W(t)} \varphi(y(t)) \Big]\, dy\,.
\end{split}
\label{thm-jarzynski-coordinate-eqn2-simplify}
\end{align}
Notice that, on the left hand side above, we have used the fact that $\xi(y^R(s))$ under the conditional expectation is deterministic and is given by (\ref{xi-yt-coordinate}).
We can rewrite the left hand side of (\ref{thm-jarzynski-coordinate-eqn2-simplify}) by applying the co-area formula
\begin{align}
\begin{split}
&\int_{\mathbb{R}^n} e^{-\beta \big(V(y)+ \frac{|\zeta^R(T\,;\,\xi(y))-z(0)|^2}{\tau}\big)}\, \exp\bigg(-\int_{T-t}^T(\mbox{div}_z\widetilde{f}\,)\big(\zeta^R(s\,;\xi(y)),T-s\big)ds\bigg) \,\varphi(y)\, dy\\
=&\int_{\mathbb{R}^d} e^{-\beta\frac{|z'-z(0)|^2}{\tau}} \bigg[ \int_{\{y\,|\,\zeta^R(T\,;\,\xi(y))=z'\}} e^{-\beta V(y)}\,\varphi(y)\, \exp\bigg(-\int_{T-t}^T(\mbox{div}_z\widetilde{f}\,)\big(\zeta^R(s\,;\xi(y)),T-s\big)ds\bigg)\\
&\hspace{3cm} \times \Big[\det\Big(\big(\nabla \zeta^R(T\,;\xi(y))\big)^T\nabla\zeta^R(T\,;\xi(y))\Big)\Big]^{-\frac{1}{2}} \nu^R_{z'}(dy)\bigg]\,dz'\,,
\end{split}
\label{thm-jarzynski-coordinate-eqn2-lhs-coarea}
\end{align}
where $\nu^R_{z'}$ is the volume measure on the level set $\big\{y\in\mathbb{R}^n\,|\,\zeta^R(T\,;\xi(y))=z'\big\}$, and $\nabla\zeta^R(s\,;\xi(y))$ denotes the $n\times d$ matrix with components $\big(\nabla\zeta^R(s\,;\xi(y))\big)_{i\gamma}=\frac{\partial \zeta^R_\gamma(s\,;\,\xi(y))}{\partial y_i}$, for $s \in [T-t,T]$, $1 \le \gamma \le d$ and $1\le i \le n$. To simplify the above expressions, let $\nabla_z\zeta^R(s\,;z)$ denote the $d\times d$ matrix with components $(\nabla_z\zeta^R(s\,;z))_{ij} = \frac{\partial\zeta^R_i(s\,;\,z)}{\partial z_j}$ for $1 \le i, j \le d$, i.e., the derivatives with respect to the initial value at time $T-t$. Furthermore, since $\zeta^R(T\,;\cdot)$ is invertible, we can deduce that $\zeta^R(s\,;\cdot)$ is invertible for all $s\in [T-t,T]$, which then implies that the matrix $\nabla_z \zeta^R(s\,;z)$ has full rank for $s\in [T-t,T]$. Applying the chain rule, we have $\nabla \zeta^R(s\,;\xi(y)) = \nabla\xi\nabla_z\zeta^R(s\,;\xi(y))$ and therefore
\begin{align*}
\Big[ \det\Big(\big(\nabla \zeta^R(T\,;\xi(y))\big)^T\nabla\zeta^R(T\,;\xi(y))\Big)\Big]^{-\frac{1}{2}} = \Big[\det \Big(\nabla_z\zeta^R(T\,;\xi(y))\Big)\Big]^{-1} \Big[\det\big(\nabla\xi^T\nabla\xi\big)(y)\Big]^{-\frac{1}{2}}\,.
\end{align*}
Combining the above identity with equation (\ref{thm-jarzynski-coordinate-eqn2-lhs-coarea}) and applying Lemma~\ref{lemma-det} below, we find that equation (\ref{thm-jarzynski-coordinate-eqn2-simplify}) simplifies to
\begin{align}
\begin{split}
&\frac{1}{Z_\tau}\int_{\mathbb{R}^n}\,e^{-\beta \big(V(y) + \frac{|\xi(y)-z(0)|^2}{\tau}\big)}\,\mathbf{E}_{y,0}\Big[e^{-\beta W(t)} \varphi(y(t)) \Big]\, dy\\
=&\frac{\Big(\frac{\pi\tau}{\beta}\Big)^{\frac{d}{2}}}{Z_\tau}\Big(\frac{\beta}{\pi\tau}\Big)^{\frac{d}{2}}\int_{\mathbb{R}^d} e^{-\beta\frac{|z'-z(0)|^2}{\tau}} \bigg[ \int_{\{y\,|\,\zeta^R(T\,;\,\xi(y))=z'\}} e^{-\beta V(y)}\,\varphi(y)\Big[\det\big(\nabla \xi^T\nabla\xi\big)\Big]^{-\frac{1}{2}} \nu^R_{z'}(dy)\bigg]\,dz'\,,
\end{split}
\end{align}
where $Z_\tau=\int_{\mathbb{R}^n} e^{-\beta \big(V(y)+\frac{|\xi(y)-z(0)|^2}{\tau}\big)} dy$ is the normalization constant. Letting $\tau\rightarrow 0$ and applying~\cite[Proposition $3$]{zhang2017}, we obtain
\begin{align}
\begin{split}
&\int_{\Sigma_{z(0)}}\,\mathbf{E}_{y,0}\Big[e^{-\beta W(t)} \varphi(y(t)) \Big]\, \mu_{z(0)}(dy)\\
=& \frac{1}{Q(z(0))} \int_{\big\{y\,\big|\,\zeta^R(T\,;\,\xi(y))=z(0)\big\}} e^{-\beta V(y)}\,\varphi(y)\Big[\det\big(\nabla \xi^T\nabla\xi\big)\Big]^{-\frac{1}{2}} \nu^R_{z(0)}(dy)\,,
\end{split}
\label{tau-limit-equality}
\end{align}
where $Q(\cdot)$ is the normalization constant in (\ref{mu-z}).
Since the inverse of the map $\zeta^R(T\,;\cdot)$ is $\zeta(t\,;\cdot)$, we know
$$\big\{y\in\mathbb{R}^n\,\big|\, \zeta^R(T\,;\xi(y))=z(0)\big\}= \big\{y\in\mathbb{R}^n\,\big|\,\xi(y)=\zeta(t\,;z(0)) = z(t)\big\}=\Sigma_{z(t)}\,,$$
and therefore (\ref{tau-limit-equality}) becomes
\begin{align}
\begin{split}
\int_{\Sigma_{z(0)}}\,\mathbf{E}_{y,0}\Big[e^{-\beta W(t)} \varphi(y(t)) \Big]\, \mu_{z(0)}(dy) = \frac{Q(z(t))}{Q(z(0))} \int_{\Sigma_{z(t)}} \varphi(y) \mu_{z(t)}(dy)\,,
\end{split}
\end{align}
which is equivalent to the identity (\ref{generalized-jarzynski-coordinate-varphi}).
\end{proof}
We have used the following result in the above proof.
\begin{lemma}
Let $\zeta^R(s\,;z)$ be the solution of the ODE (\ref{zt-ode-r}) for $s\in [T-t,T]$, starting from $z\in\mathbb{R}^d$ at time $s=T-t$, and let $\nabla_z\zeta^R(s\,;z)$ denote the $d\times d$ matrix with $(\nabla_z\zeta^R(s\,;z))_{ij} = \frac{\partial\zeta^R_i(s\,;\,z)}{\partial z_j}$ for $1 \le i,j \le d$ and $T-t \le s \le T$. Suppose that $\nabla_z\zeta^R(s\,;z)$ is invertible for $T-t \le s \le T$; then we have
\begin{align}
\det\Big(\nabla_z\zeta^R(s\,;z)\Big) = e^{-\int_{T-t}^s (\mbox{\normalfont{div}}_z\widetilde{f}\,)(\zeta^R(s'\,;\,z),T-s')\,ds'}\,,\quad s \in [T-t,T]\,.
\label{lemma-det-formula}
\end{align}
\label{lemma-det}
\end{lemma}
\begin{proof}
Differentiating both sides of the ODE (\ref{zt-ode-r}) with respect to $z$, we obtain the matrix equation
\begin{align}
\frac{d\big(\nabla_z\zeta^R(s\,;z)\big)}{ds} = - \nabla_z\zeta^R(s\,;z)\,\nabla_z \widetilde{f}(\zeta^R(s\,;z),T-s)\,,\quad s \in [T-t, T]\,,
\end{align}
with the initial condition $\nabla_z\zeta^R(T-t\,;z) = \mbox{id}$. Applying Jacobi's formula, we know that the determinant of $\nabla_z\zeta^R(s\,;z)$ satisfies
\begin{align*}
& \frac{d\big[\det\big(\nabla_z \zeta^R(s\,;z)\big)\big]}{ds}\\
=& \det\big(\nabla_z\zeta^R(s\,;z)\big)\,\mbox{tr}\bigg(\Big(\nabla_z\zeta^R(s\,;z)\Big)^{-1} \frac{d\big(\nabla_z\zeta^R(s\,;z)\big)}{ds}\bigg) \\
=& -\det\big(\nabla_z\zeta^R(s\,;z)\big)\,\mbox{tr}\Big( \nabla_z \widetilde{f}(\zeta^R(s\,;z),T-s) \Big) \\
=& -\det\big(\nabla_z\zeta^R(s\,;z)\big)\,\big(\mbox{div}_z\widetilde{f}\,\big)(\zeta^R(s\,;z),T-s)\,.
\end{align*}
The expression (\ref{lemma-det-formula}) is obtained by integrating the above equation.
\end{proof}
\begin{remark}
\begin{enumerate}
\item In the special case when the reaction coordinate $\xi \in \mathbb{R}$ is scalar and the matrices $a=\sigma=\mbox{id}$, we have $\Psi = |\nabla \xi|^2$ and it can be checked that the work (\ref{w-coordinate-jarzynski}) becomes
\begin{align}
\begin{split}
W(t) =& \int_{0}^{t} \bigg[ \frac{\nabla \xi}{|\nabla \xi|^2} \cdot \nabla V - \frac{1}{\beta} \mbox{div}\Big(\frac{\nabla \xi}{|\nabla\xi|^2}\Big)\bigg] \dot{z}(s)\,ds \\
=&\int_{0}^{t} \frac{\nabla \xi}{|\nabla \xi|^2} \cdot \Big[\nabla \Big(V + \frac{1}{\beta} \ln |\nabla \xi|\Big) + \frac{1}{\beta} H\Big]\,\dot{z}(s)\,ds \,,
\end{split}
\label{w-coordinate-jarzynski-special}
\end{align}
where $H=-\mbox{div}\Big(\frac{\nabla \xi}{|\nabla \xi|}\Big)\frac{\nabla\xi}{|\nabla \xi|}$ is the mean curvature vector (field) of the surface $\Sigma_{z}$~\cite{LELIEVRE2007}. Notice that the free energy (\ref{free-energy-coordinate}) is different from the one considered in~\cite{LELIEVRE2007}.
In fact, from the second expression in (\ref{w-coordinate-jarzynski-special}), we see that Theorem~\ref{thm-jarzynski-coordinate} is identical to the Feynman-Kac fluctuation equality theorem of~\cite{LELIEVRE2007} for the potential $V + \frac{1}{2\beta} \ln (\mbox{\textnormal{det}}\,\Psi)$.
\item As in the alchemical transition case, one can also study the escorted dynamics and Crooks's relations in the reaction coordinate case. For simplicity, we will omit the discussion of the escorted dynamics and only briefly summarize Crooks's relations. In fact, by modifying the proof of Theorem~\ref{thm-jarzynski-coordinate}, we can show that
\begin{align}
\frac{\mathbf{E}(e^{-\beta W} \mathcal{G})}{\mathbf{E}^R(\mathcal{G}^R)} = e^{-\beta \Delta F(T)}\,,
\end{align}
for any bounded smooth function $\mathcal{G}$ on the path space, where $W=W(T)$ is the work in (\ref{w-coordinate-jarzynski}), $\mathcal{G}^R(y(\cdot)) = \mathcal{G}(y(T-\cdot))$ for any path $y(\cdot)$, and $\mathbf{E}$, $\mathbf{E}^R$ are the expectations with respect to the process $y(\cdot)$ in (\ref{dynamics-f}) starting from $y(0)\sim \mu_{z(0)}$ on $\Sigma_{z(0)}$ and the process $y^R(\cdot)$ in (\ref{dynamics-f-reversed}) starting from $y^R(0) \sim \mu_{z(T)}$ on $\Sigma_{z(T)}$, respectively. In particular, this implies
\begin{align}
\frac{\mathbf{E} \big(e^{-\beta W}\phi(W)\big)}{\mathbf{E}^{R}(\phi(-W^R))} = e^{-\beta \Delta F(T)}\,,\quad \forall~ \phi \in C_b(\mathbb{R})\,,
\label{crooks-eps0-2-coordinate}
\end{align}
where $W^R$ is the work for the time-reversed process $y^R(\cdot)$ in (\ref{dynamics-f-reversed}). We refer to Remark~\ref{rmk-crook-jarzynski-eps0} for a comparison.
\item As in the alchemical transition case, by considering the Jarzynski-like equality~(\ref{generalized-jarzynski-coordinate}) for the dynamics
\begin{align}
\begin{split}
dy_i(s) = & - \frac{1}{\tau} (Pa)_{ij} \frac{\partial V}{\partial y_j}\,ds + \frac{1}{\beta\tau} \frac{\partial (Pa)_{ij}}{\partial y_j}\,ds + (\Psi^{-1})_{\gamma\gamma'} (a\nabla\xi_\gamma)_i\,f_{\gamma'}\,ds \\
&+ \sqrt{\frac{2\beta^{-1}}{\tau}}\, (P\sigma)_{ij}\, dw_{j}(s)\,,
\end{split}
\label{dynamics-f-tau}
\end{align}
as $\tau\rightarrow 0$, we can recover the thermodynamic integration identity in the reaction coordinate case. See Appendices~\ref{app-1} and \ref{app-2} for details.
\end{enumerate}
\label{rmk-2}
\end{remark}
\subsection{Information-theoretic formulation and numerical considerations}
\label{subsec-info-the-ce-coordinate}
In this subsection, we study the information-theoretic formulation of the Jarzynski-like equality (\ref{generalized-jarzynski-coordinate}) in the reaction coordinate setting. Numerical issues related to computing free energy differences will be discussed as well. Since the analysis is similar to Subsection~\ref{subsec-is} and Subsection~\ref{subsec-ce}, the discussion in this subsection will be brief, focusing mainly on the necessary changes. First of all, let $\mathbf{P}$, $\mathbf{E}$ denote the probability measure and the expectation of the path ensemble corresponding to the dynamics (\ref{dynamics-f}) starting from $y(0)\sim \mu_{z(0)}$, with the function $f$ given in~(\ref{f-f-tilde}). We can rewrite the equality (\ref{generalized-jarzynski-coordinate}) as
\begin{align}
\Delta F = - \beta^{-1} \ln \mathbf{E}\,\big( e^{-\beta W}\big)\,,
\label{df-jarzynski-coordinate}
\end{align}
where $\Delta F = F(z(T)) - F(z(0))$ is the free energy difference and $W=W(T)$ is defined in (\ref{w-coordinate-jarzynski}).
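As an illustration of (\ref{df-jarzynski-coordinate}), the following snippet sketches a crude Monte Carlo estimation in the special case $\xi(y)=|y|$, $a=\sigma=\mathrm{id}$, $d=1$ of Remark~\ref{rmk-2}, where $P=\mathrm{id}-yy^T/|y|^2$ and the work takes the scalar form (\ref{w-coordinate-jarzynski-special}). The potential, the protocol $z(s)=1+s$ and the per-step renormalization that keeps the discretized trajectories on the level sets are illustrative choices, not part of the theory.
\begin{verbatim}
import numpy as np

# Crude sketch for xi(y) = |y| in R^n, a = sigma = id, d = 1, protocol
# z(s) = 1 + s (so f = zdot = 1) and V(y) = |y - e_1|^2 / 2. All numerical
# choices are illustrative; the renormalization after each Euler step is a
# practical correction keeping |y(s)| = z(s), not part of the exact dynamics.
beta, T, n_steps, n_traj, n = 1.0, 1.0, 500, 2000, 3
dt = T / n_steps
rng = np.random.default_rng(4)
e1 = np.zeros(n); e1[0] = 1.0

# y(0) ~ mu_{z(0)}: density prop. to e^{-beta V} on the unit sphere, where
# V = 1 - y_1; sampled here by acceptance-rejection from the uniform measure
pts = []
while len(pts) < n_traj:
    p = rng.standard_normal(n); p /= np.linalg.norm(p)
    if rng.uniform() < np.exp(-beta * (1.0 - p[0])):
        pts.append(p)
y = np.array(pts)

W = np.zeros(n_traj)
for k in range(n_steps):
    r = np.linalg.norm(y, axis=1, keepdims=True)  # current radius z(s)
    u = y / r                                     # unit normal = grad(xi)
    gV = y - e1                                   # grad V
    # work increment [ u . grad V - (n-1)/(beta |y|) ] * zdot * dt
    W += ((u * gV).sum(axis=1) - (n - 1.0) / (beta * r[:, 0])) * dt
    noise = rng.standard_normal((n_traj, n))
    Pnoise = noise - (noise * u).sum(axis=1, keepdims=True) * u
    PgV = gV - (gV * u).sum(axis=1, keepdims=True) * u
    # projected drift + Ito correction + normal motion u * zdot + noise
    y = y + (-PgV - (n - 1.0) / (beta * r) * u + u) * dt \
          + np.sqrt(2.0 * dt / beta) * Pnoise
    y *= (1.0 + (k + 1) * dt) / np.linalg.norm(y, axis=1, keepdims=True)

print("F(z(T)) - F(z(0)) approx",
      -np.log(np.mean(np.exp(-beta * W))) / beta)
\end{verbatim}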
Let $\overline{\mathbf{P}}$ be another probability measure on the path space which is equivalent to $\mathbf{P}$, and let $\overline{\mathbf{E}}$ denote the corresponding expectation. Applying a change of measure in (\ref{df-jarzynski-coordinate}), we have
\begin{align}
\Delta F = - \beta^{-1} \ln \overline{\mathbf{E}}\,\Big( e^{-\beta W}\frac{d\mathbf{P}}{d\overline{\mathbf{P}}}\Big)\,.
\label{df-jarzynski-coordinate-change-measure}
\end{align}
Following the same arguments as in Subsection~\ref{subsec-is}, we can deduce exactly the same inequality (\ref{df-w-ineq}), as well as the expression for the optimal measure $\mathbf{P}^*$, which is characterized by (\ref{opt-p}), such that the Monte Carlo estimator based on (\ref{free-energy-optimal-estimator}) achieves zero variance. The derivations (\ref{phi-cost}), (\ref{variation-form}), (\ref{entropy-exp}) in Subsection~\ref{subsec-is} carry over to the current setting as well. On the other hand, since the trajectories of the dynamics (\ref{dynamics-f}) satisfy $\xi(y(t)) = z(t)$ for $t \in [0,T]$, it is important to notice that the probability measure $\mathbf{P}$ concentrates on the set of paths
\begin{align}
\Big\{y(\cdot)\,\Big|\, y(\cdot) \in C([0,T], \mathbb{R}^n), ~ y(t) \in \Sigma_{z(t)}, ~0 \le t \le T\Big\}\,.
\label{path-subset-coordinate}
\end{align}
Accordingly, the probability measure $\overline{\mathbf{P}}$ used to perform the change of measure in (\ref{df-jarzynski-coordinate-change-measure}) should also concentrate on the set (\ref{path-subset-coordinate}), in order to ensure that it is equivalent to $\mathbf{P}$. The optimal measure $\mathbf{P}^*$ can be characterized more transparently by considering the HJB equation. Specifically, define
\begin{align}
g(y,t) = \mathbf{E}\Big(e^{-\beta W_{(t,T)}}~\Big|~y(t) = y\Big)\,,\quad \forall\,y \in \Sigma_{z(t)}\,,
\end{align}
where $y(\cdot)$ satisfies (\ref{dynamics-f}) and $W_{(t,T)}$ is defined as in (\ref{w-coordinate-jarzynski}) except that the integral runs from $t$ to $T$. It follows from the Feynman-Kac formula that $g$ satisfies
\begin{align}
\begin{split}
&\partial_t g + \mathcal{L} g -\beta \Big[(\Psi^{-1})_{\gamma\gamma'} (a\nabla \xi_\gamma)_i \frac{\partial V}{\partial y_i} - \frac{1}{\beta} \frac{\partial}{\partial y_i} \Big((\Psi^{-1})_{\gamma\gamma'} (a\nabla \xi_\gamma)_i \Big)\Big]\, f_{\gamma'} g = 0 \,, \\
&g(\cdot, T) = 1\,,
\end{split}
\end{align}
where $\mathcal{L}$ is the infinitesimal generator defined in (\ref{l-coordinate}) for the process $y(\cdot)$.
A simple calculation then shows that $U=-\beta^{-1}\ln g$ satisfies the HJB equation
\begin{align}
\begin{split}
& \partial_t U + \min_{c \in \mathbb{R}^n} \Big\{\mathcal{L}U + (P\sigma c)\cdot \nabla U + \frac{|c|^2}{4} \\
&\hspace{2.2cm}+ \Big[(\Psi^{-1})_{\gamma\gamma'} (a\nabla \xi_\gamma)_i \frac{\partial V}{\partial y_i} - \frac{1}{\beta} \frac{\partial}{\partial y_i} \Big((\Psi^{-1})_{\gamma\gamma'} (a\nabla \xi_\gamma)_i \Big)\Big]\, f_{\gamma'}\Big\} = 0\,, \\
& U(\cdot, T) = 0\,,
\end{split}
\end{align}
from which we conclude that the optimally controlled dynamics satisfies
\begin{align}
\begin{split}
dy_i(s) = & - (Pa)_{ij} \frac{\partial V}{\partial y_j}\,ds + \frac{1}{\beta} \frac{\partial (Pa)_{ij}}{\partial y_j}\,ds + (\Psi^{-1})_{\gamma\gamma'} (a\nabla\xi_\gamma)_i\,f_{\gamma'}\,ds \\
& + \big[P\sigma u^*_s(y(s))\big]_i\,ds + \sqrt{2\beta^{-1}}\, (P\sigma)_{ij}\, dw_{j}(s)\,, \quad 1 \le i \le n\,,
\end{split}
\label{dynamics-f-optimal-controlled}
\end{align}
where the optimal feedback control is $u^*_s(y) = -2(P\sigma)^T\nabla U$ and the initial condition is drawn from the distribution $\mu_0^*$ determined by $\frac{d\mu_0^*}{d\mu_{z(0)}} \propto g(\cdot, 0)$.
\textbf{Cross-entropy method.} In the following, we briefly discuss the cross-entropy method following Subsection~\ref{subsec-ce}. Consider a family of parameterized probability measures $\{\mathbf{P}_{\bm{\omega}}\,|\,\bm{\omega} \in \mathbb{R}^k\}$, where, for given $\bm{\omega} = (\omega_1, \omega_2, \cdots, \omega_k)^T \in \mathbb{R}^k$, $\mathbf{P}_{\bm{\omega}}$ is the probability measure of paths corresponding to the dynamics
\begin{align}
\begin{split}
dy_i(s) = & - (Pa)_{ij} \frac{\partial V}{\partial y_j}\,ds + \frac{1}{\beta} \frac{\partial (Pa)_{ij}}{\partial y_j}\,ds + (\Psi^{-1})_{\gamma\gamma'} (a\nabla\xi_\gamma)_i\,f_{\gamma'}\,ds \\
& + (P\sigma)_{ij} \Big(\sum_{l=1}^k \omega_l \phi^{(l)}_j\Big) ds + \sqrt{2\beta^{-1}}\, (P\sigma)_{ij}\, dw_{j}(s)\,, \quad 1 \le i \le n\,,
\end{split}
\label{dynamics-f-omega}
\end{align}
where $\phi^{(l)} = (\phi^{(l)}_1, \phi^{(l)}_2, \cdots, \phi^{(l)}_n)^T : \mathbb{R}^n \times [0, T]\rightarrow \mathbb{R}^n$ are $k$ ansatz functions, $1 \le l \le k$. As a special choice, we consider $\phi^{(l)}=-\sigma^T\nabla V^{(l)}$, where $V^{(l)} : \mathbb{R}^n \rightarrow \mathbb{R}$, $1 \le l \le k$, are smooth and linearly independent potential functions, with which (\ref{dynamics-f-omega}) becomes
\begin{align}
\begin{split}
dy_i(s) = & - (Pa)_{ij} \frac{\partial \big(V + \sum_{l=1}^k \omega_l V^{(l)}\big)}{\partial y_j}\,ds + \frac{1}{\beta} \frac{\partial (Pa)_{ij}}{\partial y_j}\,ds \\
& + (\Psi^{-1})_{\gamma\gamma'} (a\nabla\xi_\gamma)_i\,f_{\gamma'}\,ds + \sqrt{2\beta^{-1}}\, (P\sigma)_{ij}\, dw_{j}(s)\,, \quad 1 \le i \le n\,,
\end{split}
\label{dynamics-f-omega-vl}
\end{align}
i.e., paths are sampled with the modified potential function $V + \sum\limits_{l=1}^k \omega_l V^{(l)}$. Applying Ito's formula as in (\ref{xi-dt-coordinate}), we can verify that trajectories of the dynamics (\ref{dynamics-f-omega}), starting from $y(0) \in \Sigma_{z(0)}$, satisfy $\xi(y(t)) = z(t)$ for $t \in [0, T]$ as well. Therefore, the probability measures $\mathbf{P}_{\bm{\omega}}$ indeed concentrate on the set (\ref{path-subset-coordinate}).
Applying Girsanov's theorem, we obtain \begin{align} \frac{d\mathbf{P}_{\boldsymbol{\omega}}}{d\mathbf{P}} = \exp\bigg[\sqrt{\frac{\beta}{2}} \int_0^T \Big(\sum_{l=1}^k \omega_l\phi^{(l)}\Big)\cdot dw(s) - \frac{\beta}{4} \int_0^T \Big|\sum_{l=1}^k \omega_l\phi^{(l)}\Big|^2\, ds\bigg]\,, \label{girsanov-ce-coordinate} \end{align} where $w(s)$ is the Brownian motion in the original dynamics (\ref{dynamics-f}) (i.e., under the probability measure $\mathbf{P}$). Following the same argument as in Subsection~\ref{subsec-ce}, we know that the minimizer of the optimization problem (\ref{mini-omega-problem}) is given by the unique solution of the linear equation $A\bm{\omega}^*=R$, where \begin{align} \begin{split} A_{ll'} = \mathbf{E} \bigg(e^{-\beta W} \int_0^T \phi^{(l)}\cdot \phi^{(l')}\, ds\bigg)\,, \quad R_{l} = \sqrt{2\beta^{-1}}\mathbf{E} \bigg[e^{-\beta W} \int_0^T \phi^{(l)} \cdot dw(s) \bigg]\,, \end{split} \label{a-r-coeff-coordinate} \end{align} for $1 \le l, l' \le k$. \textbf{Variance reduction by increasing mixing.} In practice, however, due to the complicated expressions of the work $W$ in (\ref{w-coordinate-jarzynski}) or (\ref{w-coordinate-jarzynski-special}), it is difficult to find intuitive guidance for the choice of ansatz functions, which play a crucial role in the cross-entropy method above. In the following, we briefly discuss another idea that can be explored in order to reduce the variance in the free energy calculation based on the Jarzynski-like identity. Unlike the importance sampling method, which improves the efficiency of the Monte Carlo method by increasing the sampling frequency of paths with small work, the idea here, which is inspired by the analysis in Appendix~\ref{app-1} and Appendix~\ref{app-2}, is to compute free energy differences based on trajectories of the dynamics (\ref{dynamics-f-tau}) with a small $\tau$ (a similar idea has also been investigated in~\cite{efficient-free-energy-calculation-2000,fast-growth-method-jarzynski2001}). The observation is that the standard Monte Carlo estimator based on the Jarzynski-like identity typically samples trajectories with large work (and therefore has low efficiency) because the nonequilibrium dynamics does not have enough time to equilibrate under the nonequilibrium force. Therefore, by decreasing $\tau$ in (\ref{dynamics-f-tau}), the mixing of the ``equilibrium part'' of the nonequilibrium system becomes faster at each fixed nonequilibrium force. Numerically, the work $W$ of the sampled trajectories is likely to be both smaller and more concentrated. From the analysis in Appendix~\ref{app-1} and Appendix~\ref{app-2}, we know that the free energy calculation method based on the Jarzynski-like identity (\ref{generalized-jarzynski-coordinate}) reduces to the thermodynamic integration method when $\tau\rightarrow 0$. In practice, $\tau$ should not be chosen too small, since otherwise the system becomes stiffer and a smaller time step-size has to be used in the numerical integration. Readers are referred to Subsection~\ref{subsec-ex2} for a numerical study of free energy calculation using different values of $\tau$. \section{Numerical examples} \label{sec-examples} We consider two simple examples and study the efficiency of Monte Carlo methods for free energy computation.
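Before turning to the examples, we record a minimal Python sketch of the cross-entropy step used below, namely estimating the quantities in (\ref{a-r-coeff-coordinate}) from sampled trajectories and solving $A\bm{\omega}^*=R$. The per-trajectory time integrals and stochastic integrals are assumed to have been accumulated during the simulation (the array names are placeholders):
\begin{verbatim}
import numpy as np

def cross_entropy_coefficients(work, phi_dot_phi, phi_dot_dw, beta):
    """Solve the linear system A omega* = R of the cross-entropy method.

    work        : (N,)     work values W_i of the sampled trajectories
    phi_dot_phi : (N,k,k)  integrals int_0^T phi^(l) . phi^(l') ds
    phi_dot_dw  : (N,k)    integrals int_0^T phi^(l) . dw
    """
    weights = np.exp(-beta * work)                    # e^{-beta W_i}
    A = np.einsum('i,ilm->lm', weights, phi_dot_phi) / len(work)
    R = np.sqrt(2.0 / beta) * (weights[:, None] * phi_dot_dw).mean(axis=0)
    return np.linalg.solve(A, R)
\end{verbatim}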
\subsection{Example $1$: 1D example in alchemical transition case} \label{subsec-ex1} In this example, we consider one-dimensional potentials \begin{align} V(x, \lambda) = (1-\lambda)\frac{(x + 1)^2}{2} + \lambda \Big(\frac{(x^2-1)^2}{4}-0.4x\Big)\,, \label{pot-v-ex1} \end{align} where $x\in \mathbb{R}$ and $\lambda \in [0,1]$. As $\lambda$ increases from $0$ to $1$, $V(\cdot, \lambda)$ varies from a quadratic potential centered at $x=-1$ to a tilted double well potential (Figure~\ref{sub-fig-pot-all}). Recalling the free energy $F$ defined in (\ref{free-energy}), (\ref{normal-const}), we will compute free energy differences $\Delta F(\lambda) = F(\lambda) - F(0)$, using Monte Carlo based on Jarzynski's identity (\ref{jarzynski-repeat}). We fix $\beta = 5.0$ and the SDE \begin{align} dx(s) = -\frac{\partial V}{\partial x}(x(s), \lambda(s))\, ds + \sqrt{2\beta^{-1}} dw(s)\,, \label{sde-ex1} \end{align} with control protocol $\lambda(s) = s$, $s \in [0,1]$, will be considered in the Monte Carlo simulations. Clearly, for the initial distribution $\mu_0=\mu_{\lambda(0)}$, we have $\frac{d\mu_0}{dx} \propto \exp\big(-\beta \frac{(x+1)^2}{2}\big)$. In fact, since the problem is one dimensional in space, we can directly compute the normalization constant $Z(\lambda)$ by numerically integrating (\ref{normal-const}) and therefore obtain the free energy differences $\Delta F(\lambda)$, which are shown in Figure~\ref{subfig-df-cureve}. In particular, we obtain $\Delta F(1) = F(1) - F(0)= -3.44 \times 10^{-1}$ and this will be our reference solution. Furthermore, we can also approximate the optimal change of measure $\mathbf{P}^*$ in (\ref{opt-p-decomp}) by computing the optimal control force $u^*$ and the optimal initial distribution $\mu_0^*$ according to (\ref{opt-u}), (\ref{opt-mu0}), respectively. For this purpose, we need to compute the function $g(x,t) = \mathbf{E}_{x,t} \big(e^{-\beta W_{(t,T)}}\big)$ in (\ref{g-fun-repeat}) which satisfies (\ref{g-pde-repeat}). Notice that, in the current setting, we have $T=1$ and (\ref{g-pde-repeat}) becomes \begin{align} \begin{split} &\frac{\partial g}{\partial t} - \frac{\partial V}{\partial x} \frac{\partial g}{\partial x} + \frac{1}{\beta} \frac{\partial^2 g}{\partial x^2} -\beta \big(V(x,1) - V(x,0)\big) g = 0\,,\quad 0 \le t < 1\,,\\ & g(\cdot, 1) = 1\,. \end{split} \label{g-fun-ex1} \end{align} To compute $g$, we truncate the space of $(x,t)$ to $[-5.0, 5.0] \times [0,1]$ and discretize the PDE (\ref{g-fun-ex1}) on a uniform grid of size $10000\times 10000$, following an approach similar to the one described in \cite{Hartmann2017-ptrf,zhangs-schuette-entropy-2017}. The solution $g$ is obtained by solving the discretized system backward in time from $t=1$ to $t=0$. The function $U=-\beta^{-1}\ln g$ is displayed in Figure~\ref{subfig-log-g-2d} and the profile of $g(\cdot, 0)$ at $t=0$ is shown in Figure~\ref{subfig-g-t0}. Based on these results, we can obtain the optimal control potentials (which are $V+2U$ according to (\ref{dynamics-1-u}) and (\ref{opt-u})) and the optimal initial distribution $\mu_0^*$. These results are shown in Figure~\ref{sub-fig-opt-pot-lam}, Figure~\ref{subfig-u-t0} and Figure~\ref{subfig-start-mu}, respectively.
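A much coarser but self-contained version of this PDE solve can be written in a few lines. The Python sketch below marches (\ref{g-fun-ex1}) backward in time with an explicit finite-difference scheme; it is only meant to illustrate the procedure and does not reproduce the resolution of the grid used for the results reported here:
\begin{verbatim}
import numpy as np

beta = 5.0
x = np.linspace(-5.0, 5.0, 401)           # much coarser than 10000 points
dx = x[1] - x[0]
dt = 1e-5                                 # explicit scheme: dt << beta*dx^2/2

def V(xv, lam):
    return (1-lam)*(xv+1)**2/2 + lam*((xv**2-1)**2/4 - 0.4*xv)

dV = V(x, 1.0) - V(x, 0.0)                # source term V(x,1) - V(x,0)

g = np.ones_like(x)                       # terminal condition g(., 1) = 1
t = 1.0
for _ in range(int(1.0/dt)):
    Vx = (1-t)*(x+1) + t*(x**3 - x - 0.4) # dV/dx at lambda(t) = t
    gx = np.gradient(g, dx)
    gxx = np.gradient(gx, dx)
    # backward-in-time explicit Euler step for (g-fun-ex1)
    g = g + dt*(-Vx*gx + gxx/beta - beta*dV*g)
    g[0], g[-1] = g[1], g[-2]             # crude zero-flux boundaries
    t -= dt
# g now approximates g(., 0); U = -ln(g)/beta
\end{verbatim}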
In particular, combining the expression (\ref{opt-mu0}) with Figure~\ref{subfig-g-t0} and Figure~\ref{subfig-start-mu}, it can be observed that, due to the strong inhomogeneity of $g(\cdot, 0)$, the high probability density region of the optimal initial distribution $\mu_0^*$ is shifted along the positive $x$ axis and has little overlap with that of the distribution $\mu_0$. We now turn to the performance of the Monte Carlo methods. First of all, we apply the standard Monte Carlo method to estimate free energy differences. SDE (\ref{sde-ex1}) is discretized with time step-size $\Delta s = 5 \times 10^{-4}$ and we repeat the simulation $10$ times. For each independent run, the estimator \begin{align} \mathcal{I}(\lambda) = \frac{1}{N} \sum_{i=1}^N e^{-\beta W_i(\lambda)} \label{estimator-stdmc-ex1} \end{align} is computed by generating $N = 5 \times 10^{5}$ trajectories of dynamics (\ref{sde-ex1}) starting from $\mu_0$, where $W_i(\lambda)$ is the numerical approximation of (\ref{work-w-special}) on $[0, \lambda]$ for the $i$th trajectory. The free energy differences are then estimated by \begin{align} \Delta F(\lambda) \approx -\beta^{-1}\ln \mathcal{I}(\lambda)\,, \label{df-ex1} \end{align} which is asymptotically unbiased when $N \rightarrow +\infty$. The results are summarized in Figure~\ref{subfig-df-cureve}, Figure~\ref{subfig-multi-run} as well as in the last row of Table~\ref{tab-1}. We observe that the estimates of the free energy differences fluctuate strongly within the $10$ runs and that the standard Monte Carlo estimator (\ref{estimator-stdmc-ex1}) has a very large (sample) standard deviation. Since the initial distribution $\mu_0$ is in fact very different from the optimal initial distribution $\mu_0^*$, we have also used the probability measure $\bar{\mu}_0$, which is given by $\frac{d\bar{\mu}_0}{dx} \propto \exp\big(-\beta \frac{(x-0.5)^2}{2}\big)$, as the initial distribution in importance sampling Monte Carlo methods. From the profiles of their probability density functions in Figure~\ref{subfig-start-mu}, we expect that the importance sampling Monte Carlo estimators using $\bar{\mu}_0$ will have better performance than estimators using $\mu_0$. Besides the change of measure in the initial distribution, the controlled dynamics \begin{align} dx(s) = -\frac{\partial V}{\partial x}(x(s), \lambda(s))\, ds + \sum_{l=1}^k\omega_l\phi^{(l)}(x(s), s)\, ds + \sqrt{2\beta^{-1}}\, dw(s) \label{sde-ex1-control} \end{align} is used to generate trajectories instead of dynamics (\ref{sde-ex1}), which leads to a further change of measure on path space. In (\ref{sde-ex1-control}), $\phi^{(l)}$ are ansatz functions which we choose to be either piecewise linear functions or Gaussian functions~\cite{Hartmann2016-Nonlinearity}. In the case of piecewise linear ansatz functions, we divide the domain $[-1.3, 1.3]$ uniformly into $30$ Voronoi cells $\mathcal{C}_l$ and the ansatz functions are defined as $\phi^{(l)}(x,t) = (1-t)\mathbf{1}_{\mathcal{C}_l}(x)$, $1 \le l \le 30$, where $\mathbf{1}_{\mathcal{C}_l}$ denotes the characteristic function of cell $\mathcal{C}_l$. In the case of Gaussian ansatz functions, we choose two functions $\phi^{(l)}(x,t) = \frac{\partial V^{(l)}}{\partial x}(x,t)$, where $l=1,2$ and \begin{align} V^{(1)}(x,t) = (1-t) \exp\Big(-\frac{x^2}{2}\Big)\,,\quad V^{(2)}(x,t) = (1-t)\exp\Big(-\frac{(x-1.2)^2}{4.5}\Big)\,.
\label{gauss-ansatz-ex1} \end{align} In both cases, the ansatz functions are chosen based on the idea discussed in Subsection~\ref{subsec-ce} and the dependence on time $t$ is included since we know that the optimal control force, which is proportional to $\frac{\partial g}{\partial x}$, vanishes at time $t=1$, due to the Dirichlet boundary condition in (\ref{g-fun-ex1}). After these preparations, we apply the cross-entropy method discussed in Subsection~\ref{subsec-ce} to optimize the coefficients $\omega_l$ in (\ref{sde-ex1-control}) by simulating $10^5$ trajectories. The control forces at time $t=0$, as well as the control potentials in the Gaussian ansatz case, are shown in Figure~\ref{subfig-u-t0} and Figure~\ref{subfig-gauss-pot}, respectively. Although the control forces are different from the optimal one, all of them help drive the system along the positive $x$ axis. As in the standard Monte Carlo case, we estimate the free energy differences using the importance sampling Monte Carlo method $10$ times, where $N= 5\times 10^5$ trajectories of the controlled dynamics (\ref{sde-ex1-control}) are simulated for each run. Instead of (\ref{estimator-stdmc-ex1}), the estimator \begin{align} \mathcal{I}(\lambda) = \frac{1}{N} \sum_{i=1}^N e^{-\beta W_i(\lambda)}\, r_i \label{estimator-ipmc-ex1} \end{align} is computed, where $r_i$ is the likelihood ratio given by Girsanov's theorem (see (\ref{girsanov-ce})). The results are shown in Figure~\ref{subfig-df-cureve}, Figure~\ref{subfig-multi-run}, as well as in Table~\ref{tab-1}. Compared to the standard deviation of the standard Monte Carlo estimator (\ref{estimator-stdmc-ex1}), the standard deviations of the importance sampling Monte Carlo estimators $\mathcal{I}(\lambda)$ in (\ref{estimator-ipmc-ex1}) are significantly reduced when we apply a change of measure both in the initial distribution and in the dynamics, i.e., when the controlled dynamics (\ref{sde-ex1-control}) with initial distribution $\bar{\mu}_0$ is used. Both types of ansatz functions exhibit comparable performance. To better understand the efficiency of the Monte Carlo methods, the probability density functions and the mean values of the work within the $10$ runs of simulations are shown in Figure~\ref{subfig-work-1}, Figure~\ref{subfig-work-2} and Table~\ref{tab-1} for each Monte Carlo estimator. Clearly, by applying importance sampling both in the initial distribution and in the dynamics, trajectories with low work value are sampled more efficiently, leading to a much better efficiency of the Monte Carlo estimators. \begin{figure}[htpb] \subfigure[$V(x,\lambda)$]{\includegraphics[width=0.48\textwidth]{./pot_lam_all.eps} \label{sub-fig-pot-all}} \subfigure[$U=-\beta^{-1}\ln g$]{\includegraphics[width=.48\textwidth]{./log_phi_2d.eps}\label{subfig-log-g-2d}} \caption{Example $1$. (a) Potential $V(x,\lambda)$ in (\ref{pot-v-ex1}). (b) Function $U=-\beta^{-1} \ln g$, where $\beta = 5.0$ and $g$ solves PDE (\ref{g-fun-ex1}).} \end{figure} \begin{figure}[htpb] \subfigure[Optimally biased potentials]{\includegraphics[width=0.48\textwidth]{./exact_modified_pot_all.eps}\label{sub-fig-opt-pot-lam}} \subfigure[Biased potentials using Gaussian ansatz]{\includegraphics[width=0.48\textwidth]{./gaussian_modified_pot_lam_all.eps}\label{subfig-gauss-pot}} \caption{Example $1$ with the control protocol $\lambda(s) = s$, for $s\in [0, 1]$. (a) Optimally biased potential ($V + 2U$).
(b) Biased potentials computed from the cross-entropy method with Gaussian ansatz functions (\ref{gauss-ansatz-ex1}). } \end{figure} \begin{figure}[htpb] \subfigure[$g(x,0)$]{\includegraphics[width=0.48\textwidth]{./phi_1d.eps}\label{subfig-g-t0}} \subfigure[Control forces at $t=0$]{ \includegraphics[width=0.48\textwidth]{./control_all_initial_time.eps} \label{subfig-u-t0}} \caption{Example $1$ with the control protocol $\lambda(s) = s$, for $s\in [0, 1]$. (a) Profile of the function $g(x, 0) = \mathbf{E}_{x,0}(e^{-\beta W})$ where $\beta = 5.0$ and $g$ solves PDE (\ref{g-fun-ex1}). (b) Profiles of the control forces at time $t=0$. Curves with labels ``optimal'', ``linear'' and ``Gaussian'' correspond to the optimal control $u^*$ and to the control forces obtained from the cross-entropy method using piecewise linear and Gaussian ansatz functions, respectively.} \end{figure} \begin{figure}[htpb] \centering \includegraphics[width=7cm]{./init_hist.eps} \caption{Example $1$ with the control protocol $\lambda(s) = s$, for $s\in [0, 1]$. Probability density functions of the different initial distributions used in the Monte Carlo methods for $\beta = 5.0$. The corresponding densities are $\frac{d\mu_0}{dx} \propto \exp\big(-\beta \frac{(x+1)^2}{2}\big)$, $\frac{d\bar{\mu}_0}{dx} \propto \exp\big(-\beta \frac{(x-0.5)^2}{2}\big)$, and $\frac{d\mu_0^*}{dx}\propto \exp\big(-\beta \frac{(x+1)^2}{2}\big) g(x,0)$, which is given by (\ref{opt-mu0}). \label{subfig-start-mu}} \end{figure} \begin{figure}[htpb] \subfigure[]{\includegraphics[width=7cm]{./work_hist_all_init.eps}\label{subfig-work-1}} \subfigure[]{\includegraphics[width=7cm]{./work_hist_all_no_init.eps}\label{subfig-work-2}} \caption{Example $1$ with the control protocol $\lambda(s) = s$, for $s\in [0, 1]$. Probability density functions of the work along trajectories estimated from $10$ independent runs of Monte Carlo simulations where $5\times 10^5$ trajectories are simulated for each run. (a) ``optimal'' corresponds to the importance sampling estimator with control $u^*$ starting from the distribution $\mu^*_0$. The other three curves correspond to Monte Carlo estimators with initial distribution $\bar{\mu}_0$, using either the controlled dynamics (\ref{sde-ex1-control}) with piecewise linear ansatz functions (label ``$\bar{\mu}_0$, linear''), Gaussian ansatz functions (label ``$\bar{\mu}_0$, Gaussian''), or the uncontrolled dynamics (\ref{sde-ex1}) (label ``$\bar{\mu}_0$, stdMC''). (b) Results correspond to Monte Carlo estimators with initial distribution $\mu_0$, using either the controlled dynamics (\ref{sde-ex1-control}) with piecewise linear ansatz functions (label ``$\mu_0$, linear''), Gaussian ansatz functions (label ``$\mu_0$, Gaussian''), or the uncontrolled dynamics (\ref{sde-ex1}) (label ``$\mu_0$, stdMC''). \label{fig-work} } \end{figure} \begin{figure}[htpb] \subfigure[$\Delta F(\lambda)$]{\includegraphics[width=7cm]{./df_mean_var.eps}\label{subfig-df-cureve}} \subfigure[$\Delta F(1)$]{\includegraphics[width=7cm]{./df_mean_multi_run.eps}\label{subfig-multi-run}} \caption{Example $1$ with the control protocol $\lambda(s) = s$, for $s\in [0, 1]$. Labels of the different curves have the same meaning as in Figure~\ref{fig-work}. (a) Profiles of the free energy differences $\Delta F(\lambda)$ for $\lambda \in [0,1]$. Standard deviations of the free energy difference estimates for $10$ independent runs are shown as vertical error bars for different $\lambda$.
``exact'' corresponds to results obtained by directly integrating the normalization constant $Z(\lambda)$ from (\ref{normal-const}). (b) Mean values of the free energy differences at $\lambda = 1$ for $10$ independent runs using different (importance sampling) Monte Carlo methods. For each run, $5 \times 10^5$ trajectories of either SDE (\ref{sde-ex1}) or the controlled SDE (\ref{sde-ex1-control}) are generated with time step-size $\Delta t = 5\times 10^{-4}$. Results corresponding to piecewise linear ansatz functions are not shown here since they are very similar to those corresponding to Gaussian ansatz functions.} \end{figure} \begin{table}[htpb] \centering \begin{tabular}{cc|ccccc} \hline \hline initial & control & mean $\mathcal{I}$ & SD $\mathcal{I}$ & mean $\Delta F$ & SD $\Delta F$& mean $W$ \\ \hline $\mu^*_0$ & optimal & $5.58$ & $8.4\times 10^{-2}$ & $-3.44 \times 10^{-1}$ & $2.4\times 10^{-4}$ & $-1.85$ \\ \hline \multirow{3}{*}{$\bar{\mu}_0$}& linear & $5.59$ &$6.0\times 10^{0}$ &$-3.44\times 10^{-1}$ &$3.4 \times 10^{-4}$ & $-2.08$ \\ & Gaussian & $5.59$ &$7.1\times 10^{0}$ &$-3.44\times 10^{-1}$ &$3.5\times 10^{-4}$ & $-2.05$\\ & stdMC & $5.51$ &$9.8\times 10^{1}$ &$-3.41\times 10^{-1}$ &$5.4\times 10^{-3}$ & $-0.71$\\ \hline \multirow{3}{*}{$\mu_0$}& linear & $5.74$ &$2.2\times 10^{2}$ &$-3.49\times 10^{-1}$ &$1.0\times 10^{-2}$ & $-0.08$ \\ & Gaussian & $5.71$ &$2.6\times 10^{2}$ &$-3.48\times 10^{-1}$ &$1.3\times 10^{-2}$& $0.06$ \\ & stdMC & $6.28$ &$1.7\times 10^{3}$ &$-3.53\times 10^{-1}$&$7.2\times 10^{-2}$ & $0.40$\\ \hline \hline \end{tabular} \caption{Example $1$ with the control protocol $\lambda(s) = s$, for $s\in [0, 1]$. Estimates of the free energy difference at $\lambda = 1$ using different (importance sampling) Monte Carlo methods. Direct calculation of (\ref{normal-const}) and (\ref{free-energy}) gives the reference value $\Delta F = -3.44\times 10^{-1}$. Column ``initial'' specifies the initial distribution that is used to generate trajectories in the Monte Carlo simulations. Column ``control'' specifies the different dynamics (different control forces); the meaning of each name is the same as in Figure~\ref{fig-work}. Columns ``mean $\mathcal{I}$'' and ``SD $\mathcal{I}$'' show the mean and the sample standard deviation of the estimators (\ref{estimator-stdmc-ex1}) or (\ref{estimator-ipmc-ex1}). Columns ``mean $\Delta F$'' and ``SD $\Delta F$'' show the mean and the sample standard deviation of $10$ independent runs of the free energy difference estimates $\Delta F(1)$ using (\ref{df-ex1}). The mean values of the work $W$ for the different Monte Carlo methods are shown in column ``mean $W$''. \label{tab-1} } \end{table} \subsection{Example $2$: reaction coordinate case} \label{subsec-ex2} In the second example, we study free energy calculation in the reaction coordinate case considered in Section~\ref{sec-coordinate}. A similar example has been considered in~\cite{effective_dynamics}, where the main focus was the approximation quality of effective dynamics. The system consists of three two-dimensional particles $A, B, C$ whose positions are $x_A, x_B, x_C$, with the potential \begin{align} V(x_A, x_B, x_C) = \frac{1}{2\epsilon}\bigg\{r_{BC}-\Big[1 + \kappa \Big(\sin(\theta_{ABC})-\frac{1}{2}\Big)\Big] l_{eq}\bigg\}^2 + \frac{1}{2\epsilon}\big(r_{AB}-l_{eq}\big)^2 + V_3(\theta_{ABC})\,, \label{pot-v-ex2} \end{align} where $r_{AB}$, $r_{BC}$ are the distances between particles $A$ and $B$, $B$ and $C$, respectively.
$\theta_{ABC}$ is the angle spanned by the bonds $AB$ and $BC$, and $V_3$ is the angle potential given by \begin{align} V_3(\theta) = \frac{k_\theta}{2}\Big((\theta-\theta_0)^2 - (\delta \theta)^2\Big)^2- k_{\theta, 1} (\theta-\theta_0) \,, \end{align} with $k_{\theta}>0$. Furthermore, in order to remove rigid body motion invariance, we fix the position of particle $B$ ($x_B=0$) and particle $A$ is only allowed to move along the horizontal axis. For parameters, we take $\theta_0=\frac{\pi}{3}$, $\delta \theta = \frac{\pi}{6}$, $\epsilon = 0.1$, $k_\theta = 20$, $k_{\theta, 1} = 0.3$, and $l_{eq} = 5.0$. The system essentially has three degrees of freedom, i.e., the position $x_C=(y_1, y_2)$ and the position $x_A = (y_3,0)$ on the $x$-axis. The free energy is defined according to (\ref{free-energy-coordinate}), where we take \begin{align} \xi(y_1,y_2,y_3) = \theta_{ABC}=\arctan\frac{y_2}{y_1} \label{xi-ex2} \end{align} as the reaction coordinate function and $\beta = 5.0$. In order to calculate free energy differences, we consider the dynamics $y(s)=(y_1(s), y_2(s), y_3(s))$ in (\ref{dynamics-f-tau}) during the time interval $[0, 1]$ with $a=\sigma=\mbox{id}$, and $f\equiv \frac{\pi}{3}$, starting from $\xi(y(0)) = \frac{\pi}{6}$ at time $s=0$. In this case, the projection matrix in (\ref{p-ij}) can be directly computed as \begin{align} P = \begin{pmatrix} \frac{y_1^2}{y_1^2+y_2^2} & \frac{y_1y_2}{y_1^2+y_2^2} & 0 \\ \frac{y_1y_2}{y_1^2+y_2^2} & \frac{y_2^2}{y_1^2+y_2^2} & 0 \\ 0 & 0 & 1 \end{pmatrix} \end{align} and we have $\Psi=|\nabla \xi|^2 = \frac{1}{y_1^2+y_2^2}$ in (\ref{psi-ij}). The angle $\theta_{ABC}$ of the system $y(s)$ evolves uniformly during time $s \in [0,1]$ from $\frac{\pi}{6}$ to $\frac{\pi}{2}$ and the free energy at $\theta_{ABC} = \frac{\pi}{6}$ is taken as the reference. The free energy differences are calculated based on the Jarzynski-like identity (\ref{generalized-jarzynski-coordinate}), where the work $W$ is given in (\ref{w-coordinate-jarzynski-special}) and simplifies to \begin{align} W(t) = \int_0^t \Big(-y_2 \frac{\partial V}{\partial y_1} + y_1 \frac{\partial V}{\partial y_2}\Big)(y(s))\, ds \,. \label{w-ex2} \end{align} In the numerical experiments below, we take $\kappa=0.3,\,0.6$ in the potential $V$ in (\ref{pot-v-ex2}) and the performance of the Monte Carlo estimator is tested using different values $\tau = 1.0, \,0.6,\,0.3$ in dynamics (\ref{dynamics-f-tau}). In each case, we estimate the free energy differences based on $10$ independent runs of Monte Carlo sampling of \begin{align} \Delta F(\theta(t)) \approx -\beta^{-1} \ln \mathcal{I}(\theta(t)) = -\beta^{-1} \ln \Big(\frac{1}{N} \sum_{i=1}^N e^{-\beta W_i(t)}\Big) \,, \label{estimator-ex2} \end{align} where $\theta(t) = \frac{\pi}{6} + \frac{\pi}{3} t$. In each run, $N=5 \times 10^5$ trajectories of dynamics (\ref{dynamics-f-tau}) are simulated using time step-size $\Delta t = 10^{-4}$, where $W_i$ denotes the work (\ref{w-ex2}) of the $i$th trajectory. The numerical results are shown in Figure~\ref{fig-ex2-1}, Figure~\ref{fig-ex2-work-pdf} (results for $\kappa=0.3$ are similar and therefore are not displayed) and Table~\ref{tab-ex2}. From both Figure~\ref{fig-ex2-1} and Table~\ref{tab-ex2}, we can observe that the free energy calculation using $\tau=1.0$ leads to large fluctuations and inaccurate estimates. On the other hand, by decreasing $\tau$ to $0.3$, the variance of the $10$ independent runs of the free energy calculation decreases significantly and the results become stable.
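For completeness, a schematic Python version of the estimator (\ref{estimator-ex2}) is given below. The one-step integrator for the constrained dynamics (\ref{dynamics-f-tau}) and the gradient of (\ref{pot-v-ex2}) are assumed to be supplied by the user (the names \texttt{step}, \texttt{grad\_V} and \texttt{sample\_y0} are placeholders), and the number of trajectories is reduced compared to the runs reported above:
\begin{verbatim}
import numpy as np

def estimate_dF(step, grad_V, sample_y0, tau, beta=5.0,
                N=5000, T=1.0, dt=1e-4):
    """Monte Carlo estimate of Delta F(theta(T)) via (estimator-ex2).

    step      : (y, s, tau, dt) -> y after one step of (dynamics-f-tau)
    grad_V    : y -> gradient of the potential (pot-v-ex2)
    sample_y0 : () -> initial condition distributed as mu_{z(0)}
    """
    works = np.empty(N)
    for i in range(N):
        y, W, s = sample_y0(), 0.0, 0.0
        while s < T:
            gV = grad_V(y)
            # work increment from (w-ex2): (-y2 dV/dy1 + y1 dV/dy2) ds
            W += (-y[1]*gV[0] + y[0]*gV[1]) * dt
            y = step(y, s, tau, dt)
            s += dt
        works[i] = W
    m = (-beta * works).max()               # log-sum-exp for stability
    return -(m + np.log(np.mean(np.exp(-beta * works - m)))) / beta
\end{verbatim}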
Based on the $10$ runs of Monte Carlo simulations of the nonequilibrium dynamics, we can also estimate the probability density functions of the work (\ref{w-ex2}); the results are shown in Figure~\ref{fig-ex2-work-pdf}. It can be seen that, as $\tau$ decreases, the probability density functions shift along the negative horizontal axis and become more concentrated. This indicates that the work of the sampled paths becomes smaller on average and that the variance decreases. All these results confirm that the variance of the Monte Carlo estimator can be reduced by decreasing the value of $\tau$ (see the discussion at the end of Subsection~\ref{subsec-info-the-ce-coordinate}). \begin{figure}[htpb] \centering \subfigure[$\Delta F(\theta)$]{\includegraphics[width=7cm]{./df_curve_ex2.eps}\label{fig-ex2-df-curve}} \subfigure[$\Delta F(\frac{\pi}{2})$]{\includegraphics[width=7cm]{./df_mean_multi_run_ex2.eps}\label{fig-ex2-df-10runs}} \caption{Example $2$ for $\kappa=0.6$. (a) Profiles of the free energy differences $\Delta F(\theta)$ for $\theta=\theta_{ABC} \in [\frac{\pi}{6},\frac{\pi}{2}]$ computed using different $\tau$ in (\ref{dynamics-f-tau}). Standard deviations of the free energy difference estimates for $10$ independent runs are shown as vertical error bars for different $\theta$. ``exact'' corresponds to the reference results obtained by directly integrating the normalization constants $Q(\cdot)$ appearing in (\ref{mu-z}). The curves with labels ``$\tau=0.3$'' and ``exact'' almost coincide. (b) Mean values of the free energy differences at $\theta = \frac{\pi}{2}$ for $10$ runs of Monte Carlo simulations using different values of $\tau$ in (\ref{dynamics-f-tau}). The horizontal line with label ``exact'' corresponds to the reference value $\Delta F(\frac{\pi}{2}) = -3.74\times 10^{-1}$. For each run, $5 \times 10^5$ trajectories of SDE (\ref{dynamics-f-tau}) are generated with time step-size $\Delta t = 10^{-4}$.\label{fig-ex2-1}} \end{figure} \begin{figure}[htpb] \centering \includegraphics[width=7cm]{./work_hist_cmp_ex2.eps} \caption{ Example $2$ for $\kappa=0.6$. Probability density functions of the work $W$ (\ref{w-ex2}) along trajectories of (\ref{dynamics-f-tau}) for the different values $\tau=1.0, \, 0.6,\, 0.3$. For each $\tau$, the probability density function is estimated from $10$ runs of Monte Carlo simulations where $5\times 10^5$ trajectories are simulated in each run. \label{fig-ex2-work-pdf} } \end{figure} \begin{table}[htpb] \centering \begin{tabular}{cc|ccccc} \hline \hline $\kappa$ & $\tau$ & mean $\mathcal{I}$ & SD $\mathcal{I}$ & mean $\Delta F$ & SD $\Delta F$& mean $W$ \\ \hline \multirow{3}{*}{$0.3$} & $1.0$ & $5.46$ & $9.4\times 10^{1}$ & $-3.39 \times 10^{-1}$ & $1.4\times 10^{-2}$ & $0.29$ \\ & $0.6$ & $5.67$ &$4.0\times 10^{1}$ &$-3.47\times 10^{-1}$ &$1.1 \times 10^{-2}$ & $0.05$ \\ & $0.3$ & $5.52$ &$1.5\times 10^{1}$ &$-3.42\times 10^{-1}$ &$2.9\times 10^{-3}$ & $-0.13$\\ \hline \multirow{3}{*}{$0.6$} & $1.0$ & $4.27$ & $2.0\times 10^{3}$ & $-2.55 \times 10^{-1}$ & $1.6\times 10^{-1}$ & $2.14$ \\ & $0.6$ & $5.28$ &$5.1\times 10^{2}$ &$-3.32\times 10^{-1}$ &$4.0 \times 10^{-2}$ & $1.22$ \\ & $0.3$ & $6.33$ &$2.3\times 10^{2}$ &$-3.69\times 10^{-1}$ &$1.3\times 10^{-2}$ & $0.46$\\ \hline \hline \end{tabular} \caption{Example $2$. Estimates of the free energy difference at $\theta= \frac{\pi}{2}$ using Monte Carlo methods for different values of $\kappa$ and $\tau$.
Direct calculation of (\ref{normal-const}) and (\ref{free-energy}) gives the reference values $\Delta F(\frac{\pi}{2}) = -3.42\times 10^{-1}$ and $\Delta F(\frac{\pi}{2}) = -3.74\times 10^{-1}$ for $\kappa=0.3$ and $0.6$, respectively. Columns ``mean $\mathcal{I}$'' and ``SD $\mathcal{I}$'' show the mean and the sample standard deviation of the estimator $\mathcal{I}$ in (\ref{estimator-ex2}). Columns ``mean $\Delta F$'' and ``SD $\Delta F$'' show the mean and the sample standard deviation of $10$ runs of the free energy difference estimates $\Delta F(\frac{\pi}{2})$ using (\ref{estimator-ex2}). The mean values of the work $W$ for Monte Carlo simulations using different $\kappa$ and $\tau$ are shown in column ``mean $W$''. \label{tab-ex2} } \end{table} \section{Conclusions} \label{sec-conclusion} In this work, we have studied nonequilibrium theorems for diffusion processes. Jarzynski's equalities and fluctuation theorems are proved for quite general types of diffusion processes in both the alchemical transition case and the reaction coordinate case. The information-theoretic formulation of Jarzynski's equality, as well as variance reduction approaches, are discussed in both cases. The mathematical tools we use to derive these nonequilibrium relations come from the theory of stochastic differential equations, in particular the Feynman-Kac formula and Girsanov's theorem. An advantage of this approach is that it enables us to elucidate the connections between Jarzynski's equality and the thermodynamic integration identity, which are often treated as two distinct free energy calculation methods. Two variance reduction approaches for Monte Carlo methods have been studied in order to compute free energy differences using Jarzynski's equality. As demonstrated by the simple examples, these approaches can substantially improve the efficiency of Monte Carlo estimators in both the alchemical transition case and the reaction coordinate case. One of the key findings is that variance reduction by a change of measure requires changing both the initial distribution and the dynamics. We expect that our simple numerical studies can provide some insight into the sources of sampling variance. While the current work focuses on diffusion processes, the mathematical tools may be applicable to other types of stochastic processes, such as Markov chains, particle systems or networks, whose evolution depends on external parameters. In future work, we will also investigate free energy calculation for high-dimensional applications using the variance reduction approaches proposed in this work, together with recent techniques for solving high-dimensional PDEs~\cite{Darbon2016,deep-relaxation-osher2017,deep-pde-weinan2017}. \section*{Acknowledgement} The authors acknowledge financial support by the Einstein Center of Mathematics (ECMath) through project CH21.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Most of the energy in the universe seems to be dark, and many different observations suggest that only a minority of this energy can be dark matter, the rest apparently being some kind of energy component with negative pressure. The simplest explanation for this dark energy component is a cosmological constant, and people have attempted to explain the smallness of this energy density and hence its relatively recent dominance using anthropic arguments \cite{weinbergshapiro}, which may concur with modern predictions from string theory \cite{landscape}. Other explanations exist, however, including a class of theories known as quintessence, where the observed acceleration of the universe results from the stress energy of some rolling scalar field which only comes to dominance recently \cite{ratra,wetterich}. A different approach is to ask whether General Relativity breaks down at large distances, in other words to modify the left-hand/curvature side of the Einstein equations rather than the right-hand/stress-energy side. One of the most studied examples of such a scenario is the DGP brane world model \cite{dgp,def1,def2}. This paper aims to compare the predictions of this theory and its generalisations with the latest cosmological data. In the DGP model, gravity is trapped on a four dimensional brane world at short distances, but is able to propagate into a higher dimensional space at large distances. The Lagrangian for the model is ($\hbar=c=1, M_{Pl}^2=(8\pi G)^{-1}$) \begin{equation} S=-\frac{M^3}{2}\int d^5x\sqrt{-g^{(5)}}R^{(5)}-\frac{M_{Pl}^2}{2}\int d^4x\sqrt{-g^{(4)}}R^{(4)}+\int d^4x\sqrt{-g^{(4)}}{\cal L}_M \label{DGPlagrangian} \end{equation} Due to the different mass scales $M$ and $M_{Pl}$, gravity propagates differently on the brane and in the bulk. The effect of gravitational leakage into the bulk will only appear at large distances ($r>L=M_{Pl}^2/2M^3$). The $tt$ Friedman equation in this theory takes the form \begin{equation} H^2-\epsilon\frac{H}{L}=\frac{\rho_M}{3M_{Pl}^2} \label{DGPfriedman} \end{equation} where $\epsilon=\pm 1$. It is $\epsilon=+1$ that gives the late-time accelerating solutions that could explain dark energy. The theory also predicts very small but potentially detectable modifications to the earth-moon distance \cite{moon}. There are also changes in structure formation which could place constraints upon the theory \cite{DGPstructure} (see also \cite{Reboucas}). Recently the self-accelerating branch of this theory has come under theoretical attack because it seems to possess ghost-like instabilities \cite{ghost}, while one of the original authors has argued that the calculational regime within which the instabilities are found is not valid \cite{strong}. In this work, we will put this issue to one side and proceed to compare the model's predictions with the latest data. Even if it turned out that the DGP model was theoretically suspect, it is interesting to see how different expansion histories may or may not be ruled out by the developing suite of data. To test theories, we would like to make detailed comparisons with astronomical observations. There are basically two ways of doing this. As already mentioned, one is to look at the growth of perturbations in these models and see how they compare with perturbations in $\Lambda$CDM and in the observed galaxy correlation function \cite{DGPstructure}.
Another way is to simply look at the space-time geometry of the universe on the largest length and time scales available in order to reconstruct the expansion history of the universe and compare it to the solutions of the modified gravity models. This latter approach is the subject of this paper. Studies comparing the DGP model to expansion-history data are, to date, contradictory. The first paper on the subject was presented by two of the current authors \cite{fairbairngoobar} using the 2005 data release from the SuperNova Legacy Survey (SNLS) \cite{astier06}. In that work it was argued that the SNLS data combined with the position of the baryon acoustic peak in the Sloan Digital Sky Survey were less consistent with a flat universe in the DGP model than in the $\Lambda$CDM model. These conclusions were backed up when Maartens and Majerotto performed the same test using the SNLS data, the baryon acoustic peak and the CMB shift parameter \cite{maartens}. Later, a new ``gold'' data set of supernovae was released \cite{riess07}. Analysis of this data and the CMB shift parameter suggested that a flat universe was completely consistent with the DGP model \cite{scoop}, a conclusion the present authors can confirm, also upon addition of the data from the baryon acoustic peak. It is interesting to note that it has been suggested that there may be an inconsistency between the different parts of this gold data set \cite{inconsistency}. The two sets of data come from different instruments: the SNLS supernovae are detected using a combination of imaging at the Canada-France-Hawaii Telescope and spectroscopic studies on large ground based instruments, in particular Gemini, VLT and Keck. The Riess et al. 07 gold sample contains supernovae from the Supernova Cosmology Project, SNLS, the High-Z Team, the GOODS transient survey, and includes 21 new supernovae at extremely high redshift obtained with the Hubble Space Telescope. In all cases, low redshift supernovae from independent surveys have been included in the data sets. Furthermore, the supernova magnitudes are obtained from the two different sets of supernova data using two different methods: the SALT algorithm, which is used by the SNLS group, fits supernovae based upon their light curve and their colour \cite{SALT}, as does the MLCS method favoured by Riess et al., the most recent version of which is called MLCS2k2 \cite{MLCS}. The parameters in these two algorithms are obtained by training on low redshift supernovae before they are used to obtain the magnitudes of high redshift supernovae. Recently the ESSENCE group has released a set of 60 supernovae at intermediate to high redshifts \cite{essence}. This data has been combined with the SNLS data and data at low redshift to form a new data set spanning a large redshift range in detail. The supernovae in this data set have been analysed using both methods: SALT and MLCS. In this paper, we combine this new data with the CMB and baryon oscillation data in order to see if the DGP model is favoured or disfavoured.
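For later reference, the self-accelerating ($\epsilon=+1$) branch of (\ref{DGPfriedman}) can be solved for $H$ as a quadratic equation; the following short Python sketch evaluates the resulting dimensionless expansion rate $E(z)=H(z)/H_0$ in the spatially flat case (the notation anticipates Section~\ref{DGPsection}):
\begin{verbatim}
import numpy as np

def E_dgp_flat(z, Omega_M):
    """E(z) = H(z)/H0 on the flat self-accelerating (eps=+1) DGP branch.

    Solving H^2 - H/L = rho_M/(3 M_Pl^2) as a quadratic in H gives
    E = sqrt(Omega_M (1+z)^3 + Omega_L/4) + sqrt(Omega_L/4),
    with Omega_L = (H0 L)^{-2}; requiring E(0) = 1 then fixes
    Omega_L = (1 - Omega_M)^2.
    """
    Omega_L = (1.0 - Omega_M)**2
    return (np.sqrt(Omega_M * (1 + z)**3 + Omega_L / 4.0)
            + np.sqrt(Omega_L / 4.0))
\end{verbatim}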
While the model as it stands has one extra space dimension, it is possible to imagine generalisations of the model with higher dimensional bulks, although, because of the curvature singularity an infinitely thin 4D brane would create in more than 5 space-time dimensions, the theory would need to be regularised at the location of the brane \cite{Gregory}. We will present a parametrisation of these higher dimensional generalisations as has been done in previous work \cite{dvaliturner,chung,fairbairngoobar} and we will introduce an extra degree of freedom which may go some way to modelling the regularisations. \section{\label{difdata}Observational status of Expansion History.} Observations of high-redshift Type Ia supernovae have been used for over a decade to map the expansion history of the universe \cite{goo95,perl95,perl97,garnavich98,riess98,perl98,schmidt98,perl99,knop03,sullivan03,tonry03,barris04,riess04,krisciunas05,nobili05,astier06,conley06,riess07,essence}\footnote{In fact, the very first detection of a high-z Type Ia SN, SN1988U at z=0.31, was done almost twenty years ago by \cite{danish89}. However, the data on this SN was scarce. Thus, this object is normally not included in the compilation of SNeIa distances.}. These high-z surveys, combined with low redshift SN samples to anchor the Hubble diagram \cite{hamuy96,riess99,jha06}, provided the first direct evidence that the universe expands at an accelerated rate. Cross-cutting techniques involving the anisotropies in the cosmic microwave radiation (CMB) \cite{bernardis,melchiorri,balbi,spergel03,spergel06}, mass density estimates from X-ray observations of clusters \cite{allen04} and more recently weak lensing \cite{hoekstra06}, baryon acoustic oscillations (BAO) \cite{Eisenstein05,tegmark06} and the integrated Sachs-Wolfe effect \cite{Boughn02,nolta04} are apparently continuing to confirm our current understanding of the universe, namely that it is spatially flat, as predicted by inflation, and that the current expansion is dominated by dark energy. The ESSENCE group has analysed supernovae at intermediate redshifts ($0.1<z<0.8$) using both the MLCS2k2 and the SALT methods to obtain the supernova magnitudes from the light curves and spectra. When using MLCS2k2, the group adopts what they refer to as the 'glosz' prior to model $A_V$, the extinction in the $V$ band of the supernova in the host galaxy \cite{essence}. They have also fitted higher redshift supernovae ($0.11<z<1.1$) from the SNLS survey \cite{astier06} and a set of nearby supernovae ($z<0.11$), again using both the SALT and MLCS2k2 methods, to obtain magnitudes of supernovae over a large redshift range. Measurements of the brightness of Type Ia supernovae as a function of redshift are sensitive to the cosmological model via the integration over expansion history in the expression for the luminosity distance ($c=1$) \begin{equation} d_L = \frac{1+z}{H_0\sqrt{|\Omega_k|}} {\cal S} \left( \sqrt{|\Omega_k|} \int_{0}^{z} { {d\bar{z} \over E(\bar{z})}} \right) \label{lumdim} \end{equation} where the function ${\cal S}(x)$ is defined as $\sin(x)$ for $\Omega_k<0$ ({\em closed Universe}), $\sinh(x)$ for $\Omega_k >0$ ({\em open Universe}) and ${\cal S}(x) = x$, and the factor $\sqrt{|\Omega_k|}$ is removed for the {\em flat Universe}. The parameter $E(z)$ is defined as $H(z)/H_0$.
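As an illustration, the luminosity distance (\ref{lumdim}) can be evaluated numerically as follows (a Python sketch; \texttt{E} is any model's dimensionless expansion rate, and the value of the Hubble constant is only a placeholder):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def luminosity_distance(z, E, Omega_k, H0=70.0):
    """d_L in Mpc from eq. (lumdim); E(z) = H(z)/H0, H0 in km/s/Mpc."""
    c = 2.998e5                                    # speed of light, km/s
    I, _ = quad(lambda zz: 1.0 / E(zz), 0.0, z)
    if abs(Omega_k) < 1e-8:                        # flat universe
        S = I
    elif Omega_k > 0:                              # open universe
        S = np.sinh(np.sqrt(Omega_k) * I) / np.sqrt(Omega_k)
    else:                                          # closed universe
        S = np.sin(np.sqrt(-Omega_k) * I) / np.sqrt(-Omega_k)
    return (1 + z) * (c / H0) * S

# distance modulus used in supernova fits: mu = 5 log10(d_L/Mpc) + 25
\end{verbatim}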
In our analysis we fit the cosmological model to the supernova data using the luminosity distance above, but we also fit to the baryon acoustic peak detected in the SDSS Luminous Red Galaxy survey (LRG) of Eisenstein et al. (2005) \cite{Eisenstein05,spergel06}, which constrains the following combination of parameters \begin{equation} { \sqrt{\Omega_M} \over E(z_1)^{1 \over 3} } \left[{1 \over z_1 \sqrt{|\Omega_k|} } {\cal S}\left(\sqrt{|\Omega_k|}\int_0^{z_1} {dz \over E(z)} \right) \right]^{2 \over 3} =0.472 \pm 0.017, \label{bao} \end{equation} where $z_1=0.35$ and ${\cal S}$ and $\Omega_k$ are defined as in Eq.(\ref{lumdim}). The quoted uncertainty corresponds to one standard deviation, where a Gaussian probability distribution has been assumed. There is some debate \cite{scoop, scoopref} as to whether or not one should use the baryon acoustic peak to constrain models of dark energy which behave differently to a cosmological constant, for two reasons. The first is that the reconstruction from redshift space to co-moving space required to accurately identify the position of the acoustic peak has been done assuming a constant equation of state \cite{Eisenstein05}. While one would expect the change in the position of the acoustic peak in an alternative dark energy model where the equation of state is a function of redshift to be small, the correction is difficult to quantify without detailed study for each model in question. Secondly, in modified models of gravity, one would expect structure formation to proceed differently \cite{DGPstructure}. Although to a first approximation this would not change the physical co-moving size of the acoustic peak feature in the correlation function, such an effect might create systematic distortions in its reconstruction from redshift space. For these reasons we will constrain the models with and without the SDSS baryon acoustic peak; we leave it to the reader to decide whether they want to pay attention to the BAO constraints on parameter space or not. Finally, a cleaner measure of geometry is the CMB shift parameter \cite{bet,maartens} (see however \cite{elgaroy}): the expansion history of the universe has to be such that the observed position of the CMB peak corresponds to the physical horizon size at last scattering. The photons used to measure this angular size have passed through the integrated geometry created by the particular dark energy model in question. The angular size of the first peak of the CMB as measured by the WMAP 3 year data constrains the ratio \cite{WMAP3,wangmukherjee} \begin{equation} \sqrt{\Omega_M}\int_{0}^{z} { {d\bar{z} \over E(\bar{z})}}=1.70 \pm 0.03 \label{cmb} \end{equation} which can then be applied as another cut to parameter space for each model. Having described the observations that we will compare with the theoretical models, we can move on to the expansion predictions for brane world gravity. \section{\label{DGPsection}Comparison of the DGP model with the data.} In this section we will compare the predictions of the DGP model with the latest cosmological expansion history data set described in the previous section. As we have discussed in the introduction, the DGP model in its simplest form with one extra flat dimension (\ref{DGPlagrangian}) gives rise to a modified Friedman equation as written in equation (\ref{DGPfriedman}).
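In practice, the geometric priors (\ref{bao}) and (\ref{cmb}) enter the fits as Gaussian $\chi^2$ contributions. A minimal Python sketch of these two cuts is given below; the integral in (\ref{cmb}) is taken up to a last-scattering redshift of $z\simeq 1089$, which is an assumption on our part since the upper limit is left implicit above:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def curvature_S(z, E, Omega_k):
    """S(sqrt|Ok| int_0^z dz/E)/sqrt|Ok|, with the flat limit built in."""
    I, _ = quad(lambda zz: 1.0 / E(zz), 0.0, z)
    if abs(Omega_k) < 1e-8:
        return I
    if Omega_k > 0:
        return np.sinh(np.sqrt(Omega_k) * I) / np.sqrt(Omega_k)
    return np.sin(np.sqrt(-Omega_k) * I) / np.sqrt(-Omega_k)

def chi2_bao_cmb(E, Omega_M, Omega_k, z_ls=1089.0):
    """Gaussian chi^2 terms for the BAO parameter (bao) and the CMB
    shift parameter (cmb); z_ls ~ 1089 is an assumed value."""
    z1 = 0.35
    A = (np.sqrt(Omega_M) / E(z1)**(1.0/3.0)
         * (curvature_S(z1, E, Omega_k) / z1)**(2.0/3.0))
    I_ls, _ = quad(lambda zz: 1.0 / E(zz), 0.0, z_ls)
    R = np.sqrt(Omega_M) * I_ls
    return ((A - 0.472) / 0.017)**2 + ((R - 1.70) / 0.03)**2
\end{verbatim}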
We have made a distinction between best model fit and parameter determination: when we talk about 'confidence levels' we are referring to the regions within which the $\chi^2$ values change from the minimum $\chi^2$ value by the critical amounts that are usually used ($\chi^2_{min}+2.3$ for $68\%$, etc.). If we use the term '1 $\sigma$', '2 $\sigma$' or '3 $\sigma$' we are referring, respectively, to the $68\%$, $95\%$ and $99\%$ significances obtained by comparing the $\chi^2$ to the number of degrees of freedom. If the errors were Gaussian {\it and} the model being fitted was a good one, then those two measures of statistical significance should be equivalent. However, since we do not know if both those criteria are exactly fulfilled, it is important that the reader understands the distinction between the different quantities. \subsection{Mathematical preliminaries} It will be interesting to also consider generalisations of the DGP model which might result from having a higher dimensional bulk. Although such models have not been derived explicitly, it is possible to guess at their form and the way that the Friedman equation would be modified as a function of the number of extra dimensions \cite{dvaliturner}. Actual realisations of higher dimensional models of this kind have problems which arise when one considers a delta function of stress energy at the position of the brane \cite{Gregory}. This is solved by assuming a non-zero brane thickness, which means that a potential is required to maintain a nearly massive 4D graviton on the brane \cite{porat}. This potential will distort the spectra of gravitons in the extra dimensions and also therefore the leakage of gravity into the extra dimensions. It is not known precisely what form the modifications to the Friedman equations would take in such a situation in the crossover from 4D to higher dimensional physics; we assume that such a regularisation would take the generalised form \cite{prive} \begin{equation} H^2-\frac{1}{L^2(\beta+(LH)^{n-2})}=\frac{\rho_M}{3M_{Pl}^2} \end{equation} for zero spatial curvature on the brane. Here $L$ corresponds to the crossover length scale and $n$ is the number of extra dimensions. The extra term $\beta$ parametrises the regularisation in cases where $n\neq 1$. Dividing through by $H_0^2$, we have \begin{equation} E^2(z)-\frac{\Omega_L}{\beta+\sqrt{\Omega_L}^{(2-n)}{E(z)^{(n-2)}}}=\Omega_M(1+z)^3 \label{dgpe} \end{equation} where $\Omega_L=(H_0L)^{-2}$ and $E(z) = H(z)/H_0$. It is easy to see that when $\beta$ dominates the denominator\footnote{try saying that five times quickly}, the $\Omega_L$-term will behave very much like a cosmological constant term. As usual, we can constrain one of the parameters in terms of the others by setting $z=0$ in (\ref{dgpe}), giving \begin{equation} \beta=\frac{\Omega_L}{1-\Omega_M}-\sqrt{\Omega_L}^{(2-n)} \label{constraint} \end{equation} which is the equivalent of the equation $\Omega_\Lambda+\Omega_M=1$ in flat $\Lambda$CDM. \subsection{Models with $\beta=0$, including original DGP model} The first models that we constrain are those with the regularisation parameter $\beta$ set to zero. If there are too many parameters in a dark energy model, it is usually rather easy to fit the data for some combination of those parameters, so one can assume priors like imposing a flat universe in order to obtain interesting constraints.
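For general $n$, equation (\ref{dgpe}) is implicit in $E(z)$ and has to be solved numerically at each redshift. A minimal Python sketch of this step, with $\beta$ fixed by the constraint (\ref{constraint}), is shown below (the bracketing interval is a heuristic and may need adjusting for extreme parameter values):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def E_general(z, Omega_M, Omega_L, n):
    """Solve eq. (dgpe) for E(z) by root finding (flat case);
    beta is fixed by the z = 0 constraint (constraint)."""
    beta = Omega_L / (1.0 - Omega_M) - Omega_L**((2.0 - n) / 2.0)
    rhs = Omega_M * (1.0 + z)**3
    def F(E):
        dark = Omega_L / (beta + Omega_L**((2.0 - n)/2.0) * E**(n - 2))
        return E**2 - dark - rhs
    # E(z) must exceed the matter-only value sqrt(rhs)
    return brentq(F, 1e-6, np.sqrt(rhs) + np.sqrt(Omega_L) + 10.0)
\end{verbatim}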
Here, because we have the same number of free parameters\footnote{we have theoretical motivation for $n$ being a discrete integer and each $n$ corresponding to a different model} in these $\beta=0$ models as in the case of $\Lambda$CDM, we are able to include curvature and still get interesting constraints. We do this simply by replacing the Hubble parameter squared $H^2$ with $H^2+ka^{-2}$ \cite{fairbairngoobar}. This gives us \begin{equation} H^2+\frac{k}{a^2}-L^{-2}\left[\beta+\left(L\sqrt{H^2+\frac{k}{a^2}}\right)^{n-2}\right]^{-1}=\frac{\rho_M}{3M_{Pl}^2} \label{curvgeneral} \end{equation} which for the case of the original DGP model leads to \begin{equation} E^2(z)=\Omega_K(1+z)^2+\left(\sqrt{\Omega_M(1+z)^3+\Omega_L/4}+\sqrt{\Omega_L/4}\right)^2 \end{equation} where the normal definition $\Omega_K=-k/(a H_0)^{2}$ has been used. The different models were compared with the two sets of data, one of which contained supernovae that had been analysed using the SALT method of determining magnitudes, while the other used the MLCS2k2 method. The data set analysed by the ESSENCE collaboration with SALT gave rise to anomalously large values of $\chi^2$, seemingly due to a few outlying data points. Since the data analysed using MLCS gave $\chi^2$ per degree of freedom rather close to one, we have more confidence in this data set and have restricted ourselves to using it. For this data set the intrinsic error in the magnitudes is taken to be 0.1 and the peculiar velocity error in the redshift, which is important for the lowest redshift supernovae, is assumed to be 400 km/s. To demonstrate the effect of different error assumptions, we present the resulting fits with and without inclusion of the peculiar velocity error. \begin{figure} \begin{tabular}{cc} \includegraphics[height=6cm,width=8cm]{n1MLCSnew.ps}& \includegraphics[height=6cm,width=8cm]{lcdmMLCSnew.ps}\cr \includegraphics[height=6cm,width=8cm]{n3MLCSnew.ps}& \includegraphics[height=6cm,width=8cm]{n4MLCSnew.ps}\cr \end{tabular} \caption{Supernovae from ESSENCE, SNLS and the nearby sample, analysed with the MLCS2k2 method, compared with models described by equation \ref{curvgeneral} with $\beta=0$, which include the basic DGP model ($n=1$). Because $\Lambda$CDM is at this level identical to the $n=2$ case, we plot the different models in order of increasing $n$. The pink concentric ellipses correspond to the 68\%, 95\% and 99\% confidence regions from the latest supernova data (\ref{lumdim}) when the peculiar velocity error is not included, the blue bands border the 99\% confidence region for the baryon acoustic peak data (\ref{bao}) while the green region borders the 99\% confidence region for the CMB shift parameter (\ref{cmb}). The pink dotted line corresponds to the 99\% confidence region for the supernova data when we have included the 400 km/s error in the redshift. The black line corresponds to spatially flat universes. \label{MLCSdgp} } \end{figure} \begin{figure} \begin{tabular}{cc} \includegraphics[height=6cm,width=8cm]{n1MLCSvsRiessgold.ps}& \includegraphics[height=6cm,width=8cm]{lcdmMLCSvsRiessgold.ps}\cr \end{tabular} \caption{Comparison between the results of fitting DGP and $\Lambda$CDM to the SNLS and ESSENCE supernova data set (filled-in 68\%, 95\% and 99\% confidence regions) and the Riess 07 Gold set (dotted lines). The solid black line corresponds to spatially flat universes.
\label{Riess} } \end{figure} Figure \ref{MLCSdgp} shows the comparison of the data (supernovae, galaxy survey baryon oscillation and CMB) with the DGP model and its variants, where the supernova data has been analysed using the MLCS2k2 method. The confidence regions reflect the reported {\em statistical} uncertainties of the various measurements. The potential impact from systematic effects is not addressed by this analysis. Note that at this level of expansion history the $n=2$ DGP variant is identical to $\Lambda$CDM, although presumably perturbations would grow very differently in the two models. The actual $\chi^2$ values corresponding to the two different models DGP (n=1) and $\Lambda$CDM (n=2), as well as the higher dimensional generalisations for n=3 and n=4, are listed in Tables \ref{chitable} and \ref{chitablenew}. The effects of adding the velocity errors are of course lower minimum $\chi^2$ values as well as a small shift in the best fit parameter values, explaining the increased area of the supernova confidence regions and their shift in parameter space seen in figure \ref{MLCSdgp}. \begin{table} \begin{center} \begin{tabular}{|l|c|c|c|c|c|} \hline $\beta=0$&\multicolumn{4}{|c|}{MLCS (162 data points)}\\ \cline{2-5} &n=1& $\Lambda$CDM (n=2) &n=3&n=4\\ \hline $\chi^2_{min}$(SNe)& 188 & 188 & 188& 188\\ $\chi^2_{min}$(SNe+flat)& 188 & 188 & 189& 189 \\ $\chi^2_{min}$(SNe+flat+CMB) & 200 & 188 & 190& 197\\ $\chi^2_{min}$(SNe+flat+CMB+BAO) &215 & 192 & 191& 197 \\ \hline \end{tabular} \end{center} \caption{Best fit $\chi^2$ values for the fits of different data sets to the various models. SNe is the supernova data set (analysed using the MLCS2k2 procedure) without inclusion of the redshift error due to the peculiar velocities; next, flatness is assumed, and then the CMB shift parameter is added to the data set. Finally, we include the baryon acoustic oscillation (BAO) result from the Sloan Digital Sky Survey.} \label{chitable} \end{table} \begin{table} \begin{center} \begin{tabular}{|l|c|c|c|c|c|} \hline $\beta=0$&\multicolumn{4}{|c|}{MLCS (162 data points)}\\ \cline{2-5} &n=1& $\Lambda$CDM (n=2) &n=3&n=4\\ \hline $\chi^2_{min}$(SNe)& 160 & 160 & 160 & 160\\ $\chi^2_{min}$(SNe+flat)& 161 & 161 & 161 & 162 \\ $\chi^2_{min}$(SNe+flat+CMB) & 171 & 161 & 163 & 170\\ $\chi^2_{min}$(SNe+flat+CMB+BAO) &180 & 163 & 164& 170 \\ \hline \end{tabular} \end{center} \caption{Same as table \ref{chitable} except that here the redshift error of 400 km/s has been included in the supernova data set.} \label{chitablenew} \end{table} All models fit the supernova data alone equally well, and fit the data more or less equally well when flatness is assumed. The requirement of having to fit the CMB data singles out $\Lambda$CDM as being slightly favoured over the other models, although the $\chi^2$ values are such that the DGP model is roughly 1-2 $\sigma$ disfavoured, depending on whether the peculiar velocity error has been included or not, which is not statistically significant enough to allow us to claim that the model is ruled out. If we add the baryon oscillation feature to the data, we find that we do seem to be able to disfavour the DGP model at the 1.5-3 $\sigma$ level, again depending on which errors have been included, but as we have already stated in section \ref{difdata}, the reconstruction of the peak in the correlation function from redshift space depends upon assumptions which are not valid in cosmologies where the equation of state varies over time.
At some point in the future it may be interesting to investigate such systematic effects quantitatively. Note also that even if we were to conclude from the last row of table \ref{chitable} that the DGP model is ruled out at the 3 $\sigma$ level, the $\Lambda$CDM model would then be disfavoured at the 2 $\sigma$ level. Looking instead at table \ref{chitablenew}, the $\chi^2$ values obtained for DGP and $\Lambda$CDM give goodness-of-fit values of 16\% and 46\%, respectively. Figure \ref{Riess} shows a comparison between the results of fitting the DGP model and the $\Lambda$CDM model respectively to the Riess 07 gold set, used in \cite{scoop}, and the SNLS and ESSENCE data set, used in this paper. Clearly, there are inconsistencies between the two published sets of SN data and we have therefore not attempted to combine them. We see that for supernova data and a flat prior only, the Riess data makes some distinction between the two models while the SNLS data does not. Upon including the CMB shift parameter prior, DGP and $\Lambda$CDM are equally well favoured by the Riess data, while the result in this paper is that DGP looks slightly less favoured than $\Lambda$CDM. \subsection{Best fit values of $n$.} An interesting exercise is to take the case $\beta=0$ and allow the number of extra dimensions $n$ to be a free, non-integer parameter. We then calculate which values of $n$ fit the data best when we allow $\Omega_M$ to be a free parameter. \begin{figure} \begin{tabular}{cc} \includegraphics[height=6cm,width=8cm]{ommnMLCS.ps}& \includegraphics[height=6cm,width=8cm]{ommnSALT.ps}\cr \end{tabular} \caption{Best fit values of $n$ for $\beta=0$ and different values of the matter density $\Omega_M$. Flatness is assumed and the CMB and BAO priors are included. On the left the supernova data used is analysed by MLCS2k2 and on the right the data is analysed by SALT.} \label{bestn} \end{figure} Figure \ref{bestn} shows that the best fit region when $\beta=0$ occurs for values of $n$ between 1.5 and 3. The case $n=2$ corresponds exactly to a cosmological constant (whether $\beta=0$ or not), so this tells us that without $\beta$ the best fit to the data lies somewhere around the $\Lambda$CDM models, in agreement with the previous conclusions of \cite{fairbairngoobar}, although the preferred value of $n$ may have increased slightly. \subsection{Models with non-zero $\beta$} As stated earlier, for the higher dimensional generalisations of DGP we should include the parameter $\beta$ which, while required for theoretical consistency, is an extra free parameter which renders the theory less predictive and makes it easier to fit the data. For this reason, for the higher dimensional cases, we will consider spatially flat universes with non-zero $\beta$. The original DGP model corresponds to $n=1$ and in that case there is no need for a regularisation parameter $\beta$. In terms of the large scale geometry of space-time, the $n=2$ case is equivalent to $\Lambda$CDM with or without $\beta$. We therefore only consider $n=3,4,5,6$. In all four of these models, the confidence region in the parameter space of $\Omega_M$ and $\Omega_L$ centres around $\Omega_M=0.26$. Taking this as our value for $\Omega_M$, we obtain a minimum value of $\Omega_L$ for each $n$ by looking at the 95$\%$ confidence level away from the best fit, corresponding to a minimum value of $\beta$ that increases for increasing $n$, as shown in table \ref{betatable}.
\begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline 95\% & $n=3$ & $n=4$ & $n=5$ & $n=6$\\ \hline $\Omega_L$ $>$ & (0.8) & 0.95 & 1.15 & 1.25 \\ $\beta$ $>$ & 0 & 0.23 & 0.74 & 1.05 \\ \hline \end{tabular} \end{center} \caption{Minimum values of $\Omega_L$ and $\beta$ (via equation (\ref{constraint}) with $\Omega_M=0.26$) with 95\% confidence. For $n=3$, the minimum value of $\Omega_L$ is restricted only through the cutoff where $\beta$ goes negative. \label{betatable}} \end{table} In figure \ref{betafig} we also show how the different cosmological constraints cut into the parameter space of $\beta$ and $\Omega_L$ by plotting the confidence regions for two of the higher dimensional models. \begin{figure} \begin{center} \includegraphics[height=9cm,width=12cm]{beta_oml.ps} \end{center} \caption{\label{betafig} Constraints on the higher dimensional versions of DGP taking into account the regularisation parameter $\beta$. The filled-in contours are the confidence levels for the combined $\chi^2$ for the $n=3$ case (see note at beginning of section). The combined $\chi^2$ for the $n=6$ case is also plotted for comparison using black lines, showing that for this case, zero $\beta$ is significantly disfavoured.} \end{figure} This shows that for a higher number of dimensions, the fit gets worse and one will have to increase the value of $\beta$ in order to get closer to the cosmological constant case which fits the data well. As $\beta$ becomes more dominant, the dark-energy term in eq. (\ref{dgpe}) approaches $\Omega_L/\beta$ and we can see from figure \ref{betafig} that the gradient approaches $\approx 0.7$ for larger $\beta$, which agrees with the usual best-fit value of $\Omega_{\Lambda}$. \section{Summary and Conclusions} Data from the Supernova Legacy Survey analysed with the SALT method has previously suggested that the DGP model is marginally disfavoured relative to $\Lambda$CDM \cite{fairbairngoobar}. At the same time, a larger data set including recent data from the Hubble Space Telescope and analysed using the MLCS approach has led to the conclusion that the DGP model is in fact perfectly safe \cite{scoop}. In this work, we have looked at the SNLS data and new data from the ESSENCE collaboration analysed with the MLCS2k2 algorithm as reported in \cite{essence}. Combining these data with the CMB constraint suggests that the DGP model is slightly disfavoured, and it becomes more disfavoured if one can treat the baryon acoustic peak as a valid data point. The magnitude of the change in the position of the acoustic peak in the galaxy correlation function when one moves from a background cosmology with a constant equation of state to a DGP universe depends both upon the different geometries and the way that structure grows in those universes. Since the growth of structure in a DGP universe may be rather different from that in $\Lambda$CDM, we are not able at this stage to say whether the DGP model is marginally or significantly disfavoured using this data. Either way, we find that the tests of the DGP model yield significant differences when using the ESSENCE supernova data and the ``gold set'' \cite{scoop,riess07}. The fact that the two data sets lead to different conclusions about the same model is very interesting and outlines the challenges which need to be overcome in order to move into the era of precision Dark Energy measurements. \ack We are grateful for discussions with Cedric Deffayet and Gia Dvali whilst doing this work.
MF thanks the Swedish Research Council and the Perimeter Institute for Theoretical Physics for their hospitality. AG would like to acknowledge support by the Swedish Research Council and the G\"oran Gustafsson Foundation for Research in Natural Sciences and Medicine. \vspace{1cm}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} We want to solve stably the equation \begin{equation} \label{eq1} Au=f, \end{equation} where $A$ is a linear bounded operator in a real Hilbert space $H$. We assume that \eqref{eq1} has a solution, possibly nonunique, and denote by $y$ the unique minimal-norm solution to \eqref{eq1}, $y\perp \mathcal{N}:=\mathcal{N}(A):= \{u: Au=0\}$, $Ay=f$. We assume that the range of $A$, $R(A)$, is not closed, so problem \eqref{eq1} is ill-posed. Let $f_\delta$, $\|f-f_\delta\|\leq\delta$, be the noisy data. We want to construct a stable approximation of $y$, given $\{\delta, f_\delta, A\}$. There are many methods for doing this, see, e.g., \cite{I}--\cite{M}, \cite{R499}, \cite{VV}, \cite{V}, to mention some (of the many) books, where variational regularization, quasisolutions, quasiinversion, and iterative regularization are studied, and \cite{R499}-\cite{R491}, where the Dynamical Systems Method (DSM) is studied systematically (see also \cite{R401}, \cite{VV}, \cite{T}, and references therein for related results). The basic new results of this paper are: 1) a new version of the DSM for solving equation \eqref{eq1} is justified; 2) a stable method for solving equation \eqref{eq1} with noisy data by the DSM is given; a priori and a posteriori stopping rules are proposed and justified; 3) an iterative method for solving linear ill-conditioned algebraic systems, based on the proposed version of DSM, is formulated; its convergence is proved; 4) numerical results are given; these results show that the proposed method yields a good alternative to some of the standard methods (e.g., to variational regularization, Landweber iterations, and some other methods). The DSM version we study in this paper consists of solving the Cauchy problem \begin{equation} \label{eqi2} \dot{u}(t)=- P(Au(t)-f),\quad u(0)=u_0, \quad u_0\perp \mathcal{N},\quad \dot{u}:=\frac{du}{dt}, \end{equation} and proving the existence of the limit $\lim_{t\to\infty}u(t)=u(\infty)$, and the relation $u(\infty)=y$, i.e., \begin{equation} \label{eqi3} \lim_{t\to\infty}\|u(t)-y\|=0. \end{equation} Here $P$ is a bounded operator such that $T:=PA\ge 0$ is selfadjoint, $\mathcal{N}(T)=\mathcal{N}(A)$. For any linear (not necessarily bounded) operator $A$ there exists a bounded operator $P$ such that $T=PA\ge 0$. For example, if $A=U|A|$ is the polar decomposition of $A$, then $|A|:=(A^*A)^{\frac{1}{2}}$ is a selfadjoint operator, $T:=|A|\ge 0$, $U$ is a partial isometry, $\|U\|=1$, and if $P:=U^*$, then $\|P\|=1$ and $PA=T$. Another choice of $P$, namely, $P=(A^*A + aI)^{-1}A^*$, $a=const>0$, is used in Section~\ref{numsec}. If the noisy data $f_\delta$ are given, $\|f_\delta-f\|\le \delta$, then we solve the problem \begin{equation} \label{eqi4} \dot{u}_\delta(t)=-P(Au_\delta(t)-f_\delta),\quad u_\delta(0)=u_0, \end{equation} and prove that, for a suitable stopping time $t_\delta$, and $u_\delta :=u_\delta(t_\delta)$, one has \begin{equation} \label{eqi5} \lim_{\delta\to 0}\|u_\delta-y\|=0. \end{equation} {\it A priori} and {\it a posteriori} methods for choosing $t_\delta$ are given. In Section 2 these results are formulated and recipes for choosing $t_\delta$ are proposed. In Section \ref{numsec} a numerical example is presented. \section{Formulation and results} Suppose $A: H\to H$ is a linear bounded operator in a real Hilbert space $H$. Assume that equation \eqref{eq1} has a solution, not necessarily unique. Denote by $y$ the unique minimal-norm solution, i.e., $y\perp \mathcal{N}:=\mathcal{N}(A)$.
Consider the DSM \eqref{eqi2} where $u_0\perp \mathcal{N}$ is arbitrary. Denote \begin{equation} \label{eq6x} T:=PA,\quad Q:=AP. \end{equation} The unique solution to \eqref{eqi2} is \begin{equation} \label{2eq7} u(t)=e^{-tT}u_0 + e^{-tT}\int_0^t e^{sT} ds Pf. \end{equation} Let us first show that any ill-posed linear equation \eqref{eq1} with exact data can be solved by the DSM. \subsection{Exact data} The following result is known (see \cite{R499}) but a short proof is included for completeness. \begin{thm} \label{thm1} Suppose $u_0\perp \mathcal{N}$ and $T^*=T\ge 0$. Then problem \eqref{eqi2} has a unique solution defined on $[0,\infty)$, and $u(\infty)=y$, where $u(\infty)=\lim_{t\to\infty} u(t)$. \end{thm} \begin{proof} Denote $w:=u(t)-y,\, w_0:=w(0)=u_0-y$. Note that $w_0\perp \mathcal{N}$. One has \begin{equation} \label{eq3} \dot{w}=-Tw, \quad T:=PA,\quad w(0)=u_0 - y. \end{equation} The unique solution to \eqref{eq3} is $w=e^{-tT}w_0$. Thus, $$ \|w\|^2=\int_0^{\|T\|} e^{-2t\lambda}d\langle E_\lambda w_0,w_0\rangle. $$ where $\langle u,v\rangle$ is the inner product in $H$, and $E_\lambda$ is the resolution of the identity of $T$. Thus, $$ \|w(\infty)\|^2 =\lim_{t\to \infty}\int_0^{\|T\|} e^{-2t\lambda}d\langle E_\lambda w_0,w_0\rangle=\|P_\mathcal{N}w_0\|^2=0, $$ where $P_\mathcal{N}=E_0-E_{-0}$ is the orthogonal projector onto $\mathcal{N}$. Theorem \ref{thm1} is proved. \end{proof} \subsection{Noisy data $f_\delta$} \label{Sec2.2} Let us solve stably equation \eqref{eq1} assuming that $f$ is not known, but $f_\delta$, the noisy data, are known, where $\|f_\delta-f\|\le \delta$. Consider the following DSM \begin{equation} \label{eq7delta} \dot{u}_\delta = - P(Au_\delta - f_\delta),\quad u_\delta(0)=u_0. \end{equation} Denote $$ w_\delta:=u_\delta-y,\quad T:=PA,\quad w_\delta(0)=w_0:=u_0-y\in \mathcal{N}^\perp. $$ Let us prove the following result: \begin{thm} \label{thm2} If $T=T^*\ge0$, $\lim_{\delta\to0}t_\delta = \infty,\,\lim_{\delta\to 0}t_\delta \delta=0$, and $w_0\in \mathcal{N}^\perp$, then $$ \lim_{\delta\to 0}\|w_\delta(t_\delta)\|=0. $$ \end{thm} \begin{proof} One has \begin{equation} \label{eq4} \dot{w}_\delta= -Tw_\delta + \zeta_\delta,\quad \, \zeta_\delta =P(f_\delta - f),\quad \|\zeta_\delta\|\le\|P\|\delta. \end{equation} The unique solution of equation \eqref{eq4} is $$ w_\delta(t)=e^{-tT}w_\delta(0)+\int_0^te^{-(t-s)T}\zeta_\delta ds. $$ Let us show that $\lim_{\delta\to 0} \|w_\delta(t_\delta)\|=0$. One has \begin{equation} \label{extra1} \lim_{t\to\infty} \|w_\delta(t)\| \le \lim_{t\to\infty}\|e^{-tT}w_\delta(0)\| +\lim_{t\to\infty}\bigg{\|}\int_0^te^{-(t-s)T}\zeta_\delta ds\bigg{\|}. \end{equation} Let $E_\lambda$ be the resolution of identity corresponding to $T$. One uses the spectral theorem and gets: \begin{equation} \label{eq5} \begin{split} \int_0^te^{-(t-s)T}ds\zeta_\delta&=\int_0^t\int_0^{\|T\|} dE_\lambda \zeta_\delta e^{-(t-s)\lambda} ds\\ &=\int_0^{\|T\|} e^{-t\lambda}\frac{e^{t\lambda}-1}{\lambda}dE_\lambda\zeta_\delta =\int_0^{\|T\|}\frac{1-e^{-t\lambda}}{\lambda}dE_\lambda\zeta_\delta. \end{split} \end{equation} Note that \begin{equation} \label{eq6} 0\le\frac{1-e^{-t\lambda}}{\lambda}\le t,\quad \forall \lambda>0,\quad t\ge 0, \end{equation} since $1-x\le e^{-x}$ for $x\ge 0$. 
From \eqref{eq5} and \eqref{eq6}, one obtains \begin{equation} \label{extra2} \begin{split} \bigg{\|}\int_0^te^{-(t-s)T}ds\zeta_\delta\bigg{\|}^2 &=\int_0^{\|T\|}\big{|}\frac{1-e^{-t\lambda}}{\lambda}\big{|}^2d\langle E_\lambda\zeta_\delta,\zeta_\delta\rangle\\ &\le t^2\int_0^{\|T\|} d\langle E_\lambda\zeta_\delta,\zeta_\delta\rangle\\ &=t^2\|\zeta_\delta\|^2. \end{split} \end{equation} Since $\|\zeta_\delta\|\le \|P\|\delta$, from \eqref{extra1} and \eqref{extra2}, one gets $$ \lim_{\delta\to0} \|w_\delta(t_\delta)\| \le \lim_{\delta\to 0}\bigg{(} \| e^{-t_\delta T}w_\delta(0)\|+t_{\delta}\delta\|P\|\bigg{)}=0. $$ Here we have used the relation: $$ \lim_{\delta\to 0}\| e^{-t_\delta T}w_\delta(0)\|=\|P_\mathcal{N}w_0\|=0, $$ and the last equality holds because $w_0\in \mathcal{N}^\perp$. Theorem \ref{thm2} is proved. \end{proof} From Theorem \ref{thm2}, it follows that the relation $$ t_\delta=\frac{C}{\delta^\gamma},\quad \gamma=\text{const},\quad \gamma\in(0,1) $$ where $C>0$ is a constant, can be used as an \textit{a priori} stopping rule, i.e., for such $t_\delta$ one has \begin{equation}\ \label{eq7} \lim_{\delta\to0}\|u_\delta(t_\delta)-y\|=0. \end{equation} \subsection{Discrepancy principle} In this section we assume that $A$ is a linear finite-rank operator. Thus, it is a linear bounded operator. Let us consider equation \eqref{eq1} with noisy data $f_\delta$, and a DSM of the form \begin{equation} \label{eq8} \dot{u}_\delta = - PA u_\delta + Pf_\delta,\quad u_\delta(0)=u_0. \end{equation} for solving this equation. Equation \eqref{eq8} has been used in Section~\ref{Sec2.2}. Recall that $y$ denotes the minimal-norm solution of equation \eqref{eq1}. Example of a choice of $P$ is given in Section~\ref{numsec}. \begin{thm} \label{thm3} Let $T:=PA$, $Q:=AP$. Assume that $\|Au_0-f_\delta\|> C\delta$, $Q=Q^*\ge 0$, $T^*=T\ge0$, $T$ is a finite-rank operator. Let $\mathcal{N}(T)=:\mathcal{N}$. Note that $\mathcal{N}(T)=\mathcal{N}(A)$. The solution $t_\delta$ to the equation \begin{equation} \label{eq9} h(t):=\|Au_\delta(t)- f_\delta\|=C\delta,\quad C=\text{const},\quad C\in (1,2), \end{equation} does exist, is unique, and \begin{equation} \label{eq10} \lim_{\delta\to 0} \|u_\delta(t_\delta)-y\|=0, \end{equation} where $y$ is the unique minimal-norm solution to \eqref{eq1}. \end{thm} \begin{proof} Denote $$ v_\delta(t):=Au_\delta(t)- f_\delta,\quad w(t):=u(t)-y,\quad w_0:=u_0-y. $$ One has \begin{equation} \label{eq11} \begin{split} \frac{d}{dt}\|v_\delta(t)\|^2 &= 2\langle A\dot{u}_\delta(t),Au_\delta(t)-f_\delta \rangle\\ &= 2\langle A[-P(Au_\delta(t) - f_\delta)],Au_\delta(t)-f_\delta \rangle\\ &=-2\langle AP(Au_\delta-f_\delta),Au_\delta-f_\delta\rangle\le 0. \end{split} \end{equation} where the last inequality holds because $AP=Q\ge0$. Thus, $\|v_\delta(t)\|$ is a nonincreasing function. Let us prove that equation \eqref{eq9} has a solution for $C\in (1,2)$. One has the following commutation formulas: $$ e^{-sT}P=Pe^{-sQ},\quad Ae^{-sT}=e^{-sQ}A. $$ Using these formulas and the representation $$ u_\delta(t)=e^{-tT}u_0+\int_0^te^{-(t-s)T}Pf_\delta ds, $$ one gets: \begin{equation} \label{eq0302} \begin{split} v_\delta(t) &= Au_\delta(t)-f_\delta\\ &= Ae^{-tT}u_0+A\int_0^te^{-(t-s)T}Pf_\delta ds -f_\delta \\ &= e^{-t Q}Au_0+e^{-t Q}\int_0^{t} e^{sQ}dsQf_\delta-f_\delta \\ &= e^{-tQ}A(u_0-y)+e^{-tQ}f+e^{-tQ}(e^{tQ}-I)f_\delta - f_\delta\\ &= e^{-t Q}Aw_0 -e^{-t Q}f_\delta +e^{-t Q}f= e^{-tQ}Au_0 - e^{-tQ}f_\delta. 
\end{split} \end{equation} Note that $$ \lim_{t\to\infty}e^{-t Q}Aw_0=\lim_{t\to\infty}Ae^{-t T}w_0 = AP_\mathcal{N}w_0=0. $$ Here the continuity of $A$ and the following relation $$ \lim_{t\to\infty}e^{-tT}w_0=\lim_{t\to\infty}\int_0^{\|T\|}e^{-st}dE_sw_0=(E_0-E_{-0})w_0=P_\mathcal{N}w_0, $$ were used. Therefore, \begin{equation} \label{eq12} \lim_{t\to\infty}\|v_\delta(t)\|=\lim_{t\to\infty}\|e^{-t Q}(f-f_\delta)\|\le \|f-f_\delta\|\le\delta, \end{equation} where $\|e^{-tQ}\|\le 1$ because $Q\ge0$. The function $h(t)$ is continuous on $[0,\infty)$, $h(0)=\|Au_0-f_\delta\|>C\delta$, $h(\infty)\le \delta$. Thus, equation \eqref{eq9} must have a solution $t_\delta$. Let us prove the uniqueness of $t_\delta$. If $t_\delta$ is non-unique, then without loss of generality we can assume that there exists $t_1>t_\delta$ such that $\|Au_\delta(t_1)- f_\delta\|=C\delta$. Since $\|v_\delta(t)\|$ is nonincreasing and $\|v_\delta(t_\delta)\|=\|v_\delta(t_1)\|$, one has $$ \|v_\delta(t)\|=\|v_\delta(t_\delta)\|,\quad \forall t\in [t_\delta, t_1]. $$ Thus, \begin{equation} \label{eq13} \frac{d}{dt}\|v_\delta(t)\|^2=0,\quad \forall t\in (t_\delta, t_1). \end{equation} Using \eqref{eq11} and \eqref{eq13} one obtains $$ \| \sqrt{AP}(Au_\delta(t)-f_\delta)\|^2=\langle AP(Au_\delta(t)-f_\delta), Au_\delta(t)-f_\delta\rangle= 0,\quad \forall t\in [t_\delta,t_1], $$ where $\sqrt{AP}=Q^{\frac{1}{2}}\ge 0$ is well defined since $Q=Q^*\ge 0$. This implies $Q^{\frac{1}{2}}(Au_\delta-f_\delta)=0$. Thus \begin{equation} \label{thich1} \begin{split} Q(Au_\delta(t)-f_\delta)=0,\quad \forall t\in [t_\delta,t_1]. \end{split} \end{equation} From \eqref{eq0302} one gets: \begin{equation} \label{3eq23} v_\delta(t)= Au_\delta(t)-f_\delta= e^{-tQ}Au_0 - e^{-tQ}f_\delta. \end{equation} Since $Qe^{-tQ}=e^{-tQ}Q$ and $e^{-tQ}$ is an isomorphism, equalities \eqref{thich1} and \eqref{3eq23} imply \begin{align*} Q(Au_0 - f_\delta)=0. \end{align*} This and \eqref{3eq23} imply $$ AP(Au_\delta(t)-f_\delta)=e^{-tQ}(QAu_0 - Qf_\delta)=0,\quad t\ge0. $$ This and \eqref{eq11} imply \begin{equation} \label{eq14} \frac{d}{dt}\|v_\delta\|^2=0,\quad t\ge0. \end{equation} Consequently, $$ C\delta<\|Au_\delta(0)-f_\delta\|=\|v_\delta(0)\| =\|v_\delta(t_\delta)\| =\|Au_\delta(t_\delta)-f_\delta\| =C\delta. $$ This is a contradiction which proves the uniqueness of $t_\delta$. Let us prove \eqref{eq10}. First, we have the following estimate: \begin{equation} \label{eq16} \begin{split} \|Au(t_\delta)-f\|&\le \|Au(t_\delta)-Au_\delta(t_\delta) \|+\|Au_\delta(t_\delta)-f_\delta\|+\|f_\delta -f\|\\ &\le \bigg{\|}e^{-t_\delta Q}\int_0^{t_\delta}e^{sQ}Qds \bigg{\|} \|f_\delta-f\|+C\delta+\delta, \end{split} \end{equation} where $u(t)$ solves \eqref{eqi2} and $u_\delta(t)$ solves \eqref{eq7delta}. One uses the inequality: $$ \big{\|}e^{-t_\delta Q}\int_0^{t_\delta}e^{sQ}Qds \big{\|}= \|I-e^{-t_\delta Q}\|\le 2, $$ and concludes from \eqref{eq16}, that \begin{equation} \label{eq17} \lim_{\delta\to0}\|Au(t_\delta)-f\|=0. \end{equation} Secondly, we claim that $$ \lim_{\delta\to0}t_\delta=\infty. $$ Assume the contrary. Then there exist $t_0>0$ and a sequence $(t_{\delta_n})_{n=1}^\infty$, $t_{\delta_n}<t_0$, such that \begin{equation} \label{eq18} \lim_{n\to\infty}\|Au(t_{\delta_n})-f\|=0. \end{equation} Analogously to \eqref{eq11}, one proves that $$ \frac{d}{dt}\|v\|^2\le 0, $$ where $v(t):=Au(t)-f$. Thus, $\|v(t)\|$ is nonincreasing. This and \eqref{eq18} imply the relation $\|v(t_0)\|=\|Au(t_0)-f\|=0$. Thus, $$ 0=v(t_0)=e^{-t_0Q}A(u_0-y). 
$$ This implies $A(u_0-y)=e^{t_0Q}e^{-t_0Q}A(u_0-y)=0$, so $u_0-y\in \mathcal{N}$. Since $u_0-y\in \mathcal{N}^\perp$, it follows that $u_0=y$. This is a contradiction because $$ C\delta\le\|Au_0-f_\delta\|=\|f-f_\delta\|\le\delta, \quad 1<C<2. $$ Thus, \begin{equation} \label{eq03new1} \lim_{\delta\to0}t_\delta=\infty. \end{equation} Let us continue the proof of \eqref{eq10}. From \eqref{eq0302} and the relation $\|Au_\delta(t_\delta)-f_\delta\|=C\delta$, one has \begin{equation} \label{5eq30} \begin{split} C\delta t_\delta &=\|t_\delta e^{-t_\delta Q}Aw_0 - t_\delta e^{-t_\delta Q}(f_\delta - f)\|\\ &\le\|t_\delta e^{-t_\delta Q}Aw_0\| + \| t_\delta e^{-t_\delta Q}(f_\delta - f)\|\\ &\le \|t_\delta e^{-t_\delta Q}Aw_0\| + t_\delta \delta. \end{split} \end{equation} We claim that \begin{equation} \label{hoithay} \lim_{\delta \to 0} t_\delta e^{-t_\delta Q}Aw_0 =\lim_{\delta\to 0}t_\delta Ae^{-t_\delta T}w_0= 0. \end{equation} Note that \eqref{hoithay} holds if $T\ge0$ has finite rank, and $w_0\in \mathcal{N}^\perp$. It also holds if $T\ge 0$ is compact and the Fourier coefficients $w_{0j}:=\langle w_0,\phi_j\rangle$, $T\phi_j=\lambda_j\phi_j$, decay sufficiently fast. In this case $$ \|Ae^{-tT}w_0\|^2\le\|T^{\frac{1}{2}}e^{-tT}w_0\|^2 =\sum_{j=1}^\infty \lambda_je^{-2\lambda_jt}|w_{0j}|^2:=S=o(\frac{1}{t^2}), \quad t\to\infty, $$ provided that $\sum_{j=1}^\infty|w_{0j}|^2\lambda_j^{-2}<\infty$. Indeed, $S=\sum_{\lambda_j\le\frac{1}{t^{\frac{2}{3}}}} +\sum_{\lambda_j>\frac{1}{t^{\frac{2}{3}}}}:=S_1+S_2$. One has $$ S_1\le\frac{1}{t^2}\sum_{\lambda_j\le t^{-\frac{2}{3}}}\frac{|w_{0j}|^2}{\lambda_j^2}=o(\frac{1}{t^2}), \quad S_2\le ce^{-2t^{\frac{1}{3}}}=o(\frac{1}{t^2}),\quad t\to\infty, $$ where $c>0$ is a constant. From \eqref{hoithay} and \eqref{5eq30}, one gets $$ 0\le \lim_{\delta\to0} (C-1)\delta t_\delta \le \lim_{\delta\to 0} \|t_\delta e^{-t_\delta Q}Aw_0\|=0. $$ Thus, \begin{equation} \label{eq03new2} \lim_{\delta\to 0}\delta t_\delta=0. \end{equation} Now \eqref{eq10} follows from \eqref{eq03new1}, \eqref{eq03new2} and Theorem~\ref{thm2}. Theorem~\ref{thm3} is proved. \end{proof} \subsection{An iterative scheme} \label{itersec} Let us solve stably equation \eqref{eq1} assuming that $f$ is not known, but $f_\delta$, the noisy data, are known, where $\|f_\delta-f\|\le \delta$. Consider the following discrete version of the DSM: \begin{equation} \label{eq7ndelta} u_{n+1,\delta} = u_{n,\delta} - hP(Au_{n,\delta} - f_\delta),\quad u_{0,\delta}=u_0. \end{equation} Let us denote $u_n:=u_{n,\delta}$ when $\delta\not=0$, and set $$ w_n:=u_n-y,\quad T:=PA,\quad w_0:=u_0-y\in \mathcal{N}^\perp. $$ Let $n=n_\delta$ be the stopping rule for iterations \eqref{eq7ndelta}. Let us prove the following result: \begin{thm} \label{4thm2} Assume that $T=T^*\ge0$, $h\|T\|< 2$, $\lim_{\delta\to0}n_\delta h = \infty,\,\lim_{\delta\to 0}n_\delta h \delta=0$, and $w_0\in \mathcal{N}^\perp$. Then \begin{equation} \label{eq32x} \lim_{\delta\to 0}\|w_{n_\delta}\|= \lim_{\delta\to 0}\|u_{n_\delta}-y\| =0. \end{equation} \end{thm} \begin{proof} One has \begin{equation} \label{4eq4} w_{n+1} = w_n -h Tw_n + h \zeta_\delta,\quad \, \zeta_\delta =P(f_\delta - f),\quad \|\zeta_\delta\|\le\|P\|\delta,\quad w_0=u_0-y. \end{equation} The unique solution of equation \eqref{4eq4} is $$ w_{n+1} = (I-hT)^{n+1}w_0 + h\sum_{i=0}^n(I-hT)^i \zeta_\delta. $$ Let us show that $\lim_{\delta\to 0} \|w_{n_\delta}\|=0$.
One has \begin{equation} \label{4extra1} \|w_n\| \le \|(I-hT)^{n}w_0\| + \bigg{\|}h\sum_{i=0}^{n-1}(I-hT)^i \zeta_\delta\bigg{\|}. \end{equation} Let $E_\lambda$ be the resolution of identity corresponding to $T$. One uses the spectral theorem and gets: \begin{equation} \label{4eq5} \begin{split} h\sum_{i=0}^{n-1}(I-hT)^i &= h\sum_{i=0}^{n-1}\int_0^{\|T\|} (1-h\lambda)^i dE_\lambda\\ &=h\int_0^{\|T\|} \frac{1 - (1-\lambda h)^{n}}{1-(1-h\lambda)}dE_\lambda =\int_0^{\|T\|} \frac{1 - (1-\lambda h)^{n}}{\lambda}dE_\lambda. \end{split} \end{equation} Note that \begin{equation} \label{4eq6} 0\le\frac{1-(1-h\lambda )^{n}}{\lambda}\le hn,\quad \forall \lambda>0,\quad n\ge 0, \end{equation} since $1-(1-\alpha)^n\le \alpha n$ for all $\alpha \in [0,2]$. From \eqref{4eq5} and \eqref{4eq6}, one obtains \begin{equation} \label{4extra2} \begin{split} \bigg{\|} h\sum_{i=0}^{n-1}(I-hT)^i\zeta_\delta\bigg{\|}^2 &=\int_0^{\|T\|}\big{|}\frac{1 - (1-\lambda h)^{n}}{\lambda}\big{|}^2d\langle E_\lambda\zeta_\delta,\zeta_\delta\rangle\\ &\le (hn)^2\int_0^{\|T\|} d\langle E_\lambda\zeta_\delta,\zeta_\delta\rangle\\ &= (nh)^2\|\zeta_\delta\|^2. \end{split} \end{equation} Since $\|\zeta_\delta\|\le \|P\|\delta$, from \eqref{4extra1} and \eqref{4extra2}, one gets $$ \lim_{\delta\to0} \|w_{n_\delta}\| \le \lim_{\delta\to 0}\bigg{(} \| (I-hT)^{n_\delta}w_0\|+ hn_\delta \delta\|P\|\bigg{)}=0. $$ Here we have used the relation: $$ \lim_{\delta\to 0}\|(I-hT)^{n_\delta}w_0\|=\|P_\mathcal{N}w_0\|=0, $$ and the last equality holds because $w_0\in \mathcal{N}^\perp$. Theorem \ref{4thm2} is proved. \end{proof} From Theorem \ref{4thm2}, it follows that the relation $$ n_\delta=\frac{C}{h\delta^\gamma},\quad \gamma=\text{const},\quad \gamma\in(0,1) $$ where $C>0$ is a constant, can be used as an \textit{a priori} stopping rule, i.e., for such $n_\delta$ one has \begin{equation}\ \label{4eq7} \lim_{\delta\to0}\|u_{n_\delta}-y\|=0. \end{equation} \subsection{An iterative scheme with a stopping rule based on a discrepancy principle} In this section we assume that $A$ is a linear finite-rank operator. Thus, it is a linear bounded operator. Let us consider equation \eqref{eq1} with noisy data $f_\delta$, and a DSM of the form \begin{equation} \label{5eq8} u_{n+1} = u_n - hP(A u_n - f_\delta),\quad n\ge 0, \end{equation} with the initial approximation $u_0$, for solving this equation. Equation \eqref{5eq8} has been used in Section~\ref{itersec}. Recall that $y$ denotes the minimal-norm solution of equation \eqref{eq1}. An example of a choice of $P$ is given in Section~\ref{numsec}. Note that $\mathcal{N}:=\mathcal{N}(T)=\mathcal{N}(A)$. \begin{thm} \label{5thm3} Let $T:=PA$, $Q:=AP$. Assume that $\|Au_0-f_\delta\|> C\delta$, $Q=Q^*\ge 0$, $T^*=T\ge0$, $h\|T\|< 2$, $h\|Q\|< 2$, and $T$ is a finite-rank operator. Then there exists a unique $n_\delta$ such that \begin{equation} \label{5eq9} \|Au_{n_\delta}- f_\delta\|\le C\delta < \|Au_{n_\delta-1}- f_\delta\|,\quad C=\text{const},\quad C\in (1,2). \end{equation} For this $n_\delta$ one has: \begin{equation} \label{5eq10} \lim_{\delta\to 0} \|u_{n_\delta} -y\|=0. \end{equation} \end{thm} \begin{proof} Denote $$ v_n:=Au_n - f_\delta,\quad w_n:=u_n-y,\quad w_0:=u_0-y. $$ From \eqref{5eq8}, one gets \begin{align*} v_{n+1} &= Au_{n+1} - f_\delta = Au_n -f_\delta - h AP(Au_n - f_\delta) = v_n - h Qv_n.
\end{align*} This implies \begin{equation} \label{5eq11} \begin{split} \|v_{n+1}\|^2 - \|v_{n}\|^2 &= \langle v_{n+1} - v_n, v_{n+1} + v_n \rangle\\ &= \langle -hQ v_n, v_n - hQv_n + v_n \rangle\\ &= -\langle v_n, hQ(2-hQ)v_n \rangle\le 0,\\ \end{split} \end{equation} where the last inequality holds because $AP=Q\ge0$ and $h\|Q\| < 2$. Thus, $(\|v_n\|)_{n=1}^\infty$ is a nonincreasing sequence. Let us prove that equation \eqref{5eq9} has a solution for $C\in (1,2)$. One has the following commutation formulas: $$ (I - hT)^nP=P(I - hQ)^n,\quad A(I - hT)^n=(I - hQ)^n A. $$ Using these formulas, the representation $$ u_n = (I - hT)^nu_0 + h\sum_{i=0}^{n-1}(I - hT)^iPf_\delta, $$ and the identity $(I-B)\sum_{i=0}^{n-1}B^i=I-B^n$, with $B=I-hQ$, $I-B=hQ$, one gets: \begin{equation} \label{5eq0302} \begin{split} v_n &= Au_n-f_\delta\\ &= A(I - hT)^n u_0 +Ah\sum_{i=0}^{n-1}(I - hT)^i Pf_\delta -f_\delta\\ &= (I - hQ)^nAu_0+\sum_{i=0}^{n-1}(I - hQ)^ihQf_\delta-f_\delta \\ &= (I - hQ)^nAu_0 - (I-(I - hQ)^n)f_\delta-f_\delta\\ &= (I - hQ)^n(Au_0-f)+(I - hQ)^n(f- f_\delta)\\ &= (I - hQ)^nAw_0 + (I - hQ)^n(f-f_\delta) \end{split} \end{equation} If $V=V^*\geq 0$ is an operator with $\|V\|\leq 2$, then $\|I-V\|=\sup_{0\leq s \leq 2}|1-s|\leq 1$. Note that $$ \lim_{n\to\infty}(I - hQ)^n Aw_0=\lim_{n\to\infty}A(I - hT)^nw_0 = AP_\mathcal{N}w_0=0, $$ where $P_\mathcal{N}$ is the orthoprojection onto the null-space $\mathcal{N}$ of the operator $T$, and the continuity of $A$ and the following relation $$ \lim_{n\to\infty}(I - hT)^n w_0=\lim_{n\to\infty}\int_0^{\|T\|}(1-s h)^ndE_sw_0 =(E_0-E_{-0})w_0=P_\mathcal{N}w_0,\quad 0\le sh < 2, $$ were used. Therefore, \begin{equation} \label{5eq12} \lim_{n\to\infty}\|v_n\|=\lim_{n\to\infty}\|(I-hQ)^n(f-f_\delta)\|\le \|f-f_\delta\|\le\delta, \end{equation} where $\|I-hQ\|\le 1$ because $Q\ge0$ and $h\|Q\|<2$. The sequence $\{\|v_n\|\}_{n=1}^\infty$ is nonincreasing with $\|v_0\|>C\delta$ and $\lim_{n\to\infty}\|v_n\|\le\delta$. Thus, there exists $n_\delta >0$ such that \eqref{5eq9} holds. Let us prove \eqref{5eq10}. Let $u_{n,0}$ be the sequence defined by the relations: $$ u_{n+1,0} = u_{n,0} - hP(Au_{n,0}-f),\quad u_{0,0}=u_0. $$ First, we have the following estimate: \begin{equation} \label{5eq16} \begin{split} \|Au_{n_\delta,0}-f\|&\le \|Au_{n_\delta}-Au_{n_\delta,0}\|+\|Au_{n_\delta}-f_\delta\|+\|f_\delta -f\|\\ &\le \bigg{\|}\sum_{i= 0}^{n_\delta -1}(I - hQ)^ihQ\bigg{\|} \|f_\delta-f\|+C\delta+\delta. \end{split} \end{equation} Since $0\le hQ<2$, one has $\|I-hQ\|\leq 1$. This implies the following inequality: $$ \bigg{\|}\sum_{i= 0}^{n_\delta -1}(I - hQ)^ihQ\bigg{\|}=\|I-(I - hQ)^{n_\delta}\|\le 2, $$ and one concludes from \eqref{5eq16} that \begin{equation} \label{5eq17} \lim_{\delta\to0}\|Au_{n_\delta,0}-f\|=0. \end{equation} Secondly, we claim that $$ \lim_{\delta\to0}hn_\delta=\infty. $$ Assume the contrary. Then there exist $n_0>0$ and a sequence $(n_{\delta_n})_{n=1}^\infty$, $n_{\delta_n}<n_0$, such that \begin{equation} \label{5eq18} \lim_{n\to\infty}\|Au_{n_{\delta_n},0}-f\|=0. \end{equation} Analogously to \eqref{5eq11}, one proves that $$ \|v_{n,0}\|\le\|v_{n-1,0}\|, $$ where $v_{n,0}=Au_{n,0}-f$. Thus, the sequence $\|v_{n,0}\|$ is nonincreasing. This and \eqref{5eq18} imply the relation $\|v_{n_0,0}\|=\|Au_{n_0,0}-f\|=0$. Thus, $$ 0=v_{n_0,0}=(I - hQ)^{n_0}A(u_0-y). $$ This implies $A(u_0-y)=(I - hQ)^{-n_0}(I - hQ)^{n_0}A(u_0-y)=0$, so $u_0-y\in \mathcal{N}$. Since, by the assumption, $u_0-y\in \mathcal{N}^\perp$, it follows that $u_0=y$.
This is a contradiction because $$ C\delta\le\|Au_0-f_\delta\|=\|f-f_\delta\|\le\delta, \quad 1<C<2. $$ Thus, \begin{equation} \label{5eq03new1} \lim_{\delta\to0}hn_\delta=\infty. \end{equation} Let us continue the proof of \eqref{5eq10}. From \eqref{5eq0302} and $\|Au_{n_\delta}-f_\delta\|=C\delta$, one has \begin{equation} \label{eq47x} \begin{split} C\delta n_\delta h &=\|n_\delta h(I - hQ)^{n_\delta}Aw_0 - n_\delta h(I - hQ)^{n_\delta}(f_\delta - f)\|\\ &\le\|n_\delta h(I - hQ)^{n_\delta}Aw_0\| + \|n_\delta h(I - hQ)^{n_\delta}(f_\delta - f)\|\\ &\le \|n_\delta h(I - hQ)^{n_\delta}Aw_0\| + n_\delta h\delta. \end{split} \end{equation} We claim that if $w_0\in\mathcal{N}^\perp$, $0\le hT<2$, and $T$ is a finite-rank operator, then \begin{equation} \label{5hoithay} \lim_{\delta \to 0} n_\delta h(I - hQ)^{n_\delta}Aw_0 =\lim_{\delta\to 0}n_\delta h A(I - hT)^{n_\delta}w_0= 0. \end{equation} From \eqref{eq47x} and \eqref{5hoithay} one gets $$ 0\le \lim_{\delta\to0} (C-1)\delta hn_\delta \le \lim_{\delta\to 0} \|n_\delta h(I - hQ)^{n_\delta}Aw_0\|=0. $$ Thus, \begin{equation} \label{5eq03new2} \lim_{\delta\to 0}\delta n_\delta h=0. \end{equation} Now \eqref{5eq10} follows from \eqref{5eq03new1}, \eqref{5eq03new2} and Theorem~\ref{4thm2}. Theorem~\ref{5thm3} is proved. \end{proof} \section{Numerical experiments} \label{numsec} \subsection{Computing $u_\delta(t_\delta)$} In \cite{R540} the DSM \eqref{eq7delta} was investigated with $P=A^*$, and the SVD of $A$ was assumed known. In general, it is computationally expensive to get the SVD of large-scale matrices. In this paper, we have derived an iterative scheme for solving ill-conditioned linear algebraic systems $Au=f_\delta$ without using the SVD of $A$. Choose $P=(A^*A+aI)^{-1}A^*$, where $a$ is a fixed positive constant. This choice of $P$ satisfies all the conditions in Theorem~\ref{thm3}. In particular, $Q=AP=A(A^*A+aI)^{-1}A^*=AA^*(AA^* + aI)^{-1}\ge 0$ is a selfadjoint operator, and $T=PA=(A^*A+aI)^{-1}A^*A\ge 0$ is a selfadjoint operator. Since $$ \|T\|=\bigg{\|}\int_0^{\|A^*A\|} \frac{\lambda}{\lambda + a}dE_\lambda\bigg{\|} =\sup_{0\le \lambda\le \|A^*A\|}\frac{\lambda}{\lambda + a}<1, $$ where $E_\lambda$ is the resolution of the identity of $A^*A$, the condition $h\|T\|< 2$ in Theorem~\ref{5thm3} is satisfied for all $ 0<h\le 1$. Set $h=1$ and $P=(A^*A+aI)^{-1}A^*$ in \eqref{5eq8}. Then one gets the following iterative scheme: \begin{equation} \label{eq30} u_{n+1} = u_n - (A^*A+aI)^{-1}(A^*Au_n - A^*f_\delta),\quad u_0=0. \end{equation} For simplicity we have chosen $u_0=0$. However, one may choose $u_0=v_0$ if $v_0$ is known to be a better approximation to $y$ than $0$ and $v_0\in \mathcal{N}^\perp$. In iterations \eqref{eq30} we use a stopping rule of discrepancy type. Namely, we stop the iterations as soon as $u_n$ satisfies the following condition: \begin{equation} \|Au_n - f_\delta\| \le 1.01 \delta. \end{equation} The choice of $a$ affects both the accuracy and the computation time of the method. If $a$ is too large, one needs more iterations to approach the desired accuracy, so the computation time will be large. If $a$ is too small, then the results become less accurate, because for too small $a$ the inversion of the operator $A^*A+aI$ is an ill-posed problem, since the operator $A^*A$ is not boundedly invertible.
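To make the scheme concrete, the following is a minimal numerical sketch of iterations \eqref{eq30} with the above stopping rule of discrepancy type (Python/NumPy; the function and variable names are ours, not from the cited works). Note that the factorization of $A^*A+aI$ is computed only once and reused at every step:
\begin{verbatim}
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def dsm_solve(A, f_delta, delta, a, C=1.01, max_iter=500):
    # DSM iterations (eq30): u_{n+1} = u_n - (A'A + aI)^{-1}(A'A u_n - A'f_delta),
    # stopped as soon as ||A u_n - f_delta|| <= C * delta.
    k = A.shape[1]
    AtA, Atf = A.T @ A, A.T @ f_delta
    # A'A + aI is symmetric positive definite for a > 0: factorize once, reuse.
    factor = cho_factor(AtA + a * np.eye(k))
    u = np.zeros(k)  # u_0 = 0
    for n in range(max_iter):
        if np.linalg.norm(A @ u - f_delta) <= C * delta:
            break
        u = u - cho_solve(factor, AtA @ u - Atf)
    return u
\end{verbatim}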
Using the idea of the choice of the initial guess of the regularization parameter in \cite{R526}, we choose $a$ to satisfy the following condition: \begin{equation} \label{eq32} \delta\le \phi(a):=\|A(A^*A+aI)^{-1}A^*f_\delta - f_\delta\| \le 2\delta. \end{equation} This can be done by using the following strategy: \begin{enumerate} \item{Choose $a:=\frac{\delta \|A\|^2}{3\|f_\delta\|}$ as an initial guess for $a$.} \item{Compute $\phi(a)$. If $a$ satisfies \eqref{eq32}, we are done. Otherwise, we go to step 3.} \item{If $c=\frac{\phi(a)}{\delta} > 2$, we replace $a$ by $\frac{a}{2(c-1)}$ and go back to step 2. Otherwise, we go to step 4.} \item{If $c=\frac{\phi(a)}{\delta} <1$, we replace $a$ by $3a$. If the inequality $c< 1$ has occurred in some iteration before, we stop the iteration and use $3a$ as our choice for $a$ in iterations \eqref{eq30}. Otherwise, we go back to step 2.} \end{enumerate} In our experiments, we denote by DSM the iterative scheme \eqref{eq30}, by VR$_i$ a Variational Regularization method (VR) with $a$ as the regularization parameter, and by VR$_n$ the VR in which Newton's method is used for finding the regularization parameter using a discrepancy principle. We compare these methods in terms of relative error and number of iterations, denoted by n$_{iter}$. All the experiments were carried out in double-precision arithmetic using MATLAB. \subsection{A linear algebraic system related to an inverse problem for the heat equation} In this section, we apply the DSM and the VR to solve a linear algebraic system used in \cite{R526}. This linear algebraic system is a part of the numerical solution of an inverse problem for the heat equation. This problem is reduced to a Volterra integral equation of the first kind with $[0,1]$ as the integration interval. The kernel is $K(s,t)=k(s-t)$ with $$ k(t)=\frac{t^{-3/2}}{2\kappa \sqrt{\pi}}\exp(-\frac{1}{4\kappa^2 t}). $$ Here, we use the value $\kappa=1$. In this test in \cite{R526} the integral equation was discretized by means of simple collocation and the midpoint rule with $n$ points. The unique exact solution $u_n$ is constructed, and then the right-hand side $b_n$ is produced as $b_n=A_nu_n$ (see \cite{R526}). In our test, we use $n=10,20,\dots,100$ and $b_{n,\delta} = b_n + e_n$, where $e_n$ is a vector containing random entries, normally distributed with mean 0, variance 1, and scaled so that $\|e_n\|=\delta_{rel}\|b_n\|$. This linear system is severely ill-conditioned: the condition number of $A_{100}$, obtained by using the function {\it cond} provided in MATLAB, is $1.3717\times 10^{37}$.
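Before turning to the results, the four-step strategy above for choosing $a$ can be sketched as follows (a minimal implementation under the same notation; the function name is ours, and the merged test $c>2$ in step 3 covers both cases $c>3$ and $2<c\le 3$ of the text, which prescribe the same update):
\begin{verbatim}
import numpy as np

def choose_a(A, f_delta, delta, max_steps=50):
    # Find a such that delta <= phi(a) <= 2*delta, where
    # phi(a) = ||A (A'A + aI)^{-1} A' f_delta - f_delta||.
    k = A.shape[1]
    AtA, Atf = A.T @ A, A.T @ f_delta
    phi = lambda a: np.linalg.norm(
        A @ np.linalg.solve(AtA + a * np.eye(k), Atf) - f_delta)
    a = delta * np.linalg.norm(A, 2) ** 2 / (3 * np.linalg.norm(f_delta))  # step 1
    c_was_below_one = False
    for _ in range(max_steps):
        c = phi(a) / delta                  # step 2
        if 1.0 <= c <= 2.0:                 # condition delta <= phi(a) <= 2*delta holds
            return a
        if c > 2.0:                         # step 3
            a = a / (2.0 * (c - 1.0))
        else:                               # step 4: c < 1
            if c_was_below_one:
                return 3.0 * a
            c_was_below_one = True
            a = 3.0 * a
    return a
\end{verbatim}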
\begin{table}[ht] \caption{Numerical results for the inverse heat equation with $\delta_{rel}=0.05$, $n=10i,\, i=\overline{1,10}$.} \label{heattab1} \centering \small \begin{tabular}{@{}c|cc|cc|cc@{}} \hline &\multicolumn{2}{c|}{DSM}&\multicolumn{2}{c|}{VR$_{i}$}&\multicolumn{2}{c}{VR$_{n}$}\\ $n$& n$_{\text{iter}}$&$\frac{\|u_\delta-y\|_{2}}{\|y\|_2}$& n$_{\text{iter}}$&$\frac{\|u_\delta-y\|_{2}}{\|y\|_2}$& n$_{\text{iter}}$&$\frac{\|u_\delta-y\|_{2}}{\|y\|_2}$\\ \hline 10 &3 &0.1971 &1 &0.2627 &5 &0.2117\\ 20 &4 &0.3359 &1 &0.4589 &5 &0.3551\\ 30 &4 &0.3729 &1 &0.4969 &5 &0.3843\\ 40 &4 &0.3856 &1 &0.5071 &5 &0.3864\\ 50 &5 &0.3158 &1 &0.4789 &6 &0.3141\\ 60 &6 &0.2892 &1 &0.4909 &6 &0.3060\\ 70 &7 &0.2262 &1 &0.4792 &8 &0.2156\\ 80 &6 &0.2623 &1 &0.4809 &7 &0.2600\\ 90 &5 &0.2856 &1 &0.4816 &7 &0.2715\\ 100 & 7 &0.2358 &1 &0.4826 &7 &0.3405\\ \hline \end{tabular} \end{table} Table~\ref{heattab1} shows that the results obtained by the DSM are comparable to those by the VR$_{n}$ in terms of accuracy. The time of computation of the DSM is comparable to that of the VR$_{n}$. In some situations, the results by the VR$_{n}$ and the DSM are the same, although the VR$_n$ uses 3 more iterations than the DSM does. The conclusion from this table is that the DSM competes favorably with the VR$_{n}$ in both accuracy and time of computation. Figure~\ref{figheat} plots numerical solutions to the inverse heat equation for $\delta_{rel}=0.05$ and $\delta_{rel}=0.01$ when $n=100$. From the figure we can see that the numerical solutions obtained by the DSM are about the same as those obtained by the VR$_{n}$. In these examples, the time of computation of the DSM is about the same as that of the VR$_{n}$. \begin{figure}[!h!tb] \centerline{% \includegraphics[scale=0.9]{heat10.eps}} \caption{Plots of solutions obtained by DSM, VR for the inverse heat equation when $n=100$, $\delta_{rel}=0.05$ (left) and $\delta_{rel}=0.01$ (right).} \label{figheat} \end{figure} The conclusion is that the DSM competes favorably with the VR$_{n}$ in this experiment. \section{Concluding remark} Iterative scheme \eqref{eq30} can be considered as a modification of the Landweber iterations. The difference between the two methods is the multiplication by $(A^*A+aI)^{-1}$. Our iterative method is much faster than the conventional Landweber iterations. Iterative method \eqref{eq30} is an analog of the Gauss-Newton method. It can be considered as a regularized Gauss-Newton method for solving ill-conditioned linear algebraic systems. The advantage of using \eqref{eq30} instead of using (4.1.3) in \cite{R526} is that one only has to compute the lower upper (LU) decomposition of $A^*A+aI$ once, while the algorithm in \cite{R526} requires computing an LU decomposition at every step. Note that computing the LU is the main cost for solving a linear system. Numerical experiments show that the new method competes favorably with the VR in our experiments.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{submission} For the last decade, we have witnessed the unprecedented success of deep neural networks (DNN) in various applications in computer vision, classification, medical imaging, etc. Aside from traditional applications such as classification \cite{krizhevsky2012imagenet}, segmentation \cite{ronneberger2015u}, image denoising \cite{zhang2017beyond}, super-resolution \cite{kim2016accurate}, etc., deep learning approaches have already become the state-of-the-art technologies in various inverse problems in x-ray CT, MRI, etc.\ \cite{kang2016deep, jin2017deep,hammernik2018learning}. However, the more we see the success of deep learning, the more mysterious the nature of deep neural networks becomes. In particular, the amazing aspects of {\em expressive power, generalization capability}, and {\em optimization landscape} of DNNs have become an intellectual challenge for the machine learning community, leading to many new theoretical results with varying capacities to facilitate the understanding of deep neural networks \cite{ge2017optimization,hanin2017relu,yarotsky2017error,nguyen2017loss,arora2016understanding,du2018gradient,raghu2017expressive,bartlett2017spectrally,neyshabur2018towards,nguyen2018optimization,rolnick2017power,shen2018differential}. \begin{figure*}[!hbt] \centerline{\includegraphics[width=14cm,height=4cm]{encoder_decoder.png}} \vspace*{-0.2cm} \caption{An architecture of $\kappa$-layer symmetric encoder-decoder CNN with skipped connections. Here, $q_l$ denotes the number of channels at the $l$-th layer, whereas $m_l$ refers to each channel dimension, and $d_l$ represents the total dimension of the feature at the $l$-th layer.} \vspace*{-0.5cm} \label{fig:network} \end{figure*} In inverse problems, one of the most widely employed network architectures is the so-called encoder-decoder CNN architecture \cite{ronneberger2015u}. In contrast to the simplified form of the neural networks that are often used in theoretical analysis, these encoder-decoder CNNs usually have more complicated network architectures such as symmetric network configuration, skipped connections, etc. Therefore, it is not clear how the aforementioned theory can be used to understand the geometry of encoder-decoder CNNs to examine the origin of their superior performance. Recently, the authors in \cite{ye2017deep} proposed the so-called deep convolutional framelets to explain the encoder-decoder CNN architecture from a signal processing perspective. The main idea is that a data-driven decomposition of the Hankel matrix constructed from the input data provides encoder-decoder layers that have a striking similarity to the encoder-decoder CNNs. However, one of the main weaknesses of the theory is that it is not clear where the exponential expressiveness comes from. Moreover, many theoretical issues of neural networks such as generalizability and the optimization landscape, which have been extensively studied in the machine learning literature, have not been addressed. Therefore, this work aims at filling the gap and finding the connections between machine learning and signal processing to provide a unified theoretical analysis that facilitates the geometric understanding of encoder-decoder CNNs. Accordingly, we have revealed the following geometric features of encoder-decoder CNNs: \begin{itemize} \item An encoder-decoder CNN with an over-parameterized feature layer approximates a map between two smooth manifolds that is decomposed as a high-dimensional embedding followed by a quotient map.
\item An encoder-decoder CNN with ReLU nonlinearity can be understood as deep convolutional framelets that use combinatorial frames of spatially varying convolutions. Accordingly, the number of linear representations increases exponentially with the network depth. This also suggests that the input space is divided into non-overlapping areas where each area shares the common linear representation. \item We derive an explicit form of the Lipschitz condition that determines the generalization capability of the encoder-decoder CNNs. The expression shows that the expressiveness of the network is not affected by the control of the Lipschitz constant. \item We provide explicit conditions under which the optimization landscape for encoder-decoder CNNs is benign. Specifically, we show that the skipped connections play an important role in smoothing out the optimization landscape. \end{itemize} All the proofs of the theorems and lemmas in this paper are included in the Supplementary Material. \section{Related Works} Choromanska et al.\ \cite{choromanska2015loss} employed the spin glass model from statistical physics to analyze the representation power of deep neural networks. Telgarsky constructs interesting classes of functions that can only be computed efficiently by deep ReLU nets, but not by shallower networks with a similar number of parameters \cite{telgarsky2016colt}. Arora et al.\ \cite{arora2016understanding} showed that for every natural number $k$ there exists a ReLU network with $k^2$ hidden layers and total size of $k^2$, which can be represented by $\frac{1}{2}k^{k+1}-1$ neurons with at most $k$ hidden layers. All these results agree that the expressive power of deep neural networks increases exponentially with the network depth. The generalization capability has been addressed in terms of various complexity measures such as Rademacher complexity \cite{bartlett2002rademacher}, VC bound \cite{anthony2009neural}, Kolmogorov complexity \cite{schmidhuber1997discovering}, etc. However, a recent work \cite{zhang2016understanding} showed the intriguing result that these classical bounds are too pessimistic to explain the generalizability of deep neural networks. Moreover, it has been repeatedly shown that over-parameterized deep neural networks, which are trained with fewer samples than the number of neurons, generalize well rather than overfitting \cite{cohen2018dnn,wei2018margin,brutzkus2017sgd,du2018power}, a phenomenon that cannot be explained by the classical complexity results. The optimization landscape of neural networks has been another important theoretical issue. Originally observed in linear deep neural networks \cite{kawaguch2016nips}, the benign optimization landscape has been consistently observed in various neural networks \cite{du2018gradient,nguyen2018optimization,du2017gradient,nguyen2017loss}. However, these theoretical works mainly focus on simplified network architectures, and we are not aware of any analysis for encoder-decoder CNNs. \section{Encoder-Decoder CNNs} \subsection{Definition} In this section, we provide a formal definition of encoder-decoder CNNs (E-D CNNs) to facilitate the theoretical analysis. Although our definition is for 1-dimensional signals, its extension to 2-D images is straightforward. \subsubsection{Basic Architecture} Consider encoder-decoder networks in Fig.~\ref{fig:network}.
Specifically, the encoder network maps a given input signal $x\in\boldsymbol{\mathcal X}\subset {\mathbb R}^{d_0}$ to a feature space $z \in \boldsymbol{\mathcal Z}\subset {\mathbb R}^{d_\kappa}$, whereas the decoder takes this feature map as an input, processes it, and produces an output $y \in \boldsymbol{\mathcal Y}\subset {\mathbb R}^{d_L}$. In this paper, a symmetric configuration is considered so that both encoder and decoder have the same number of layers, say $\kappa$, and the input and output dimensions for the encoder layer ${\mathcal E}^l$ and the decoder layer ${\mathcal D}^l$ are symmetric: \begin{eqnarray*} {\mathcal E}^l:{\mathbb R}^{d_{l-1}} \mapsto {\mathbb R}^{d_l}, \quad {\mathcal D}^l:{\mathbb R}^{d_{l}} \mapsto {\mathbb R}^{d_{l-1}} \end{eqnarray*} where $l\in [\kappa]$ with $[n]$ denoting the set $\{1,\cdots, n\}$; both the input and output dimensions are $d_0$. More specifically, the $l$-th layer input signal for the encoder layer comes from $q_{l-1}$ input channels $$\xi^{l-1}=\begin{bmatrix} \xi_1^{l-1\top} & \cdots & \xi^{l-1\top}_{q_{l-1}} \end{bmatrix}^\top \in {\mathbb R}^{d_{l-1}}, \quad $$ where $^\top$ denotes the transpose, and $\xi_j^{l-1} \in {\mathbb R}^{m_{l-1}}$ refers to the $j$-th channel input with the dimension $m_{l-1}$. Therefore, the overall input dimension is given by $d_{l-1}:= {m_{l-1}q_{l-1}}$. Then, the $l$-th layer encoder generates a $q_l$-channel output using the convolution operation: \begin{eqnarray}\label{eq:encConv} \xi_j^l = \sigma\left(\Phi^{l\top} \sum_{k=1}^{q_{l-1}}\left(\xi_k^{l-1}\circledast \overline \psi_{j,k}^l\right)\right) ,~j\in [q_l] \end{eqnarray} where $\xi_j^l \in {\mathbb R}^{m_l}$ refers to the $j$-th channel output after the convolutional filtering with the $r$-tap filters $\overline\psi_{j,k}^l\in {\mathbb R}^r$ and the pooling operation $\Phi^{l\top} \in {\mathbb R}^{m_l \times m_{l-1}}$, and $\sigma(\cdot)$ denotes the element-wise rectified linear unit (ReLU). More specifically, $\overline\psi_{j,k}^l\in {\mathbb R}^r$ denotes the $r$-tap convolutional kernel that is convolved with the $k$-th input to contribute to the output of the $j$-th channel, $\circledast$ is the circular convolution via a periodic boundary condition to avoid special treatment of the convolution at the boundary, and $\overline v$ refers to the flipped version of the vector $v$. For the formal definition of the convolution operation used in this paper, see Appendix~A in Supplementary Material.
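For concreteness, a minimal sketch of one encoder layer \eqref{eq:encConv} is given below (Python/NumPy). As a stand-in for the convention of Appendix~A, which we do not reproduce here, we assume circular convolution implemented via the FFT and the periodic flip $\overline\psi[n]=\psi[(-n)\bmod m]$; all function names are ours:
\begin{verbatim}
import numpy as np

def circ_conv(x, h):
    # circular convolution with periodic boundary; h is zero-padded to len(x)
    m = len(x)
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, m)))

def flip_periodic(h, m):
    # assumed flip convention: bar(psi)[n] = psi[(-n) mod m], zero-padded to length m
    hp = np.zeros(m)
    hp[:len(h)] = h
    return np.roll(hp[::-1], 1)

def encoder_layer(xi, psi, Phi):
    # xi:  list of q_{l-1} input channels, each of length m_{l-1}
    # psi: psi[j][k] is the r-tap filter from input channel k to output channel j
    # Phi: m_{l-1} x m_l pooling matrix; Phi^T is applied after the filtering
    m = len(xi[0])
    out = []
    for j in range(len(psi)):
        s = sum(circ_conv(xi[k], flip_periodic(psi[j][k], m))
                for k in range(len(xi)))
        out.append(np.maximum(Phi.T @ s, 0.0))  # pooling followed by ReLU
    return out
\end{verbatim}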
Moreover, as shown in Appendix~B in Supplementary Material, an equivalent matrix representation of the encoder layer is then given by \begin{eqnarray*} \xi^l:=\sigma(E^{l\top} \xi^{l-1})=\begin{bmatrix} \xi^{l\top}_1 & \cdots & \xi^{l\top}_{q_{l}} \end{bmatrix}^\top \end{eqnarray*} where $E^l \in {\mathbb R}^{d_{l-1}\times d_{l}}$ is computed by\footnote{Here, without loss of generality, the bias term is not explicitly shown, since it can be incorporated into the matrices $E^l$ and $D^l$ as an additional column.} \begin{eqnarray}\label{eq:El} E^l= \begin{bmatrix} \Phi^l\circledast \psi^l_{1,1} & \cdots & \Phi^l\circledast \psi^l_{q_l,1} \\ \vdots & \ddots & \vdots \\ \Phi^l\circledast \psi^l_{1,q_{l-1}} & \cdots & \Phi^l\circledast \psi^l_{q_{l},q_{l-1}} \end{bmatrix} \end{eqnarray} with \begin{eqnarray \begin{bmatrix} \Phi^l \circledast \psi_{i,j}^l \end{bmatrix} :=\begin{bmatrix} \phi^l_1 \circledast \psi_{i,j}^l & \cdots & \phi^l_{m_l} \circledast \psi_{i,j}^l\end{bmatrix} \label{eq:defconv} \end{eqnarray} On the other hand, the $l$-th layer input signal for the decoder layer comes from $q_{l}$ channel inputs, i.e. $\tilde\xi^{l}=\begin{bmatrix}\tilde\xi_1^{l\top} & \cdots & \tilde\xi^{l\top}_{q_{l}} \end{bmatrix}^\top \in {\mathbb R}^{d_{l}},$ and the decoder layer convolution is given by \begin{eqnarray}\label{eq:decConv} \tilde\xi_j^{l-1} = \sigma\left(\sum_{k=1}^{q_{l}}\left(\tilde\Phi^l\tilde\xi^{l}_k\circledast {\tilde\psi_{j,k}^l}\right)\right) ,\quad j\in [q_{l-1}] \end{eqnarray} where the unpooling layer is denoted by $\tilde\Phi^l \in {\mathbb R}^{m_{l-1}\times m_{l}}$. Note that \eqref{eq:encConv} and \eqref{eq:decConv} differ in the order of the convolution and the pooling or unpooling layers. Specifically, a pooling operation is applied after the convolution at the encoder layer, whereas, at the decoder, an unpooling operation is performed before the convolution to maintain the symmetry of the networks. In matrix form, a decoder layer is given by \begin{eqnarray*} \tilde \xi^{l-1}:=\sigma(D^l \tilde\xi^{l})=\begin{bmatrix} \tilde\xi^{l-1\top}_1 &\cdots & \tilde\xi^{l-1\top}_{q_{l-1}} \end{bmatrix}^\top \ \end{eqnarray*} where $D^l \in {\mathbb R}^{d_{l-1}\times d_{l}}$ is computed by \begin{eqnarray}\label{eq:Dl} D^l= \begin{bmatrix} \tilde\Phi^l\circledast \tilde\psi^l_{1,1} & \cdots & \tilde\Phi^l\circledast \tilde\psi^l_{1,q_l} \\ \vdots & \ddots & \vdots \\ \tilde\Phi^l\circledast \tilde\psi^l_{q_{l-1},1} & \cdots & \tilde\Phi^l\circledast \tilde\psi^l_{q_{l-1},q_{l}} \end{bmatrix} \end{eqnarray}
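The equivalence of the convolutional form \eqref{eq:encConv} and the matrix form $\xi^l=\sigma(E^{l\top}\xi^{l-1})$ rests on the identity $(\Phi\circledast\psi)^\top\xi=\Phi^\top(\xi\circledast\overline\psi)$, i.e., each block of $E^{l\top}$ acts as filtering followed by pooling. This can be checked numerically with a short sketch (reusing \texttt{circ\_conv} and \texttt{flip\_periodic} from the previous snippet; all sizes and data below are arbitrary stand-ins):
\begin{verbatim}
import numpy as np

def conv_block(Phi, psi):
    # the block [Phi (*) psi] of (eq:defconv): column i is phi_i (*) psi
    return np.stack([circ_conv(Phi[:, i], psi)
                     for i in range(Phi.shape[1])], axis=1)

rng = np.random.default_rng(0)
m, m_out, r = 16, 8, 3
Phi = rng.standard_normal((m, m_out))   # stand-in pooling matrix
psi = rng.standard_normal(r)            # stand-in r-tap filter
xi = rng.standard_normal(m)             # stand-in input channel
lhs = conv_block(Phi, psi).T @ xi                    # block of E^{l T} applied to xi
rhs = Phi.T @ circ_conv(xi, flip_periodic(psi, m))   # filtering, then pooling
assert np.allclose(lhs, rhs)
\end{verbatim}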
As shown in Fig.~\ref{fig:network}, the skipped branch is not filtered further at the subsequent layers, but is merged at the symmetric decoder layer: \begin{eqnarray* \tilde\xi_j^{l-1} = \sigma\left(\sum_{k=1}^{q_{l}}\left((\tilde\Phi^{l}\tilde\xi^{l}_k+\chi^l_k)\circledast {\tilde\psi_{j,k}^{l}}\right)\right) \end{eqnarray*} In matrix form, the encoder layer with the skipped connection can be represented by $$ {\mathcal E}^l: \xi^{l-1} \mapsto \begin{bmatrix} \xi^{l\top} & \chi^{l\top} \end{bmatrix}^\top $$ where \begin{eqnarray} \xi^l := \sigma(E^{l\top} \xi^{l-1}) &, & \chi^l := \sigma(S^{l\top} \xi^{l-1}) \label{eq:feature} \end{eqnarray} where $E^l$ is given in \eqref{eq:El} and the skipped branch filter matrix $S^l$ is represented by \begin{eqnarray}\label{eq:Sl} S^l= \begin{bmatrix} I_{m_{l-1}}\circledast \psi^l_{1,1} & \cdots & I_{m_{l-1}}\circledast \psi^l_{q_l,1} \\ \vdots & \ddots & \vdots \\ I_{m_{l-1}}\circledast \psi^l_{1,q_{l-1}} & \cdots & I_{m_{l-1}}\circledast \psi^l_{q_{l},q_{l-1}} \end{bmatrix} \end{eqnarray} where $I_{m_{l-1}}$ denotes the $m_{l-1}\times m_{l-1}$ identity matrix. This implies that we can regard the skipped branch as the identity pooling $I_{m_{l-1}}$ applied to the filtered signals. Here, we denote the output dimension of the skipped connection as $$s_l:=m_{l-1} q_l.$$ Then, the skipped branch at the $l$-th encoder layer is merged at the $l$-th decoder layer, which is defined as $$ {\mathcal D}^{l}: \begin{bmatrix} \tilde\xi^{l\top} & \chi^{l\top} \end{bmatrix}^\top \mapsto \tilde\xi^{l-1} $$ where \begin{eqnarray}\label{eq:sum} \tilde\xi^{l-1} :=\sigma(D^{l} \tilde\xi^{l}+\tilde S^{l}\chi^l) \end{eqnarray} and $D^{l}$ is defined in \eqref{eq:Dl}, and $\tilde S^l$ is given by \begin{eqnarray}\label{eq:El2} \tilde S^l= \begin{bmatrix} I_{m_{l-1}}\circledast \tilde\psi^l_{1,1} & \cdots & I_{m_{l-1}}\circledast \tilde\psi^l_{1,q_{l}} \\ \vdots & \ddots & \vdots \\ I_{m_{l-1}}\circledast \tilde\psi^l_{q_{l-1},1} & \cdots & I_{m_{l-1}}\circledast \tilde\psi^l_{q_{l-1},q_{l}} \end{bmatrix} \end{eqnarray} \subsection{Parameterization of E-D CNNs} \vspace*{-0.2cm} At the $l$-th encoder (resp. decoder) layer, there is a set of $q_lq_{l-1}$ filters that generates the $q_l$ (resp. $q_{l-1}$) output channels from the $q_{l-1}$ (resp. $q_{l}$) input channels. In many CNNs, the filter lengths are set to be equal across the layers. In our case, we set this as $r$, so the number of filter coefficients for the $l$-th layer is $$n_l:=rq_lq_{l-1}, \quad l\in[\kappa]$$ These parameters should be estimated during the training phase. Specifically, by denoting the set of all parameter matrices $\boldsymbol{\mathcal W}=\boldsymbol{\mathcal W}_E\times \boldsymbol{\mathcal W}_D$ where $\boldsymbol{\mathcal W}_E:={\mathbb R}^{n_\kappa} \times \cdots \times {\mathbb R}^{n_{1}}$ and $\boldsymbol{\mathcal W}_D:={\mathbb R}^{n_1} \times \cdots \times {\mathbb R}^{n_\kappa}$, we compose all layer-wise maps to define an encoder-decoder CNN as \begin{eqnarray}\label{eq:Fcnn} z = F({\mathbf W},x) . \end{eqnarray} Regardless of the existence of skipped connections, note that the same number of unknown parameters is used because the skipped connection uses the same set of filters. \vspace*{-0.1cm} \section{Theoretical Analysis of E-D CNNs} \vspace*{-0.1cm} \subsection{Differential Topology} First, we briefly revisit the work by Shen \cite{shen2018differential}, which gives a topological insight into the E-D CNNs.
\begin{proposition}[Extension of Theorem~3 in \cite{shen2018differential}]\label{thm:embedding} Let $f : \boldsymbol{\mathcal X}\mapsto \boldsymbol{\mathcal Y} \subset {\mathbb R}^q$ be a continuous map of smooth manifolds such that $f=g \circ h$, where $g : {\mathbb R}^p \mapsto {\mathbb R}^q$ with $p\geq q$ is a Lipschitz continuous map. If $p > 2 \dim\boldsymbol{\mathcal X}$, then there exists a smooth embedding $\tilde h: \boldsymbol{\mathcal X} \mapsto {\mathbb R}^p$, so that the following inequality holds true for a chosen norm and all $x\in \boldsymbol{\mathcal X}$ and $\epsilon >0$: $$\|f(x) -g\circ \tilde h(x)\|\leq \epsilon$$ \end{proposition} Here, $p > 2 \dim\boldsymbol{\mathcal X}$ comes from the weak Whitney embedding theorem \cite{whitney1936differentiable,tu2011introduction}. Note that Proposition~\ref{thm:embedding} tells us that a neural network, designed as a continuous map of smooth manifolds, can be considered as an approximation of a task map that is composed of a smooth embedding followed by an additional map. In fact, this decomposition is quite general for a map between smooth manifolds, as shown in the following proposition: \begin{proposition}\label{thm:quotient}\cite{shen2018differential} Let $f : \boldsymbol{\mathcal X}\mapsto \boldsymbol{\mathcal Y}\subset {\mathbb R}^q$ be a map of smooth manifolds, then the task $ f$ admits a decomposition of $f = g\circ h$, where $ h: \boldsymbol{\mathcal X} \mapsto \boldsymbol{\mathcal Z} \subset {\mathbb R}^p$ with $p \geq 2 \dim \boldsymbol{\mathcal X}$ is a smooth embedding. Furthermore, the task map $f$ is a quotient map, if and only if the map $g$ is a quotient map. \end{proposition} To understand the meaning of the last sentence in Proposition~\ref{thm:quotient}, we briefly review the concept of the quotient space and quotient map \cite{tu2011introduction}. Specifically, let $\sim$ be an equivalence relation on $\boldsymbol{\mathcal X}$. Then, the quotient space, $\boldsymbol{\mathcal Y} = \boldsymbol{\mathcal X}/\sim$ is defined to be the set of equivalence classes of elements of $\boldsymbol{\mathcal X}$. For example, we can declare images perturbed by noise as an equivalence class, so that our quotient map is designed to map the noisy signals to their noiseless equivalent image. It is remarkable that Proposition~\ref{thm:embedding} and Proposition~\ref{thm:quotient} give interpretable conditions for design parameters such as network width (i.e., the number of channels), pooling layers, etc. For example, if there are no pooling layers, the dimensionality conditions in Proposition~\ref{thm:embedding} and Proposition~\ref{thm:quotient} can be easily met in practice by increasing the number of channels to more than twice the number of input channels. With the pooling layers, one could calculate the number of channels in a similar way. In general, Proposition~\ref{thm:embedding} and Proposition~\ref{thm:quotient} strongly suggest an encoder-decoder architecture with the constraint $d_0\leq d_1\leq \cdots \leq d_\kappa$ with $d_\kappa > 2 d_0$, where an encoder maps an input signal to a higher-dimensional feature space whose dimension is more than twice that of the input space. Then, the decoder determines the nature of the overall neural network. \subsection{Links to the frame representation} One of the important contributions of the recent theory of deep convolutional framelets \cite{ye2017deep} is that encoder-decoder CNNs have an interesting link to multi-scale convolution framelet expansions.
To see this, we first define filter matrices $\Psi^l \in {\mathbb R}^{rq_{l-1}\times q_{l}}$ and $\tilde\Psi^{l} \in {\mathbb R}^{rq_{l-1}\times q_{l}}$ for encoder and decoder: $$ \Psi^l := \begin{bmatrix} \psi^l_{1,1}& \cdots & \psi^l_{q_l,1} \\ \vdots & \ddots & \vdots \\ \psi^l_{1,q_{l-1}} & \cdots & \psi^l_{q_{l},q_{l-1}} \end{bmatrix} $$ $$ \tilde\Psi^{l} :=\begin{bmatrix} \tilde\psi^{l}_{1,1}& \cdots & \tilde\psi^{l}_{1,q_{l}} \\ \vdots & \ddots & \vdots \\ \tilde\psi^{l}_{q_{l-1},1} & \cdots & \tilde\psi^{l}_{q_{l-1},q_{l}} \end{bmatrix} $$ Then, the following proposition, which is novel and significantly extended from \cite{ye2017deep}, states the importance of the frame conditions for the pooling layers and filters to obtain convolution framelet expansion \cite{yin2017tale}. \begin{proposition}\label{thm:PR} Consider an encoder-decoder CNN without ReLU nonlinearities. Let $\Phi^{l\top}$ and $\tilde\Phi^{l}$ denote the $l$-th encoder and decoder layer pooling layers, respectively, and $\Psi^l$ and $\tilde\Psi^{l}$ refer to the encoder and decoder filter matrices. Then, the following statements are true. 1) For the encoder-decoder CNN without skipped connection, if the following frame conditions are satisfied for all $l\in [\kappa]$ \begin{eqnarray}\label{eq:PRl} \tilde\Phi^{l}\Phi^{l\top}=\alpha I_{m_{l-1}},~ \Psi^l \tilde\Psi^{l\top} = \frac{1}{r\alpha}I_{rq_{l-1}} \end{eqnarray} then we have \begin{eqnarray}\label{eq:PR0} x &=& \sum_{i} \langle b_i, x \rangle \tilde b_i \end{eqnarray} where $b_i$ and $\tilde b_i$ denote the $i$-th column of the following frame basis and its dual: \begin{eqnarray} B&=& E^1E^2 \cdots E^{\kappa},~\quad \label{eq:Bc}\\ \tilde B &=& D^1D^2 \cdots D^{\kappa} \label{eq:tBc} \end{eqnarray} 2) For the encoder-decoder CNN with skipped connection, if the following frame conditions are satisfied for all $l\in [\kappa]$: \begin{eqnarray}\label{eq:PRl2} \tilde\Phi^{l}\Phi^{l\top}=\alpha I_{m_{l-1}},~ \Psi^l \tilde\Psi^{l\top} = \frac{1}{r(\alpha+1)}I_{rq_{l-1}} \end{eqnarray} then \eqref{eq:PR0} holds, where $b_i$ and $\tilde b_i$ denote the $i$-th column of the following frame and its duals: \begin{eqnarray}\label{eq:Btot} B^{skp} \quad ( \in {\mathbb R}^{d_0\times (d_\kappa+\sum_{l=1}^\kappa s_l})) \end{eqnarray} $$:= \begin{bmatrix} E^1\cdots E^\kappa &E^1\cdots E^{\kappa-1}S^\kappa & \cdots & E^1S^2& S^1 \end{bmatrix}$$ \begin{eqnarray}\label{eq:tBtot} \tilde B^{skp} \quad ( \in {\mathbb R}^{d_0\times (d_\kappa+\sum_{l=1}^\kappa s_l})) \end{eqnarray} $$:= \begin{bmatrix} D^1\cdots D^\kappa &D^1\cdots D^{\kappa-1}\tilde S^\kappa & \cdots & D^1\tilde S^2& \tilde S^1 \end{bmatrix}$$ \end{proposition} Furthermore, the following corollary shows that the total basis and its dual indeed come from multiple convolutional operations across layers: \begin{corollary}\label{thm:multi} If there exist no pooling layers, then the $t$-th block of the frame basis matrix for $t\in[q_l]$ is given by $$\left[E^{1}\cdots E^{l}\right]_{t}=\left[E^{1}\cdots E^{l-1}S^l \right]_{t}$$ $$ =I_m \circledast \left(\sum_{j_{l-1},\cdots, j_1=1}^{q_{l-1},\cdots,q_1} \psi_{j_1,1}^l\circledast \cdots \circledast \psi_{t,j_{l-1}}^{l} \right)$$ Similarly, $$\left[D^{1}\cdots D^{l}\right]_{t}=\left[D^{1}\cdots D^{l-1}\tilde S^l \right]_{t}$$ $$ =I_m\circledast \left(\sum_{j_{l-1},\cdots, j_1=1}^{q_{l-1},\cdots,q_1} \tilde\psi_{j_1,1}^l\circledast \cdots \circledast {\tilde\psi}_{t,j_{l-1}}^{l} \right)$$ \end{corollary} This suggests that the length of the convolutional filters increases with 
the depth by cascading multiple convolution operations across the layers. While Proposition~\ref{thm:PR} shows that the skipped connection increases the dimension of the feature space from $d_\kappa$ to $d_\kappa+\sum_{l=1}^\kappa s_l$, Corollary~\ref{thm:multi} suggests that the cascaded expression of the filters becomes more diverse for encoder-decoder CNNs with skipped connection. Specifically, instead of convolving all $\kappa$ layers of filters, the skipped connection allows combinations of subsets of the filters. All of this makes the frame representation with skipped connections more expressive. \subsection{Expressiveness} However, to satisfy the frame conditions \eqref{eq:PRl} or \eqref{eq:PRl2}, we need $q_l\geq rq_{l-1}$, so the number of output filter channels $q_l$ would have to grow exponentially with depth. While this condition can be relaxed when the underlying signal has a low-rank Hankel matrix structure \cite{ye2017deep}, the frame condition is still rarely imposed explicitly in practice. Moreover, in contrast to classical wavelet analysis, the perfect reconstruction condition itself is not the goal in neural networks, since the output of the network should differ from the input due to the task-dependent processing. Here, we claim that one of the important roles of the ReLU is that it allows combinatorial basis selection, so that an exponentially large number of basis expansions becomes available once the network is trained. This is in contrast with standard framelet basis estimation. For example, given target data $Y = \begin{bmatrix} y^{(1)} & \cdots & y^{(T)} \end{bmatrix}$ and input data $X= \begin{bmatrix} x^{(1)} & \cdots & x^{(T)} \end{bmatrix}$, one could estimate a single frame basis and its dual as in Proposition~\ref{thm:PR}; such a basis is optimal for the given training data, but the resulting network is not expressive and does not generalize well when a different type of input is presented. Thus, one of the important requirements is to allow a large number of representations that adapt to different inputs. Indeed, the ReLU nonlinearity makes the network more expressive. For example, consider a trained two-layer encoder-decoder CNN: \begin{eqnarray} y = \tilde B \Lambda(x) B^\top x \end{eqnarray} where $\tilde B, B\in {\mathbb R}^{d_0\times d_1}$ and $\Lambda(x)$ is a diagonal matrix with 0 and 1 entries that are determined by the ReLU output. Now, the matrix can be equivalently represented by \begin{eqnarray}\label{eq:proj} \tilde B\Lambda(x)B^\top = \sum_{i=1}^{d_1} \sigma_i(x) \tilde b_i b_i^{\top} \end{eqnarray} where $\sigma_i(x)$ refers to the $(i,i)$-th diagonal element of $\Lambda(x)$. Therefore, depending on the input data $x\in {\mathbb R}^{d_0}$, each $\sigma_i(x)$ is either 0 or 1, so that up to $2^{d_1}$ distinct configurations of the matrix can be represented using \eqref{eq:proj}, which is significantly more expressive than a single representation with one frame and its dual. This observation can be generalized as shown in Theorem~\ref{thm:decexp}.
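Before stating the general result, the two-layer representation \eqref{eq:proj} is easy to verify numerically. The following minimal Python sketch checks that the ReLU output coincides with the masked linear expansion and counts the distinct activation patterns that are encountered; the sizes $d_0$, $d_1$ and the random weights are illustrative assumptions, not taken from any trained network.
\begin{verbatim}
# Numerical sketch of the masked expansion: the ReLU acts as an
# input-dependent diagonal 0/1 mask Lambda(x), so each input selects
# one of at most 2^{d1} linear representations of the two-layer net.
import numpy as np

rng = np.random.default_rng(0)
d0, d1 = 4, 8                            # illustrative dimensions
B = rng.standard_normal((d0, d1))        # analysis frame (encoder)
B_dual = rng.standard_normal((d0, d1))   # synthesis frame (decoder)

def forward(x):
    return B_dual @ np.maximum(B.T @ x, 0.0)   # ReLU network output

patterns = set()
for _ in range(1000):
    x = rng.standard_normal(d0)
    sigma = (B.T @ x > 0).astype(float)        # diagonal of Lambda(x)
    y_lin = (B_dual * sigma) @ (B.T @ x)       # masked expansion
    assert np.allclose(forward(x), y_lin)
    patterns.add(tuple(sigma))

print(len(patterns), "distinct activation patterns (bound: 2^%d)" % d1)
\end{verbatim}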
\begin{theorem}[Expressiveness of encoder-decoder networks]\label{thm:decexp} Let \begin{eqnarray} \tilde\Upsilon^l= \tilde\Upsilon^l(x) := \tilde\Upsilon^{l-1} \tilde\Lambda^{l}(x) D^{l} ,~ \label{eq:UD0} \\ \Upsilon^{l}= \Upsilon^{l}(x) := \Upsilon^{l-1} E^{l} \Lambda^{l}(x) ,~ \label{eq:UE0} \end{eqnarray} with $ \tilde\Upsilon^0(x) =I_{d_0}$ and $ \Upsilon^{0}(x) =I_{d_0}$, and \begin{eqnarray} M^l= M^l(x) := S^{l}\Lambda_S^{l}(x)\label{eq:M0} \\ \tilde M^l=\tilde M^l(x) := \tilde \Lambda^{l}(x)\tilde S^{l} \label{eq:tM0} \end{eqnarray} where $\Lambda^l(x)$ and $\tilde\Lambda^l(x)$ refer to the diagonal matrices, with entries 1 or 0, arising from the ReLUs at the $l$-th encoder and decoder layers, respectively; $\Lambda_S^l(x)$ refers to a similarly defined diagonal matrix from the ReLU at the $l$-th skipped branch of the encoder. Then, the following statements are true. 1) Under ReLUs, an encoder-decoder CNN without skipped connection can be represented by \begin{eqnarray}\label{eq:feed} y = \tilde{\mathcal B}(x){\mathcal B}^{\top}(x)x = \sum_i \langle x, b_i(x) \rangle \tilde b_i(x) \end{eqnarray} where \begin{eqnarray}\label{eq:Bcx} {\mathcal B}(x) = \Upsilon^\kappa(x)&,& \tilde {\mathcal B}(x) = \tilde\Upsilon^\kappa(x) \end{eqnarray} Furthermore, the maximum number of available linear representations is given by \begin{eqnarray}\label{eq:nproj} N_{rep} = 2^{\sum_{i=1}^{\kappa}d_i-d_\kappa} \end{eqnarray} 2) An encoder-decoder CNN with skipped connection under ReLUs is given by \begin{eqnarray}\label{eq:skipnet} y = \tilde{\mathcal B}^{skp}(x) {\mathcal B}^{skp \top}(x)x = \sum_i \langle x, b_i^{skp}(x) \rangle \tilde b_i^{skp}(x) \end{eqnarray} where $$ {\mathcal B}^{skp}(x) := $$ \begin{eqnarray}\label{eq:Bcxskip} \begin{bmatrix} \Upsilon^\kappa & \Upsilon^{\kappa-1}M^\kappa & \Upsilon^{\kappa-2}M^{\kappa-1} & \cdots & M^1\end{bmatrix} \end{eqnarray} $$\tilde {\mathcal B}^{skp}(x) := $$ \begin{eqnarray}\label{eq:tBcxskip} \begin{bmatrix}\tilde \Upsilon^\kappa & \tilde\Upsilon^{\kappa-1}\tilde M^\kappa & \tilde \Upsilon^{\kappa-2}\tilde M^{\kappa-1} & \cdots & \tilde M^1\end{bmatrix} \end{eqnarray} Furthermore, the maximum number of available linear representations is given by \begin{eqnarray}\label{eq:nproj2} N_{rep} = 2^{\sum_{i=1}^{\kappa}d_i-d_\kappa}\times 2^{\sum_{i=1}^\kappa s_i} \end{eqnarray} \end{theorem} This implies that the number of representations increases exponentially with the network depth, which again confirms the expressive power of the neural network. Moreover, the skipped connection also significantly increases the expressive power of the encoder-decoder CNN. Another important consequence of Theorem~\ref{thm:decexp} is that the input space $\boldsymbol{\mathcal X}$ is partitioned into at most $N_{rep}$ non-overlapping regions, such that all inputs within a region share the same linear representation. Due to the ReLU, one may wonder whether the cascaded convolutional interpretation of the frame basis in Corollary~\ref{thm:multi} still holds. A close look at the proof of Corollary~\ref{thm:multi} reveals that this is still the case. Under ReLUs, note that $(I_m\circledast \psi_{j,s}^l)(I_m\circledast \psi_{t,j}^{l+1}) = I_m\circledast (\psi_{j,s}^l\circledast\psi_{t,j}^{l+1})$ in Lemma~\ref{lem:identity} should be replaced with $(I_m\circledast \psi_{j,s}^l)\Lambda_{j}^l(x)(I_m\circledast \psi_{t,j}^{l+1})$ where $\Lambda_{j}^l(x)$ is a diagonal matrix with 0 and 1 values due to the ReLU.
This means that $\Lambda_j^l(x)$ applies a spatially varying mask to the convolution filter $\psi_{t,j}^{l+1}$, so that the net effect is a convolution with spatially varying filters originating from masked versions of $\psi_{t,j}^{l+1}$. This results in a spatially variant cascaded convolution, and the only change in the interpretation of Corollary~\ref{thm:multi} is that the basis and its dual are composed of {\em spatially variant} cascaded convolution filters. Furthermore, the ReLU works to diversify the convolution filters by masking out various filter coefficients. We believe that this is another source of expressiveness obtained from the same set of convolutional filters. \subsection{Generalizability} To understand the generalization capability of DNNs, recent research efforts have focused on reducing the generalization gap by suggesting different ways of measuring the network capacity \cite{bartlett2017spectrally,neyshabur2018towards}. These works consistently showed the importance of the Lipschitz condition for the encoder and decoder parts of the networks. More specifically, we have shown that the neural network representation varies over exponentially many different forms depending on the input, so one may be concerned that the output might vary drastically under small perturbations of the input. However, Lipschitz continuity of the neural network prevents such drastic changes. Specifically, a neural network $F({\mathbf W},x)$ is Lipschitz continuous if there exists a constant $K>0$ such that $$\|F({\mathbf W},x^{(1)})-F({\mathbf W},x^{(2)}) \|_2 \leq K \|x^{(1)}-x^{(2)}\|_2 \ ,$$ where the Lipschitz constant $K$ can be obtained by \begin{eqnarray}\label{eq:K} K =\sup_{x\in \boldsymbol{\mathcal X}}\|D_2 F({\mathbf W},x)\|_2 \end{eqnarray} where $D_2F({\mathbf W},x)$ is the Jacobian with respect to the second variable. The following proposition shows that the Lipschitz constant of encoder-decoder CNNs is closely related to the frame basis and its dual. \begin{proposition}\label{thm:lipschitz} The Lipschitz constant for the encoder-decoder CNN without skipped connection is given by \begin{eqnarray}\label{eq:lip1} K= \sup_{x\in \boldsymbol{\mathcal X}} {\|\tilde{\mathcal B}(x){\mathcal B}(x)^\top\|_2} \end{eqnarray} whereas the Lipschitz constant for the encoder-decoder CNN with skipped connection is given by \begin{eqnarray}\label{eq:lip2} K= \sup_{x\in \boldsymbol{\mathcal X}} {\|\tilde{\mathcal B}^{skp}(x){\mathcal B}^{skp\top}(x)\|_2} \end{eqnarray} where ${\mathcal B}(x),\tilde{\mathcal B}(x),{\mathcal B}^{skp}(x)$ and $\tilde{\mathcal B}^{skp}(x)$ are defined in \eqref{eq:Bcx}, \eqref{eq:Bcxskip} and \eqref{eq:tBcxskip}. \end{proposition} Recall that the input space $\boldsymbol{\mathcal X}$ is partitioned into regions that share the same linear representation. Therefore, for the case of an encoder-decoder (E-D) CNN without skipped connections, the local Lipschitz constant within the $p$-th partition is given by \begin{eqnarray}\label{eq:lip3} K_{p} &=& \sup_{ z\in \boldsymbol{\mathcal X}_p} {\|\tilde{\mathcal B}(z){\mathcal B}^{\top}(z)\|_2} \notag \\ & = & {\|\tilde{\mathcal B}(z_p){\mathcal B}^{\top}(z_p)\|_2},\quad \forall z_p \in \boldsymbol{\mathcal X}_p \end{eqnarray} Here, $\boldsymbol{\mathcal X}_p$ denotes the $p$-th input space partition, and the last equality in \eqref{eq:lip3} comes from the fact that every point in $\boldsymbol{\mathcal X}_p$ shares the same linear representation.
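The local constant in \eqref{eq:lip3} is directly computable for small networks. Mirroring the two-layer sketch above (sizes and random weights again purely illustrative), the following Python snippet evaluates $\|\tilde{\mathcal B}(z){\mathcal B}^{\top}(z)\|_2$ at sampled inputs, giving a Monte-Carlo estimate of the supremum over the visited partitions:
\begin{verbatim}
# Local Lipschitz constants of a two-layer ReLU encoder-decoder:
# within each activation region the map is linear, so the local
# constant is the spectral norm of B_dual Lambda(x) B^T.
import numpy as np

rng = np.random.default_rng(1)
d0, d1 = 4, 8
B = rng.standard_normal((d0, d1))
B_dual = rng.standard_normal((d0, d1))

def forward(x):
    return B_dual @ np.maximum(B.T @ x, 0.0)

def local_K(x):
    sigma = (B.T @ x > 0).astype(float)       # diagonal of Lambda(x)
    return np.linalg.norm((B_dual * sigma) @ B.T, 2)

xs = rng.standard_normal((2000, d0))
K_est = max(local_K(x) for x in xs)           # sup over sampled regions

# any finite-difference ratio is bounded by the true global constant
x1, x2 = xs[0], xs[1]
ratio = np.linalg.norm(forward(x1) - forward(x2)) / np.linalg.norm(x1 - x2)
print("estimated K = %.3f, sample ratio = %.3f" % (K_est, ratio))
\end{verbatim}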
Thus, it is easy to see that the global Lipschitz constant is given by \begin{eqnarray}\label{eq:globalL} K= \sup_{x\in \boldsymbol{\mathcal X}} {\|\tilde{\mathcal B}(x){\mathcal B}(x)^\top\|_2} = \sup_p K_p \end{eqnarray} Furthermore, Theorem~\ref{thm:decexp} shows that the number of partitions is bounded by $N_{rep}$. Therefore, \eqref{eq:globalL} suggests that by bounding the local Lipschitz constant within each linear region, one can control the global Lipschitz constant of the neural network. A similar observation holds for E-D CNNs with skipped connection. One of the most important implications of \eqref{eq:globalL} is that the expressiveness of the network is not affected by the control of the Lipschitz constant. This in turn is due to the combinatorial nature of the ReLU nonlinearities, which allows for an exponentially large number of linear representations. \subsection{Optimization landscape} For a given ground truth {\em task map} $f^*:\boldsymbol{\mathcal X}\mapsto\boldsymbol{\mathcal Y}$ and a given training data set $\{(x^{(i)}, y^{(i)})\}_{i=1}^T$ such that $y^{(i)} = f^*(x^{(i)})$, an encoder-decoder CNN training problem can be formulated as finding the network weights ${\mathbf W}$ that minimize a specific loss function. For the case of the $l_2$ loss \begin{eqnarray}\label{eq:cost} C({\mathbf W}) = \frac{1}{2}\sum_{i=1}^T \|F({\mathbf W},x^{(i)})-y^{(i)}\|^2 \ , \end{eqnarray} Nguyen et al.~\cite{nguyen2018optimization} showed that over-parameterized CNNs can achieve zero training error. Their results are based on the following key lemma. \begin{lemma}\cite{nguyen2018optimization}\label{thm:zero} Consider an encoder-decoder CNN without skipped connection. Then, the gradient of the cost function in \eqref{eq:cost} with respect to $E^\kappa$ is bounded as $$\|\nabla_{E^\kappa} C\|_F$$ \begin{eqnarray*} \geq \sigma_{\min} (\Xi^\kappa) \min_{i\in[T]} \sigma_{\min} \left(\Lambda^\kappa(x^{(i)}) \left(\tilde\Upsilon^{\kappa}(x^{(i)})\right)^\top\right) \sqrt{2C({\mathbf W})} \end{eqnarray*} and $$\|\nabla_{E^\kappa} C\|_F$$ \begin{eqnarray*} \leq \sigma_{\max} (\Xi^\kappa) \max_{i\in[T]} \sigma_{\max} \left(\Lambda^\kappa(x^{(i)}) \left(\tilde\Upsilon^{\kappa}(x^{(i)})\right)^\top\right) \sqrt{2C({\mathbf W})} \end{eqnarray*} where $\sigma_{\min}(A)$ and $\sigma_{\max}(A)$ denote the minimum and maximum singular values of a matrix $A\in {\mathbb R}^{n\times m}$ with $n\geq m$, respectively; $\tilde\Upsilon^\kappa$ is defined in \eqref{eq:UD0}, and $\Xi^\kappa $ denotes the feature matrix for the training data $$\Xi^\kappa = \begin{bmatrix} \xi^{\kappa(1)} & \cdots & \xi^{\kappa(T)} \end{bmatrix}\quad \in {\mathbb R}^{d_\kappa\times T} $$ and $C({\mathbf W})$ is the cost in \eqref{eq:cost}. \end{lemma} The authors of \cite{nguyen2018optimization} further showed that if the shifted $r$-segments of the training samples are pairwise distinct and $d_\kappa\geq T$, then $\Xi^\kappa$ has full column rank. Additionally, if the nonlinearity at the decoder layer is analytic, then they showed that $\tilde\Upsilon^\kappa(x)\Lambda^\kappa(x)$ almost always has full row rank. This implies that both $ \sigma_{\min} (\Xi^\kappa)$ and $\sigma_{\min} (\Lambda^\kappa(\tilde\Upsilon^\kappa)^\top)$ are non-zero, so that $\left. \nabla_{E^\kappa} C\right|_{\mathbf W}=0$ if and only if $y^{(i)}=F({\mathbf W},x^{(i)})$ for all $i\in [T]$ (that is, the loss becomes zero, i.e. $C({\mathbf W})=0$).
Unfortunately, this almost-always guarantee cannot be used for ReLU nonlinearities at the decoder layers, since the ReLU nonlinearity is not analytic. In this paper, we extend the result of \cite{nguyen2018optimization} to the encoder-decoder CNN with skipped connection when ReLU nonlinearities are used. In addition to Lemma~\ref{thm:zero}, the following lemma, which is new, does hold in this case. \begin{lemma}\label{lem:zero2} Consider an encoder-decoder CNN with skipped connection. Then, the gradient of the cost function in \eqref{eq:cost} with respect to $\tilde S^l$ for $l\in [\kappa]$ is bounded as $$\|\nabla_{\tilde S^l} C\|_F $$ \begin{eqnarray*} \geq \sigma_{\min} (\Gamma^l) \min_{i\in[T]} \sigma_{\min} \left(\tilde\Lambda^l(x^{(i)}) \left(\tilde\Upsilon^{l-1}(x^{(i)})\right)^\top\right) \sqrt{2C({\mathbf W})} \end{eqnarray*} and $$\|\nabla_{\tilde S^l} C\|_F $$ \begin{eqnarray*} \leq \sigma_{\max} (\Gamma^l) \max_{i\in[T]} \sigma_{\max} \left(\tilde\Lambda^l(x^{(i)}) \left(\tilde\Upsilon^{l-1}(x^{(i)})\right)^\top\right) \sqrt{2C({\mathbf W})} \end{eqnarray*} where $\Gamma^l $ denotes the feature matrix from the skipped branch $$\Gamma^l = \begin{bmatrix} \chi^{l(1)} & \cdots & \chi^{l(T)} \end{bmatrix}\quad \in {\mathbb R}^{s_l\times T} $$ and $C({\mathbf W})$ is the cost in \eqref{eq:cost}. \end{lemma} Lemma~\ref{lem:zero2} leads to the following key result on the optimization landscape for the encoder-decoder network with skipped connections. \begin{theorem} Suppose that there exists a layer $l \in [\kappa]$ such that \begin{itemize} \item the skipped features $\chi^{l(1)},\cdots, \chi^{l(T)}$ are linearly independent; \item $\tilde\Upsilon^{l-1}(x) \tilde\Lambda^l(x)$ has full row rank for all training data $x\in\{x^{(1)},\cdots, x^{(T)}\}$. \end{itemize} Then, $\left. \nabla_{\tilde S^l} C\right|_{\mathbf W}=0$ if and only if $y^{(i)}=F({\mathbf W},x^{(i)})$ for all $i\in [T]$ (that is, the loss becomes zero, i.e. $C({\mathbf W})=0$). \end{theorem} \begin{proof} Under these assumptions, both $\sigma_{\min} (\Gamma^l)$ and $\sigma_{\min} (\tilde\Lambda^l(\tilde\Upsilon^{l-1})^\top)$ are non-zero. Therefore, Lemma~\ref{lem:zero2} leads to the conclusion. \end{proof} Note that the proof of the full column rank condition for $\Xi^\kappa$ in \cite{nguyen2018optimization} is a constructive argument using the independence of the intermediate features $ \chi^{l(1)}, \cdots , \chi^{l(T)}$ for all $l\in [\kappa]$. Furthermore, for the case of ReLU nonlinearities, even when $\tilde\Upsilon^\kappa(x)\Lambda^\kappa(x)$ does not have full row rank, there are chances that $\tilde\Upsilon^{l-1}(x)\tilde\Lambda^l(x)$ has full row rank for at least one $l\in [\kappa]$. Therefore, our result has more relaxed assumptions than the optimization landscape results in \cite{nguyen2018optimization}, which rely on Lemma~\ref{thm:zero}. This again confirms the advantages of the skipped connection in encoder-decoder networks. \section{Discussion and Conclusion} In this paper, we investigated the geometry of encoder-decoder CNNs from various theoretical aspects: differential topology, expressiveness, generalization capability, and the optimization landscape. The analysis was feasible thanks to the explicit construction of encoder-decoder CNNs using deep convolutional framelet expansions.
Our analysis showed that the advantages of encoder-decoder CNNs come from the expressiveness of the encoder and decoder layers, which originates from the combinatorial nature of the ReLU-based selection of the decomposition and reconstruction frame bases. Moreover, the expressiveness of the network is not affected by controlling the Lipschitz constant to improve the generalization capability of the network. In addition, we showed that the optimization landscape can be enhanced by the skipped connection. This analysis coincides with our empirical verification using deep neural networks for various inverse problems. For example, in a recent work on $k$-space deep learning \cite{han2018k}, we showed that a neural network for compressed sensing MRI can be more effectively designed in the $k$-space domain, since the frame representation is more concise in the Fourier domain. A similar observation was made in sub-sampled ultrasound (US) imaging \cite{yoon2018efficient}, where we showed that the frame representation in the raw-data domain is more effective for US, so that the deep network is designed in the raw-data domain rather than the image domain. These empirical examples clearly show that the unified view of signal processing and machine learning suggested in this paper can help improve the design and understanding of deep models. \section*{Acknowledgements} The authors thank the reviewers for their useful comments. This work was supported by the National Research Foundation (NRF) of Korea grant NRF-2016R1A2B3008104.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{General discussion} \label{sec:analysis} Our objective is to describe hadrons containing heavy quarks using a lattice action which includes a number of improvement terms chosen to remove all finite lattice spacing errors to a given order in $|\vec p| a$ and to all orders in $ma$. Further, as discussed earlier, we expect that a different treatment will be required for the spatial and temporal momenta of the heavy quark, implying a lattice action which is axis-exchange asymmetric. As we will demonstrate, the desired $O(|\vec p| a)$ accuracy, valid to all orders in $ma$, can be achieved if we begin with a lattice fermion action of the form \begin{eqnarray} S_{\rm lat} &=& \sum_{n',n} \overline{\psi}_{n'} \Bigl( \gamma^0 D^0 + \zeta \vec{\gamma} \cdot \vec{D} + m_0 - \frac{r_t}{2} (D^0)^2 - \frac{r_s}{2} \vec{D}^2 \nonumber \\ && + \sum_{i,j} \frac{i}{4} c_B \sigma_{ij}F_{ij} + \sum_{i} \frac{i}{2} c_E \sigma_{0i}F_{0i} \Bigr)_{n',n}\psi_n \label{eq:action_lat} \end{eqnarray} and a simple choice of the bare lattice parameters: $r_s=r_t=1$, $c_E=c_B$. Here $\psi_n$ is the heavy quark field at the site $n$, $U_\mu(n)$ is the $SU(3)$ matrix providing gauge parallel transport from the site $n+\hat\mu$ to the site $n$ and \begin{eqnarray} (D_{\mu}\psi)_n & = & \frac{1}{2}\left[U_\mu(n)\psi_{n+\hat{\mu}} - U_\mu(n-\hat{\mu})^{\dagger}\psi_{n-\hat{\mu}}\right]\\ (D_{\mu}^2\psi)_n& = &\left[U_\mu(n)\psi_{n+\hat{\mu}} +U_\mu(n-\hat{\mu})^{\dagger}\psi_{n-\hat{\mu}}-2\psi_n\right]\\ (F_{\mu\nu}\psi)_n&=& \frac{1}{8}\!\!\!\!\sum_{s,s^\prime=\pm 1}\!\!\!\! ss'\left[U_{s\mu}(n)U_{s^\prime \nu}(n+s\hat\mu)\right.\nonumber\\ && \times \left.U_{-s\mu}(n+s\hat\mu+s^\prime \hat\nu)U_{-s^\prime \nu}(n+s^\prime \hat\nu) -\mathrm{h.c.}\right]\psi_n. \end{eqnarray} We are using Hermitian gamma matrices $\gamma_\mu$ obeying $\{\gamma_\mu,\gamma_\nu\}=2\delta_{\mu\nu}$ with $\sigma_{\mu\nu} = \frac{i}{2}[\gamma_\mu,\gamma_\nu]$ and have defined the Yang-Mills field strength tensor $F_{\mu\nu}$ to be an anti-Hermitian color matrix. \subsection{Continuum effective action} In the limit that the lattice spacing becomes small, we can analyze the resulting theory and enumerate the largest lattice spacing errors by constructing the Symanzik effective action describing a continuum theory which approximates the lattice theory, including the discretization errors through a given order. Including terms representing errors of order $a$, this effective action can be written: \begin{eqnarray} S_{\rm eff} &=& \int d^4 x \; \overline{\psi}(x) \Bigl( \gamma^0 D^0 + \zeta^c \vec{\gamma} \cdot \vec{D} + m_r - a \frac{r_t^c}{2} (D^0)^2 - a \frac{r_s^c}{2} \vec{D}^2 \nonumber \\ && + \sum_{i,j} \frac{i}{4} c_B^c a\sigma_{ij}F_{ij} + \sum_{i} \frac{i}{2} c_E^c a\sigma_{i0}F_{i0} + \sum_{i} \frac{1}{8} \delta^c a\sigma_{i0}\{D^i,D^0\} \Bigr)\psi(x). \label{eq:action_cont} \end{eqnarray} The superscript label $c$ representing ``continuum'' has been added to the parameters appearing in this effective continuum action to distinguish them from the similar parameters which enter the lattice action of Eq.~\ref{eq:action_lat}. The corresponding continuum mass has been written $m_r$. Here we are anticipating a choice of lattice parameters which violates axis-interchange symmetry and have therefore introduced all possible dimension 3, 4 and 5 terms which obey only the requirement of rotational symmetry.
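As a concrete illustration of the stencils defined above, a minimal Python/NumPy sketch of the operators $D_\mu$ and $D_\mu^2$ on a small periodic lattice follows, together with a check of their gauge covariance. The lattice size and the random unitary links are illustrative assumptions, and the clover field strength $F_{\mu\nu}$ is omitted for brevity:
\begin{verbatim}
# Sketch of the lattice stencils with periodic boundaries:
#   (D_mu psi)_n   = [U_mu(n) psi_{n+mu} - U_mu(n-mu)^+ psi_{n-mu}] / 2
#   (D_mu^2 psi)_n =  U_mu(n) psi_{n+mu} + U_mu(n-mu)^+ psi_{n-mu} - 2 psi_n
import numpy as np

rng = np.random.default_rng(0)
L = (4, 4, 4, 4)                          # illustrative lattice volume
nsite = int(np.prod(L))

def rand_unitary():
    # QR of a random complex matrix gives a 3x3 unitary link matrix
    m = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    return np.linalg.qr(m)[0]

def rand_field():
    return np.array([rand_unitary() for _ in range(nsite)]).reshape(L + (3, 3))

U = [rand_field() for _ in range(4)]      # links U_mu(n), mu = 0,...,3
psi = rng.standard_normal(L + (3,)) + 1j * rng.standard_normal(L + (3,))

def shift(f, mu, s):
    return np.roll(f, -s, axis=mu)        # f(n) -> f(n + s*mu_hat)

def dagger(M):
    return np.conj(np.swapaxes(M, -1, -2))

def hop(links, mu, f, s):
    # parallel transport of f from the site n + s*mu_hat back to n
    u = links[mu] if s > 0 else dagger(shift(links[mu], mu, -1))
    return np.einsum('...ab,...b->...a', u, shift(f, mu, s))

def D(links, mu, f):
    return 0.5 * (hop(links, mu, f, +1) - hop(links, mu, f, -1))

def D2(links, mu, f):
    return hop(links, mu, f, +1) + hop(links, mu, f, -1) - 2.0 * f

# gauge covariance: with U'_mu(n) = g(n) U_mu(n) g(n+mu)^+ and
# psi'(n) = g(n) psi(n), both stencils transform as g(n) (D psi)(n)
g = rand_field()
Up = [np.einsum('...ab,...bc,...cd->...ad',
                g, U[mu], dagger(shift(g, mu, +1))) for mu in range(4)]
rot = lambda f: np.einsum('...ab,...b->...a', g, f)
for mu in range(4):
    assert np.allclose(D(Up, mu, rot(psi)), rot(D(U, mu, psi)))
    assert np.allclose(D2(Up, mu, rot(psi)), rot(D2(U, mu, psi)))
print("D_mu and D_mu^2 pass the gauge-covariance check")
\end{verbatim}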
In Eq.~\ref{eq:action_cont} $\psi(x)$ and $\overline{\psi}(x)$ are the usual continuum fermion fields with normalization chosen to make the coefficient of the $\gamma^0 D^0$ term equal to $1$. The derivatives $\vec{D}$ and $D^0$ are the usual gauge-covariant continuum derivatives and $F_{\mu\nu} = [D_\mu, D_\nu]$ is the Yang-Mills field strength tensor. Since we are interested in treating the case where terms of the form $(m_0a)^n$ or $(D^0 a)^n$ may be large, we will generalize Eq.~\ref{eq:action_cont} to include correction terms containing arbitrary powers of these two quantities but, unless they are accompanied by such factors of the heavy quark energy or mass, we neglect all other terms of order $a^2$ or higher. The resulting general heavy quark Symanzik effective action might be written: \begin{equation} {\cal L}_{\rm eff} = {\cal L}_{\rm eff,-1} + {\cal L}_{\rm eff,0} + {\cal L}_{\rm eff,1} + \ldots. \label{eq:eff} \end{equation} where \begin{eqnarray} {\cal L}_{\rm eff,-1} &=& \overline{\psi}\Bigl( \frac{1}{a} B^{-1,1} + \gamma^0 D^0 C^{-1,1}\Bigr)\psi \label{eq:eff_-1} \\ {\cal L}_{\rm eff,0} &=& \overline{\psi}\Bigl(\{\vec\gamma \vec D, B^{0,1}\} + a\{[\vec\gamma \vec D, \gamma^0 D^0], C^{0,1}\}\Bigr)\psi \label{eq:eff_0} \\ {\cal L}_{\rm eff,1} &=& a \overline{\psi}\Bigl(\vec D^2 B^{1,1} + a\{\vec D^2, \gamma^0 D^0\} C^{1,1} \label{eq:eff_1} \\ &&+ [\gamma^i,\gamma^j] [D^i,D^j] B^{1,2} + a\{[\gamma^i,\gamma^j] [D^i,D^j], \gamma^0 D^0 \}C^{1,2} \nonumber \\ &&+ [\gamma^i,\gamma^0] [D^i,D^0] B^{1,3} + a[[\gamma^i,\gamma^0] [D^i,D^0], \gamma^0 D^0] C^{1,3} \Bigr)\psi. \nonumber \end{eqnarray} Here the coefficient functions $B^{i,j}$ and $C^{i,j}$ are actually polynomials of arbitrary order in the product $m_0a$, the operator $(aD^0)^2$ and the gauge coupling $g^2$: \begin{eqnarray} B^{i,j}= \sum_{k,l,n} b_{k,l,n}^{i,j} (m_0a)^k\Bigl((aD^0)^2\Bigr)^{l} g^{2n} \nonumber \\ C^{i,j}= \sum_{k,l,n} c_{k,l,n}^{i,j} (m_0a)^k\Bigl((aD^0)^2\Bigr)^{l} g^{2n}. \label{eq:coef_series} \end{eqnarray} Because we will work to arbitrary order in $m_0a$ and $a D^0$ it is natural to adopt an expansion in lattice spacing $a$ where we count only powers of $a$ which are not compensated by added powers of $m_0$ or $D^0$. We will refer to such an expansion as ``relativistic heavy quark'' or RHQ power counting. The subscripts appearing on the three terms in Eq.~\ref{eq:eff} refer to such a scheme. \subsection{Discrete symmetries} The coefficients in the Symanzik effective action appearing in Eqs.~\ref{eq:action_cont} and \ref{eq:eff_1} can be constrained if, as is conventional, we work with an underlying lattice action which obeys various discrete symmetries and reality conditions. The simplest are the four symmetries corresponding to the change in sign of one of four Euclidean coordinates: $x_\mu \rightarrow (-1)^{\delta_{\mu\nu}}x_\mu$ where $\nu$ is the direction being inverted. In lattice coordinates we replace the fields at the site with coordinates $n_\mu$ by those at the site $n^{P(\nu)}_\mu$ where $n^{P(\nu)}_\mu = L_\mu-1-n_\mu$ for $\mu=\nu$ and $n^{P(\nu)}_\mu = n_\mu$ otherwise. Here we are assuming a general space-time volume of size $L_0 \times L_1 \times L_2 \times L_3$ with $0 \le n_\mu < L_\mu$ and $L_\mu$ even.
Our lattice fields transform as: \begin{eqnarray} \psi_n &\rightarrow& \gamma^\nu\gamma^5\psi_{n^{P(\nu)}} \\ \overline{\psi}_n &\rightarrow& \overline{\psi}_{n^{P(\nu)}}\gamma^5\gamma^\nu \\ U_\nu(n) &\rightarrow& U_\nu^\dagger\Bigl(n^{P(\nu)}-\hat{e_\nu}(1- L \delta_{n_\nu,L_\nu-1})\Bigr) \\ U_\mu(n) &\rightarrow& U_\mu(n^{P(\nu)}) \quad \mbox{for $\mu \ne \nu$}. \end{eqnarray} Here $\hat e_\nu$ is a vector extending one site in the $\nu$ direction. For $ma \ge 1$ the mass shell condition $p_0 = +m$ will imply that the negative-energy, anti-quark states are far outside the domain of validity of our approximation. Thus, it is important that the improved lattice action obey charge-conjugation symmetry so that both heavy quarks and heavy anti-quarks will be treated with the same accuracy. This can be accomplished if we require that our improved lattice action and therefore the continuum Symanzik action are symmetric under the following change of variables: \begin{eqnarray} \psi_n &\rightarrow& C\overline{\psi}_n^t \label{eq:cc1}\\ \overline{\psi}_n &\rightarrow& -\psi_n^tC^{-1} \label{eq:cc2}\\ U_\mu(n) &\rightarrow& U_\mu(n)^* \label{eq:cc3} \end{eqnarray} where the Dirac charge conjugation matrix $C$ obeys \begin{equation} C^{-1}\gamma_{\mu}C = -\gamma_{\mu}^t. \end{equation} Here we are treating the Grassmann variables $\psi$ and $\overline{\psi}$ as $4 \times 1$ and $1 \times 4$ spinor matrices respectively which requires the appearance of the transpose operation in Eqs.~\ref{eq:cc1} and \ref{eq:cc2} indicated by the superscript $t$. The effective action given in Eq.~\ref{eq:eff} already obeys the above axis reversal symmetry, given our requirement that only even powers of the operator $aD^0$ appear. All of the terms in Eq.~\ref{eq:eff} are also charge conjugation even except for the terms containing the functions $C^{0,1}$ and $C^{1,3}$. These are odd under $C$ and can be set to zero. Finally, we should determine the phases of the coefficients appearing in the effective action of Eq.~\ref{eq:eff}. We begin with the lattice action given in Eq.~\ref{eq:action_lat}. Here we have introduced factors of $i$ in such a way that this action will yield a Hermitian~\cite{Luscher:1976ms}, but possibly not positive~\cite{El-Khadra:1997mp} transfer matrix if the bare lattice parameters $m_0$, $\zeta$, $r_s$, $r_t$, $c_B$ and $c_E$ are all chosen real. The phases of the parameters appearing in the continuum effective action of Eq.~\ref{eq:eff} can then be easily constrained if we recognize that when the above 6 bare lattice parameters are real, the underlying lattice action obeys a simple symmetry under complex conjugation. Specifically, we consider the fermion path integral in a fixed gauge background: \begin{equation} Z[\eta,\overline{\eta}] = \int d[\psi] d[\overline\psi] \exp\Biggl\{S[\psi,\overline\psi]_{\rm lat} +\int d^4x \bigl\{ \overline\psi(x)\eta(x) + \overline\eta(x)\psi(x) \bigr\}\Biggr\}, \label{eq:gen_fctn} \end{equation} where we have introduced explicit sources $\eta$ and $\overline{\eta}$ so that arbitrary Green's functions can be determined. The integral in Eq.~\ref{eq:gen_fctn} will evaluate to a polynomial in the Grassmann variables $\eta$ and $\overline{\eta}$ with complex coefficients.
If we define $Z[\eta,\overline{\eta}]^*$ as that same polynomial but with the coefficients replaced by their complex conjugates, then one can easily show by a standard change of variables in the path integral in Eq.~\ref{eq:gen_fctn}, \begin{equation} \psi \rightarrow \gamma^5\overline\psi^t \quad \overline\psi \rightarrow -\psi^t\gamma^5, \label{eq:ccc_trans} \end{equation} that when $m_0$, $\zeta$, $r_s$, $r_t$, $c_B$ and $c_E$ are real the following relation is obeyed: \begin{equation} Z[\eta,\overline{\eta}]^* = Z[\gamma^5\overline{\eta}^t, -\eta^t\gamma^5]. \label{eq:ccc_cond} \end{equation} Since the continuum effective action is determined directly from the lattice action, it also must obey this reality condition. This requires that each of the functions $B^{i,j}$ and $C^{i,j}$ appearing in Eq.~\ref{eq:eff} be polynomials in the three quantities $m_0a$, $(aD^0)^2$ and $g^2$ with real coefficients. \subsection{Field transformations} As is well known, many of the unwanted terms in Eq.~\ref{eq:eff} have no effect on physical states or fermion Green's functions evaluated on the mass shell and can be removed by a redefinition of the fermion fields $\psi$ and $\overline{\psi}$. We will therefore make a series of such transformations chosen to remove many of the terms that appear in the Symanzik effective action of Eq.~\ref{eq:eff}. The coefficients of those terms that remain after these transformations are then presumed to be potentially important lattice artifacts that must be eliminated by an explicit choice of additional improvement terms in the underlying lattice action. The removal of these redundant terms is most easily analyzed in a series of steps exploiting the ordering of the terms in Eq.~\ref{eq:eff}: $O(1/a)$, $O(a^0)$, $O(a)$, {\it etc.} in the RHQ expansion. The largest field transformation introduces terms of order $a^0$ in this RHQ expansion and can be written: \begin{eqnarray} \psi &=& (1+R^{0,1} + a\gamma^0 D^0 S^{0,1})\psi' \label{eq:field_trans_0a} \\ \overline{\psi} &=& \overline{\psi}'(1+\overline{R}^{0,1} - a\gamma^0 \overleftarrow{D}^0 \overline{S}^{0,1}), \label{eq:field_trans_0b} \end{eqnarray} where $R^{0,1}$, $S^{0,1}$, $\overline{R}^{0,1}$ and $\overline{S}^{0,1}$ are arbitrary polynomials in $m_0a$, $(aD^0)^2$ and $g^2$. We adopt the convention in the transformation equations above and the four to follow, that the $aD^0$ argument will always act to the right in the equations for $\psi$ and to the left in the equations for $\overline{\psi}$. (Note that as the covariant derivative, the operator $D_\mu$ will have a different form when acting on $\overline{\psi}$, a color vector whose gauge transformation properties are the hermitian conjugate of those of $\psi$, see Appendix A, Eqs.~\ref{eq:cov_derivative_1} and \ref{eq:cov_derivative_2}.) This transformation will affect all three terms shown in Eq.~\ref{eq:eff}, $O(a^{-1})$, $O(a^{0})$ and $O(a^{1})$, and will generate extra terms that can be used to simplify the resulting action. Relevant to the order in $a$ to which we are working are two further transformations. The first transformation introduces terms of order $a^1$ in $\psi$ and $\overline{\psi}$ and takes the form: \begin{eqnarray} \psi &=& (1+ a \vec\gamma \vec D R^{1,1} + a[\vec\gamma \vec D,a\gamma^0 D^0] S^{1,1})\psi' \label{eq:field_trans_1a} \\ \overline{\psi} &=& \overline{\psi}'(1 - a \overline{R}^{1,1} \vec\gamma \overleftarrow{D} - a \overline{S}^{1,1} [\vec\gamma \overleftarrow{D},a\gamma^0 \overleftarrow{D}^0] ).
\label{eq:field_trans_1b} \end{eqnarray} This transformation will act on the ${\cal L}_{{\rm eff},-1}$ and ${\cal L}_{{\rm eff},0}$ terms in Eq.~\ref{eq:eff} and produce terms of order $a^0$ and $a^1$ in the transformed action. Finally, we must discuss a third transformation which is of order $a^2$: \begin{eqnarray} \psi &=& \Biggl(1+ a^2 \vec D^2 R^{2,1} + a^2\{\vec D^2,a\gamma^0 D^0\}S^{2,1} \label{eq:field_trans_2a} \\ &&+a^2[\gamma^i, \gamma^j][D^i, D^j] R^{2,2} +a^2\Bigl\{[\gamma^i, \gamma^j][D^i, D^j],a\gamma^0 D^0\Bigr\}S^{2,2} \nonumber \\ &&+a^2[\gamma^i, \gamma^0][D^i, D^0] R^{2,3} +a^2\Bigl[[\gamma^i, \gamma^0][D^i, D^0],a\gamma^0 D^0\Bigr]S^{2,3} \Biggr)\psi' \nonumber \\ \overline{\psi} &=& \overline{\psi}'\Biggl(1 + a^2 \overleftarrow{D}^2\overline{R}^{2,1} - a^2\{\overleftarrow{D}^2,a\gamma^0 \overleftarrow{D}^0\}\overline{S}^{2,1} \label{eq:field_trans_2b} \\ &&+a^2[\gamma^i, \gamma^j][\overleftarrow{D}^i, \overleftarrow{D}^j] \overline{R}^{2,2} -a^2\Bigl\{[\gamma^i, \gamma^j][\overleftarrow{D}^i, \overleftarrow{D}^j], a\gamma^0 \overleftarrow{D}^0\Bigr\}\overline{S}^{2,2} \nonumber \\ &&+a^2[\gamma^i, \gamma^0][\overleftarrow{D}^i, \overleftarrow{D}^0] \overline{R}^{2,3} +a^2\Bigl[ [\gamma^i, \gamma^0][\overleftarrow{D}^i, \overleftarrow{D}^0], a\gamma^0 \overleftarrow{D}^0\Bigr]\overline{S}^{2,3} \Biggr). \nonumber \end{eqnarray} This order $a^2$ transformation was not investigated in Ref.~\cite{Aoki:2001ra} nor in Section III on redundant couplings in Ref.~\cite{El-Khadra:1997mp}, although later in that paper this transformation is discussed, see Eq.~5.23. This transformation acts on only the ${\cal L}_{{\rm eff},-1}$ term in Eq.~\ref{eq:eff} to produce terms of order $a^1$ in the transformed action. The effects of these transformations will be considered below, first in a simplified context in Sec.~\ref{sec:example} and then in generality in Sec.~\ref{sec:induction}. Here we will specialize these three transformations to preserve the charge conjugation symmetry and reality properties discussed above. In fact, with the choice of signs in Eqs.~\ref{eq:field_trans_0a}-\ref{eq:field_trans_2b}, charge conjugation requires $R^{i,j}=\overline{R}^{i,j}$ and $S^{i,j}=\overline{S}^{i,j}$ while preservation of the form of the reality condition requires that all coefficients in the polynomials $R^{i,j}$ and $S^{i,j}$ be real. This completes our general discussion of the lattice action, the resulting effective continuum action and the field transformations that can be applied to that effective action consistent with our charge conjugation and reality conditions. \section{Simplified example} \label{sec:example} In Sec.~\ref{sec:induction} we use induction to apply the field transformations discussed in the previous section to systematically eliminate all terms from the effective action of Eq.~\ref{eq:eff} except for three, mass-dependent coefficients. These field transformations will leave the effective action in the form given in Eq.~\ref{eq:action_cont} with only the three coefficients $m_r$, $\zeta^c$ and $c^c_P \equiv c_B^c=c_E^c$ non-zero, each a function of the quark mass times the lattice spacing, $ma$. However, in this section we will present this argument in a simplified case which should make the conclusion and the essential ingredients needed to reach it easier to understand.
We will consider the case that the effective continuum action is determined by the Lagrangian given in Eq.~\ref{eq:action_cont} which can be written: \begin{eqnarray} S_{\rm eff} &=& \int d^4 x \; \overline{\psi}(x) \Bigl( \gamma^0 D^0 + \zeta^c \vec{\gamma} \cdot \vec{D} + m_r - a \frac{r_t^c}{2} (D^0)^2 - a \frac{r_s^c}{2} \vec{D}^2 \nonumber \\ && + \sum_{i,j} \frac{i}{4} c_B^c a\sigma_{ij}F_{ij} + \sum_{i} \frac{i}{2} c_E^c a\sigma_{i0}F_{i0} \Bigr)\psi(x). \label{eq:action_cont2} \end{eqnarray} Here we are simplifying the general problem by dropping potentially large time derivative terms, $(aD^0)^{2n}$, beyond those appearing explicitly in Eq.~\ref{eq:action_cont2}. We have also omitted the final term proportional to $\delta$ in Eq.~\ref{eq:action_cont} since it violates charge conjugation symmetry. Through a combination of tuning the bare lattice parameters and redefinition of the fields $\psi$ and $\overline{\psi}$ we will be able to put the Lagrangian above into the standard continuum form: \begin{equation} {\cal L}_{\rm eff} = \overline{\psi^\prime}\{\gamma^0 D^0 + \gamma^i D^i + m_r \}\psi^\prime. \label{eq:cont} \end{equation} As is conventional, we will work backward from Eq.~\ref{eq:cont}, performing transformations on the fields $\psi^\prime$ and $\overline{\psi^\prime}$ in an attempt to generate as many as possible of the terms appearing in Eq.~\ref{eq:action_cont2}. We can then be guaranteed that if the terms not created by these transformations are set to zero by tuning an improved lattice action, the remaining terms can be eliminated by a field transformation. Let us now extend the usual transformations $\psi^\prime \rightarrow \psi$ and $\overline{\psi}^\prime \rightarrow \overline{\psi}$ to demonstrate the redundancy of all but the three parameters listed above: $m_0a$, $\zeta$, $c_{P}$. As in the more complete discussion of Sec.~\ref{sec:induction}, we will organize this discussion using RHQ power counting where the quantities $m$ and $D^0$ are treated as order $a^{-1}$ instead of $a^0$. We begin by making transformations of $O(a^0)$ in the RHQ power counting sense and $O(a^1)$ in the usual sense: \begin{eqnarray} \psi^\prime &=& (1 + a \gamma^0 D^0 S^{0,1})\psi \label{eq:trans_0a} \\ \overline{\psi}^\prime &=& \overline{\psi}(1 - a \gamma^0 \overleftarrow{D}^0 S^{0,1}), \label{eq:trans_0b} \end{eqnarray} where the function $S^{0,1}$ is real and Eqs.~\ref{eq:trans_0a} and \ref{eq:trans_0b} are related by charge conjugation symmetry. We have adopted a somewhat cumbersome notation that will be useful later: the first integer in the superscript of $S^{i,j}$ identifies the RHQ power counting order of the transformation and the second enumerates the different terms of that order. This transformation generates two terms when acting on the action of Eq.~\ref{eq:cont}: \begin{eqnarray} \overline{\psi}\Bigl\{2 m_r a \gamma^0 D^0 S^{0,1} + 2 S^{0,1} a(D^0)^2 \Bigr\}\psi. \label{eq:action_trans_0} \end{eqnarray} As is customary, we neglect terms quadratic in $S^{0,1}$, treating these terms as small. (This issue will be dealt with in a more systematic way in Sec.~\ref{sec:induction}.) Since the quantity in Eq.~\ref{eq:action_trans_0} is generated by a change of Grassmann variables in the path integral, we can treat such a combination of terms as zero, were it to appear in the effective Lagrangian of our improved lattice theory.
Of course, since by construction the expression in Eq.~\ref{eq:action_trans_0} is linear in the Dirac operator appearing in the final action, one can also describe the vanishing of these terms as a consequence of the equations of motion. These two styles of derivation are really one and the same. The vanishing of the combination of terms in Eq.~\ref{eq:action_trans_0} implies we can adjust the function $S^{0,1}$ to set $r_t^c$ to zero. (Note, this gives us the freedom in the improved lattice Lagrangian to choose the conventional value of 1 for the bare version of $r_t$.) The only effect on the resulting action will be that of the first term, $2 m_r a S^{0,1} \overline{\psi}\gamma^0 D^0\psi$, which can be removed by a rescaling of $\psi$ and $\overline{\psi}$. Next consider transformations of order $O(a^1)$ in the RHQ power counting sense and also $O(a^1)$ in the usual sense: \begin{eqnarray} \psi^\prime &=& (1 + a \vec\gamma \vec D R^{1,1})\psi \label{eq:trans_1a} \\ \overline{\psi}^\prime &=& \overline{\psi}(1 - a \vec\gamma \overleftarrow{D} R^{1,1}). \label{eq:trans_1b} \end{eqnarray} Acting on the continuum Lagrangian in Eq.~\ref{eq:cont}, these transformations will produce the terms \begin{eqnarray} \overline{\psi}\Bigl(2 m_r a \vec\gamma \vec D + \frac{1}{2}a[\gamma^i,\gamma^0] [D^i,D^0] + 2 a(D^i)^2 \Bigr)R^{1,1}\psi. \label{eq:action_trans_1} \end{eqnarray} Hence with a proper choice for $R^{1,1}$ we can use the $a(D^i)^2$ term in Eq.~\ref{eq:action_trans_1} to set $r_s^c=0$ for any choice of $r_s$ in the bare lattice Lagrangian (including our conventional value $r_s=\zeta$). Thus, using the set of two transformations considered so far we have been able to argue that an effective Lagrangian with any set of values of $r_s$ and $r_t$ can be transformed to the proper continuum form. This is the standard argument reducing the number of relevant parameters from six to four. However, there is one further transformation, of $O(a^2)$ in the sense of both RHQ and conventional power counting, that can remove one more parameter: \begin{eqnarray} \psi^\prime &=& (1 + a^2[\gamma^i,\gamma^0][D^i,D^0] R^{2,3})\psi \label{eq:trans_2a} \\ \overline{\psi}^\prime &=& \overline{\psi}(1 + a^2[\gamma^i,\gamma^0][\overleftarrow{D}^i,\overleftarrow{D}^0] R^{2,3}), \label{eq:trans_2b} \end{eqnarray} where we use the label $R^{2,3}$ to maintain consistency with Eqs.~\ref{eq:field_trans_2a} and \ref{eq:field_trans_2b}. This transformation, when acting on the two $O(1/a)$ terms in the continuum action, will produce the following combination of terms of $O(a^1)$ according to RHQ power counting: \begin{eqnarray} a^2\overline{\psi} \Bigl(2m_r [\gamma^i,\gamma^0][D^i,D^0] +\gamma^i \Bigl[[D^i,D^0],D^0\Bigr] \Bigr) R^{2,3}\psi. \label{eq:action_trans_2} \end{eqnarray} As before, we can treat this combination of terms as vanishing, either because it was generated by a transformation of path integration variables or, equivalently, as a result of the equations of motion, since it was obtained as a sum of left and right multiplications by the continuum Dirac operator. While the first term in Eq.~\ref{eq:action_trans_2} involves the usual $\sigma^{i0} F^{i0}$ associated with $c_E$ and is nominally of order $a$ in RHQ power counting, the second term, in which both factors of $D^0$ appear in commutators, has no compensating factor of $m$ and hence is $O(a^2)$.
Thus, the vanishing of the sum of terms in Eq.~\ref{eq:action_trans_2} on-shell implies that the $c_E^c$ term in the effective action can be related to other terms that are explicitly of order $a^2$ in the sense of RHQ power counting. Because of the presence of the $m_r a$ factor appearing in this term, we cannot completely remove the $c_E$ term in the effective action since that term will contain contributions that are not proportional to the mass. However, the difference between $c_E$ and $c_B$ can be arranged to be proportional to the heavy quark mass. We must merely avoid a gratuitous violation of axis-interchange symmetry when choosing arbitrary parameters in the lattice Lagrangian, {\it e.g.} we must choose $r_t - r_s \propto (m_ra)^1$. That is, if only axis-interchange asymmetry proportional to $m_r a$ is introduced, the difference between $c_E$ and $c_B$ will also vanish as $m_r a \rightarrow 0$. Thus, we can adjust the transformation parameter $R^{2,3}$ to set $c^c_E=c^c_B \equiv c^c_{P}$. Note, in the limit $m_r a \ll 1$, $c_P(m_r a) \rightarrow c_{SW}$, the usual Sheikholeslami and Wohlert coefficient of Ref.~\cite{Sheikholeslami:1985ij}. It is natural to consider also a transformation of $O(a^2)$ in which the coefficient of $c_B$ appears: \begin{eqnarray} \psi^\prime &=& (1 + a^2[\gamma^i,\gamma^j][D^i, D^j] R^{2,2})\psi \label{eq:trans_2c} \\ \overline{\psi}^\prime &=& \overline{\psi}(1 + a^2[\gamma^i,\gamma^j][\overleftarrow{D}^i, \overleftarrow{D}^j] R^{2,2}). \label{eq:trans_2d} \end{eqnarray} However, in contrast to the previous transformation in Eqs.~\ref{eq:trans_2a} and \ref{eq:trans_2b}, this transformation results in a collection of terms which involves the combination: $\Bigl\{[\gamma^i,\gamma^j][D^i,D^j], \gamma^0 D^0\Bigr\}$. This is a new term, not included in the simplified action of Eq.~\ref{eq:action_cont2}, which is nominally of order $a$ in our RHQ power-counting scheme and hence potentially significant. Replacing the $c_B$ term with this one is merely trading one non-redundant term for another. Of course, the appearance of this new term indicates the limitations of our simplified example and motivates the complete discussion given in Sec.~\ref{sec:induction}. This result, that the difference between $c_B^c$ and $c_E^c$ in the continuum effective Lagrangian contributes a term of order $(\vec p a)^2$, can be understood qualitatively as follows. In the case that $m_r \ll 1/a$ we are dealing with the standard $O(a)$ improvement of Sheikholeslami and Wohlert with $c_B^c = c_E^c$. To the extent that $m_r \approx 1/a$, asymmetries between space and time will be visible and we expect $c_B^c-c_E^c \propto m_ra$. However, for such a heavy quark case we expect the matrix elements of the correction terms $\overline{\psi}\sigma_{\mu\nu} F^{\mu\nu}\psi$ to be of order $1/m_r$. The resulting combination of an overall factor of $a$, present because this is a dimension-5 correction term, the factor $m_r a$ coming from $c_B^c-c_E^c$ and this $1/m_r$ estimate gives an overall size of $O(a^2)$ with no compensating factors of $m_r$, demonstrating that this difference can be neglected to our intended order of accuracy. Thus, to construct an improved lattice Lagrangian which will yield heavy quark spectral quantities which are accurate up to, but not including, $O((\vec p a)^2)$ we need only tune three lattice parameters: $m_0$, $\zeta$, and $c_P$.
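As a small consistency check of the Dirac-matrix conventions used in these manipulations, the Hermiticity of the $\gamma_\mu$, the Clifford algebra and the charge-conjugation relation $C^{-1}\gamma_\mu C=-\gamma_\mu^t$ can be verified numerically. The explicit Euclidean Dirac representation below, and the choice $C=\gamma^0\gamma^2$ within it, are illustrative assumptions; the discussion above does not depend on a particular representation:
\begin{verbatim}
# Check Hermitian Euclidean gamma matrices, the anticommutator
# {gamma_mu, gamma_nu} = 2 delta_{mu nu}, and a charge-conjugation
# matrix obeying C^{-1} gamma_mu C = -gamma_mu^T.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])                # Euclidean Dirac rep
gk = [np.block([[Z2, -1j * s], [1j * s, Z2]]) for s in (s1, s2, s3)]
gamma = [g0] + gk

for mu in range(4):
    assert np.allclose(gamma[mu], gamma[mu].conj().T)        # Hermitian
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2.0 * (mu == nu) * np.eye(4))

C = gamma[0] @ gamma[2]         # one valid choice in this representation
for mu in range(4):
    assert np.allclose(np.linalg.inv(C) @ gamma[mu] @ C, -gamma[mu].T)
print("Clifford algebra and charge-conjugation relation verified")
\end{verbatim}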
\section{On-shell improvement and earlier work} \label{sec:on_shell} In the previous sections we have determined the number of parameters that must be tuned in the lattice action if the resulting effective continuum action is to be equivalent to the standard continuum fermion action after a redefinition of the fermion fields. In this section we will consider the limitations of the resulting improved theory and its relation to the results of the Fermilab~\cite{El-Khadra:1997mp} and Tsukuba~\cite{Aoki:2001ra} groups. As is well known, the physical masses determined by an effective theory are not changed by a change of field variables in the path integral defining the Green's functions of that theory. This observation underlies the reduction of parameters that we have been investigating. Those parameters that can be removed by a redefinition of fields cannot affect the predicted masses. In the earlier work of the Fermilab group, the total number of parameters remaining after compensating for the redundancy implied by field transformations was given as four: $m_r$, $\zeta^c$, $c_B^c$ and $c_E^c$ in our notation. By considering the additional field transformation given in Eqs.~\ref{eq:trans_2a} and \ref{eq:trans_2b} we have shown that the number of relevant parameters can be reduced to three: $m_r$, $\zeta^c$, $c_P^c$. Before comparing with the results of the Tsukuba group, we should discuss the question of computing on-shell Green's functions with the effective actions under consideration. While our ability to remove redundant terms from the action is established by examining possible field transformations, such transformations are not actually made. Making these field transformations and casting the effective action in the desired continuum form would require knowing these extra, ``redundant'' parameters. Thus, the quark fields that appear in a lattice calculation with properly tuned values for the three relevant input parameters (here denoted $\psi_0$ and $\overline{\psi}_0$) are un-transformed fields which correspond to an effective action which is not in the continuum form. Therefore, we will obtain appropriate, continuum on-shell Green's functions only after we relate the un-transformed, interpolating fields appearing in a lattice calculation with the transformed fields corresponding to a proper, continuum-like effective theory (here labeled $\psi^c$ and $\overline{\psi}^c$). While the fields $\psi_0$, $\overline{\psi}_0$ and $\psi^c$, $\overline{\psi}^c$ are related by a complicated transformation, non-linear in the gluon fields, we need to relate only their on-shell matrix elements. For such ``pole'' contributions all of the added powers of the gluon field present in the lattice fields $\psi_0$ and $\overline{\psi}_0$ must be contracted within field renormalization subdiagrams. (These are one-particle-irreducible subdiagrams with two external lines, which contain the external quark line and the internal quark line contributing to the single particle pole, illustrated in Fig.~\ref{fig:wf_renorm}.) Thus, for the purposes of evaluating on-shell Green's functions these two sets of fields are related by a simple spinor renormalization factor: \begin{eqnarray} (\psi_0)_\alpha &=& \sum_\beta Z_{\alpha,\beta} (\psi^c)_\beta \\ (\overline{\psi}_0)_\alpha &=& \sum_\beta (\overline{\psi}^c)_\beta \overline{Z}_{\beta,\alpha} \end{eqnarray} Here $Z_{\alpha,\beta}$ is a simple $4 \times 4$ spinor matrix that can be written: \begin{equation} Z = Z_1 + Z_2 a \vec\gamma \vec \partial.
\end{equation} Here the coefficients $Z_i$ are arbitrary polynomials in $m_0a$ and $(\partial_0 a)^2$. Imposing charge conjugation and reality constraints we find that \begin{equation} \overline{Z} = Z_1 - Z_2 a \vec\gamma \overleftarrow{\partial} \end{equation} and that the polynomials $\{Z_i\}_{i=1,2}$ have real coefficients. Note, we have used the equations of motion to remove a possible $\gamma^0\partial_0$ term. Since these relations are only to be used on-shell, the argument satisfies $(\partial^0 a)^2 = (m_r a)^2 + (\vec p a)^2$ and we can drop the final $(\vec p a)^2$ term. Thus, we will adopt the form \begin{eqnarray} Z &=& Z_q^{-1/2}(1+\delta a \vec \gamma \vec \partial) \label{eq:Z_factor_a} \\ \overline{Z} &=& Z_q^{-1/2}(1-\delta a \vec \gamma \overleftarrow \partial). \label{eq:Z_factor_b} \end{eqnarray} where $Z_q$ and $\delta$ are functions of $m a$ only. Thus, our failure to actually transform to the proper continuum fields requires that on-shell Green's functions in which the quark fields appear as interpolating fields must have the additional renormalization matrices $Z$ and $\overline{Z}$ applied to obtain the correct continuum form. Of course, such factors are not needed to extract the correct mass from the large-time limit of such Green's functions. With this background, we can now discuss the work of the Tsukuba group. They emphasize the importance of working with five parameters, one more than the number determined in the Fermilab paper. By introducing a fifth parameter, they are able to include an additional field transformation which eliminates the parameter $\delta$ above, ensuring a lattice action which will yield on-shell quark propagators which take directly the continuum form. This is useful for lattice perturbative calculations where such on-shell quark propagators have meaning. The Tsukuba group uses the field equations to derive their results, not the approach using field transformations taken in the Fermilab work and used in the present paper. However, there is no difference between these two methods because one typically justifies the use of the equations of motion when evaluating an on-shell amplitude by applying field transformations in the path integral. These two approaches are formally equivalent in this situation. The additional field transformation of Eq.~\ref{eq:trans_2a}, which permits us to use $c_E=c_B$, can also be cast as a field equation implying the same result. Thus, we conclude that in both approaches only three parameters are needed if an improved lattice action is to yield continuum on-shell Green's functions, up to the spinor transformations of Eqs.~\ref{eq:Z_factor_a} and \ref{eq:Z_factor_b}. Since our objective is to use the improved lattice action to compute non-perturbative quantities, we do not benefit from simplifying on-shell quark Green's functions. However, the field renormalization discussed above applies equally well to composite spin-1/2 operators that might be used to create, for example, a charmed baryon. Since such a composite operator will receive significant contributions from lattice-distorted short distances, the quantities $Z$ and $\delta$ appropriate for such a physical heavy fermion will be different from those of a single quark field and the Tsukuba choice of a fifth (now fourth) parameter will not make $\delta$ vanish for the case of a charmed baryon operator. Fortunately, from a non-perturbative perspective, the $Z$-factors above are relatively easy to deal with.
They do not need to be known in advance and do not affect the action used in a simulation. Instead, they can be easily determined {\it a posteriori} from the large-time behavior of the heavy baryon propagator and then used elsewhere to accurately remove the lattice artifacts associated with using that heavy quark composite field. \section{Inductive transformation of the effective action} \label{sec:induction} In this section we study the complete continuum effective action given in Eqs.~\ref{eq:eff}-\ref{eq:eff_1} whose coefficients are polynomials of arbitrary order in $m_0a$, $(aD^0)^2$ and $g^2$. We will use induction in the order of these polynomials to demonstrate that by applying the field transformations of Eqs.~\ref{eq:field_trans_0a}-\ref{eq:field_trans_2b} this general effective action can be transformed to that given in Eq.~\ref{eq:action_cont} where only the coefficients $m_r$, $\zeta^c$ and $c^c_B=c^c_E$ are non-zero and functions of $m_0a$ and $g^2$ alone. The coefficient functions $B^{i,j}$ and $C^{i,j}$ appearing in the original effective action of Eqs.~\ref{eq:eff}-\ref{eq:eff_1} are polynomials of arbitrary order in the product $m_0a$, the operator $(aD^0)^2$ and the gauge coupling $g^2$: \begin{eqnarray} B^{i,j}= \sum_{k,l,n} b_{k,l,n}^{i,j} (m_0a)^k(aD^0)^{2l} g^{2n} \nonumber \\ C^{i,j}= \sum_{k,l,n} c_{k,l,n}^{i,j} (m_0a)^k(aD^0)^{2l} g^{2n}. \label{eq:coef_series2} \end{eqnarray} For later purposes it is important to recognize that only the usual terms in the Dirac action will have non-zero coefficients in leading order: \begin{equation} b_{0,0,0}^{-1,1}=0, \quad b_{1,0,0}^{-1,1}=c_{0,0,0}^{-1,1}=2 b_{0,0,0}^{0,1}=1. \label{eq:cont_tree} \end{equation} We will find it convenient to reorganize the sums in Eq.~\ref{eq:coef_series2}, collecting terms into homogeneous polynomials of degree $N$ in the three variables $m_0a$, $(aD^0)^2$ and $g^2$: \begin{eqnarray} B^{i,j}= \sum_N b_N^{i,j} \nonumber \\ C^{i,j}= \sum_N c_N^{i,j}. \label{eq:homo_series} \label{eq:coef_series_1} \end{eqnarray} where $b_N^{i,j}$ and $c_N^{i,j}$ are such homogeneous polynomials of degree $N$ in these three variables. In terms of these polynomials, the character of the tree-level, continuum limit of the standard Wilson action can be summarized by the requirement that $b_0^{-1,1}=0$, $b_1^{-1,1}=m_0a$, and $c_0^{-1,1}= 2 b_0^{0,1} = 1$ (equivalent to Eq.~\ref{eq:cont_tree}) and that all the other $N=0$ coefficients $b_0^{i,j}$ and $c_0^{i,j}$ must vanish. Equations~\ref{eq:eff}, \ref{eq:eff_-1}, \ref{eq:eff_0} and \ref{eq:eff_1} are organized in increasing powers of the lattice spacing where we treat $m$ and $D^0$ as order $1/a$ to accommodate the possibility that $m \sim 1/a$. However, it is important to bear in mind that the term ${\cal L}_{{\rm eff}, n}$ is characterized only by the lack of terms of lower order in $a$ than $a^n$. This term will necessarily contain terms that are of higher order in $a$. Commutators/anti-commutators have been introduced into the definitions in Eqs.~\ref{eq:eff_-1}, \ref{eq:eff_0} and \ref{eq:eff_1} in an attempt to organize these higher order terms. The polynomials $B^{i,j}$ and $C^{i,j}$ above are labeled so that the left index indicates the order of the term in this scheme for RHQ power counting, {\it e.g.} $O(a^{i})$, while the right index enumerates the various terms that can occur in that order. Note, we are using two separate expansions. One expansion is in powers of $a$, presuming that $m$ may be of order $1/a$.
The second is the expansion in the overall order of the three variables $m_0a$, $(aD^0)^2$ and $g^2$, where the term $(m_0a)^k (aD^0)^{2l} g^{2n}$ is identified as of order $N=k+l+n$. \subsection{Field transformations} As is discussed above, many of the unwanted terms in Eq.~\ref{eq:eff} have no effect on physical states or fermion Green's functions evaluated on the mass shell and can be removed by a redefinition of the fermion fields $\psi$ and $\overline{\psi}$. We will now make a series of such transformations chosen to remove many of the terms that appear in the Symanzik effective action of Eq.~\ref{eq:eff}. The coefficients of those terms that remain after these transformations are then presumed to be potentially important lattice artifacts that should be eliminated by an explicit choice of additional improvement terms in the underlying lattice action. The removal of these redundant terms is most easily analyzed in a series of steps exploiting the ordering of the terms in Eq.~\ref{eq:eff}: $O(1/a)$, $O(a^0)$, $O(a)$, {\it etc.} in the RHQ expansion. We will first make the large, $O(a^0)$, transformation of Eqs.~\ref{eq:field_trans_0a} and \ref{eq:field_trans_0b} which we will be able to choose to return the $O(1/a)$ terms in ${\cal L}_{{\rm eff},-1}$ to the form found in the conventional continuum action: \begin{equation} {\cal L}_{\rm sym} = \overline{\psi}'(\gamma^\mu D_\mu + m_r)\psi'. \label{eq:eff_target} \end{equation} We will then consider the effect of both this $O(a^0)$ transformation as well as the most general $O(a)$ transformation given in Eqs.~\ref{eq:field_trans_1a} and \ref{eq:field_trans_1b} on the $O(a^0)$ term ${\cal L}_{{\rm eff},0}$. Finally, the effect of all three transformations, $O(a^0)$, $O(a)$ and the $O(a^2)$ transformation given in Eqs.~\ref{eq:field_trans_2a} and \ref{eq:field_trans_2b}, will be studied on the final term of interest, ${\cal L}_{{\rm eff},1}$. \subsubsection{Redundant terms in ${\cal L}_{{\rm eff},-1}$} In order to analyze the $O(1/a)$ terms in the Symanzik effective action, we must consider the effects of a field transformation of $O(a^0)$ on that action. The most general such field transformation is given in Eqs.~\ref{eq:field_trans_0a} and \ref{eq:field_trans_0b} and repeated here for convenience, incorporating charge conjugation symmetry: \begin{eqnarray} \psi &=& (1+R^{0,1} + a\gamma^0 D^0 S^{0,1})\psi' \label{eq:field_trans_0a2} \\ \overline{\psi} &=& \overline{\psi}'(1+R^{0,1} - a\gamma^0 \overleftarrow{D}^0 S^{0,1}). \label{eq:field_trans_0b2} \end{eqnarray} The two functions $R^{0,1}$ and $S^{0,1}$ are polynomials of arbitrary order in $m_0a$, $(aD^0)^2$ and $g^2$, similar to the coefficient functions $B^{i,j}$ and $C^{i,j}$ of Eqs.~\ref{eq:eff_-1}-\ref{eq:eff_1} and \ref{eq:coef_series2}. These transformations are most easily analyzed if we proceed in a systematic fashion, sequentially removing terms in ${\cal L}_{{\rm eff},-1}$ of increasing order in $m_0a$, $(aD^0)^2$ and $g^2$ where, as above, we identify a term of the form $(m_0a)^k(aD^0)^{2l} g^{2n}$ as being of order $N=k+l+n$. Reliance on such a formal expansion is a standard approach to linearize the problem at hand, at the expense of requiring that an inductive argument be created to deal with polynomials of arbitrary order.
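The order-$N$ bookkeeping used in this sequential removal is easy to make explicit. The following short Python snippet, included purely for illustration, enumerates the exponent triples $(k,l,n)$ with $k+l+n=N$ labeling the monomials $(m_0a)^k\bigl((aD^0)^2\bigr)^l g^{2n}$ that contribute at each homogeneous order:
\begin{verbatim}
# Enumerate the monomials of homogeneous degree N in the three
# expansion variables m0*a, (a*D0)^2 and g^2.
def monomials(N):
    return [(k, l, N - k - l)
            for k in range(N + 1) for l in range(N + 1 - k)]

for N in range(3):
    terms = ["(m0 a)^%d ((a D0)^2)^%d (g^2)^%d" % t for t in monomials(N)]
    print("N = %d: %d terms: %s" % (N, len(terms), ", ".join(terms)))
\end{verbatim}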
Specifically, we will achieve the general field transformation described in Eqs.~\ref{eq:field_trans_0a2} and \ref{eq:field_trans_0b2} by performing a sequence of simpler transformations, where each involves a homogeneous polynomial of order $N$ in the three variables $m_0a$, $(aD^0)^2$ and $g^2$:
\begin{eqnarray}
\psi &=& (1+r_N^{0,1} + a\gamma^0 D^0 s_N^{0,1})\psi'
\label{eq:field_trans_0_Na} \\
\overline{\psi} &=& \overline{\psi}'(1+r_N^{0,1} - a\gamma^0 \overleftarrow{D}^0 s_N^{0,1}).
\label{eq:field_trans_0_Nb}
\end{eqnarray}
Here the quantities $r_N^{i,j}$ and $s_N^{i,j}$ are homogeneous polynomials of order $N$ in the three variables $m_0a$, $(aD^0)^2$ and $g^2$. The index $i$ identifies the order of the term in the RHQ expansion and the index $j$ labels the specific operator appearing in the transformation.
\underline{Theorem} By proper choice of the transformation coefficients $r_N^{0,1}$ and $s_N^{0,1}$, it is possible to transform ${\cal L}_{\rm eff,-1}$ into the form:
\begin{equation}
{\cal L}_{\rm eff,-1} = \overline{\psi}'\{\gamma^0 D^0 + m_r\}\psi'
\label{eq:induct_0}
\end{equation}
where $m_r$ is a polynomial in the variables $m_0a$ and $g^2$. This theorem can be proven by induction in $N$.
\underline{Proof} To leading order in $N$, Eq.~\ref{eq:induct_0} is satisfied without any transformation. As observed above, the coefficient of $\gamma^0 D^0$, $C^{-1,1} = c_0^{-1,1} =1$ to order $N=0$. Likewise at order $N=1$, the coefficient of $1/a$, $B^{-1,1} = b_0^{-1,1} + b_1^{-1,1} = m_0a$, so that through order $N=1$, $m_r=m_0$. Thus, as the first step in our induction proof, we note that $c_0^{-1,1} = 1$, $b_0^{-1,1}=0$ and $b_1^{-1,1}=m_0a$. Next we assume Eq.~\ref{eq:induct_0} is valid to order $N=N_0$ in the sense that after the previous $N_0$ steps, the resulting coefficients in the Lagrangian ${\cal L}_{\rm eff,-1}$ of Eq.~\ref{eq:eff_-1} obey: $C^{-1,1} = c_0^{-1,1} = 1$ and $B^{-1,1} = (m_ra)_{N_0+1}$. Thus, we must attempt to remove the next-order terms in ${\cal L}_{\rm eff,-1}$:
\begin{equation}
{\cal L}_{\rm eff,-1} = \overline{\psi}\Bigl\{\gamma^0 D^0(1+c_{N_0+1}^{-1,1}) + \frac{1}{a} \Bigl( (m_ra)_{N_0+1} + b_{N_0+2}^{-1,1} \Bigr) \Bigr\}\psi.
\label{eq:induct_1}
\end{equation}
Here, by induction, $(m_ra)_{N_0+1}$ is assumed to be a polynomial of order $N \le N_0+1$ in the variables $m_0a$ and $g^2$. The coefficients $b_{N_0+2}^{-1,1}$ and $c_{N_0+1}^{-1,1}$ are closely related to those appearing in Eqs.~\ref{eq:coef_series2} and \ref{eq:coef_series_1}, differing only by the effects of the field transformations made previously to achieve the form in Eq.~\ref{eq:induct_1}:
\begin{eqnarray}
\psi &=& \prod_{N=0}^{N_0}\Bigl(1+r_{N}^{0,1} + a\gamma^0 D^0 s_{N}^{0,1}\Bigr)\psi' \\
\overline{\psi} &=& \overline{\psi'}\prod_{N=0}^{N_0}\Bigl(1 +r_{N}^{0,1} -a\gamma^0 \overleftarrow{D}^0 s_{N}^{0,1}\Bigr).
\label{eq:field_trans_gen}
\end{eqnarray}
Performing the next transformation of order $N_0+1$,
\begin{eqnarray}
\psi &=& (1+r_{N_0+1}^{0,1} + a\gamma^0 D^0 s_{N_0+1}^{0,1})\psi'
\label{eq:field_trans_0_1c} \\
\overline{\psi} &=& \overline{\psi'}(1+r_{N_0+1}^{0,1} - a\gamma^0 \overleftarrow{D}^0 s_{N_0+1}^{0,1}),
\label{eq:field_trans_0_1d}
\end{eqnarray}
${\cal L}_{\rm eff,-1}$ of Eq.~\ref{eq:induct_1} becomes:
\begin{eqnarray}
{\cal L}_{\rm eff,-1} &=& \overline{\psi'}\Bigl\{\gamma^0 D^0\bigl[1+c_{N_0+1}^{-1,1} + 2r_{N_0+1}^{0,1} \bigr]
\nonumber \\
&&+ \frac{1}{a} \bigl[ (m_ra)_{N_0+1} + 2 b_{N_0+2}^{-1,1} + 2 (aD^0)^2 s_{N_0+1}^{0,1} \bigr] \Bigr\}\psi'.
\label{eq:induct_2}
\end{eqnarray}
Thus, we can establish our theorem to order $N_0+1$ if we require:
\begin{equation}
c_{N_0+1}^{-1,1} + 2 r_{N_0+1}^{0,1} = 0
\label{eq:induct_3}
\end{equation}
and choose $s_{N_0+1}^{0,1}$ to remove the $(aD^0)^{2l}$ terms with $1 \le l \le N_0+2$ from the coefficient $b_{N_0+2}^{-1,1}$, so that the definition
\begin{eqnarray}
(m_ra)_{N_0+2} &=& (m_ra)_{N_0+1} + b_{N_0+2}^{-1,1} + 2(aD^0)^2 s_{N_0+1}^{0,1}
\label{eq:induct_4}
\end{eqnarray}
will contain no $(aD^0)^2$ terms, as required. Following this inductive procedure, we are thus able to express the $O(1/a)$ Symanzik Lagrangian in the standard continuum form. Only the mass parameter $m_r$ must be tuned by an appropriate choice of lattice action to agree with the mass of the heavy quark which this Lagrangian is intended to describe.
\subsubsection{Redundant terms in ${\cal L}_{{\rm eff},0}$}
The order-$a^0$ Symanzik effective Lagrangian ${\cal L}_{{\rm eff},0}$ is altered by two sorts of field transformations. The first is the $O(a^0)$ transformations discussed above. The second is the $O(a)$ transformations of Eqs.~\ref{eq:field_trans_1a} and \ref{eq:field_trans_1b}, which act on ${\cal L}_{{\rm eff},-1}$ and generate terms of the type which appear in ${\cal L}_{{\rm eff},0}$. Including the constraints of charge conjugation symmetry, these $O(a)$ transformations can be written:
\begin{eqnarray}
\psi &=& \Bigl(1+ a \vec\gamma \vec D R^{1,1} + a^2[\vec\gamma \vec D,\gamma^0 D^0] S^{1,1}\Bigr)\psi'
\label{eq:field_trans_1_1a} \\
\overline{\psi} &=& \overline{\psi}'\Bigl(1 - a \vec\gamma \overleftarrow{D} R^{1,1} - a^2[\vec\gamma \overleftarrow{D},\gamma^0 \overleftarrow{D}^0] S^{1,1}\Bigr).
\label{eq:field_trans_1_1b}
\end{eqnarray}
As in the previous discussion, it will be convenient to view the coefficient functions $R^{1,1}$ and $S^{1,1}$ as sums of homogeneous polynomials in the three variables $m_0a$, $(aD^0)^2$ and $g^2$:
\begin{eqnarray}
R^{i,j} &=& \sum_N r_N^{i,j} \quad\quad S^{i,j} = \sum_N s_N^{i,j}.
\label{eq:poly_exp_1}
\end{eqnarray}
Again, we will proceed inductively to prove the following result:
\underline{Theorem} By proper choice of the transformation coefficients $r_N^{1,1}$ and $s_N^{1,1}$, it is possible to transform ${\cal L}_{\rm eff}$ so that ${\cal L}_{\rm eff,0}$ takes the form:
\begin{equation}
{\cal L}_{\rm eff,0} = \overline{\psi}\vec\gamma \vec D \psi.
\label{eq:induct_0b}
\end{equation}
\underline{Proof} This result is automatically valid to order $N=0$, which is the case of the tree-level Lagrangian with $b_0^{0,1}=1/2$ and $c_0^{0,1}=0$. Next, assume the inductive hypothesis that when working to order $N_0$ we are able to simplify ${\cal L}_{{\rm eff},0}$ so that all terms of order $N_0+1$ and lower take the form:
\begin{equation}
{\cal L}_{{\rm eff},0} = \overline{\psi} \Bigl\{\vec\gamma \vec D, (1/2 + b_{N_0+1}^{0,1})\Bigr\} \psi.
\label{eq:eff_0_N_0a}
\end{equation}
(Recall that the $C^{0,1}$ term in Eq.~\ref{eq:eff_0} vanishes when charge conjugation symmetry is imposed.)
We will now apply the transformations of order $N_0+1$ given in Eqs.~\ref{eq:field_trans_0_1c} and \ref{eq:field_trans_0_1d} and those in Eqs.~\ref{eq:field_trans_1_1a} and \ref{eq:field_trans_1_1b} specialized to the polynomials of order $N_0$,
\begin{eqnarray}
\psi &=& \Bigl(1+ a \vec\gamma \vec D r_{N_0}^{1,1} + a^2[\vec\gamma \vec D,\gamma^0 D^0] s_{N_0}^{1,1}\Bigr)\psi'
\label{eq:field_trans_1_Na} \\
\overline{\psi} &=& \overline{\psi'}\Bigl(1 -a r_{N_0}^{1,1}\, \vec\gamma \overleftarrow D - a^2 s_{N_0}^{1,1} [\vec\gamma \overleftarrow{D},\gamma^0 \overleftarrow{D}^0] \Bigr),
\label{eq:field_trans_1_Nb}
\end{eqnarray}
to ${\cal L}_{{\rm eff},-1}+{\cal L}_{{\rm eff},0}$. These transformations yield ${\cal L}_{{\rm eff},0}$ of the following form:
\begin{eqnarray}
{\cal L}_{{\rm eff},0} &=& \overline{\psi'} \Bigl\{\vec\gamma \vec D, \Bigl(\frac{1}{2} + b_{N_0+1}^{0,1} + r^{0,1}_{N_0+1} + m_0 a\,r^{1,1}_{N_0} -(aD^0)^2s^{1,1}_{N_0} \Bigr)\Bigr\}\psi'.
\label{eq:eff_0_N_0}
\end{eqnarray}
Since the difference between the coefficient of $\gamma^0 D^0$, which has now been set to one, and that of $\vec \gamma \vec D$ must vanish when the anisotropic effects of the special treatment of $m_0a$ and $D^0$ are absent, the combination $b_{N_0+1}^{0,1} + r^{0,1}_{N_0+1}$ must be proportional to a linear combination of $m_0a$ and $(aD^0)^2$ and can therefore be completely canceled by an appropriate choice of the terms $m_0 a\, r^{1,1}_{N_0}$ and $-(aD^0)^2s^{1,1}_{N_0}$, completing our inductive proof. As the preceding discussion reveals, our inductive approach to determining the redundant parameters in ${\cal L}_{\rm eff}$ requires that both the order $a^0$ and order $a^1$ field transformations be applied at the same time, so that a common inductive step is taken to show that the desired form will hold at order $N_0+1$ provided it holds at order $N_0$. It is in this sense that we are combining the order $a^0$ and $a^1$ transformations in Eq.~\ref{eq:eff_0_N_0}.
\subsubsection{Redundant terms in ${\cal L}_{{\rm eff},1}$}
The last step in this discussion is an analysis of the freedom to simplify the terms of order $a$ in ${\cal L}_{\rm eff}$, {\it i.e.} ${\cal L}_{\rm eff,1}$. These simplifications can be effected by three different sorts of field transformations: transformations of order $a^0$ acting on ${\cal L}_{\rm eff,1}$, transformations of order $a$ acting on ${\cal L}_{\rm eff,0}$ and transformations of order $a^2$ acting on ${\cal L}_{\rm eff,-1}$. We will again state our result in the form of a theorem to be proven by induction in the order of the polynomials appearing in ${\cal L}_{\rm eff,1}$:
\underline{Theorem} By an appropriate field transformation ${\cal L}_{\rm eff,1}$ can be cast in the form:
\begin{equation}
{\cal L}_{\rm eff,1} = -\overline{\psi} c_{P}\Bigl\{ \frac{1}{8} [\gamma^i,\gamma^j][D^i,D^j] + \frac{1}{4}[\gamma^i,\gamma^0][D^i,D^0]\Bigr\} \psi
\label{eq:induct_0c}
\end{equation}
where $c_{P}=-8B^{1,2}=-4B^{1,3}$ is a polynomial in $m_0a$ and $g^2$ only.
\underline{Proof} We begin by observing that to order $N=0$ Eq.~\ref{eq:induct_0c} is automatically obeyed with $c_{P}=-8 B_{N=0}^{1,2}=1$, the original, tree-level result of Sheikholeslami and Wohlert.
Next we assume that this is true to order $N_0$ so that to order $N_0+1$, ${\cal L}_{\rm eff,1}$ takes the form:
\begin{eqnarray}
{\cal L}_{\rm eff,1} &=& a \overline{\psi}\Biggl\{\vec D^2 b_{N_0+1}^{1,1} + a\{\vec D^2, \gamma^0 D^0\} c_{N_0+1}^{1,1}
\label{eq:induct_1c} \\
&&\quad+ [\gamma^i, \gamma^j] [D^i,D^j]\Bigl(-\frac{1}{8}(c_{P})_{N_0}+ b_{N_0+1}^{1,2}\Bigr)
\nonumber \\
&&\quad+ a\{[\gamma^i, \gamma^j][D^i,D^j], \gamma^0 D^0 \}c_{N_0+1}^{1,2}
\nonumber \\
&&\quad+ [\gamma^i, \gamma^0] [D^i,D^0]\Bigl(-\frac{1}{4}(c_{P})_{N_0}+ b_{N_0+1}^{1,3}\Bigr) \Biggr\}\psi.
\nonumber
\end{eqnarray}
We will now attempt to remove the redundant terms in Eq.~\ref{eq:induct_1c} by the following three field transformations. The first is the $O(a^0)$ transformations of Eqs.~\ref{eq:field_trans_0_Na} and \ref{eq:field_trans_0_Nb} that involve polynomials in $m_0a$, $(aD^0)^2$ and $g^2$ of combined order $N=N_0+1$. These $O(a^0)$ transformations will have the following $O(a)$ effects. When acting on ${\cal L}_{\rm eff,-1}$, these $O(a^0)$ transformations produce only terms of $O(1/a)$. No terms of $O(a^0)$ or $O(a)$ are produced. If these $O(a^0)$ transformations act on ${\cal L}_{\rm eff,0}$, both terms of $O(a^0)$ and of $O(a)$ are created. Those of $O(a^0)$ appear in Eq.~\ref{eq:eff_0_N_0} and have been removed by the transformations of order $a^1$. The terms of $O(a)$ which will affect ${\cal L}_{\rm eff,1}$ take the following form:
\begin{eqnarray}
\Delta{\cal L}_{\rm eff,1}^{0,0} &=& a\overline{\psi} \{\vec \gamma \vec D, \gamma^0 D^0\}\Bigl(s_{N_0+1}^{0,1} +2(aD^0)^2 \frac{\partial s_{N_0+1}^{0,1}}{\partial\bigl((aD^0)^2\bigr)}\Bigr)\psi.
\label{eq:delta_0_0}
\end{eqnarray}
Here the $i,j$ superscript on $\Delta{\cal L}_{\rm eff,1}^{i,j}$ identifies this expression as the change in ${\cal L}_{\rm eff,1}$ coming from applying a transformation of order $a^i$ to ${\cal L}_{{\rm eff},j}$. Next we should consider the effect of this $O(a^0)$ transformation on the $O(a)$ Lagrangian ${\cal L}_{\rm eff,1}$. However, since we will not need to use the effects of this transformation on ${\cal L}_{\rm eff,1}$, we will assume that its effects have already been taken into account in the coefficients $b_{N_0+1}^{1,j}$ and $c_{N_0+1}^{1,j}$ that appear in Eq.~\ref{eq:induct_1c}. Having completely accounted for the effects on ${\cal L}_{\rm eff}$ of the transformations of $O(a^0)$ given in Eqs.~\ref{eq:field_trans_0_1c} and \ref{eq:field_trans_0_1d}, we will now consider the transformations of $O(a)$ given in Eqs.~\ref{eq:field_trans_1_Na} and \ref{eq:field_trans_1_Nb}. First, as they act on ${\cal L}_{\rm eff, -1}$ they will produce the following changes in ${\cal L}_{\rm eff,1}$:
\begin{eqnarray}
\Delta{\cal L}_{\rm eff,1}^{1,-1} &=& \frac{a}{2}\overline{\psi} [\gamma^i,\gamma^0][ D^i,D^0] r_{N_0+1}^{1,1} \psi.
\label{eq:delta_1_-1}
\end{eqnarray}
Note, this term was generated from the $\gamma^0 D^0$ term in ${\cal L}_{\rm eff, -1}$. No terms of order $a$ are produced from the mass term. The next case to consider is the effect of these transformations of $O(a)$ on ${\cal L}_{\rm eff, 0}$. The resulting changes to ${\cal L}_{\rm eff,1}$ are:
\begin{eqnarray}
\Delta{\cal L}_{\rm eff,1}^{1,0} &=& \overline{\psi}\Biggl\{ a\Bigl(2 \vec D^2 + \frac{1}{2}[\gamma^i,\gamma^j][D^i,D^j]\Bigr) r^{1,1}_{N_0+1}
\label{eq:delta_1_0} \\
&& + a\Bigl\{\Bigl(2 \vec D^2 + \frac{1}{2}[\gamma^i,\gamma^j][D^i,D^j]\Bigr), a\gamma^0 D^0\Bigr\}s^{1,1}_{N_0+1} \Biggr\}\psi.
\nonumber
\end{eqnarray}
The final $O(a)$ effects to consider are those of transformations of $O(a^2)$ acting on ${\cal L}_{\rm eff, -1}$. If Eqs.~\ref{eq:field_trans_2a} and \ref{eq:field_trans_2b} are specialized to respect charge conjugation symmetry, the relevant $O(a^2)$ field transformations can be written:
\begin{eqnarray}
\psi &=& \Biggl(1+ a^2 \vec D^2 r_{N_0+1}^{2,1} + a^2\{\vec D^2,a\gamma^0 D^0\}s_{N_0}^{2,1}
\label{eq:field_trans_2_Na} \\
&+&a^2[\gamma^i, \gamma^j][D^i, D^j] r_{N_0+1}^{2,2}
\nonumber \\
&+&a^2\Bigl\{[\gamma^i, \gamma^j][D^i, D^j],a\gamma^0 D^0\Bigr\}s_{N_0}^{2,2}
\nonumber \\
&+&a^2[\gamma^i, \gamma^0][D^i, D^0] r_{N_0+1}^{2,3}
\nonumber \\
&+&a^2\Bigl[[\gamma^i, \gamma^0][D^i, D^0],a\gamma^0 D^0\Bigr]s_{N_0}^{2,3} \Biggr)\psi'
\nonumber \\
\overline{\psi} &=& \overline{\psi}'\Biggl(1 + a^2 \overleftarrow{D}^2 r_{N_0+1}^{2,1} - a^2\{\overleftarrow{D}^2,a\gamma^0 \overleftarrow{D}^0\} s_{N_0}^{2,1}
\label{eq:field_trans_2_Nb} \\
&+&a^2[\gamma^i, \gamma^j][\overleftarrow{D}^i, \overleftarrow{D}^j] r_{N_0+1}^{2,2}
\nonumber \\
&-&a^2\Bigl\{[\gamma^i, \gamma^j][\overleftarrow{D}^i, \overleftarrow{D}^j], a\gamma^0 \overleftarrow{D}^0\Bigr\} s_{N_0}^{2,2}
\nonumber \\
&+&a^2[\gamma^i, \gamma^0][\overleftarrow{D}^i, \overleftarrow{D}^0] r_{N_0+1}^{2,3}
\nonumber \\
&+& a^2\Bigl[ [\gamma^i, \gamma^0][\overleftarrow{D}^i, \overleftarrow{D}^0], a\gamma^0 \overleftarrow{D}^0\Bigr] s_{N_0}^{2,3} \Biggr).
\nonumber
\end{eqnarray}
The resulting $O(a)$ terms are:
\begin{eqnarray}
\Delta{\cal L}_{\rm eff,1}^{2,-1} &=& \overline{\psi}\Biggl\{ a \vec D^2\Bigl( 2m_0a\,r_{N_0+1}^{2,1} +4(aD^0)^2 s_{N_0}^{2,1}\Bigr)
\label{eq:delta_2_-1} \\
&& +a\{\vec D^2,a\gamma^0 D^0\}( 2m_0a\, s_{N_0}^{2,1} +r_{N_0+1}^{2,1})
\nonumber \\
&& +a[\gamma^i,\gamma^j][D^i,D^j]\Bigl( 2m_0a\,r_{N_0+1}^{2,2} +4(aD^0)^2s_{N_0}^{2,2}\Bigr)
\nonumber \\
&& +a\{[\gamma^i,\gamma^j][D^i,D^j],\gamma^0D^0\}\Bigl( 2m_0a\, s_{N_0}^{2,2} +r_{N_0+1}^{2,2}\Bigr)
\nonumber \\
&& +a[\gamma^i,\gamma^0][D^i,D^0]\Bigl( 2m_0a\, r_{N_0+1}^{2,3} -4(aD^0)^2s_{N_0}^{2,3}\Bigr) \Biggr\}\psi.
\nonumber
\end{eqnarray}
We can now combine the $O(a)$ terms created by these three field transformations with those already present in Eq.~\ref{eq:induct_1c}. We will do this by considering in turn each of the three types of operators appearing in Eq.~\ref{eq:induct_1c} with coefficients whose right-hand superscript is $1 \le j \le 3$, which we will denote ${\cal L}_{\rm eff,1}^{(j=1,2,3)}$. We first examine ${\cal L}_{\rm eff,1}^{(1)}$, constructed by collecting terms from Eqs.~\ref{eq:induct_1c}, \ref{eq:delta_1_0} and \ref{eq:delta_2_-1}:
\begin{eqnarray}
{\cal L}_{\rm eff,1}^{(1)} &=& a \overline{\psi}\Biggl\{\vec D^2 \Bigl(b_{N_0+1}^{1,1} +2r^{1,1}_{N_0+1} + 2m_0a\,r_{N_0+1}^{2,1} +4(aD^0)^2s_{N_0}^{2,1} \Bigr)
\label{eq:collected_1_1} \\
&&+ a\{\vec D^2, \gamma^0 D^0\} \Bigl( c_{N_0+1}^{1,1} +2s^{1,1}_{N_0+1}+ 2m_0a\, s_{N_0}^{2,1} +r_{N_0+1}^{2,1}\Bigr) \Biggr\}\psi.
\nonumber
\end{eqnarray}
Since we will adopt the usual convention of fixing the spatial Wilson term in the lattice action to have normalization 1, we will adjust $r^{1,1}_{N_0+1}$ appearing above to remove the corresponding $\vec D^2$ term. This implies that we cannot simultaneously make the choice described in the previous discussion of ${\cal L}_{\rm eff,0}$, where these same coefficients were used to set the coefficient of $\vec \gamma \vec D$ to one (see Eq.~\ref{eq:eff_0_N_0} and following); as a result, the coefficient of $\vec\gamma \vec D$, the parameter $\zeta$, remains one of the quantities that must be tuned.
The second term, of the form $a\{\vec D^2,a \gamma^0 D^0\}$ in Eq.~\ref{eq:collected_1_1}, will be removed by the choice of $r_{N_0+1}^{2,1}$. We next examine ${\cal L}_{\rm eff,1}^{(2)}$, constructed by collecting terms from Eqs.~\ref{eq:induct_1c}, \ref{eq:delta_1_0} and \ref{eq:delta_2_-1}:
\begin{eqnarray}
{\cal L}_{\rm eff,1}^{(2)} &=& a \overline{\psi}\Biggl\{ a[\gamma^i,\gamma^j][D^i,D^j] \Bigl(-\frac{1}{8}(c_{P})_{N_0} + b_{N_0+1}^{1,2} + r^{1,1}_{N_0+1} + 2m_0a\,r_{N_0+1}^{2,2} +4(aD^0)^2 s_{N_0}^{2,2} \Bigr) \nonumber\\
&&+ a \Bigl\{[\gamma^i,\gamma^j][D^i,D^j],a\gamma^0 D^0\Bigr\}\Bigl( c_{N_0+1}^{1,2} + \frac{1}{2} s^{1,1}_{N_0+1} + 2m_0a\, s_{N_0}^{2,2} +r_{N_0+1}^{2,2} \Bigr) \Biggr\}\psi.
\label{eq:collected_1_2}
\end{eqnarray}
Since the coefficient $r^{1,1}_{N_0+1}$ has already been used to remove the $\vec D^2$ term and the coefficient $r_{N_0+1}^{2,2}$ will be used below, we have only the freedom to adjust $s_{N_0}^{2,2}$ to remove the terms proportional to $(aD^0)^2$ from the coefficient of $a[\gamma^i,\gamma^j][D^i,D^j]$. Thus, the parameter $c_{P}$ will require mass-dependent tuning. However, the second term, $a \{[\gamma^i,\gamma^j][D^i,D^j],a\gamma^0 D^0\}$, can be entirely removed by a choice of $r_{N_0+1}^{2,2}$. Finally we consider the term ${\cal L}_{\rm eff,1}^{(3)}$, constructed by collecting terms from Eqs.~\ref{eq:induct_1c}, \ref{eq:delta_0_0}, \ref{eq:delta_1_-1} and \ref{eq:delta_2_-1}:
\begin{eqnarray}
{\cal L}_{\rm eff,1}^{(3)} &=& a \overline{\psi}\, a[\gamma^i,\gamma^0][D^i,D^0]\Bigl\{ -\frac{1}{4}(c_{P})_{N_0}+b_{N_0+1}^{1,3}+s_{N_0+1}^{0,1}
\label{eq:collected_1_3} \\
&& +2(aD^0)^2 \frac{\partial s_{N_0+1}^{0,1}}{\partial\bigl((aD^0)^2\bigr)} +\frac{1}{2}r_{N_0+1}^{1,1} +2 m_0a\, r_{N_0+1}^{2,3} -4(aD^0)^2s_{N_0}^{2,3} \Bigr\}\psi.
\nonumber
\end{eqnarray}
We can now exploit the freedom to choose the coefficient $s_{N_0}^{2,3}$ to remove the terms containing $(aD^0)^{2l}$ from the coefficient of $[\gamma^i,\gamma^0][D^i,D^0]$ and can determine $m_0a\, r_{N_0+1}^{2,3}$ to set this coefficient equal to that of $[\gamma^i,\gamma^j][D^i,D^j]/2$, since their difference must be proportional to $m_0a$. Thus, we have shown that with the proper choice of field transformations to remove redundant terms, the general Symanzik action invariant under axis reversal and charge conjugation will contain only three independent parameters. This result is summarized in Table~\ref{tab:improvement}, where the various field transformations and the terms which they eliminate are listed. We conclude that a lattice calculation accurate through order $|\vec p a|$ and to arbitrary order in $ma$ requires the determination of the three parameters $m_0$, $\zeta$ and $c_P$ appearing in the improved lattice action.
\section{Tree-level results}
\label{sec:tree}
In order to investigate the number of required parameters further, we have carried out a tree-level calculation of both the quark propagator and the quark-gluon vertex for a general, heavy-quark lattice Lagrangian, evaluated in the limit $|\vec p a| \ll 1$. We begin with the general lattice action given in Eq.~\ref{eq:action_lat}, which depends on six parameters, $m_0$, $\zeta$, $r_s$, $r_t$, $c_B$ and $c_E$. We then demonstrate that a continuum result can be obtained on-shell, accurate through $O(|\vec p a|)$ and to all orders in $m_r a$, by adjusting only the expected three parameters $m_0$, $c_P \equiv c_B = c_E$ and $\zeta$ while at the same time performing a simple $4 \times 4$ matrix rotation on the Dirac spinors.
The presence of hyperbolic trigonometric functions in the Minkowski-space lattice propagator makes the algebra in this section somewhat complex. This complexity is compounded by the approximation $|\vec p a| \ll 1$ which is being made to functions of the two variables $|\vec p a|$ and $m_r a$. Depending on the size of the second variable $m_r a$, the treatment of the quantity $|\vec p a|$ can be quite different. It is natural to divide the possible values of $m_r a$ into two regions. In the first region $m_r a \ll 1$, and we have the kinematics of standard, light fermions. In this case we cannot neglect $|\vec p|/m_r$ but can treat $m_r a$ as a small parameter. In the second region we assume $|\vec p| \ll m_r$. Here we cannot neglect errors of order $m_ra$ but can treat $|\vec p|/m_r$ as small. Since these two regions have a non-vanishing overlap, $|\vec p| \ll m_r \ll 1/a$, we will be able to demonstrate that the tree-level amplitudes are consistent with our treatment for all values of $m_r a$ if we are able to provide satisfactory bounds on the errors in both of these two regions. We propose to do this as follows. First we introduce a small parameter $\epsilon = |\vec p a|$. Our objective is to show that at tree level, working with the improved, 3-parameter action and an appropriate $4 \times 4$ spinor transformation matrix, we can reproduce continuum results up to errors of order $\epsilon^2$. We will divide the range of values of $m_r a$ into two non-overlapping regions. In the first, Region I, we require $m_r a \le \sqrt{\epsilon}$. Here we can Taylor expand in the parameter $m_r a$ but must control errors up to order $(m_r a)^4$. Region II corresponds to the remaining range of $m_r$: $\sqrt{\epsilon} < m_r a$. Now we can expand in $|\vec p|/m_r = |\vec p| a/(m_ra)\le \sqrt{\epsilon}$ but must therefore work up to $O((\vec p/m_r)^4)$.
\subsection{Momentum-dependent energy}
The quark wave function renormalization constant $Z_q$ and the parameters $m_0$, $\zeta$ and $r_s$ can be constrained by demanding that the lattice quark propagator $G_q(p)$ derived from the lattice action of Eq.~\ref{eq:action_lat} should reproduce the relativistic form
\begin{equation}
G_q(p_0,p_i)=\frac{1}{Z_q}\frac{-i\gamma^0p_0-i\vec\gamma\cdot\vec p+m_r} {p_0^2+\vec p^2+m_r^2} +(\mbox{non-pole terms}) +O\Bigl((p_ia)^2\Bigr)
\label{eq:cont_prop}
\end{equation}
at the heavy quark pole in the limit $|p_i a| \ll 1$. The location of the pole in the tree-level lattice propagator is that value of $p_0$ at which the inverse propagator vanishes:
\begin{eqnarray}
aG^{-1}_q(p_0,p_i)=i\gamma^0\sin(p_0a)+i\zeta\sum_i\gamma^i\sin(p_ia)+m_0a
\nonumber \\
+r_t\Bigl(1-\cos(p_0a)\Bigr)+r_s\sum_i\Bigl(1-\cos(p_ia)\Bigr)=0.
\label{eq:pole}
\end{eqnarray}
By first examining the simplest case of zero spatial momenta, $p_i=0$, we can obtain equations for $m_ra$ and $Z_q$:
\begin{equation}
m_ra=\ln\Biggl(\frac{m_0a+r_t+\sqrt{(m_0a)^2+2r_tm_0a+1}}{1+r_t}\Biggr)
\label{eq:pole_zp}
\end{equation}
\begin{equation}
Z_q=\cosh(m_ra)+r_t\sinh(m_ra).
\label{eq:Z_q}
\end{equation}
To obtain constraints on $\zeta$ and $r_s$ we need to examine the case of finite spatial momentum. From the dispersion relation $p_0=i\sqrt{m_r^2+\vec p^2}$ which we would like to reproduce, we can derive a relationship between $r_s$ and $\zeta$. Starting from Eq.~\ref{eq:pole} and defining a new variable $\tilde{p}_0 \equiv -ip_0$, we obtain:
\begin{equation}
(r_t^2-1)\cosh^2(\tilde{p}_0a)-2r_tB\cosh(\tilde{p}_0a)+1+\zeta^2\sin^2(p_ia)+B^2=0
\end{equation}
where $B=r_t+m_0a+r_s\sum_i(1-\cos(p_ia))$.
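As a quick consistency check on these formulas, the following minimal numerical sketch (our addition, not part of the original calculation) verifies that at $p_i=0$ the smaller root of this quadratic, the physical root denoted $R_-$ below, reproduces $\cosh(m_ra)$ with $m_ra$ given by Eq.~\ref{eq:pole_zp}, and evaluates $Z_q$ of Eq.~\ref{eq:Z_q}.
\begin{verbatim}
import numpy as np

def check(m0a, rt):
    # tree-level rest mass, Eq. (pole_zp)
    mra = np.log((m0a + rt + np.sqrt(m0a**2 + 2.0*rt*m0a + 1.0))
                 / (1.0 + rt))
    B = rt + m0a        # the quantity B of the quadratic, at p_i = 0
    # physical root of (rt^2-1) x^2 - 2 rt B x + (1 + B^2) = 0, rt != 1
    Rminus = (rt*B - np.sqrt(rt**2 * B**2
                             - (rt**2 - 1.0)*(1.0 + B**2))) / (rt**2 - 1.0)
    Zq = np.cosh(mra) + rt*np.sinh(mra)    # Eq. (Z_q)
    return Rminus - np.cosh(mra), Zq       # first entry vanishes to roundoff

for m0a in (0.1, 1.0, 5.0):
    print(check(m0a, rt=1.5))
\end{verbatim}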
Neglecting quantities of order $O((p_ia)^4)$ and higher, the two roots of the quadratic equation for $\cosh(\tilde{p}_0a)$ can be written:
\begin{equation}
R_{\pm}=\frac{r_tB\pm\sqrt{r_t^2B^2-(r_t^2-1)(1+B^2+\zeta^2\vec p^2)}}{r_t^2-1}.
\end{equation}
Here we choose $R_{-}$ as the physical root since $R_{+}$ goes to infinity when $r_t \rightarrow 1$. After we substitute the expression for $B$ into $R_{-}$ and expand to first order in the quantity $(\vec p a)^2$, we find:
\begin{eqnarray}
\cosh{(\tilde{p}_0a)}&=&\frac{r_t(m_0a+r_t)}{r_t^2-1}+\frac{r_tr_s(p_ia)^2/2}{r_t^2-1} \\
&-&\frac{\sqrt{[(m_0a+r_t)^2-(r_t^2-1)]+[r_s(m_0a+r_t)-\zeta^2(r_t^2-1)](p_ia)^2}}{r_t^2-1}
\nonumber \\
&=&\frac{r_t(m_0a+r_t)-\sqrt{(m_0a+r_t)^2-(r_t^2-1)}}{r_t^2-1}
\nonumber \\
&+&\Bigl\{\frac{r_tr_s}{2(r_t^2-1)}-\frac{r_s(m_0a+r_t) -\zeta^2(r_t^2-1)}{2(r_t^2-1)\sqrt{(m_0a+r_t)^2-(r_t^2-1)}}\Bigr\}(p_ia)^2 \\
&=&\cosh(m_ra)+\frac{r_s\sinh(m_ra)+\zeta^2}{2(r_t\sinh(m_ra)+\cosh(m_ra))}(p_ia)^2
\label{eq:pole_nzp}
\end{eqnarray}
where the last line is obtained using Eq.~\ref{eq:pole_zp}. Equation~\ref{eq:pole_nzp} can be rewritten in the suggestive form:
\begin{equation}
\tilde{p}_0a = \sinh^{-1}\Biggl\{\sqrt{ \sinh^2(m_r a) + (\vec p a)^2 \cosh(m_r a)\frac{r_s\sinh(m_ra)+\zeta^2}{r_t\sinh(m_ra)+\cosh(m_ra)}}\Biggr\}.
\label{eq:pole_nzp2}
\end{equation}
If $m_r a \ll 1$ then $\sinh(z)$ and $\sinh^{-1}(z)$ can both be replaced by $z$ and Eq.~\ref{eq:pole_nzp2} gives the usual relativistic dispersion relation if we set $\zeta =1$. If $m_r a$ is sufficiently large that this approximation to $\sinh(z)$ and $\sinh^{-1}(z)$ is a poor one, then we can expand the square root in Eq.~\ref{eq:pole_nzp2} to first order in $(\vec p a)^2$ and obtain the result:
\begin{equation}
\tilde{p}_0a = m_r a + \frac{(\vec p a)^2}{2\sinh(m_r a)}\,\frac{r_s\sinh(m_ra)+\zeta^2} {r_t\sinh(m_ra)+\cosh(m_ra)}.
\label{eq:pole_nzp3}
\end{equation}
Thus, we will obtain the correct dispersion relation in both cases if we require:
\begin{equation}
r_s\sinh(m_ra)+\zeta^2 =\frac{\sinh(m_ra)}{m_ra}\Bigl(r_t\sinh(m_ra)+\cosh(m_ra)\Bigr).
\label{eq:r_s-zeta_relation}
\end{equation}
As discussed above, we can establish the equivalence of Eq.~\ref{eq:pole_nzp2} to the usual dispersion relation
\begin{equation}
\tilde{p}_0a = \sinh^{-1} \Biggl \{ \sqrt{\sinh^2(m_r a) +\frac{\sinh(m_r a)\cosh(m_r a)}{m_r a} (\vec p a)^2 }\Biggr\} =\sqrt{(m_r a)^2 + (\vec p a)^2}
\label{eq:pole_nzp4}
\end{equation}
up to relative errors of order $(\vec p a)^2 \equiv \epsilon^2$ for all values of $m_r a$ by showing that it holds to this accuracy in the two regions $m_r a \le \sqrt{\epsilon}$ (Region I) and $\sqrt{\epsilon} < m_r a$ (Region II). Here the left-hand equality in Eq.~\ref{eq:pole_nzp4} is simply Eq.~\ref{eq:pole_nzp2} with the constraint in Eq.~\ref{eq:r_s-zeta_relation} imposed. Establishing that the right-hand equality holds without errors larger than $O(\epsilon^2)$ requires in Region I that we examine to next-leading order a Taylor series expansion in the variables $(m_r a)^4/((m_r a)^2+(\vec p a)^2)$ and $(m_r a)^2$. In Region II, we need only continue the expansion in $(\vec pa/m_r a)^2$ begun in Eq.~\ref{eq:pole_nzp3} to demonstrate that this second equality holds up to relative errors of order $(\vec p a/m_r a)^4 \sim \epsilon^2$. While straightforward, some care is required to verify that the $m_r a$-dependent coefficient of $(\vec p a/m_r a)^4$ is bounded throughout the region $\epsilon \le m_r a$.
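This error analysis can also be illustrated numerically. The sketch below (our addition; for simplicity it takes $r_s=r_t=1$, for which the quadratic for $\cosh(\tilde p_0a)$ degenerates to a linear equation) fixes $\zeta$ through Eq.~\ref{eq:r_s-zeta_relation} and compares the resulting lattice energy with the continuum form $\sqrt{(m_ra)^2+(\vec pa)^2}$; the relative error falls roughly like $(\vec p a)^2$ as the momentum is halved.
\begin{verbatim}
import numpy as np

def lattice_energy(m0a, pa, rs=1.0, rt=1.0):
    """Energy p0tilde*a from the pole condition for momentum (pa, 0, 0)."""
    mra = np.log((m0a + rt + np.sqrt(m0a**2 + 2.0*rt*m0a + 1.0))
                 / (1.0 + rt))
    # zeta^2 fixed by the dispersion constraint, Eq. (r_s-zeta_relation)
    zeta2 = (np.sinh(mra)/mra)*(rt*np.sinh(mra) + np.cosh(mra)) \
            - rs*np.sinh(mra)
    B = rt + m0a + rs*(1.0 - np.cos(pa))
    # for rt = 1 the quadratic becomes linear in cosh(p0tilde*a):
    coshE = (1.0 + B**2 + zeta2*np.sin(pa)**2) / (2.0*B)
    return np.arccosh(coshE), mra

m0a = 2.0
for pa in (0.2, 0.1, 0.05):
    E, mra = lattice_energy(m0a, pa)
    print(pa, E/np.sqrt(mra**2 + pa**2) - 1.0)  # relative error ~ (pa)^2
\end{verbatim}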
Thus, we can reproduce the correct momentum dependence of the heavy quark energy if and only if the parameters $r_s$ and $\zeta$ satisfy the relationship in Eq.~\ref{eq:r_s-zeta_relation}.
\subsection{Propagator spinor structure}
Without a second constraint which then determines both $r_s$ and $\zeta$, the general action under consideration will not reproduce the correct spinor structure for the propagator. However, as discussed earlier, we can also achieve the conventional, on-shell spinor structure for the propagator by applying a simple matrix transformation to the on-shell spinor fields, even if we have chosen an arbitrary $r_s$ and an appropriate value $\zeta(r_s)$ so that Eq.~\ref{eq:r_s-zeta_relation} is obeyed. If we adopt this approach then we have the freedom to choose $r_s$ in the action for convenience, {\it e.g.} $r_s=\zeta$, thereby reducing the number of parameters in the action by one. We begin by examining the matrix form of the propagator as presently determined:
\begin{equation}
aG_q(p_0,p_i)=\frac{-i\gamma^0\sin(p_0a) -i\zeta\vec\gamma\cdot\vec p\, a + F}{\sin^2(p_0 a)+\zeta^2(\vec p a)^2+F^2}
\label{eq:matrix_res}
\end{equation}
where $F$ is given by
\begin{equation}
F = m_0 a+r_t(1-\cos(p_0a))+\frac{r_s}{2}(\vec p a)^2.
\end{equation}
We will now try to find a pair of $4 \times 4$ spinor matrices $U_L(\vec p)$ and $U_R(\vec p)$ able to transform the matrix in the numerator of Eq.~\ref{eq:matrix_res} into the correct one:
\begin{eqnarray}
U_L(\vec p)\frac{-i\gamma^0\sin(p_0a)-i\zeta\gamma^i\sin(p_ia)+F} {\sin^2(p_0a)+\zeta^2\sum_i\sin^2(p_ia)+F^2}\ U_R(\vec p)\nonumber\\
\approx \frac{1}{Z_q}\frac{-i\gamma^0p_0-i\sum_i\gamma^ip_i+m_r} {p_0^2+\sum_ip_i^2+m_r^2},
\label{eq:matrix_trans}
\end{eqnarray}
in the sense that both expressions should have the same residue at the heavy quark pole. We begin by examining the numerator of the left-hand side of Eq.~\ref{eq:matrix_trans} and substitute $p_0 \equiv i \tilde p_0$:
\begin{equation}
\gamma^0 \sinh(\tilde p_0 a) -i \zeta\vec\gamma\cdot\vec p a + m_0 a +r_t\Bigl(1-\cosh(\tilde p_0 a)\Bigr) + \frac{r_s}{2}(\vec p a)^2.
\label{eq:spinor_matrix1}
\end{equation}
Two steps are needed to put this equation in a convenient form. First we replace the coefficient $\sinh(\tilde p_0 a)$ multiplying the $\gamma^0$ in Eq.~\ref{eq:spinor_matrix1} by an expression closer to the continuum value:
\begin{equation}
\sinh(\tilde p_0 a) \approx \tilde p_0 a\frac{\sinh(m_r a)}{m_r a}.
\label{eq:approx_1}
\end{equation}
This approximation can be justified by using Eqs.~\ref{eq:pole_nzp2} and \ref{eq:pole_nzp4}, obeyed by $\tilde p_0$, to write:
\begin{equation}
\frac{\sinh(\tilde p_0 a)}{\tilde p_0} = \frac{\sinh(m_r a)}{m_r a} \left[\frac{1+\frac{(\vec p a)^2}{m_r a}\frac{\cosh(m_r a)}{\sinh(m_r a)}} {1 + \frac{(\vec p a)^2}{(m_r a)^2} } \right]^{1/2}.
\end{equation}
We can then evaluate the difference between the contents of the square bracket in this equation and one:
\begin{equation}
\Bigl[\ldots\Bigr]-1 = \frac{(\vec p a)^2}{m_r a} \left( \frac{\frac{\cosh(m_r a)}{\sinh(m_r a)} - \frac{1}{m_r a}} {1 + \frac{(\vec p a)^2}{(m_r a)^2}}\right) \le (\vec p a)^2.
\end{equation}
Here the final inequality, showing that this difference can be neglected, follows from the relation $x\coth(x) \le (1+x+x^2)/(1+x)$.
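This elementary inequality is perhaps not obvious; the following short numerical scan (our addition) makes it easy to confirm over a wide range of $x$.
\begin{verbatim}
import numpy as np
x = np.linspace(1e-6, 50.0, 200001)
diff = x/np.tanh(x) - (1.0 + x + x**2)/(1.0 + x)  # x*coth(x) - bound
print(np.max(diff))  # negative: x*coth(x) <= (1+x+x^2)/(1+x) on the grid
\end{verbatim}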
The second relation that we need approximates:
\begin{eqnarray}
\cosh(\tilde p_0 a) &=& \cosh(m_r a)\left[1 + \frac{(\vec p a)^2}{m_r a} \frac{\sinh(m_r a)}{\cosh(m_r a)}\right]^{1/2}
\nonumber \\
&\approx& \cosh(m_r a) +\frac{(\vec p a)^2}{2 m_r a}\sinh(m_r a).
\label{eq:approx_2}
\end{eqnarray}
Here the equality follows directly from Eq.~\ref{eq:pole_nzp2}, while the approximation in the second line requires the neglect of a term of order $(\vec p a)^4$ whose coefficient can be shown to be bounded through use of the relation $\tanh(x) \le x$. Next we substitute Eqs.~\ref{eq:approx_1} and \ref{eq:approx_2} into Eq.~\ref{eq:spinor_matrix1}, writing the numerator of the propagator as:
\begin{equation}
\gamma^0 \tilde p_0 a\frac{\sinh(m_r a)}{m_r a} -i \zeta\vec\gamma\cdot\vec p a + \sinh(m_r a) + \frac{1}{2}(\vec p a)^2\Bigl(r_s - r_t \frac{\sinh(m_r a)}{m_r a}\Bigr).
\label{eq:spinor_matrix2}
\end{equation}
We must now find matrices $U_L$ and $U_R$ which will transform this expression into the desired continuum form. Thus, we must make the coefficient of $\vec \gamma \cdot \vec p$ agree with that of $\gamma^0 p^0$ and remove the $\vec p\;^2$ term. This can be accomplished by matrices of the form
\begin{eqnarray}
U_L &=& U_R = (1 +i \delta \vec \gamma\cdot\vec p a) \quad \mbox{where}
\label{eq:spin_trans1} \\
\delta &=& \frac{\zeta}{2\sinh(m_r a)} -\frac{1}{2m_r a}.
\label{eq:spin_trans2}
\end{eqnarray}
It is easy to see that when these transformations act on the $\sinh(m_r a)$ term in Eq.~\ref{eq:spinor_matrix2}, a term is generated which precisely replaces $-i \zeta\vec\gamma\cdot\vec p a$ with the desired expression $-i\vec\gamma\cdot\vec p\, \sinh(m_r a)/m_r$. However, the elimination of the $(\vec p a)^2$ term appearing in Eq.~\ref{eq:spinor_matrix2} is less direct. For this term the effect of our transformation generates a $(\vec p a)^2$ contribution which is only approximately zero:
\begin{equation}
(\vec p a)^2 \Bigl\{2\zeta\Bigl(\frac{\zeta}{2\sinh(m_r a)} -\frac{1}{2m_r a}\Bigr) + \frac{1}{2}\Bigl(r_s - r_t\frac{\sinh(m_r a)}{m_r a}\Bigr)\Bigr\} \approx 0.
\label{eq:p2_term}
\end{equation}
To neglect the expression in Eq.~\ref{eq:p2_term} we must make two observations. First, we recognize that when expanded in a power series in $m_r a$, the expression in curly brackets in Eq.~\ref{eq:p2_term} begins at order $(m_r a)^1$ when $\zeta$ is determined by Eq.~\ref{eq:r_s-zeta_relation}. This implies that for small $m_r a$, this unwanted $(\vec p a)^2$ term has the size $(\vec p a)^2m_r a$ and is therefore $O((\vec p a)^2)$ relative to the mass term, $m_r a$. Second, as $m_r a$ increases this expression grows no faster than the other $\sinh(m_r a)$ factors in Eq.~\ref{eq:spinor_matrix2}. Thus, the unphysical $(\vec p a )^2$ term is actually of order $(\vec p a )^2$ relative to the continuum terms in the Dirac propagator for all values of $m_r a$. The fact that a single choice of the transformation parameter $\delta$ is sufficient both to replace the coefficient of $\vec \gamma \cdot \vec p$ by its proper value and to remove the $(\vec p a)^2$ term is a result of the relationship in Eq.~\ref{eq:r_s-zeta_relation} between $\zeta$ and $r_s$, derived previously to ensure the correct dispersion relation.
\subsection{Quark-gluon vertex}
We will now determine the parameters $c_B$ and $c_E$ by computing the quark-gluon vertex after the transformation of Eq.~\ref{eq:spin_trans1} has been applied to the initial and final spinors.
In particular, $c_E$ and $c_B$ should be chosen so that the tree-level lattice vertex agrees with the corresponding continuum expression, with errors no larger than $(\vec p' a)^2$, $(\vec p a)^2$ and $\vec p'\cdot \vec p\, a^2$. There should be no contribution of order $(m_r a)^n$, $|\vec p\,' a|(m_r a)^n$ or $|\vec p a|(m_r a)^n$ for any value of $n$. Following the conventions listed in Appendix~\ref{ap:conventions}, we can determine the quark-gluon vertex $\Lambda_\mu(p',p)$ and then impose the on-shell conditions:
\begin{eqnarray}
\overline{u}(\vec p\,')\Lambda_\mu(p',p)u(\vec p) = Z_q \overline{u}(\vec p\,')\gamma_\mu u(\vec p).
\end{eqnarray}
Here all quantities are evaluated following our Euclidean space conventions with the exception of the time components of the on-shell fermion momenta $p'_0 = i\tilde p\,'_0 = i\sqrt{(\vec p\,')^2+m_r^2}$ and $p_0 = i\tilde p_0 = i\sqrt{(\vec p)^2+m_r^2}$. The tree-level lattice vertex matrices $\Lambda_\mu(p',p)$ can be derived from the lattice action of Eq.~\ref{eq:action_lat} and written without approximation as:
\begin{eqnarray}
\Lambda^k(p',p) &=& \gamma^k\zeta \cos\left[(p_k'+p_k)a/2\right] -i r_s\sin\left[(p_k'+p_k)a/2\right]
\nonumber \\
&&+\frac{c_B}{2}\sum_j\sigma_{kj}\cos\left[(p'_k-p_k)a/2\right] \sin\left[(p'_j-p_j)a\right]
\nonumber \\
&&+i\frac{c_E}{2}\sigma_{k0}\cos\left[(p'_k - p_k)a\right] \sinh\left[(\tilde p_0' - \tilde p_0)a\right]
\label{eq:vert_space_relation} \\
\Lambda^0(p',p) &=& \gamma^0 \cosh\left[(\tilde p_0'+\tilde p_0)a/2\right] + r_t\sinh\left[(\tilde p_0'+\tilde p_0)a/2\right]
\nonumber \\
&&+\frac{c_E}{2}\sum_j\sigma_{0j}\cosh\left[(p'_0 - p_0)a/2\right] \sin\left[(p_j' - p_j)a\right].
\label{eq:vert_time_relation}
\end{eqnarray}
\subsubsection{Spatial component of the quark-gluon vertex}
We first examine the spatial quark-gluon vertex $\Lambda^k$ transformed by the spinor matrices $U_L(\vec p\,')$ and $U_R(\vec p)$:
\begin{eqnarray}
[\Lambda_k(p',p)]_T &=& U_L(\vec p\,')^\dagger\Lambda_k(p',p)U_R(\vec p)^\dagger \\
&=& \zeta\gamma_k -i(\frac{r_s}{2}+\delta\zeta)(p_k+p'_k)a +(\frac{c_B}{2}+\delta\zeta)\sum_j\sigma_{kj}(p'_j-p_j)a
\nonumber \\
&&+i\frac{c_E}{2}\sigma_{k0}\sinh[(\tilde p'_0 -\tilde p_0) a]
\label{eq:vert_space_af_trans}
\end{eqnarray}
where the subscript $T$ indicates that we have applied the spinor transformations $U_L(\vec p\,')$ and $U_R(\vec p)$. In addition, some terms of relative order $(\vec p\,' a)^2$ and $(\vec p a)^2$ have been neglected. The expression in Eq.~\ref{eq:vert_space_af_trans} can be simplified if we recognize that this matrix is to be evaluated between the spinors $\overline u(\vec p\,')$ and $u(\vec p)$, so that we can use the relevant Dirac equations:
\begin{eqnarray}
\left(\gamma^0\tilde p_0 -i\vec\gamma\cdot\vec p - m_r \right)u(\vec p) &=&0 \\
\overline u'(\vec p\,') \left(\gamma^0\tilde p'_0 -i\vec\gamma\cdot\vec p\,' - m_r \right) &=&0.
\end{eqnarray}
These two equations can be multiplied by $\gamma^k$ on the left and right respectively to derive an equation for $\sigma_{kj}(p_j'-p_j)$:
\begin{equation}
\sigma_{kj}(p_j'-p_j) = 2m_r \gamma^k + i(p_k'+p_k) -i\sigma_{k0}(\tilde p_0'-\tilde p_0).
\label{eq:gordon_id_cont}
\end{equation}
Substituting the relation above for the $\sigma_{kj}$ term in Eq.~\ref{eq:vert_space_af_trans}, we find
\begin{eqnarray}
[\Lambda_k(p',p)]_T &=& \gamma^k\left(\zeta + m_r a(c_B + 2\delta\zeta)\right)\nonumber\\
&&+i\frac{c_B-r_s}{2}(p'_k+p_k)a\nonumber\\
&&+i\sigma_{k0}\left(\frac{c_E}{2}\sinh[(\tilde p'_0-\tilde p_0)a] -(\delta\zeta+\frac{c_B}{2})(\tilde p'_0-\tilde p_0)a\right).
\label{eq:vert_space_last}
\end{eqnarray}
The matrix $\Lambda^k(p',p)$ will reduce to the desired continuum quantity $Z_q\gamma^k$ provided the following conditions are obeyed:
\begin{eqnarray}
&&c_B=r_s\label{eq:c_B_cond1}\\
&&\zeta + m_r a(c_B+2\delta\zeta)=Z_q
\label{eq:c_B_cond2}\\
&&\overline u'(\vec p\,')\sigma_{k0}u(\vec p) \left(\frac{c_E}{2}\sinh[(\tilde p'_0-\tilde p_0)a] -(\delta\zeta+\frac{c_B}{2})(\tilde p'_0-\tilde p_0)a\right) = O(\vec{p} a)^2 Z_q
\label{eq:c_B_E_cond}
\end{eqnarray}
We will treat the first of these conditions, Eq.~\ref{eq:c_B_cond1}, as determining the quantity $c_B$. The second equation, Eq.~\ref{eq:c_B_cond2}, is then automatically obeyed, as can be seen by using $Z_q$ as determined by Eq.~\ref{eq:Z_q} and substituting the expressions given for $\delta$ and $\zeta$ in Eqs.~\ref{eq:spin_trans2} and \ref{eq:r_s-zeta_relation} respectively. Establishing the final condition, Eq.~\ref{eq:c_B_E_cond}, requires a little more effort since it is not exact and must hold for the full range of $m_r a$ and for a variety of values of $r_s$ and $r_t$. First we demonstrate that the argument $(\tilde p'_0 - \tilde p_0)a$ of the $\sinh$ function is small, so that an expansion of $\sinh(x)$ is justified. We consider the square:
\begin{eqnarray}
(\tilde p'_0 - \tilde p_0)^2a^2 &=& \left(\frac{(\tilde p'_0)^2 - (\tilde p_0)^2} {\tilde p'_0 + \tilde p_0}a\right)^2 \\
&=& \frac{(\vec p\,')^2 - (\vec p)^2}{(\tilde p'_0 + \tilde p_0)^2} ((\vec p\,'a)^2 - (\vec p a)^2).
\label{eq:c_B_E_cond_prf_1}
\end{eqnarray}
Here the first factor on the right-hand side of Eq.~\ref{eq:c_B_E_cond_prf_1} is bounded for all values of $\vec p\,'$ and $\vec p$, while the second factor is $O(\vec p a)^2$. This justifies keeping only the first term in an expansion of the $\sinh(x)$ function and replacing the third condition by
\begin{equation}
\overline u'(\vec p\,')\sigma_{k0} u(\vec p) \left(\frac{c_E-c_B}{2} -\delta\zeta\right)(\tilde p'_0-\tilde p_0)a = O(\vec{p} a)^2 Z_q.
\label{eq:c_B_E_cond_prf_2}
\end{equation}
Next we make the choice $c_E = c_B$, multiply and divide the left-hand side of Eq.~\ref{eq:c_B_E_cond_prf_2} by $m_r a$ and divide by $Z_q$, writing the resulting condition as:
\begin{equation}
\frac{m_r \overline u'(\vec p\,')\sigma_{k0} u(\vec p)}{\tilde p'_0+\tilde p_0} \left((\vec p\,'a)^2 - (\vec p a)^2\right) \frac{\delta\zeta}{m_raZ_q} = O(\vec{p} a)^2.
\label{eq:c_B_E_cond_prf_3}
\end{equation}
The left-most ratio in Eq.~\ref{eq:c_B_E_cond_prf_3} is a kinematic function which is bounded for all values of $m_r$. The central factor provides the desired $O(\vec pa)^2$ suppression. We need to show that the final factor, $\delta\zeta/(m_r a Z_q)$, is bounded for all $m_r a$. To do this we must require that for small $m_r a$, $r_t -r_s \propto m_r a$.
Without this requirement, $\delta\zeta$ approaches a constant as $m_r a \rightarrow 0$ and this factor diverges for small $m_r$ as $1/m_r a$. Were we to choose non-covariant values for $r_t$ and $r_s$ in the limit of small $m_r a$, then the non-covariant choice $c_E \ne c_B$ would also be required. For simplicity, we make the choice $r_t=r_s$ for all values of $m_r$. Under these circumstances, it is easy to see by direct numerical evaluation that the factor $\delta\zeta/(m_r a Z_q)$ is bounded by $1/12$, its value at $m_r a=0$, for all values of $r_s>0$ and $m_r a$. Thus, condition~\ref{eq:c_B_E_cond_prf_3} is also satisfied for the choice $c_E = c_B$ and the spatial components of the quark-gluon coupling agree with the expected continuum values to the claimed accuracy.
\subsubsection{Temporal component of the quark-gluon vertex}
Finally we examine the time component of the quark-gluon vertex given in Eq.~\ref{eq:vert_time_relation}. As a first step we will simplify this expression by recognizing that:
\begin{eqnarray}
\cosh[(\tilde p'_0 + \tilde p_0)a/2] &=& \cosh(m_r a) +O(\vec p a)^2 \\
\sinh[(\tilde p'_0 + \tilde p_0)a/2] &=& (\tilde p'_0 + \tilde p_0) \frac{\sinh(m_r a)}{2 m_r a} +O(\vec p a)^2
\end{eqnarray}
and neglecting the $O(\vec p a)^2$ terms. These two equations are easy to derive from Eq.~\ref{eq:approx_1} using $\cosh(x) = \sqrt{\sinh^2(x)+1}$, the formula for the hyperbolic sine of the sum of two angles and the inequality $\sinh(x) \le x \cosh(x)$. With these simplifications $\Lambda^0$ becomes:
\begin{eqnarray}
\Lambda^0(p',p) &=& \gamma^0 \cosh(m_r a) + r_t a (\tilde p'_0 + \tilde p_0)\frac{\sinh(m_r a)}{2 m_ra}
\nonumber \\
&& + \frac{r_s}{2}\sum_j\sigma_{0j}(p'_j-p_j)a
\end{eqnarray}
where we have replaced $c_E$ by the value determined earlier, $c_E=c_B=r_s$. Next, the spinor transformations $U_L(p')$ and $U_R(p)$ are made, yielding
\begin{eqnarray}
\Lambda^0(p',p)_T &=& U_L(p')^\dagger \Lambda^0(p',p) U_R(p)^\dagger \\
&=& \gamma^0 \cosh(m_r a) + r_t a (\tilde p'_0 + \tilde p_0)\frac{\sinh(m_r a)}{2 m_r a}
\nonumber \\
&& + \left(\frac{r_s}{2}+\delta\cosh(m_r a) \right)\sum_j\sigma_{0j}(p'_j-p_j)a.
\label{eq:c_B_E_cond_prf_4}
\end{eqnarray}
Here we have neglected the term:
\begin{equation}
\overline u'(\vec p\,')\gamma^j (p'_j+p_j)a\, u(p) \; r_t a(\tilde p'_0 + \tilde p_0) \frac{\sinh(m_r a)}{2 m_r a}
\label{eq:c_B_E_cond_prf_5}
\end{equation}
because, as in the case of Eq.~\ref{eq:c_B_E_cond_prf_3}, the spinor structure mixes upper and lower spinor components, implying that this expression is of order $(\vec p a)^2$ and is therefore negligible. As a final step we use the time-component equivalent of Eq.~\ref{eq:gordon_id_cont} multiplied by $r_t \sinh(m_r a)/(2 m_r a)$,
\begin{equation}
\frac{r_t \sinh(m_r a)}{2 m_r a} a (\tilde p'_0 + \tilde p_0) = \frac{r_t \sinh(m_r a)}{2 m_r a}\Bigl\{2m_r a \gamma^0 - \sum_j\sigma_{0j}(p'_j-p_j)a\Bigr\},
\label{eq:c_B_E_cond_prf_6}
\end{equation}
to eliminate the $\tilde p'_0 + \tilde p_0$ term from Eq.~\ref{eq:c_B_E_cond_prf_4}. The resulting expression is
\begin{eqnarray}
\Lambda^0(p',p)_T &=& \gamma^0\Bigl\{\cosh(m_r a) + r_t \sinh(m_r a)\Bigr\}
\nonumber \\
&& + \Bigl\{\frac{r_s}{2}+\delta\cosh(m_r a) -\frac{r_t \sinh(m_r a)}{2 m_r a} \Bigr\}\sum_j\sigma_{0j}(p'_j-p_j)a.
\label{eq:c_B_E_cond_prf_7}
\end{eqnarray}
The first term in this equation is precisely the desired matrix $\gamma^0 Z_q$ while the second can be shown to be of order $(\vec p a)^2 Z_q$ using the same style of argument that permitted us to neglect the similar term in Eq.~\ref{eq:c_B_E_cond_prf_3} and the expression~\ref{eq:c_B_E_cond_prf_5}. In conclusion, we have verified at tree level that only three mass-dependent parameters, $m_0$, $\zeta$ and $c_P=c_B=c_E$, are needed to realize a heavy quark action that is accurate through order $|\vec{p}|a$ and to arbitrary order in $m_r a$. We have the freedom to choose $r_s$ and $r_t$ as is convenient but must require that as $m_r a$ approaches zero, $r_s \rightarrow r_t$. The on-shell quark propagator and quark-gluon vertex take their continuum form after a simple $4 \times 4$ transformation is performed on the two external spinors.
\section{Conclusion}
\label{sec:conclusion}
It is presently impractical to study charm or bottom physics on a sufficiently fine lattice to control discretization errors of order $ma$. However, as established in Refs.~\cite{El-Khadra:1997mp} and \cite{Aoki:2001ra}, such errors can be avoided even when $ma \ge 1$ by using an improved heavy quark action. Such an action will accurately describe heavy quark states which are at rest or have small spatial momenta and, as the quark mass is made lighter or the lattice spacing finer, will smoothly approach the usual $O(a)$-improved fermion action of Sheikholeslami and Wohlert~\cite{Sheikholeslami:1985ij}. Here we are referring to this improved action as the ``relativistic heavy quark'' action because of this smooth connection with relativistic fermions as $ma \rightarrow 0$ and to distinguish it from the non-relativistic and static approximations which do not have this property. By carrying out a systematic expansion in powers of $a$ but working to all orders in the product $ma$, we have established that only three parameters, $m_0$, $\zeta$ and $c_P$, need to be tuned to remove all discretization errors of order $\Lambda_{\rm QCD}a$. It is interesting to point out that the possible overestimate of the number of relevant parameters in the Fermilab and Tsukuba results was actually suggested to us by the numerical work described in the companion paper~\cite{Lin:2006}. In that paper we attempt to determine the relativistic heavy quark parameters by a process of step scaling, beginning with a very fine lattice where a direct use of the domain wall fermion formulation gives accurate results. We initially attempted to determine the four parameters, $m_0$, $\zeta$, $c_B$ and $c_E$, that could be used on a $16^3 \times 32$, $1/a=3.6$~GeV lattice to reproduce the heavy-heavy and heavy-light spectra given by a domain wall fermion calculation on a $24^3 \times 48$, $1/a=5.4$~GeV lattice. To our dismay, this was not a solvable problem, at least with masses measured at the one percent level. We found a one-dimensional subspace in this four-dimensional parameter space along which none of the seven finite-volume masses that we computed changed. This surprising numerical result led us to study more closely the underpinnings of the relativistic heavy quark formalism and to the 3-parameter result presented here. As is explained in detail in the companion paper, the problem of determining three parameters by step-scaling is numerically very stable and determining these heavy quark parameters to a few percent is not difficult.
Although this first exploratory numerical work is done within the quenched approximation, as discussed in Ref.~\cite{Lin:2006}, we believe that similar results will be possible in full QCD. Thus, this approach to heavy quark physics, especially in the charm region where only 2 or 3 step-scaling steps are needed, may provide a first-principles approach with no reliance on perturbation theory. The three parameters needed in the heavy quark action as well as those required for improved operators can be determined non-perturbatively by this step-scaling approach, with the three in the action requiring the most effort. Further, as available resources increase, one can work at increasingly fine lattice spacing, minimizing the higher-order errors that have not been explicitly removed. No change in formalism is needed. However, extrapolation to the continuum limit is in general not possible with this approach. For example, the $O((\Lambda_{\rm QCD}a)^2)$ terms neglected in the treatment above are expected to enter with coefficients which are themselves functions of $ma$. Thus, a simple $a^2$ behavior for small $a$ will be seen only in the limit $ma \approx 0$, a region in which the improvements we have discussed are not needed. We would like to thank Sinya Aoki, Peter Boyle, Changhoan Kim, Yoshinobu Kuramashi, Chris Sachrajda and our colleagues in the RBC collaboration for helpful discussions and suggestions. This work was supported in part by DOE Grant DE-FG02-92ER40699.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}
As a byproduct of research on the high temperature superconducting copper oxides, many interesting quasi-one-dimensional copper oxides have been discovered or rediscovered recently. Sr$_{14}$Cu$_{24}$O$_{41}$ is one of them and consists of two kinds of unique building blocks, as shown in Fig. 1a. \cite{ta10,ta13} One is simple chains of copper ions which are coupled by nearly 90$^\circ$ Cu-O-Cu bonds. The other is two-leg ladders of copper ions, which are coupled by nearly 180$^\circ$ Cu-O-Cu bonds along the $a$ and $c$ axes. This compound has been extensively studied since both of the building blocks show interesting ground states. The ground state of a two-leg spin ladder system is a singlet state, as observed in SrCu$_2$O$_3$. \cite{ta16} It was also shown that the ladder in the related compound Sr$_{14}$Cu$_{24}$O$_{41}$ has a singlet ground state with a fairly large gap. \cite{ecc,kuma,kita} The property of the energy gap in spin ladders is interesting from the viewpoint of quantum phenomena in a low-dimensional (between 1 and 2) Heisenberg antiferromagnet. It was theoretically predicted that the spin-$\frac{1}{2}$ Heisenberg ladder with an even number of legs has an excitation gap and that the excitation is gapless for a spin ladder with an odd number of legs. \cite{ta1} The two-leg ladder system has also attracted many researchers since superconductivity is expected in the carrier-doped spin ladder system. \cite{ta2,ta3} Recently, Uehara $et$ $al.$ \cite{aki} found that Sr$_{0.4}$Ca$_{13.6}$Cu$_{24}$O$_{41}$ shows superconductivity below 12 K under a high pressure of 3 GPa. The superconductivity in the ladder system is considered to be crucial for understanding the mechanism of high temperature superconductivity. As Matsuda $et$ $al.$ \cite{matsu0,matsu1} showed, the simple chain in Sr$_{14}$Cu$_{24}$O$_{41}$ also has an interesting singlet ground state originating from a dimerization. Surprisingly, the dimers are formed between spins which are separated by 2 and 4 times the distance between the nearest-neighbor (n.n.) copper ions in the chain. This is probably related to localized holes in the chain, which are expected to make the interaction between copper spins longer-ranged. The trivalent yttrium substitution for divalent strontium is expected to decrease the number of hole carriers. The dimerized state in the chain depends critically on the number of holes ($N_h$). The magnetic inelastic peaks become broader in energy with yttrium substitution. \cite{matsu1} Furthermore, La$_6$Ca$_8$Cu$_{24}$O$_{41}$, in which $N_h$=0, shows a long-range magnetic order with ferromagnetic correlations within the chain. \cite{carter,matsu2} We have performed neutron scattering experiments to study the effects of yttrium substitution on the magnetic and structural properties of the chains in Sr$_{14}$Cu$_{24}$O$_{41}$. We previously reported the results of neutron scattering experiments using polycrystalline samples of Sr$_{14-x}$Y$_x$Cu$_{24}$O$_{41}$ ($x$=0, 1, and 3). \cite{matsu1} In order to make a detailed study of the magnetic excitations over a wide range of energy ($\omega$) and momentum ($Q$), as well as of the crystallographic structure, we performed new experiments on high quality single crystals. Furthermore, we concentrated on samples with low yttrium concentrations in order to study systematically how the dimerized state changes, since the dimerized state is destroyed by only a small amount of yttrium substitution.
\cite{matsu1} It was observed that when yttrium is lightly substituted for strontium, the strong and sharp magnetic inelastic peaks which originate from the dimerized state in the chain become broader. With further yttrium substitution, the inelastic peaks become much broader and the excitation energy is decreased. The interesting point is that the inelastic peaks become broader only in energy but not in momentum space. This means that the dimerized state becomes unstable but the spin correlations are unchanged with yttrium substitution. It was also observed that nuclear Bragg intensities originating from the chain show strong temperature and yttrium concentration dependence. One possible explanation for this would be that the chains shift along the $c$ axis with temperature and yttrium substitution. The format of this paper is as follows: Experimental details are described in Sec. II. The magnetic and structural studies are presented in Secs. III and IV, respectively. In Sec. V the experimental results are discussed.
\section{Experimental Details}
The single crystals of Sr$_{14-x}$Y$_x$Cu$_{24}$O$_{41}$ ($x$=0, 0.10, 0.25, and 1.0) were grown using a traveling solvent floating zone (TSFZ) method in a 3 bar oxygen atmosphere. The effective mosaic of the single crystals is less than 0.4$^\circ$ under the spectrometer conditions described below. It is expected that yttrium is distributed homogeneously in the samples since the lattice constant $b$ decreases systematically and the linewidth of the nuclear Bragg peaks does not change with yttrium substitution. The lattice constant $b$ is 13.36 $\AA$ and 13.08 $\AA$ at 10 K for the $x$=0 and $x$=1 samples, respectively. The neutron scattering experiments were carried out on the HB3 triple-axis spectrometer at the High Flux Isotope Reactor at Oak Ridge National Laboratory and the H8 triple-axis spectrometer at the High Flux Beam Reactor at Brookhaven National Laboratory. The horizontal collimator sequence was open-40'-S-60'-120' for the experiments on HB3 and 40'-40'-S-80'-80' on H8. The neutron measurements of Sr$_{14}$Cu$_{24}$O$_{41}$, Sr$_{13.9}$Y$_{0.1}$Cu$_{24}$O$_{41}$, and Sr$_{13.75}$Y$_{0.25}$Cu$_{24}$O$_{41}$ were performed on HB3 and those of Sr$_{13}$Y$_{1}$Cu$_{24}$O$_{41}$ on H8. The final neutron energy was fixed at $E_f$=14.7 meV. Pyrolytic graphite crystals were used as monochromator and analyzer; contamination from higher-order beams was effectively eliminated using a pyrolytic graphite filter after the sample. The single crystals were mounted in closed-cycle refrigerators which allowed us to perform the measurements over a wide temperature range, 10 - 300 K. The experiments were performed for scattering in the $(0,k,l)$ zone. As described in Ref. \onlinecite{ta10}, there are three different values for the lattice constant $c$. Since we will mainly show the magnetic and structural properties of the chain, $c_{chain}$ will be used to express Miller indices.
\section{Magnetic Excitations}
Figure 1b shows the temperature dependence of the magnetic susceptibility parallel to the $c$ axis in single crystals of Sr$_{14-x}$Y$_x$Cu$_{24}$O$_{41}$ ($x$=0, 0.25, and 1.0). Since the spin gap originating from the ladder has a large value of $\sim$400 K, \cite{ecc,kuma,kita} the susceptibility below room temperature comes predominantly from the chain. The susceptibility in Sr$_{14}$Cu$_{24}$O$_{41}$ shows a broad peak around 80 K and a Curie-Weiss tail can be seen at low temperatures.
The Curie-Weiss term increases and the broad peak shifts to lower temperature with yttrium substitution. We show in Fig. 2 inelastic neutron scattering spectra at $T$=10 K observed at (0,3,0.085), (0,3,0.17), and (0,3,0.25) in single crystals of Sr$_{14-x}$Y$_x$Cu$_{24}$O$_{41}$ ($x$=0 and 0.25). Note that the indices correspond to (0,3,0.12)$_{ladder}$, (0,3,0.24)$_{ladder}$, and (0,3,0.36)$_{ladder}$, respectively. \cite{matsu1} Two sharp, intense inelastic peaks are observed in Sr$_{14}$Cu$_{24}$O$_{41}$. The peaks are sharpest at (0,3,0.17) since the resolution ellipsoid is almost parallel to the dispersion curve, so that a focusing effect is expected. Thus, the linewidth of the inelastic peaks observed here is almost resolution-limited. Note that in the previous paper \cite{matsu1} we reported the constant-$Q$ scans at (0,3,-$L$) since the focusing condition was different for that spectrometer. The inelastic peak positions change slightly with $Q$, following the $\omega$-$Q$ dispersion relation \cite{matsu1} as shown in the inset of Fig. 2. One puzzling feature is the presence of two excitations originating from the chain. The presence of two peaks could be due to the anisotropy in fluctuations parallel and perpendicular to the chain direction or to the presence of other interactions. In Sr$_{13.75}$Y$_{0.25}$Cu$_{24}$O$_{41}$ the linewidth of the inelastic peaks becomes broader, whereas the peak positions are almost unchanged. These results indicate that the dimerized state becomes unstable with yttrium substitution. As described above, the broad peak in the susceptibility is shifted to lower temperature in Sr$_{13.75}$Y$_{0.25}$Cu$_{24}$O$_{41}$. This is probably because an increase of the Curie-Weiss tail shifts the peak to lower temperature even though the intrinsic peak position is almost unchanged. Figure 3a shows a constant-$Q$ scan at $T$=10 K observed at (0,3,-0.14) in Sr$_{13}$Y$_1$Cu$_{24}$O$_{41}$. The focusing effect is also expected at (0,3,-0.14), as at (0,3,0.17). The inelastic peaks become much broader and spread from 6 to 13 meV. This behavior is consistent with that observed in the powder sample. \cite{matsu1} Since we used a single crystal, it is also possible to measure the $\vec{Q}$ dependence of the magnetic excitations. We show in Fig. 3b a constant-$E$ scan at $\Delta E$= 8 meV observed at (0,3,-$L$). A broad peak can be seen around $L_{chain}$=0.25. The peak position is similar to the position at which the correlation function $S(Q)$ shows a maximum in Sr$_{14}$Cu$_{24}$O$_{41}$, \cite{matsu1} suggesting that the spin correlations are unchanged with yttrium substitution. Figure 4 shows inelastic neutron scattering spectra of Sr$_{14-x}$Y$_x$Cu$_{24}$O$_{41}$ ($x$=0 and 0.25) at $T$=10 K observed at (0,0,1.1)$_{ladder}$. In Sr$_{14}$Cu$_{24}$O$_{41}$ a resolution-limited sharp peak in energy was observed around 12 meV. Although the index at which the dispersion curve has a minimum corresponds to the ladder, this peak does not originate from the intra-ladder coupling since the spin gap energy is about 35 meV \cite{ecc} and the dispersion curve is expected to have minima at (0,0,$L_{ladder}$) when $L_{ladder}$=$n$+1/2 or at ($H_{ladder}$,0,0) when $H_{ladder}$=$n$+1/2 ($n$: integer).
In the previous paper \cite{matsu1} we speculated that the peak originates from a dimerized state in the ladder which is formed between the nearest-neighbor copper ions connected by the inter-ladder coupling, although the number of such dimers is considered to be small. Very recently Mikeska and Neugebauer \cite{mike} showed that the spin gap due to the inter-ladder coupling should be much larger than 12 meV. The dimerization originating from the inter-ladder coupling would then be realized if a local distortion occurs due to the localized holes. They also showed that the excitation at (0,0,1)$_{ladder}$ is explained with the theory of non-magnetic impurities in a decoupled ladder. The sharp (0,0,1)$_{ladder}$ peak in Sr$_{14}$Cu$_{24}$O$_{41}$ becomes much broader in Sr$_{13.75}$Y$_{0.25}$Cu$_{24}$O$_{41}$. This strongly suggests that the excitation is closely related to the holes in the ladder, which probably couple with the copper spins to form singlets, as in the chain, and act as non-magnetic impurities. \section{Structural Change in the Chain} Figure 5 shows the unusual temperature dependence of the nuclear Bragg peak intensity originating from the chain for different values of $x$. For $x$=0 (Fig. 5a) the peak intensity remains constant below 30 K and then decreases with increasing temperature above 30 K. A surprising behavior was observed upon yttrium substitution. In Sr$_{13.9}$Y$_{0.1}$Cu$_{24}$O$_{41}$ (Fig. 5b) the intensity decreases up to 60 K and increases above 60 K with increasing temperature. In Sr$_{13.75}$Y$_{0.25}$Cu$_{24}$O$_{41}$ (Fig. 5c) the intensity remains constant below 30 K and then increases above 30 K. Other nuclear Bragg peaks from the chain also show fairly large temperature and yttrium concentration dependence. In Sr$_{13.9}$Y$_{0.1}$Cu$_{24}$O$_{41}$, for example, the intensity of the (0,2,2) Bragg peak shows a temperature dependence similar to that of (0,0,2). The intensity at (0,0,4) decreases with increasing temperature. On the other hand, the intensity at (0,1,1) slightly decreases with increasing temperature. Since ferromagnetic long-range ordering was observed in La$_6$Ca$_8$Cu$_{24}$O$_{41}$, \cite{matsu2} which has no holes, the (0,0,2) Bragg peak in Sr$_{14-x}$Y$_{x}$Cu$_{24}$O$_{41}$ might originate from a ferromagnetic ordering in the chain. X-ray diffraction experiments were performed on Sr$_{14}$Cu$_{24}$O$_{41}$ to clarify this point. \cite{cox} The results revealed that the (0,0,2) Bragg peak intensity shows the same temperature dependence as in Fig. 5a, indicating that the Bragg intensity is nuclear in origin. No major change of the intensity of the nuclear Bragg peaks from the ladder was observed for 10 $\leq T \leq$ 300 K. \section{Discussion} We observed a drastic change of the magnetic and structural properties with yttrium substitution. If the holes preferentially reside on the chain, \cite{ta25} one can estimate that the number of holes $N_h$ in the chain is 60$\%$ of the copper ions in the chain in Sr$_{14}$Cu$_{24}$O$_{41}$, whereas in Sr$_{13.75}$Y$_{0.25}$Cu$_{24}$O$_{41}$ $N_h$ in the chain is estimated to be 57.5$\%$ of the Cu ions in the chain. The relative change of $N_h$ is only 4.2$\%$. It is surprising that such a small change of $N_h$ would affect the magnetic and structural properties so drastically. Our motivation for performing the experiment on samples with low yttrium concentrations was to clarify how the intensity around $L_{chain}$=1/8 and 1/4 changes with yttrium substitution.
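The hole counting behind these estimates can be made explicit (a worked estimate assuming, as above, that all the holes reside on the chain, which contains 10 of the 24 Cu ions per formula unit). Charge neutrality of Sr$_{14-x}$Y$_x$Cu$_{24}$O$_{41}$ requires the Cu ions to carry a total charge of $+(54-x)$ per formula unit, i.e. $6-x$ holes beyond Cu$^{2+}$, so that \begin{displaymath} N_h(x)=\frac{6-x}{10},\qquad N_h(0)=60\%,\qquad N_h(0.25)=57.5\%,\qquad \frac{N_h(0)-N_h(0.25)}{N_h(0)}\simeq 4.2\%. \end{displaymath}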
As reported by Matsuda \textit{et al.}, \cite{matsu1} the dimers in Sr$_{14}$Cu$_{24}$O$_{41}$ are formed between copper spins which are separated by 2 and 4 times the distance between n.n. copper ions. The trivalent yttrium substitution for divalent strontium is expected to decrease the number of hole carriers and thus increase the number of Cu$^{2+}$ ions in the chain. The number of dimers formed between copper spins separated by 4 times the n.n. Cu-Cu distance is then expected to decrease. With further yttrium substitution, dimers formed between n.n. copper spins are expected to appear. We therefore expect that the scattering around $L_{chain}$=1/8 will first disappear and that the scattering around $L_{chain}$=1/2 will then appear with increasing yttrium concentration. As shown in Fig. 2, the linewidth in energy becomes broader with slight yttrium substitution. However, the integrated intensities of the inelastic peaks over energy are only slightly decreased around $L_{chain}$=1/8 and 1/4. Furthermore, we did not observe distinct inelastic peaks around $L_{chain}$=1/2 in Sr$_{13.75}$Y$_{0.25}$Cu$_{24}$O$_{41}$ or in Sr$_{13}$Y$_{1}$Cu$_{24}$O$_{41}$. Thus, the inelastic peaks become broader only in energy but not in momentum space, i.e. $S(Q)$ is unchanged. This implies that the dimerized state becomes unstable but the spin correlations are unchanged with yttrium substitution. These results are consistent with the speculation on the dimerized state in Ref. \onlinecite{matsu2} that the dimerized state in the chain becomes unstable because the reduction of the holes makes the ferromagnetic nearest-neighbor interactions more dominant and the antiferromagnetic further-neighbor interaction less dominant. This is deduced, in part, from the fact that La$_6$Ca$_8$Cu$_{24}$O$_{41}$ shows ferromagnetic long-range order in the chain. The remaining puzzle is why the scattering around $L_{chain}$=1/8 is not suppressed with increasing yttrium concentration. We also observed that when yttrium is lightly substituted for strontium ($x \leq$0.25), the gap energies are almost unchanged. With further yttrium substitution ($x$=1.0), the excitation energy is decreased. This suggests that the exchange interaction between the spins which form the dimer is mediated by the hole at the oxygen site and that the hole probably makes the interaction longer-ranged through the hopping mechanism. The longer-ranged exchange interaction becomes weaker when $N_h$ is reduced. We now discuss the structural change in the chain as observed through the changes in the nuclear Bragg peak intensities (Fig. 5). It is known that lanthanide or calcium substitution for strontium affects the crystal structure. \cite{ta10,ta13} Adjacent chains are staggered in Sr$_{8}$Y$_{6}$Cu$_{24}$O$_{41}$, La$_{6}$Ca$_{8}$Cu$_{24}$O$_{41}$, and Sr$_{8}$Ca$_{6}$Cu$_{24}$O$_{41}$ (Fig. 6a-i), whereas the chains of Sr$_{14}$Cu$_{24}$O$_{41}$ are slightly shifted alternately along the $c$ axis (Fig. 6a-ii). However, the structure of the ladder and strontium layers does not change with lanthanide or calcium substitution. This suggests that the chains are almost independent of the strontium and ladder layers, which form a rigid framework, and can shift relatively easily along the $c$ axis. In order to explain the temperature and yttrium substitution dependence of the nuclear Bragg intensity from the chain, we performed a qualitative model calculation.
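The interference effect at the heart of this calculation can be sketched compactly. In a minimal two-sublattice picture (an illustrative simplification, assuming only that adjacent chains are displaced alternately by $\delta$ in units of $c_{chain}$ and neglecting the atomic form factors), the chain contribution to a $(0,0,l)$ reflection is proportional to \begin{displaymath} \left|1+e^{2\pi il\delta}\right|^{2}=4\cos^{2}(\pi l\delta), \end{displaymath} which for $l$=2 has maxima at $\delta$=0 and 0.5 and vanishes at $\delta$=0.25.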
Because only a small number of Bragg reflections arise purely from the chain, it is difficult to determine the structure of the chain quantitatively. In Fig. 6a we show simple models which describe the shift of the copper ions. We also assumed that the copper and oxygen ions shift along the $c$ axis synchronously. The only parameter is a deviation $\delta$ along the $c$ axis. $\delta$ equals 0.33 at room temperature in Sr$_{14}$Cu$_{24}$O$_{41}$. \cite{ta10} Figure 6b shows the calculated intensity of (0,0,2) when $\delta$ is varied from 0 (Fig. 6a-i) to 0.5 (Fig. 6a-iii). In the calculation the contributions from the copper and oxygen ions in the chain were considered. The calculated intensity has maxima at $\delta$=0 and 0.5 and a minimum at $\delta$=0.25. We now try to explain the experimental results based on this model. Since the scattering intensity of (0,0,2) observed in the $x$=0 sample increases with decreasing temperature, $\delta$ should become monotonically larger at lower temperature. To explain the temperature dependence of the nuclear Bragg intensity in Sr$_{13.9}$Y$_{0.1}$Cu$_{24}$O$_{41}$, $\delta$ should be slightly below 0.25 at room temperature and increase with decreasing temperature. In Sr$_{13.75}$Y$_{0.25}$Cu$_{24}$O$_{41}$ $\delta$ should be further decreased. This yttrium substitution dependence of $\delta$ is consistent with the fact that adjacent chains in Sr$_{8}$Y$_{6}$Cu$_{24}$O$_{41}$ are staggered ($\delta$=0), as described above. This simple model also explains the temperature dependence of the Bragg intensity at (0,1,0), (0,2,2), and (0,0,4). As shown in Fig. 5, the observed intensity retains a finite value at the temperature where it shows a minimum, whereas the minimum of the calculated intensity should be zero, as in Fig. 6b. The residual intensity is probably due to higher-order neutrons or to a small distortion in the chain. \cite{ta13} We also calculated the intensity by assuming lattice distortions which could cause the spin dimerization in the chain. The intensity calculated at (0,0,2) is decreased when the lattice distortions are introduced, which is inconsistent with the experimental results in Sr$_{14}$Cu$_{24}$O$_{41}$. In summary, we have studied the magnetic and structural properties of the chains in Sr$_{14-x}$Y$_x$Cu$_{24}$O$_{41}$ (0$\leq x \leq$1). We observed that when yttrium is substituted for strontium, the strong and sharp magnetic inelastic peaks which originate from the dimerized state in the chain become broader. The peaks become broader only in energy but not in momentum space. This means that the dimerized state becomes unstable but the spin correlations are unchanged with yttrium substitution. It was also observed that the nuclear Bragg peak intensities originating from the chains show strong temperature and yttrium concentration dependence. We proposed a model in which the chains shift along the $c$ axis with temperature and yttrium substitution to explain this behavior. \section*{Acknowledgments} We would like to thank H.-J. Mikeska and Y. Kitaoka for many helpful discussions. M. M., S. M. S., and G. S. would like to thank H. Mook, S. Nagler, and A. Tennant for their warm hospitality during their stay at Oak Ridge National Laboratory. This work was partially supported by the U. S.-Japan Cooperative Program on Neutron Scattering operated by the U. S. Department of Energy and the Japanese Ministry of Education, Science, Sports, and Culture, and by the NEDO International Joint Research Grant. Work at Brookhaven National Laboratory was carried out under Contract No.
DE-AC02-76CH00016, Division of Material Science, U. S. Department of Energy. Part of the study was performed at Oak Ridge National Laboratory which is supported by the Department of Energy, Division of Materials Sciences under Contract No. DE-AC05-96OR22464.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Accreting black holes and neutron stars often show flat (i.e. approximately constant flux per unit frequency) radio spectra. These are traditionally explained as coming from compact conical jets. If the random energy lost by the jet's electrons is replaced continuously, the wavelength at which the jet becomes transparent to synchrotron self-absorption is linearly proportional to the height up the jet, and the flat spectrum results \citep{BK79,HjellmingJohnston}. While this approach has successfully explained many of the properties of the radio emission from active galactic nuclei (AGN) and X-ray binaries, direct tests of the model are relatively few. For AGN, the shifts in the positions of the cores of a modest-sized sample of objects observed with the Very Long Baseline Array are consistent with the model \citep{Sokolovsky2011}; these data confirm the general geometric picture, but do not give information about how the particle acceleration takes place. For X-ray binaries, only the very nearest and most powerful jet sources are resolvable along the jet axis, and then barely so \citep{Fomalont,Stirling}, and are not resolvable perpendicular to the jet axis. Over the past decade, and especially in the past few years, attempts have been made to use variability information both to supplement the mapping of the parameter space from the core shifts \citep{ZdziarskiTetarenkoSikora}, and to try to understand the particle acceleration process \citep{Casella2010,Gandhi2017, Tetarenko2019,Tetarenko2021,Vincentelli2021}. The results, particularly in \citet{Tetarenko2021}, are in qualitative, but not quantitative, agreement with the standard model of \citet{BK79}, as the time lags are not linearly proportional to the wavelength. In Cyg X-1, the deviations may occur due to the opacity of the stellar wind \citep{JamesCyg}. In MAXI~J1820+070, the deviations are present, but modest, and could be due to acceleration in the jet, and/or changes in the jet opening angle. When lags are measured using cross-correlation functions, the correlation coefficients at the peaks are typically much less than unity. Thus, there is a ``typical'' lag, but the lags vary. A theoretical model from \citet{Malzac2018} can explain this due to internal shocking, in agreement with the idea that the acceleration of particles in jets comes from this process \citep{Spada2001,Jamil}. Because of the incoherence of the lags, a large number of characteristic timescales of the variability must be sampled to measure the lags accurately. This, combined with the much longer timescales of variability for AGN relative to X-ray binaries, means that X-ray binaries are the preferred class of systems for studying the lags, and hence for studying the acceleration process. A few other properties of the X-ray binaries make them more desirable targets for these studies. The masses of the compact objects in X-ray binaries are typically (although not always) better measured than the masses of supermassive black holes. The inclination angles of the X-ray binaries are also usually better measured, with the caveat that the inclination angles that are measured are predominantly from the binary orbits, and there can be cases where the orbital and jet axes are misaligned in the long term (e.g. \citealt{Fragile2001,Maccarone20002,Miller-Jones2019,Poutanen2022}) and where the jet axes vary with time (e.g. \citealt{Milgrom1979,Tetarenko2017}).
Furthermore, neutron star X-ray binaries have jets which show a broad set of similarities to the jets from black hole X-ray binaries, but where their speeds have been measured, they tend to be significantly slower than the speed of light, and typically close to the expected escape velocities from neutron stars \citep{Fomalont}. For these slower jets, the expected ratios of fluxes from the approaching jet to those from the counterjet due to relativistic Doppler boosting are typically less than 10 { (see Figure \ref{ratiofig})}, meaning that the signatures of the counterjets are likely to be much more prominent than they are for black hole X-ray binaries. \section{Counterjet lags} We start from the \citet{BK79} model for jets. In this model, the height $h$ along the jet from which emission at wavelength $\lambda$ is emitted follows the relation $h\propto\lambda$. We then define $h'$ to be the height along the counterjet where emission is produced at the same frequency. The relativistic Doppler factors for the approaching jet and counterjet will be $\delta_{\rm app}$ and $\delta_{\rm cj}$, respectively, and will be given by: \begin{equation} \delta_{\rm app} = \frac{1}{\Gamma (1-\beta~{\rm cos}~\theta)} \end{equation} and \begin{equation} \delta_{\rm cj} = \frac{1}{\Gamma (1+\beta~{\rm cos}~\theta)}, \end{equation} where $\Gamma$ is the Lorentz factor for the jet, $\beta$ is the ratio of the jet speed to the speed of light and $\theta$ is the angle between the jet axis and the line of sight. For jets where the jet axis is aligned with the angular momentum axis of the orbit, $\theta$ will be equal to the binary inclination angle, although alignment is not necessary for our conclusions to hold. At a given height along the jet axis, the rest-frame wavelength of the emission along the counterjet will be longer than the rest-frame wavelength of the emission in the approaching jet by $\delta_{\rm ratio}$, where: \begin{equation} \delta_{\rm ratio} = \frac{\delta_{\rm app}}{\delta_{\rm cj}} = \frac{(1+\beta~{\rm cos}~\theta)}{(1-\beta~{\rm cos}~\theta)}. \end{equation} Given the relation between wavelength and height along the jet, then, $h'$ will be smaller than $h$ by this same factor. Next, we can consider the relevant lags in the problem. There will be light travel time delays and jet propagation delays. The light travel time delays come from the fact that the emission from the approaching jet has a shorter distance to travel to the observer than the emission from the counterjet. For the approaching jet, the emission will have $\Delta x = h {\rm cos}~\theta$ less distance to travel than light from the central compact object, while for the counterjet, the emission will have $\Delta x' = h' {\rm cos}~\theta$ more distance to travel. These distances will be travelled at the speed of light, and the corresponding light travel time contribution to the lag will be: \begin{equation} \left(\frac{h}{c}\right){\rm cos}~\theta \left (1+\frac{1}{\delta_{\rm ratio}}\right). \label{travellag} \end{equation} For the jet propagation timescales, the difference, rather than the sum, is relevant.
This difference will be: \begin{equation} \frac{h'}{\beta c}-\frac{h}{\beta c}=\left(\frac{h}{\beta c}\right)\left(\frac{1}{\delta_{\rm ratio}}-1\right), \label{jetlag} \end{equation} and by writing the equation in this order, we can sum the results from equations \ref{travellag} and \ref{jetlag}, and obtain the final total lag: \begin{equation} \left(\frac{h}{c}\right)\left[\left(1+\frac{1}{\delta_{\rm ratio}}\right){\rm cos}~\theta +\left(\frac{1}{\beta}\right)\left(\frac{1}{\delta_{\rm ratio}}-1\right)\right]. \end{equation} Factoring the terms in the denominators inside the parentheses, we get: \begin{equation} \left(\frac{h}{\delta_{\rm ratio}\beta c}\right)\left[(1+\delta_{\rm ratio})\beta {\rm cos}~\theta +\left({1-\delta_{\rm ratio}}\right)\right]. \end{equation} Then, we can write out $\delta_{\rm ratio}$, and obtain: \begin{equation} \left(\frac{h}{\delta_{\rm ratio}\beta c}\right)\left[\left(1+\frac{(1+\beta~{\rm cos}~\theta)}{(1-\beta~{\rm cos}~\theta)}\right)\beta {\rm cos}~\theta +\left({1-\frac{(1+\beta~{\rm cos}~\theta)}{(1-\beta~{\rm cos}~\theta)}}\right)\right]. \end{equation} Multiplying through by $(1-\beta~{\rm cos}~\theta)$, the term in square brackets becomes $[(1-\beta~{\rm cos}~\theta)+(1+\beta~{\rm cos}~\theta)]\beta~{\rm cos}~\theta+[(1-\beta~{\rm cos}~\theta)-(1+\beta~{\rm cos}~\theta)]=2\beta~{\rm cos}~\theta-2\beta~{\rm cos}~\theta=0$, so the total lag vanishes: for a constant-speed jet, emission at a given wavelength from the approaching jet and the counterjet reaches the observer simultaneously. \section{Highlighting the assumptions and applicability of the above result} Several specific assumptions have been made above, and violations of those assumptions could lead to different arrival times at the observer for the approaching and counterjets. One of the more important assumptions is that the jet is powered symmetrically. If the jet is not powered symmetrically, then the light curves of the approaching and counterjets should be statistically similar, but independent of one another. Statistically, the delays from the accretion emission to the jet emission would be the same for the approaching jet and counterjet. Non-steady jets sometimes appear to be consistent with being symmetric \citep{Mirabel, Tetarenko2017}, and in other cases appear to have some inherent asymmetry \citep{HjellmingR.M1995Eeor}. It may be possible, though, that even the case of GRO~J1655--40 studied by \citet{HjellmingR.M1995Eeor} is symmetric, but with larger swings in inclination angle than considered in that work, more similar to the large swings seen in V404 Cyg \citep{Tetarenko2017}. Next, it has been assumed that the jet speed is constant. From studies of the few active galactic nuclei with well-resolved jets, it is clear that they are being accelerated as they move out from the central black hole, even within the region in which they emit strongly (e.g. \citealt{Lister2021}). Such acceleration can lead to a non-zero counterjet lag behind the approaching jet, while conversely deceleration can lead to the counterjet emission arriving first. \section{Testing the twin jet hypothesis} A clear model test exists for whether the jets are symmetric in the steady states. Many theoretical calculations suggest that the jets should be asymmetric for short amounts of time (e.g. \citealt{McKinney2012}, as well as some of the simulations released via the Event Horizon Telescope collaboration\footnote{https://www.youtube.com/watch?v=1Sv7djCASDg{\&}t=1s}). Anecdotally, it is well known that general relativistic magnetohydrodynamic simulations often produce short-term asymmetries in jet power, but this effect has not been systematically studied in simulations (P.C. Fragile, private communication).
If two independent jet components with largely similar power spectra are being summed, this should not affect the statistical properties of the power spectrum at all if there is no systematic lag between them. It {\it will} lead to a loss of cross-coherence\footnote{This is often referred to as just the coherence, but as coherence is sometimes used to refer to the quality factor of periodic oscillations, we prefer to use the term cross-coherence to avoid confusion.} between the radio and X-ray bands (see \citealt{Vaughan1997} for a discussion of the cross-coherence), if the cross-coherence is calculated on sufficiently short timescales and for a source for which the counterjet represents a sufficiently large fraction of the flux. In the context of the \citet{Malzac2018} model, the approaching and counterjets should be identical, and this model does a good job of explaining the observations. Still, given that the measured cross-coherence is imperfect, it is possible that the approaching jet responds primarily to the ``top'' half of the accretion flow and the receding jet to the ``bottom'' half; the two components would then have statistically identical, but fully independent, correlations. This could take place in the context of a model that preserves the successful features of the \citet{Malzac2018} work. In this case, if we take $\alpha$ as the ratio of the counterjet's flux to that of the approaching jet, we find that the expected cross-coherence of the summed jet emission, $\gamma_j^2$, should be: \begin{equation} \gamma^2_j=\gamma^2\left(\frac{1+\alpha^2}{1+2\alpha+\alpha^2}\right), \label{coherencesummed} \end{equation} with the derivation of this result shown in Appendix A. For $\alpha$=1, corresponding to a 90 degree inclination angle for the jet, the cross-coherence should be reduced by a factor of 2; for $\alpha=0.3$, for example, the corresponding factor is $1.09/1.69\approx0.64$. For fast, pole-on jets, the cross-coherence should be, unsurprisingly, nearly unaffected by the presence of the counterjet emission. For neutron star jets, then, where $\alpha$ can be in the 0.1-0.5 range for typical speeds and inclination angles, it is reasonable to expect (1) that the cross-coherence will be weaker than it is for black hole jets and (2) that it will be lower for slower jets and for jets closer to perpendicular to the line of sight than for jets that are pole-on or at the faster end of the range of parameters for accreting neutron stars. These quantities have not yet been well measured for rapid variability from neutron star X-ray binaries, as sensitive radio data sets with high time resolution have not yet been obtained for these systems. The most likely candidate among the known stellar mass black holes for showing effects from its counterjet is GRO~J1655--40, which has shown outbursts in 1994, 1996, and 2005 \citep{1655outburst,WATCHDOG}, and for which the best estimate of the jet inclination angle is 85 degrees \citep{HjellmingR.M1995Eeor}. This system, too, should have relatively similar fluxes from the approaching jet and counterjet. Given the broad similarities of neutron star jets to those of black hole X-ray binary jets, we can draw intuition for what to expect from the black hole systems. For black hole X-ray binaries, there are good cross-coherence measurements between infrared and X-ray emission for GX~339-4 (e.g. \citealt{Vincentelli}).
The infrared band for typical black hole X-ray binaries will contain emission only from the approaching jet, because the counterjet in the infrared will be behind the optically thick part of the accretion disk at the heights where the infrared emission is produced \citep{Maccarone2020}. \citet{Tetarenko2021} do not report cross-coherences between radio and X-rays explicitly, but given that the normalizations of the cross-correlation functions between X-ray and radio are typically $\sim0.5$, it is likely that the cross-coherence is of the same order over the frequency range at which the variability is strongest. A substantial fraction of the neutron star X-ray binaries, primarily the ``Z-sources'', have sufficient radio flux densities to perform rapid variability analyses with current instrumentation (noting that only at certain positions along the Z-track do they show flat spectra at centimeter wavelengths), as do a few of the atoll sources, but a comprehensive study of the rapid variability of neutron star jets is likely to require the Next Generation Very Large Array (ngVLA; \citealt{ngVLA}). A clear prediction can be made: if the high Fourier frequency cross-coherence between radio and X-ray emission is a strong function of binary inclination angle, then the approaching and counterjets are likely to be independent of one another, while if there is no inclination angle dependence, the ``twin jet'' model genuinely applies on short timescales. \section{Challenges in looking for jet acceleration} Two major challenges exist in looking for the signatures of jet acceleration by using counterjets. One is that Doppler boosting makes the counterjets much fainter than the approaching jets, especially for speeds $\beta \gtrsim 0.9$, as shown in Figure \ref{ratiofig}. \begin{figure*} \includegraphics[width=0.6\textwidth]{flux_ratio_counter2approaching_sameaxis.pdf} \caption{The ratio of the flux for the counterjet to that for the approaching jet as a function of inclination angle. Left: for $\beta=0.3$. Right: for $\beta=0.9$. { We assume that the flux will scale as the ratio of the Doppler factors to the 2.2 power \citep{2016MNRAS.463.1153Z}}. } \label{ratiofig} \end{figure*} From this, it is clear that only for relatively slow jets, or relatively high inclination angles, is there a substantial fraction of the flux from the counterjet. The other challenge is that the time lag difference between the approaching jet and counterjet is likely to be at least ten times shorter than the lag for the approaching jet in cases where there is modest acceleration. Empirically, the smearing of the signature of the jet gives a break timescale in the jet power spectrum that is comparable to the time lag for the approaching jet \citep{Tetarenko2021}. This smearing occurs mostly because the \citet{BK79} model leads to the jet emission at a given wavelength coming from a range of heights along the jet axis. Because of the self-similarity of the jets, the result above, that the lag timescales are the same for approaching and counterjets with constant speeds, still holds. A mathematical technique called the `cepstrum' has been developed \citep{Bogert:1963:FAT} to search for echoes in light curves. The cepstrum is the power spectrum (produced from an inverse Fourier transform) of the logarithm of the power spectrum (produced using a forward Fourier transform). In many cases, a cepstrum of a signal added to its own echo will yield a delta function at the time lag associated with the echo.
This can be seen following the work of \citet{oppenheim2004frequency}. One can take: \begin{equation} x(t)=s(t)+\alpha s(t-\tau), \end{equation} where $s(t)$ is the initial signal, and $\tau$ is the timescale of the echo. Then, the power spectrum $|X(f)|^2$ will be given by: \begin{equation} |X(f)|^2 = |S(f)|^2\left[1+\alpha^2+2\alpha{\rm cos}(2\pi f\tau)\right]. \end{equation} Taking the logarithm converts the product into the sum of the logarithm of the power spectrum of the underlying signal and the logarithm of the term inside the square brackets. The inverse Fourier transform of the log power spectrum then picks out the periodicity of the cosine term, producing a peak at the echo delay $\tau$. An important caveat remains. The second maximum of the cosine will occur at $f=\frac{1}{\tau}$. If $|S(f)|^2$ is extremely small at this frequency, then the power spectrum $|X(f)|^2$ of the summed signal will be dominated there by Poisson noise. The cepstrum is thus sensitive only for cases where the echo timescale is slow relative to the characteristic timescale of the variability. Because, as mentioned above, the break frequency in the power spectrum is typically comparable to the reciprocal of the time lag, only with either exquisite signal-to-noise, or with lags between the approaching and counterjets that are large fractions of the lags of the approaching jet behind the disk emission, can we expect to detect the counterjets via this approach. There appear to be realistic scenarios in which the cepstral lags could be measured, provided that the approaching jet reaches a relatively high speed ($\Gamma$ of at least about 5). The large speed is needed to ensure that the Doppler factors for the approaching jet and counterjet are sufficiently different that the precise cancellation for the non-accelerated jet starts to fail substantially. If we take the case, for example, of a jet which accelerates with constant acceleration from $\beta=0.7$ at its base to $\beta=0.98$ at the region of interest for emission of the approaching jet, and take an inclination angle of 60 degrees, then the counterjet emission at the same wavelength will come from a region with $\beta=0.91$, only about 70\% of the corresponding distance from the black hole. It has been shown for blazar jets that there cannot be very high speeds on very small spatial scales, or Compton drag would slow the jets down, so the general picture of whether such accelerations take place is well worth testing \citep{BegelmanSikora}, and strong empirical evidence exists for acceleration of blazar jets in VLBI data \citep{Lister2021}, but due to the long variability timescales of AGN jets, studies of them are inherently nonstationary. In such a scenario, the jet-to-counterjet flux ratio for relatively edge-on jets would be mitigated, both because (1) both jets are deboosted and (2) the counterjet's speed is slower at the relevant wavelength, so the deboosting ratio is mild. For the scenario laid out above, the jet-to-counterjet flux ratio should only be a factor of about 3. Cepstral searches of existing data for such objects are thus well motivated, as is obtaining long, high-time-resolution data sets from radio through infrared for future outbursts. One potential complicating factor of which future observers should be mindful is that a delay may also be present in some systems in relatively short wavelength bands like the optical and near infrared due to thermal reprocessing in the accretion disc (and, in many systems, due to the fact that the accretion disc itself will block the inner parts of the counterjets that produce emission in these bands).
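As an illustration of the cepstral echo search described above, the following minimal numerical sketch (in Python; the red-noise signal and the echo parameters are arbitrary, assumed choices, not a model of any particular source) recovers an injected echo delay:
\begin{verbatim}
import numpy as np

# Minimal cepstral echo search: x(t) = s(t) + alpha*s(t - tau),
# with s(t) an (assumed) red-noise light curve.
rng = np.random.default_rng(0)
n = 2**16
alpha, lag = 0.3, 200   # echo amplitude ratio and delay in samples

s = np.cumsum(rng.standard_normal(n))   # red noise (random walk)
x = s.copy()
x[lag:] += alpha * s[:-lag]             # add the delayed, scaled echo

power = np.abs(np.fft.rfft(x))**2
power[0] = power[1]                     # guard against log(0) in the DC bin
cepstrum = np.fft.irfft(np.log(power))  # inverse FFT of the log power spectrum

# The echo appears as a peak near quefrency = lag; skip the lowest bins,
# which are dominated by the smooth shape of the red-noise spectrum itself.
peak = np.argmax(cepstrum[10:n // 2]) + 10
print("recovered echo delay:", peak, "samples (input:", lag, ")")
\end{verbatim}
In this notation the cepstral peak height is of order $\alpha$, while the scatter of the log-periodogram contributes noise that averages down with the number of frequency bins, which is one way of seeing why long data sets are required when the counterjet fraction is small.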
The effects on the cross-coherence of the jets will be more complicated in the situation where there is jet acceleration that is relevant. In such a case, the counterjet will be the sum of two distinct emission regions, with a fixed relation between the regions. Following \citet{Vaughan1997}, equation 10, this will give an intermediate cross-coherence between the two extremes. \section{Summary and conclusions} We have found several key results from this work: \begin{enumerate} \item For the standard assumptions of a constant speed jet following the \citet{BK79} model, the emission at a given wavelength from the approaching jet and the receding jet will arrive at a distant observer simultaneously. \item In that scenario, one can test whether the jet is genuinely symmetric by looking at the cross-coherence between X-ray emission and emission from some band produced by the jet. \item In the case of a jet which is accelerating within the emission region at a given wavelength, there can be time lags between the approaching and receding jets that should be measurable using the cepstrum. \item In both cases (ii) and (iii) above, the measurements should be most valuable in the highest frequency bands for which the counterjet is not blocked by the outer accretion disk. This will typically be the far-infrared or submillimetre band, but as the ngVLA project \citep{ngVLA} starts to collect data, its superior sensitivity may make it the instrument of choice. { One core change that may be necessary relative to current observational set-ups is that longer data sets should be obtained to make precise measurements of the coherence than are needed to estimate the lags.} \end{enumerate} \section{Acknowledgments} We thank the anonymous referee for a helpful report. We thank Sara Motta, James Miller-Jones, Greg Sivakoff, and Piergiorgio Casella for useful discussions. We also thank the participants of a workshop in honour of Omer Blaes' 62nd and Chris Fragile's 52nd birthdays (and particularly the honourees) for useful discussions. Support for this work was provided by NASA through the NASA Hubble Fellowship grant \#HST--HF2--51494.001 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5--26555. \section{Data availability} No new data were collected for the work presented here. \bibliographystyle{mnras}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Wormholes are theoretical constructs constituting shortcuts, tunnels, or openings to otherwise distant parts of the cosmos or to different universes \citep{morris1,visser1}. The idea of such spacetime geometries came from Einstein and Rosen's proposal of the Einstein-Rosen bridge \citep{ER_bridge}. The term {\em wormhole} was coined by Misner and Wheeler \citep{wormhole_first}. One of the first wormhole solutions, the Ellis-Bronnikov wormhole, was found in the framework of general relativity (GR), using a wrong (phantom) sign in the scalar field Lagrangian \citep{Ellis,Bronnikov}. Subsequently, the possibility of having time-machine models using wormholes was introduced \citep{time_machine_1,time_machine_2,time_machine_3}. This led to growing interest in wormholes \citep{visser1}. A wormhole can be thought of as a defocusing lens in the sense that an initially converging family of radial null rays, while passing through the wormhole, first becomes parallel at the wormhole throat and then starts diverging on the other side. This defocusing of the family of null rays passing through the wormhole is the outcome of the fact that the null convergence condition (NCC) is violated in the vicinity of, or at least at, the wormhole throat \citep{visser1}, as can be seen from an analysis of the Raychaudhuri equation for this family. In GR, a violation of the NCC leads to a violation of the null energy condition (NEC) which, in turn, leads to violations of the various other energy conditions (weak, strong, dominant, etc.) \cite{hawking,wald}. Therefore, in the framework of GR, wormholes require energy-condition-violating matter (often termed ``exotic matter'') to be supported \cite{morris1,visser1}. However, the above-mentioned requirement of exotic matter to support a wormhole can be avoided in many alternative or modified theories of gravity. In such gravity theories, since the structures of the field equations are different from that of GR, the violation of the NCC does not necessarily lead to the violations of the various energy conditions \cite{capozziello1,capozziello2}. Therefore, in such theories, wormholes can be supported by matter which satisfies all the energy conditions but violates the convergence condition. See \cite{bhawal_1992,maeda_2008,lobo_2009,dehghani_2009,kanti_2011,garcia_2011,boemer_2012, harko_2013,kar_2015,bronnikov_2015,bambi_2016,rajibul_2016,myrzakulov_2016,moraes_2017,moradpour_2017, hohmann_unpub,ovgun_unpub} and references therein for some such examples. Also see \cite{nandi_2006,tsukamoto_2012,rajibul_2017,jusufi_2018a,jusufi_2018b,bambi_2013a,nedkova_2013, ohgami_2015,abdujabbarov_2016b,rajibul_2018b,amir_2018,harko_2009,bambi_2013b,cardoso_2016, konoplya_2016,aneesh_2018} for some recent works on different aspects of wormholes. Eddington-inspired Born-Infeld (EiBI) gravity \citep{banados}, a modified theory of gravity, belongs to a class of Born-Infeld-inspired gravity theories first proposed by Deser and Gibbons \citep{deser}, inspired by the earlier work of Eddington \citep{eddington} and the nonlinear electrodynamics of Born and Infeld \citep{born}. The theory is equivalent to Einstein's GR in vacuum but differs from it within matter.
Since its introduction, various aspects of EiBI gravity have been studied by many researchers in the recent past, including black holes \citep{banados,BH_1,BH_2,karan_2015,BH_3,soumya_2015,BH_5,BH_6,BH_7,jana_shaikh,BH_8}, wormholes \citep{WH_1,rajibul_2015,WH_2,WH_3}, compact stars \citep{CS_1,CS_2,CS_3,CS_4,CS_5}, cosmological aspects \citep{banados,delsate1,cho1,COS_1,COS_2,perturbation1,perturbation2,COS_3,COS_4,COS_5,COS_6,COS_7}, astrophysical aspects \citep{ASTRO_1,ASTRO_2,ASTRO_3}, gravitational collapse \citep{collapse_1,rajibul_2018a}, gravitational waves \citep{GW_1,GW_2}, implications in nongravitational contexts like particle physics \citep{EiBI_particle} etc. See \cite{EiBIreview} for a recent review on various studies in EiBI gravity. In this work, we first show that, in this gravity theory, the violation of the NCC does not necessarily lead to the violation of the NEC. We then obtain exact solutions of the field equations in EiBI gravity coupled to arbitrary nonlinear electrodynamics and anisotropic fluids. The general solutions can represent both black holes and wormholes. In this work, we focus on the wormhole solutions which are supported by nonexotic matter. The plan of the paper is as follows. In the next section, we briefly recall the EiBI theory. In Sec. \ref{sec:NCC}, we establish a relationship between the NCC and the NEC along a congruence of radial null geodesics. In Sec. \ref{sec:wormholes}, we obtain exact solutions, which represent both black holes and wormholes, of the field equations in EiBI gravity coupled to arbitrary nonlinear electrodynamics and anisotropic fluids and analyze the wormhole solutions. We work out some specific examples in Sec. \ref{sec:examples}. We conclude in Sec. \ref{sec:conclusion} with a summary of the key results. \section{Eddington-inspired Born-Infeld gravity} \label{sec:EiBI} The action in EiBI gravity developed in \cite{banados} is given by \begin{eqnarray} S_{BI}[g,\Gamma,\Psi]&=&\frac{c^4}{8\pi G\kappa}\int d^4x\left[\sqrt{-\left\vert g_{\mu\nu}+\kappa R_{\mu\nu}(\Gamma)\right\vert}-\lambda \sqrt{-g}\right]+S_{M}(g,\Psi), \nonumber \end{eqnarray} where $c$ is the speed of light, $G$ is Newton's gravitational constant, $\lambda=1+\kappa\Lambda$, $\kappa$ is the EiBI theory parameter, $\Lambda$ is the cosmological constant, $R_{\mu\nu}(\Gamma)$ is the symmetric part of the Ricci tensor built with the independent connection $\Gamma$, $S_{M}(g,\Psi)$ is the action for the matter field, and the vertical bars stand for the matrix determinant. Variations of this action with respect to the metric tensor $g_{\mu\nu}$ and the connection $\Gamma$ yield, respectively, \citep{banados,cho1,delsate1} \begin{equation} \sqrt{-q}q^{\mu\nu}=\lambda \sqrt{-g}g^{\mu\nu}-\bar{\kappa} \sqrt{-g}T^{\mu\nu} \label{eq:field_equation1} \end{equation} \begin{equation} \nabla^\Gamma_\alpha \left(\sqrt{-q} q^{\mu\nu} \right)=0, \label{eq:metric_compatibility} \end{equation} where $\bar{\kappa}=\frac{8\pi G\kappa}{c^4}$, $\nabla^\Gamma$ denotes the covariant derivative defined by the connection $\Gamma$ and $q^{\mu\nu}$ is the inverse of the auxiliary metric $q_{\mu\nu}$ defined by \begin{equation} q_{\mu\nu}=g_{\mu\nu}+\kappa R_{\mu\nu}(\Gamma). \label{eq:field_equation2} \end{equation} In obtaining the field equations from the variation of the action, it is assumed that both the connection $\Gamma$ and the Ricci tensor $R_{\mu\nu}(\Gamma)$ are symmetric, i.e., $\Gamma^\mu_{\nu\rho}=\Gamma^\mu_{\rho\nu}$ and $R_{\mu\nu}(\Gamma)=R_{\nu\mu}(\Gamma)$. 
Equation (\ref{eq:metric_compatibility}) is the metric compatibility condition, which yields \begin{equation} \Gamma^\mu_{\nu\rho}=\frac{1}{2}q^{\mu\sigma}\left(q_{\nu\sigma,\rho}+q_{\rho\sigma,\nu}-q_{\nu\rho,\sigma} \right). \end{equation} Therefore, the connection $\Gamma^\mu_{\nu\rho}$ is the Levi-Civita connection of the auxiliary metric $q_{\mu\nu}$. Either in vacuum or in the limit $\kappa\to 0$, GR is recovered \citep{banados}. \section{Convergence condition and energy conditions in EiBI gravity} \label{sec:NCC} As we have mentioned in the Introduction, the NCC is violated, at or in the vicinity of a wormhole throat, along a congruence of radial null geodesics passing through it. Unlike in GR, in many modified gravity theories the violation of the NCC may not lead to violations of the different energy conditions. In this section, we explore this possibility in the context of EiBI gravity. To study the NCC along a radial null geodesic congruence, we consider, respectively, the following ans\"{a}tze for the physical and auxiliary metrics: \begin{equation} ds_g^2=-e^{2\alpha(r)} dt^2+e^{2\beta(r)} dr^2+r^2(d\theta ^2+\sin ^2\theta d\phi ^2), \label{eq:physical_metric0} \end{equation} \begin{equation} ds_q^2=-e^{2\nu(r)} dt^2+e^{2\Psi(r)} dr^2+H^2(r)(d\theta ^2+\sin ^2\theta d\phi ^2). \label{eq:auxiliary_metric0} \end{equation} For an energy-momentum tensor of the form $T^\mu_{\;\nu}={\rm diag}(-\rho,p_r,p_\theta,p_\theta)$, the field equation (\ref{eq:field_equation1}) yields \begin{equation} e^{\alpha(r)}=e^{\nu(r)}\sqrt{\tau(1+\kappa\rho)}, \hspace{0.3cm} e^{\beta(r)}=e^{\Psi(r)}\sqrt{\tau(1-\kappa p_r)} , \hspace{0.3cm} r=H(r)\sqrt{\tau(1-\kappa p_{\theta})}, \label{eq:relations} \end{equation} where $\tau=\frac{1}{\sqrt{(1+\kappa\rho)(1-\kappa p_r)(1-\kappa p_{\theta})^2}}$. Here, we have taken $G=c=1$, $\Lambda=0$ and $8\pi=1$. Later, in the matter Lagrangian also, we will set $8\pi=1$; this is for convenience. Note that the Ricci tensor appearing in the field equation (\ref{eq:field_equation2}) is that of the auxiliary metric $q_{\mu\nu}$, whereas the Ricci tensor appearing in the NCC is that of the physical metric $g_{\mu\nu}$. However, for a family of radial null geodesics with four-velocity $k^{\alpha}$, we can express the NCC in terms of $\rho$, $p_r$, $p_\theta$ and their derivatives by using (\ref{eq:field_equation2}) and (\ref{eq:relations}). To this end, we first note that, for a family of radial null geodesics in the equatorial plane of the physical metric (\ref{eq:physical_metric0}), $k^t=e^{-2\alpha}$ and $k^r=\pm e^{-(\alpha+\beta)}$. Therefore, using (\ref{eq:relations}), we obtain \begin{eqnarray} R_{\mu\nu}(\Gamma)k^\mu k^\nu &=& e^{-2\alpha}\left[-R^t_t(\Gamma)e^{2(\nu-\alpha)}+R^r_r(\Gamma) e^{2(\Psi-\beta)}\right] \nonumber\\ &=& \frac{\kappa(\rho+p_r)e^{-2\alpha}}{\tau(1+\kappa\rho)(1-\kappa p_r)}R^t_t(\Gamma)+\frac{e^{-2\alpha}}{\tau(1-\kappa p_r)}\left[-R^t_t(\Gamma)+R^r_r(\Gamma)\right], \label{eq:cc1} \end{eqnarray} where $R_{\mu\nu}(\Gamma)$ is the Ricci tensor of the auxiliary metric $q_{\mu\nu}$ and its indices are raised by using the auxiliary metric (\ref{eq:auxiliary_metric0}). Using the field equation $\kappa R_{\mu\nu}(\Gamma)=q_{\mu\nu}-g_{\mu\nu}$, the null geodesic equation $g_{\mu\nu}k^\mu k^\nu=0$ and Eqs.
(\ref{eq:physical_metric0})-(\ref{eq:relations}), we obtain, along the family of radial null geodesics, \begin{equation} R_{\mu\nu}(\Gamma)k^\mu k^\nu=\frac{1}{\kappa}(q_{\mu\nu}-g_{\mu\nu})k^\mu k^\nu=\frac{(\rho+p_r)e^{-2\alpha}}{\tau(1+\kappa\rho)(1-\kappa p_r)}. \label{eq:cc2} \end{equation} Now, for the auxiliary metric (\ref{eq:auxiliary_metric0}), it can be shown that \begin{equation} -R^t_t(\Gamma)+R^r_r(\Gamma)=-\frac{2}{H}e^{\nu-\Psi}\frac{d}{dr}\left[H'e^{-\nu-\Psi}\right], \end{equation} where a prime denotes a derivative with respect to $r$. Using (\ref{eq:relations}), the last equation can be rewritten as \begin{equation} -R^t_t(\Gamma)+R^r_r(\Gamma)=\sqrt{\frac{\tau(1-\kappa p_r)(1-\kappa p_\theta)}{(1+\kappa\rho)}}\left[-\frac{2}{r}e^{\alpha-\beta}\frac{d}{dr}\left(e^{-\alpha-\beta}\right)Y-\frac{2}{r}e^{-2\beta}Y'\right], \label{eq:cc3} \end{equation} where $Y=\tau\sqrt{(1+\kappa\rho)(1-\kappa p_r)}H'$. Denoting the Ricci tensor of the physical metric $g_{\mu\nu}$ by $R_{\mu\nu}$, we obtain, for the physical metric (\ref{eq:physical_metric0}), \begin{equation} -R^t_t+R^r_r=-\frac{2}{r}e^{\alpha-\beta}\frac{d}{dr}\left(e^{-\alpha-\beta}\right), \end{equation} where the indices of $R_{\mu\nu}$ are raised by using the physical metric (\ref{eq:physical_metric0}). Therefore, along the family of radial null geodesics, we have \begin{equation} R_{\mu\nu}k^{\mu}k^{\nu}=(-R^t_t+R^r_r)e^{-2\alpha}=-\frac{2}{r}e^{-(\alpha+\beta)}\frac{d}{dr}\left(e^{-\alpha-\beta}\right). \label{eq:cc4} \end{equation} Also, from Eqs. (\ref{eq:field_equation2}) and (\ref{eq:relations}), we obtain \begin{equation} R^t_t(\Gamma)=\frac{1}{\kappa}[1-\tau(1+\kappa \rho)]. \label{eq:cc5} \end{equation} Using Eqs. (\ref{eq:cc2}), (\ref{eq:cc3}), (\ref{eq:cc4}) and (\ref{eq:cc5}) in (\ref{eq:cc1}), we obtain, after some manipulations, \begin{equation} R_{\mu\nu}k^{\mu}k^{\nu}=\frac{(\rho+p_r)e^{-2\alpha}}{(1-\kappa p_r)X(r)}+\frac{2}{r}e^{-2(\alpha+\beta)}\frac{d}{dr}\log\left[\frac{(1+\kappa\rho)^{1/4}(1-\kappa p_r)^{1/4}}{(1-\kappa p_\theta)}X(r)\right], \label{eq:NCC_EiBI} \end{equation} where \begin{equation} X(r)=\left[1+\frac{\kappa r}{4}\left(\frac{\rho'}{1+\kappa\rho}-\frac{p_r'}{1-\kappa p_r}\right)\right], \end{equation} and we have used the expression $H(r)=r/\sqrt{\tau(1-\kappa p_{\theta})}$. In the GR limit ($\kappa\to 0$), we obtain \begin{equation} \lim_{\kappa\to 0} R_{\mu\nu}k^{\mu}k^{\nu}=(\rho+p_r)e^{-2\alpha}. \end{equation} To satisfy the energy conditions, we must have $\rho+p_r\geq 0$ which, in turn, implies $R_{\mu\nu}k^{\mu}k^{\nu}\geq 0$ along the family of radial null geodesics in GR. Therefore, a violation/satisfaction of the NCC in GR means violations/satisfactions of different energy conditions. However, in EiBI gravity, the second term on the right-hand side of Eq. (\ref{eq:NCC_EiBI}), which vanishes in the limit $\kappa\to 0$, makes the difference between the NCC and the NEC. Therefore, in this gravity theory, the second term on the right-hand side of (\ref{eq:NCC_EiBI}) can lead to the violation of the NCC, which is required to maintain a wormhole, even though the NEC or any other energy conditions remain satisfied. In the next section, we show this explicitly by obtaining a class of wormhole solutions which violate the NCC but satisfy the NEC as well as all other energy conditions. \section{Exact wormhole solutions satisfying all the energy conditions} \label{sec:wormholes} In the previous section, we have seen that the second term on the right-hand side of Eq. 
(\ref{eq:NCC_EiBI}), which vanishes in the GR limit $\kappa\to 0$, makes the difference between the NCC and the NEC. To see whether or not this second term alone can support wormholes without violating the energy conditions, we consider an energy-momentum tensor of the form $T^\mu_{\;\nu}={\rm diag}(-\rho,-\rho,p_\theta,p_\theta)$, such that $p_r=-\rho$ and the first term appearing on the right-hand side of Eq. (\ref{eq:NCC_EiBI}) vanishes. This type of energy-momentum can be interpreted as that due to an anisotropic fluid, or it can be obtained from a nonlinear electrodynamics action of the form \begin{equation} S_{M}=\frac{1}{8\pi}\int d^4x\sqrt{-g}\varphi(F), \end{equation} where $\varphi(F)$ is a function of the electromagnetic field invariant $F=-\frac{1}{2}F_{\mu\nu}F^{\mu\nu}$ and $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ is the electromagnetic field tensor. For the electrostatic case, the energy-momentum tensor obtained from the variation of the above action becomes \citep{olmo_universe}, \begin{equation} T^{\mu}_{\;\nu}=\frac{1}{8\pi} {\rm diag}(\varphi-2F\varphi_F,\varphi-2F\varphi_F,\varphi,\varphi), \end{equation} where $\varphi_F$ is the derivative of $\varphi(F)$ with respect to $F$. Expressing the above energy-momentum tensor in the anisotropic fluid form, we have \begin{equation} \rho=-p_r =2F \varphi_F-\varphi, \hspace{0.3cm} p_\theta =\varphi, \label{eq:nonlinear_electrodynamics} \end{equation} where we have set $8\pi=1$, as discussed in the previous section. For the above form of the energy-momentum tensor, the conservation equation $\nabla_\mu T^{\mu\nu}=0$ becomes \begin{equation} \rho'+\frac{2}{r}(\rho+p_\theta)=0. \label{eq:conservation_eqn} \end{equation} In the subsequent calculations, we shall use the above conservation equation whenever $\rho'$ appears. Using the last equation, we obtain \begin{equation} X(r)=\frac{1-\kappa p_\theta}{1+\kappa \rho}. \end{equation} Therefore, Eq. (\ref{eq:NCC_EiBI}) becomes \begin{equation} R_{\mu\nu}k^{\mu}k^{\nu}=\frac{2\kappa}{r^2}\left(\frac{\rho+p_\theta}{1+\kappa \rho}\right) e^{-2(\alpha+\beta)}. \label{eq:cc6} \end{equation} Now Eqs. (\ref{eq:cc4}) and (\ref{eq:cc6}) can be combined to obtain \begin{equation} \frac{d}{dr}\log\left(e^{\alpha+\beta}\right)=-\frac{d}{dr}\log\left(\sqrt{1+\kappa\rho}\right), \end{equation} where we have used the conservation equation (\ref{eq:conservation_eqn}). The integration of the last equation yields \begin{equation} e^{\alpha+\beta}=\frac{1}{\sqrt{1+\kappa\rho}}. \label{eq:psi_0} \end{equation} Therefore, for an energy-momentum tensor of the form $T^\mu_{\;\nu}={\rm diag}(-\rho,-\rho,p_\theta,p_\theta)$ in EiBI gravity, we have, along a congruence of radial null geodesics, \begin{equation} R_{\mu\nu}k^{\mu}k^{\nu}=\frac{2\kappa}{r^2}\left(\rho+p_\theta\right). \end{equation} Note that, for the energy-momentum mentioned above, the necessary and sufficient conditions to satisfy all the energy conditions are $\rho\geq 0$, $p_\theta\geq 0$ and $\rho\geq \vert p_\theta\vert$. Therefore, we must have $\kappa<0$ for the violation of the NCC, and hence, to have wormholes without violating the energy conditions. The spacetime geometry of a spherically symmetric, static wormhole of the Morris-Thorne class is generically written as \begin{equation} ds^2=-e^{2\Phi(r)} dt^2+\frac{dr^2}{1-\frac{b(r)}{r}}+r^2(d\theta ^2+\sin ^2\theta d\phi ^2), \label{eq:MT_wormhole1} \end{equation} where $\Phi(r)$ and $b(r)$ are, respectively, the redshift function and the wormhole shape function. 
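As a point of orientation (an illustrative example only, not one of the solutions constructed in this work), the massless Ellis-Bronnikov wormhole mentioned in the Introduction is of this form, with \begin{displaymath} \Phi(r)=0, \qquad b(r)=\frac{r_0^2}{r}, \end{displaymath} so that $1-b(r)/r=1-r_0^2/r^2$ vanishes at $r=r_0$ and tends to unity at spatial infinity.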
The wormhole throat, where the two different regions are connected, is given by $\left(1-\frac{b(r)}{r}\right)\Big\vert_{r_0}=0$, i.e., by $b(r_0)=r_0$, with $r_0$ being the radius of the throat. The redshift function $\Phi(r)$ is finite everywhere (from the throat to spatial infinity). Now, using (\ref{eq:psi_0}), we find that the physical metric (\ref{eq:physical_metric0}) becomes \begin{equation} ds_g^2=-e^{2\alpha(r)} dt^2+\frac{dr^2}{e^{2\alpha(r)}(1+\kappa\rho)}+r^2(d\theta ^2+\sin ^2\theta d\phi ^2). \label{eq:MT_wormhole2} \end{equation} Comparing (\ref{eq:MT_wormhole1}) and (\ref{eq:MT_wormhole2}), we find that the above spacetime represents a wormhole, provided the throat radius $r_0$ is a solution of $(1+\kappa\rho)\vert_{r_0}=0$, and $\alpha(r)$ is finite from the throat to spatial infinity. This can also be seen from the fact that the expansion scalar \begin{equation} \hat{\theta}=\nabla_\mu k^\mu=\pm \frac{2}{r}e^{-(\alpha+\beta)}=\pm \frac{2}{r}\sqrt{1+\kappa\rho} \end{equation} of a congruence of radial null geodesics passing through the wormhole must vanish at the wormhole throat. Therefore, the wormhole throat radius $r_0$ must be a solution of $(1+\kappa\rho)\vert_{r_0}=0$. To obtain exact wormhole solutions, we rewrite the physical and auxiliary metrics in the following forms: \begin{equation} ds_g^2=-\psi^2(r)f(r)dt^2+\frac{dr^2}{f(r)}+r^2\left(d\theta^2+\sin^2\theta d\phi^2\right), \end{equation} \begin{equation} ds_q^2=-G^2(r)F(r)dt^2+\frac{dr^2}{F(r)}+H^2(r)\left(d\theta^2+\sin^2\theta d\phi^2\right). \end{equation} Comparing the above ans\"{a}tze with Eqs. (\ref{eq:physical_metric0}) and (\ref{eq:auxiliary_metric0}), we find $e^{2\alpha}=\psi^2f$, $e^{2\beta}=\frac{1}{f}$, $e^{2\nu}=G^2F$ and $e^{2\Psi}=\frac{1}{F}$. Therefore, from Eqs. (\ref{eq:relations}) and (\ref{eq:psi_0}), we find that \begin{equation} f(r)=F(r)(1-\kappa p_\theta), \hspace{0.3cm} \psi(r)=\frac{1}{\sqrt{1+\kappa \rho}}, \label{eq:relation1} \end{equation} \begin{equation} G(r)=\frac{1-\kappa p_\theta}{\sqrt{1+\kappa\rho}}, \hspace{0.3cm} H(r)=r\sqrt{1+\kappa \rho}. \label{eq:relation2} \end{equation} Using Eq. (\ref{eq:relation2}) and the conservation equation (\ref{eq:conservation_eqn}), it can be shown that $H'=\frac{1-\kappa p_\theta}{\sqrt{1+\kappa\rho}}=G$. With $G=H'$, the $tt$ and $rr$ components of the field equation (\ref{eq:field_equation2}) become identical to each other. Therefore, we are left with two equations, coming from the $tt$ (or $rr$) and $\theta\theta$ components. The energy conservation equation (\ref{eq:conservation_eqn}) can be solved to obtain $\rho(r)$ for a given nonlinear electrodynamics model or equation of state between $\rho$ and $p_{\theta}$ of the anisotropic fluid. The $\theta\theta$ component of the field equation (\ref{eq:field_equation2}) can be solved to obtain $F(r)$. The other equation (i.e., the $tt$ or the $rr$ component of the field equation) will automatically be satisfied because of the energy conservation equation. The $\theta\theta$ component of the field equation is given by \begin{equation} 2\frac{H''}{H}+\frac{H'^2}{H^2}+\frac{H'F'}{HF}-\frac{1}{H^2F}=\frac{1}{\kappa F}\left[\frac{1}{1+\kappa\rho}-1\right], \end{equation} which can be integrated to obtain \begin{equation} F= \frac{1}{HH'^2}\left[C_1+H-\frac{H^3}{3\kappa}+\frac{1}{\kappa}\int^r \frac{H^2H'}{1+\kappa\rho} dr\right], \end{equation} where $C_1$ is an integration constant.
Therefore, we obtain \begin{eqnarray} f(r) &=& \frac{1-\kappa p_\theta}{HH'^2}\left[C_1+H-\frac{H^3}{3\kappa}+\frac{r^2H}{\kappa} -\frac{2}{\kappa}\int^r rH dr\right] \nonumber \\ &=&\frac{1+\kappa\rho}{1-\kappa p_{\theta}}\left[1+\frac{C_1}{r\sqrt{1+\kappa\rho}}-\frac{r^2}{3\kappa}(1+\kappa\rho)+\frac{r^2}{\kappa}-\frac{2}{\kappa r\sqrt{1+\kappa\rho}}\int^r r^2\sqrt{(1+\kappa\rho)} dr \right], \label{eq:general_f} \end{eqnarray} where we have used $H'=\frac{1-\kappa p_\theta}{\sqrt{1+\kappa\rho}}$. Since EiBI gravity reduces to GR in vacuum, we must recover the Schwarzschild solution there. This gives $C_1=-2M$, with $M$ being related to the mass. Therefore, we have obtained a complete general solution of the field equations. In the GR limit $\kappa\to 0$, $\psi=1$ and \begin{eqnarray} f(r)\big\vert_{\kappa\to 0}&=&1-\frac{2M}{r}-\frac{r^2}{3\kappa}(1+\kappa\rho)+\frac{r^2}{\kappa}-\frac{2}{\kappa r}\left(1-\frac{\kappa\rho}{2}\right)\int^r r^2\left(1+\frac{\kappa\rho}{2}\right) dr \nonumber\\ &=& 1-\frac{2M}{r}-\frac{1}{r}\int^r \rho r^2 dr, \end{eqnarray} which is the same as that in GR \citep{GR_NED}. For a Maxwell electric field ($p_\theta=\rho$), the integration of the energy conservation equation (\ref{eq:conservation_eqn}) gives $\rho=\frac{Q^2}{r^4}$, where $Q$ is an integration constant representing the charge. Putting this in the last equation, we obtain the metric function of the Reissner--Nordstr\"{o}m spacetime. The general solution (\ref{eq:general_f}) can represent both black holes and wormholes, depending on the signs and values of the different parameters. The black hole solutions are characterized by event horizons given by the roots of $f(r_H)=0$, with $r_H$ being the radius of an event horizon. However, in this work, we only analyze the wormhole solutions. As we have already shown, wormhole solutions are possible only when $\kappa<0$. The radius $r_0$ of a wormhole throat is given by $(1+\kappa\rho)\vert_{r_0}=0$. Other necessary conditions which have to be satisfied to construct a wormhole are the no-horizon condition and the flare-out condition at the throat. The metric function $\psi^2f$ must be nonzero, positive (no-horizon condition) and finite everywhere (from the throat to spatial infinity). However, because of the $(1+\kappa\rho)$ factor in the denominator of the terms containing the mass $M$ and the integral in (\ref{eq:general_f}), $\psi^2f$ diverges as $r\to r_0$. We can, however, remove this divergence if we demand that, at the wormhole throat $r_0$, \begin{equation} M=\frac{1}{|\kappa|}\int^{r_0} r^2\sqrt{1+\kappa\rho} dr. \label{eq:general_condition} \end{equation} In fact, the above condition not only removes the divergence in $\psi^2f$, it also removes the curvature divergences, thereby making the spacetime regular everywhere. This can be checked by finding the Ricci scalar $\mathcal{R}$. Expanding the metric functions around $r=r_0$ or using the l'H\^{o}pital rule at $r=r_0$, we obtain \begin{equation} \mathcal{R}\big\vert_{r_0}=-\frac{(1-x)\kappa p_0'}{r_0(1-\kappa p_0)}+\frac{2x}{3r_0^2} \kappa p_0+\frac{2}{r_0^2}(2x-1), \label{eq:ricci} \end{equation} where $x=\frac{r_0^2}{|\kappa|}$ and $p_0$ is the tangential pressure at the throat $r=r_0$, i.e., $p_0=p_\theta(r_0)$. Note that the Ricci scalar is finite at the throat. In terms of $f(r)$, the flare-out condition reads $\frac{f'}{2(1-f)^2}>0$ \citep{rajibul_2015}.
At $r=r_0$, we have \begin{equation} f(r)\big|_{r_0}=0, \hspace{0.3cm} \psi^2(r)f(r)\big|_{r_0}=\frac{1}{1-\kappa p_0}(1-x) \end{equation} \begin{equation} \frac{f'}{2(1-f)^2}\Big|_{r_0}=\frac{1}{r_0}(1-x). \end{equation} Note that, to satisfy the no-horizon condition as well as the flare-out condition at the throat, we must have $x<1$, i.e., $r_0<\sqrt{|\kappa|}$. Since, for $x<1$, $f=0$ and $f'>0$ at the throat, $f(r)$ does not have any zeroes at $r>r_0$. On the other hand, for $x>1$, it always possesses zeroes at $r>r_0$. Therefore, we always have a wormhole solution for $x<1$ and a regular black hole (or a wormhole whose throat is covered by an event horizon) solution for $x>1$. The critical value $x_c=1$ distinguishes the wormhole and black hole solutions. \section{Some specific examples} \label{sec:examples} \subsection{Power-law Maxwell field} For a power-law Maxwell electric field, $\varphi= F^{\beta}$. From Eq. (\ref{eq:nonlinear_electrodynamics}), we obtain \begin{equation} \rho=-p_r =(2\beta-1)F^\beta, \hspace{0.3cm} p_\theta =F^\beta=\alpha\rho, \end{equation} where $\alpha=\frac{1}{2\beta-1}$, i.e., $\beta=\frac{1+\alpha}{2\alpha}$. For $\alpha=1$, it represents the energy-momentum tensor of a Maxwell field. Wormhole solutions with the above type of energy-momentum tensor have already been obtained in \cite{rajibul_2015}. Here, we show that we can retrieve these solutions as a special case of the general solution (\ref{eq:general_f}). The energy conservation equation (\ref{eq:conservation_eqn}) can be integrated to obtain \begin{equation} \rho=\frac{C_0}{r^{2(\alpha+1)}}, \end{equation} where $C_0$ is an integration constant. The integration in $f(r)$ can be performed to obtain \begin{eqnarray*} \int^r r^2\sqrt{1+\frac{\kappa C_0}{r^{2(\alpha+1)}}}dr &=& \frac{r^3}{3}\sqrt{1+\frac{\kappa C_0}{r^{2(\alpha+1)}}}+\frac{1}{3}(\alpha+1)\kappa C_0 I(r), \end{eqnarray*} where \begin{eqnarray} I(r)&=&\int^r \frac{1}{r^{2\alpha}\sqrt{1+\frac{\kappa C_0}{r^{2(\alpha+1)}}}} dr\label{eq:integration_I(r)}\\ &=& \left\{ \begin{array}{lr} \frac{2}{3}\log\left[\left(\frac{r}{r_0}\right)^{\frac{3}{2}}+\sqrt{\left(\frac{r}{r_0}\right)^{3}\mp 1}\right] & : \alpha =\frac{1}{2}\\ \frac{r^{1-2\alpha}}{1-2\alpha} {}_2F_1\left[\frac{1}{2},\frac{2\alpha-1}{2\alpha+2},\frac{4\alpha+1}{2\alpha+2};\pm\left(\frac{r_0}{r}\right)^{2\alpha+2} \right] & : \alpha \neq \frac{1}{2} \end{array} \right. , \nonumber \end{eqnarray} where the upper and lower signs are for $\kappa<0$ and $\kappa>0$, respectively, and $r_0=(|\kappa| C_0)^{\frac{1}{2(\alpha+1)}}$. Therefore, we obtain \begin{equation} f(r)=\frac{1+\frac{\kappa C_0}{r^{2(\alpha+1)}}}{1-\frac{\alpha\kappa C_0}{r^{2(\alpha+1)}}}\left[1-\frac{2M}{r\sqrt{1+\frac{\kappa C_0}{r^{2(\alpha+1)}}}}-\frac{C_0}{3r^{2\alpha}}-\frac{2(\alpha+1)C_0}{3r\sqrt{1+\frac{\kappa C_0}{r^{2(\alpha+1)}}}}I(r) \right], \nonumber \end{equation} which is the same as that obtained in \cite{rajibul_2015}. For $\kappa<0$, $r_0=(|\kappa| C_0)^{\frac{1}{2(\alpha+1)}}$ is the wormhole throat radius. Therefore, for $\kappa<0$, Eqs. (\ref{eq:general_condition}) and (\ref{eq:ricci}) become \begin{eqnarray*} M &=&-\frac{(\alpha+1)r_0^{2(\alpha+1)}}{3|\kappa|}I(r_0)\\ &=& \left\{ \begin{array}{lr} 0 & : \alpha =\frac{1}{2}\\ \frac{(\alpha+1)r_0^3}{3(2\alpha-1)|\kappa|} \hspace{0.1cm} {}_2F_1\left[\frac{1}{2},\frac{2\alpha-1}{2\alpha+2},\frac{4\alpha+1}{2\alpha+2};1 \right] & : \alpha \neq \frac{1}{2} \end{array} \right.
, \end{eqnarray*} \begin{equation} \mathcal{R}\big|_{r_0}=-\frac{1}{r_0^2}\left[2(\alpha+1)-4\left(\frac{\alpha}{3}+1\right)x\right]. \nonumber \end{equation} The above results match those obtained in \cite{rajibul_2015}. For Maxwell electrodynamics, $\alpha=1$ and $C_0=Q^2$, with $Q$ being the charge. In this Maxwell electrodynamics case, $f(r)$ becomes \begin{equation} f(r) = \left(\frac{1+\frac{\kappa Q^2}{r^4}}{1-\frac{\kappa Q^2}{r^4}}\right)\left[1-\frac{2M}{r\sqrt{1+\frac{\kappa Q^2}{r^4}}}-\frac{Q^2}{3r^2}+\frac{4Q^2}{3r^2\sqrt{1+\frac{\kappa Q^2}{r^4}}}\,{}_2F_1\left(\frac{1}{2},\frac{1}{4};\frac{5}{4};-\frac{\kappa Q^2}{r^4}\right)\right], \end{equation} where we have used $r_0^4=|\kappa|Q^2$ in $I(r)$. For $\kappa<0$, $r_0$ is the throat radius. \subsection{Born-Infeld electrodynamics} For a static electric field in Born-Infeld electrodynamics, $\varphi$ is given by \begin{equation} \varphi(F)=2b^2\left(1-\sqrt{1-\frac{F}{b^2}}\right), \label{eq:born_infeld_electrodynamics} \end{equation} where $b$ is the Born-Infeld electrodynamics parameter. In the limit $b^2\to \infty$, it reduces to Maxwell electrodynamics. Black hole solutions in EiBI gravity coupled to the above Born-Infeld electrodynamics have been obtained in \cite{soumya_2015}. Here, we highlight the wormhole solutions supported by the above nonlinear electrodynamics. From Eqs. (\ref{eq:nonlinear_electrodynamics}) and (\ref{eq:born_infeld_electrodynamics}), it can be shown that \begin{equation} p_\theta=\frac{\rho}{1+\frac{\rho}{2b^2}}, \label{eq:BI_pressure} \end{equation} which can be used to integrate the conservation equation (\ref{eq:conservation_eqn}) to obtain \begin{equation} \rho=2b^2\left(\sqrt{1+\frac{Q^2}{b^2r^4}}-1\right), \label{eq:BI_energy} \end{equation} where $Q$ is an integration constant representing the charge. In this case, however, it is difficult to perform the integration in $f(r)$ analytically in the Schwarzschild gauge. To perform the integration analytically, we consider the coordinate transformation \begin{eqnarray} H(r)&=&r\sqrt{1+\kappa\rho}=\bar{r}(r) \nonumber\\ \Rightarrow \bar{r}(r)&=&r\sqrt{1+2\kappa b^2\left(\sqrt{1+\frac{Q^2}{b^2r^4}}-1\right)} \label{eq:coordinate_transformation} \end{eqnarray} and obtain \begin{equation} \int^r r^2\sqrt{(1+\kappa\rho)} dr=\frac{1}{2}r^2\bar{r}-\frac{1}{2}\int^r r^2\bar{r}' dr=\frac{1}{2}r^2\bar{r}-\frac{1}{2}\int^{\bar{r}} \frac{\bar{r}^2}{1+\kappa\rho} d\bar{r}. \label{eq:BI_integration_1} \end{equation} Now putting $r=\bar{r}/\sqrt{1+\kappa\rho}$ in (\ref{eq:coordinate_transformation}), we obtain \begin{equation} \frac{1}{1+\kappa\rho}=\frac{1-2\kappa b^2\left(1+\sqrt{1+\frac{Q^2}{b^2\bar{r}^4}-\frac{4\kappa Q^2}{\bar{r}^4}}\right)}{1-4\kappa b^2}. \end{equation} Using the above expression and defining $4\kappa b^2=\alpha$, the integration on the right-hand side of (\ref{eq:BI_integration_1}) becomes \begin{eqnarray} \int^{\bar{r}} \frac{\bar{r}^2}{1+\kappa\rho} d\bar{r}&=&\frac{2-\alpha}{6(1-\alpha)}\bar{r}^3-\frac{\alpha}{2(1-\alpha)}\int^{\bar{r}} \bar{r}^2\sqrt{1+\frac{4\kappa Q^2(1-\alpha)}{\alpha\bar{r}^4}}d\bar{r}\nonumber\\ &=&\frac{2-\alpha}{6(1-\alpha)}\bar{r}^3-\frac{\alpha}{6(1-\alpha)}\bar{r}^3\sqrt{1+\frac{4\kappa Q^2(1-\alpha)}{\alpha\bar{r}^4}}\nonumber\\ & & -\frac{4}{3}\kappa Q^2\int^{\bar{r}}\frac{d\bar{r}}{\bar{r}^2\sqrt{1+\frac{4\kappa Q^2(1-\alpha)}{\alpha\bar{r}^4}}}.
\label{eq:BI_integration_2} \end{eqnarray} Note that the integration on the right-hand side of (\ref{eq:BI_integration_2}) is similar to that in Eq. (\ref{eq:integration_I(r)}) (with $\alpha=1$ there). This gives \begin{equation} \int^{\bar{r}}\frac{d\bar{r}}{\bar{r}^2\sqrt{1+\frac{4\kappa Q^2(1-\alpha)}{\alpha\bar{r}^4}}}=-\frac{1}{\bar{r}} {\;}_2F_1\left[\frac{1}{2},\frac{1}{4},\frac{5}{4};-\frac{4\kappa Q^2(1-\alpha)}{\alpha\bar{r}^4} \right]. \label{eq:BI_integration_3} \end{equation} Combining (\ref{eq:BI_integration_1}), (\ref{eq:BI_integration_2}) and (\ref{eq:BI_integration_3}), we obtain \begin{eqnarray} \int^r r^2\sqrt{(1+\kappa\rho)} dr &=& \frac{\bar{r}^3}{6(1-\alpha)}\left(2-\alpha-\alpha\sqrt{1+\frac{4\kappa Q^2(1-\alpha)}{\alpha\bar{r}^4}}\right)\nonumber\\ & & -\frac{2\kappa Q^2}{3\bar{r}} {\;}_2F_1\left[\frac{1}{2},\frac{1}{4},\frac{5}{4};-\frac{4\kappa Q^2(1-\alpha)}{\alpha\bar{r}^4} \right]. \label{eq:BI_integration_4} \end{eqnarray} So, finally, we obtain \begin{eqnarray} f(\bar{r}) &=& \frac{1+\kappa\rho}{1-\kappa p_\theta}\left[1-\frac{2M}{\bar{r}}+\frac{\alpha\bar{r}^2}{6\kappa(1-\alpha)}\left(1-\sqrt{1+\frac{4\kappa Q^2(1-\alpha)}{\alpha\bar{r}^4}}\right)\right. \nonumber\\ & & \left. +\frac{4Q^2}{3\bar{r}^2}{\;}_2F_1\left[\frac{1}{2},\frac{1}{4},\frac{5}{4};-\frac{4\kappa Q^2(1-\alpha)}{\alpha\bar{r}^4} \right]\right], \end{eqnarray} which is the same as that obtained in \cite{soumya_2015} for $-\infty<\alpha<1$. In the above expression, $p_\theta$, $\rho$ and $\bar{r}(r)$ are, respectively, given by (\ref{eq:BI_pressure}), (\ref{eq:BI_energy}) and (\ref{eq:coordinate_transformation}). The above solution represents a wormhole for $\kappa<0$, i.e., for $\alpha<0$. The wormhole throat radius $r_0$ is given by $(1+\kappa\rho)|_{r_0}=0$. This gives $\bar{r}(r_0)=0$ or $r_0=\left[4|\kappa|^2b^2Q^2/(1+4|\kappa|b^2)\right]^{1/4}$. In the Maxwell electrodynamics limit $b^2\to \infty$, the throat radius becomes $r_0=(|\kappa|Q^2)^{1/4}$ which is the same as that obtained in the previous subsection. \subsection{Anisotropic fluid with $p_\theta=\rho(1+\kappa\rho)$} As the third example, we consider an anisotropic fluid with the equation of state $p_\theta=\rho(1+\kappa\rho)$. The energy conservation equation (\ref{eq:conservation_eqn}) can be integrated to obtain \begin{equation} \rho=\frac{C_0}{r^4}\frac{1}{1-\frac{\kappa C_0}{2r^4}}, \nonumber \end{equation} where $C_0$ is an integration constant. Putting $r=\frac{1}{z}$, the integration in $f(r)$ can be performed using MATHEMATICA. We obtain \begin{eqnarray*} \int^r r^2\sqrt{(1+\kappa\rho)} dr&=&-\int^z \frac{\sqrt{1+\frac{\kappa C_0}{2}z^4}}{\sqrt{1-\frac{\kappa C_0}{2}z^4}}\frac{dz}{z^4}\\ &=& \frac{1}{3z^3}\sqrt{1-\frac{\kappa^2 C_0^2}{4}z^8}-\frac{\kappa C_0}{2}z {\;}_2F_1\left[\frac{1}{2},\frac{1}{8},\frac{9}{8};\frac{\kappa^2C_0^2}{4}z^8 \right]\\ && +\frac{\kappa^2 C_0^2}{4}\frac{z^5}{15} {\;}_2F_1\left[\frac{1}{2},\frac{5}{8},\frac{13}{8};\frac{\kappa^2 C_0^2}{4}z^8 \right]. \end{eqnarray*} Therefore, we obtain, after some manipulations, \begin{eqnarray} f(r) &=& \frac{1+\kappa\rho}{1-\kappa p_{\theta}}\left[1-\sqrt{\frac{1-\frac{\kappa C_0}{2r^4}}{1+\frac{\kappa C_0}{2r^4}}}\left(\frac{2M}{r}-\frac{C_0}{r^2}{\;}_2F_1\left[\frac{1}{2},\frac{1}{8},\frac{9}{8};\frac{\kappa^2C_0^2}{4r^8} \right]\right. \right. \nonumber\\ & & \left. \left. +\frac{\kappa C_0^2}{30r^6}{\;}_2F_1\left[\frac{1}{2},\frac{5}{8},\frac{13}{8};\frac{\kappa^2 C_0^2}{4r^8} \right]\right) -\frac{\kappa C_0^2}{6r^6}\right]. 
\end{eqnarray} For $\kappa<0$, the wormhole throat radius is given by $(1+\kappa\rho)\vert_{r_0}=0$ which gives $r_0=\left(\frac{|\kappa| C_0}{2}\right)^{1/4}$. In this case, Eqs. (\ref{eq:general_condition}) and (\ref{eq:ricci}) become \begin{equation} M=\frac{r_0^3}{|\kappa|} {\;}_2F_1\left[\frac{1}{2},\frac{1}{8},\frac{9}{8};1 \right]+\frac{r_0^3}{15|\kappa|} {\;}_2F_1\left[\frac{1}{2},\frac{5}{8},\frac{13}{8};1 \right], \end{equation} \begin{equation} \mathcal{R}\big\vert_{r_0}=\frac{2x}{r_0^2}. \end{equation} Note that, for $\kappa<0$, $p_\theta$ vanishes at the throat and approaches $\rho$ asymptotically. Therefore, $0\leq p_{\theta}\leq \rho$ always, implying that all the energy conditions are satisfied. \section{Conclusion} \label{sec:conclusion} In this work, we have established a relationship between the NCC and the NEC in EiBI gravity. We have shown that, in contrast to GR, in EiBI gravity, a violation of the NCC does not necessarily lead to violations of the various energy conditions, thereby implying that wormholes can be supported by nonexotic matter in this gravity theory. Subsequently, we have obtained exact solutions of the field equations in EiBI gravity coupled to arbitrary nonlinear electrodynamics and anisotropic fluids having energy-momentum of the form $T^\mu_{\;\nu}={\rm diag}(-\rho,-\rho,p_\theta,p_\theta)$. Depending on the signs and values of different parameters, the general solutions can represent both black holes and wormholes. In this work, we have analyzed the wormhole solutions. We have found that the EiBI theory parameter $\kappa$ must be negative so that the wormholes are supported by matter which satisfies all the energy conditions, even though the NCC is violated. As special cases of our general solutions, we have obtained several specific wormhole solutions by considering Maxwell, power-law, Born-Infeld electrodynamics models and an anisotropic fluid having energy-momentum of the form $T^\mu_{\;\nu}={\rm diag}(-\rho,-\rho,\rho(1+\kappa\rho),\rho(1+\kappa\rho))$. Currently, we are studying the black hole aspects of these solutions and hope to report our results in the future. \section*{Acknowledgments} The author acknowledges the Council of Scientific and Industrial Research, India under whose fellowship program a part of the work was done at IIT Kharagpur. He also acknowledges Sayan Kar for a careful reading of the manuscript.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Let $K$ be a field, $X$ be a countable set and $K\langle X \rangle$ denote the free associative algebra, freely generated by a set $X$ (i.e., the set of noncommutative polynomials in the variables of $X$). If $f(x_1, \dots, x_m)\in K\langle X\rangle$, and $A$ is a $K$-algebra, $f$ defines a map (also denoted by $f$) \[\begin{array}{cccc} f: & A^m & \longrightarrow & A\\ & (a_1, \dots, a_m) & \longmapsto & f(a_1, \dots, a_m) \end{array}\] by evaluation of variables on elements of $A$. One may ask what the image of a given polynomial is, or which subsets of $A$ are the image of some polynomial in $K\langle X \rangle$. Problems of this type, for $A=M_n(K)$, are attributed to Kaplansky. Also, if $f$ is a multilinear polynomial, Lvov asked if the image of $f$ is always a vector subspace of $M_n(K)$ (see \cite[Problem 1.98]{Dniester}). One can prove that if the answer to this question is affirmative, then the image of $f$ must be one of the following: \[\{0\}, \quad K, \quad sl_n(K) \quad \textrm{ or } \quad M_n(K).\] Here $K$ represents the set of scalar matrices and $sl_n(K)$ the set of trace zero matrices. This is now known as the Lvov-Kaplansky conjecture: \begin{conjecture}[Lvov-Kaplansky conjecture] If $f(x_1, \dots, x_m) \in K\langle X \rangle$ is a multilinear polynomial, then its image on $M_n(K)$ is $\{0\}$, $K$, $sl_n(K)$ or $M_n(K)$. \end{conjecture} A solution to the above conjecture is known only for $m=2$ or $n=2$ (under some restrictions on the base field $K$), and there are partial results for $n=3$ and $m=3$; see \cite{survey} for a compilation of known results about this conjecture and other topics related to images of polynomials on algebras. This kind of problem was also studied for other algebras, not necessarily associative. For instance, when $A$ is the algebra of upper triangular matrices, $UT_n(K)$, or its subset of strictly upper triangular matrices, a complete solution is known under some conditions on the base field $K$ (see \cite{GargatedeMello, Wang_nxn, Fagundes}). If $A$ is the quaternion algebra, a complete solution was given in \cite{MalevQ}. In the non-associative setting, a complete solution is known for the octonion algebra \cite{MalevO} and for some classes of Jordan Algebras, including the (simple) algebra of a symmetric bilinear form \cite{MalevJ}. A related conjecture is the so-called Mesyan conjecture (see \cite{Mesyan} and \cite{MesyanRestated}). It is a weaker version of the Lvov-Kaplansky conjecture and can be stated as follows: \begin{conjecture}[Mesyan conjecture] Let $K$ be a field, let $n\geq 2$ and $m\geq 1$ be integers, and let $f(x_1, \dots, x_m)$ be a nonzero multilinear polynomial in $K\langle x_1, \dots, x_m\rangle$. If $m\leq 2n-1$ then the image of $f$ on $M_n(K)$ contains $sl_n(K)$. \end{conjecture} This conjecture was proved for $m\leq 4$ (\cite{Mesyan, BuzinskiWinstanley, MesyanRestated}). The theory of images of polynomials on algebras is strongly connected with the theory of algebras with polynomial identities (PI-algebras). For instance, a polynomial $f$ is a polynomial identity (PI) of $A$ if its image is $\{0\}$, and $f$ is a central polynomial for $A$ if the image of $f$ is contained in the center of $A$. Also, the theory of polynomial identities provides interesting results on which one can rely to study images of polynomials. For instance, in the solution of the case $n=2$ of the Lvov-Kaplansky conjecture, a key argument is based on the fact that the algebra of generic matrices is a domain.
Recall that the algebra of generic matrices is an algebra generated by matrices in which the entries are distinct variables (see \cite[Chapter 7]{Drensky}). It is well-known that such algebra is isomorphic to the quotient algebra $\frac{K\langle X \rangle}{Id(M_n(K))}$, where $Id(A)$ denotes the ideal of polynomial identities of an algebra $A$, i.e., the algebra of polynomials modulo the identities of $A$. A usual approach in studying polynomial identities of algebras is the use of gradings, especially after the seminal work of Kemer \cite{Kemer}, where gradings were used to give a positive solution to the Specht problem in characteristic zero. Gradings provide an interesting approach to study identities, since one usually reduces the problem of evaluating elements in the whole algebra to evaluating elements in some particular vector subspaces. For instance, one can show that if two algebras satisfy the same graded polynomial identities, then they satisfy the same ordinary identities. Following this line of research, it is a natural step in studying images of polynomials to consider images of graded polynomials on graded algebras. This was done recently in the papers \cite{CentronedeMello, PlamenPedro} for full and upper triangular matrices. In this case, one needs to work with the so-called \emph{graded polynomials} (see \cite{CentronedeMello}). Although in the present paper we are still considering gradings, our approach here is somewhat different from that of the above-mentioned papers. We will use gradings and images of multilinear polynomials to obtain results about ordinary polynomial identities and central polynomials, and to present an equivalent statement to the Lvov-Kaplansky conjecture. The paper is organized as follows: in section 2 we present the preliminary concepts and results needed in the paper. In section 3, we present a statement (Theorem \ref{identities}) that gives necessary and sufficient conditions for a multilinear polynomial $f$ to be an identity for $M_n(K)$. In section 4, we present necessary and sufficient conditions for a multilinear polynomial to be a central polynomial, together with an equivalent formulation of the Lvov-Kaplansky conjecture, and in section 5 we give some applications of our results. \section{Preliminaries} Let $K$ be a field and $G$ be a group (with multiplicative notation). We say a $K$-algebra $A$ is a $G$-graded algebra if there exist subspaces $A_g$, for each $g\in G$, such that \[A = \oplus_{g\in G}A_g\] and $A_gA_h\subseteq A_{gh}$ for each $g, h\in G$. The elements of the subspace $A_g$ are called homogeneous of degree $g$. Gradings on the matrix algebra $A=M_n(K)$ have been completely classified when $K$ is an algebraically closed field of characteristic zero. Essentially, they can be presented as a tensor product of a matrix algebra with a kind of grading called \emph{elementary} and a graded division algebra \cite{BahturinZaicev}. A particular kind of elementary grading on $M_n(K)$ is the so-called Vasilovsky grading over the group $\mathbb{Z}_n$. In such a grading, the component $g$ is the subspace spanned by the matrices $E_{i,j}$ such that $j-i = g$ in $\mathbb{Z}_n$. Here $E_{i,j}$ denotes the matrix with 1 in entry $(i,j)$ and 0 elsewhere (if $i$ or $j\not \in \{1, \dots, n\}$, we consider their representatives modulo $n$).
So for the Vasilovsky grading, the component $t$ is as below \[\left( \begin{array}{ccccccc} 0 & \cdots & 0 & a_{1,t+1} & 0 & \cdots & 0 \\ 0 & \cdots & 0 & 0 & a_{2,t+2} & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & 0 & 0 & \cdots & a_{n-t,n} \\ a_{n-t+1,1} & \cdots & 0 & 0 & 0 & \cdots & 0 \\ 0 & \ddots & 0 & 0 & 0 & \cdots & 0 \\ 0 & \cdots & a_{n,t} & 0 & 0 & \cdots & 0 \\ \end{array} \right). \] The graded polynomial identities of matrices with this grading have been described in \cite{Vasilovsky} for fields of zero characteristic and in \cite{Azevedo} for infinite fields. Also, the central polynomials were described in \cite{Brandao}. When dealing with polynomial identities over fields of characteristic zero, the multilinear polynomials play an important role. Namely, the ideal of identities of a given algebra $A$ is generated (as a T-ideal) by its multilinear polynomial identities. In particular, the sequence of codimensions of a given PI-algebra provides an important way to study (asymptotically) identities of a given algebra or variety of algebras. A polynomial $f(x_1, \dots, x_m)\in K\langle X \rangle$ is called multilinear if it can be written as \[f(x_1, \dots, x_m) = \sum_{\sigma\in S_m}\alpha_\sigma x_{\sigma(1)} \cdots x_{\sigma(m)},\] for some $\alpha_{\sigma}\in K$. Here $S_m$ stands for the symmetric group on $\{1, \dots, m\}$. Since the Lvov-Kaplansky conjecture asks if the image of a multilinear polynomial is a vector subspace, it is important to know what kind of structure such a set has. One can easily see that the image of a polynomial $f$ on an algebra $A$ is invariant under automorphisms of the algebra. Indeed, one just needs to notice that if $\varphi$ is an automorphism of $A$, for any $a_1, \dots, a_m\in A$ one has \[\varphi(f(a_1, \dots, a_m)) = f(\varphi(a_1), \dots, \varphi(a_m)).\] In the case of $A=M_n(K)$ this is the same as saying that the image of a polynomial is invariant under conjugation. Also, if the polynomial $f$ is linear in one of its variables, then the image of $f$ is closed under scalar multiplication. The notion of \emph{invariant cone} was defined in \cite{K-BMR} for matrix algebras. We say that a subset of $A$ is an \emph{invariant cone} of $A$ if it is closed under conjugation and scalar multiplication. Such a cone is said to be \emph{irreducible} if it contains no proper invariant cone. Observe that if $S$ is an irreducible invariant cone in $A$ and the image of $f$ intersects $S$ nontrivially, then $S$ is contained in the image of $f$. \section{A new approach to polynomial identities of matrices} In this section we present a new approach to study polynomial identities on matrices, which relies on the fact that the image of a polynomial is invariant under endomorphisms of algebras. In order to present our main result, we consider $A=\oplus_{g\in \mathbb{Z}_n}A_g$ to be the algebra of $n\times n$ matrices over $K$ endowed with the Vasilovsky grading and we consider the following statement for a multilinear polynomial $f\in K\langle X \rangle$. \begin{enumerate} \item [(S0)] If $a_1, \dots, a_m\in M_n(K)$ are homogeneous matrices satisfying $\sum _{i=1}^m\deg(a_i)=0$ then $f(a_1,\dots, a_m)=0$. \end{enumerate} \begin{theorem}\label{identities} Let $f(x_1,\dots, x_m) \in K\langle X \rangle$ be a multilinear polynomial. Then $f$ is a polynomial identity for $M_n(K)$ if and only if $f$ satisfies \emph{(S0)}. \end{theorem} \begin{proof} The ``only if'' part is trivial.
Let us assume $f$ satisfies (S0), that is, for any homogeneous $a_1,\cdots,a_m$ satisfying $\sum_{i=1}^m \deg(a_i)= 0$ we have $f(a_1, \dots, a_m)=0$. Taking arbitrary elements $b_1,\dots,b_m\in M_n(K)$ and writing them as \[b_j=\displaystyle \sum_{i\in \mathbb{Z}_n}a_i^{(j)}, \textrm{ for each } j\in \{1, \dots, m\},\] with $\deg(a_i^{(j)})=i$, for $j\in \{1, \dots, m\}$ and $i\in \mathbb{Z}_n$, the multilinearity of $f$ implies that we may open the brackets to obtain \begin{align*} f(b_1,\dots,b_m) & =\sum_{i_j\in \mathbb{Z}_n}f(a^{(1)}_{i_1}, \dots, a^{(m)}_{i_m})\\ & = \sum_{i_1+\cdots +i_m=\overline{0}}f(a^{(1)}_{i_1}, \dots, a^{(m)}_{i_m}) + \sum_{i_1+\cdots +i_m\neq \overline{0}}f(a^{(1)}_{i_1}, \dots, a^{(m)}_{i_m}) \end{align*} Since $f$ satisfies (S0), each summand in the first sum vanishes, and we obtain \[f(b_1,\dots,b_m) = \sum_{i_1+\cdots +i_m\neq \overline{0}}f(a^{(1)}_{i_1}, \dots, a^{(m)}_{i_m})\] The above means that the image of $f$ on $M_n(K)$ is a subset of the set of zero diagonal matrices (also known as \emph{hollow matrices}). But Theorem 2 of \cite{Fillmore} (see also \cite{Fillmorerestated}) implies that if $a\in M_n(K)$ ($n\geq 2$) is nonzero and has zero diagonal, then it is conjugate to a matrix whose $(1,1)$ entry is nonzero. Since the image of $f$ is invariant under conjugation, it follows that $f(b_1,\dots,b_m)=0$, and $f$ is a polynomial identity for $M_n(K)$. \end{proof} Notice that the above theorem provides a weaker condition to verify whether a multilinear polynomial is an identity for $M_n(K)$. If $f$ is a multilinear polynomial, in order to verify that it is an identity for $M_n(K)$, it is enough to verify that it vanishes under evaluations on a set of homogeneous elements which spans $M_n(K)$. As a consequence, we have \begin{corollary} Let $f(x_1, \dots, x_m)\in K\langle X \rangle$ be a multilinear polynomial. Then $f$ is a polynomial identity for $M_n(K)$ if and only if for any set of homogeneous elements $a_1, \dots, a_k$ which spans $M_n(K)$, $f(a_{i_1}, \dots, a_{i_m}) = 0$ whenever $i_j\in \{1, \dots, k\}$ and $\sum_{j=1}^m\deg(a_{i_j})=0$. \end{corollary} The above theorem and more specifically its corollary may be a useful tool when considering computational approaches to polynomial identities (for instance, as in \cite{Bondari}), since they reduce the computational effort needed to verify whether a given multilinear polynomial is an identity. \begin{remark} An analogous result (with a completely similar proof) holds when considering $M_n(K)$ endowed with any elementary $G$-grading by an abelian group $G$, whose neutral component is the set of diagonal matrices. \end{remark} \section{An equivalence to the Lvov-Kaplansky Conjecture} Let $f(x_1,\dots,x_m)\in K\langle X \rangle$ be a multilinear polynomial and let $A=\oplus_{g\in \mathbb{Z}_n}A_g$ be the algebra of $n\times n$ matrices over $K$ endowed with the Vasilovsky grading. Let us consider the following statements concerning the image of $f$ on $A$: \begin{enumerate} \item [(S1)] If $a_1, \dots, a_m\in M_n(K)$ are homogeneous matrices satisfying $\sum_{i=1}^m \deg(a_i)\neq 0$ then $f(a_1,\dots, a_m)=0$. \item [(S2)] If $a_1, \dots, a_m\in M_n(K)$ are homogeneous matrices satisfying $\sum _{i=1}^m\deg(a_i)=0$ then $tr(f(a_1,\dots, a_m))=0$. \end{enumerate} In this section we show that statements such as the above can be used to gain a better understanding of the image of $f$. In particular, the above are useful to characterize central polynomials and polynomial identities for matrix algebras. We recall that we denote by $K$ the set of scalar matrices.
In particular, $Im(f)\subseteq K$ means that $f$ is a central polynomial for $M_n(K)$. \begin{lemma}\label{S1S2} Let $f(x_1,\dots,x_m)\in K\langle X \rangle$ be a multilinear polynomial. Then \begin{enumerate} \item $Im(f)\subseteq K$ if and only if $f$ satisfies \emph{(S1)}. \item $Im(f)\subseteq sl_n(K)$ if and only if $f$ satisfies \emph{(S2)}. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item The proof is similar to the proof of Theorem \ref{identities}. Again, the ``only if'' part is trivial. Assume $f$ satisfies {(S1)}, that is, for any homogeneous $a_1,\cdots,a_m$ satisfying $\sum_{i=1}^m \deg(a_i)\neq 0$ we have $f(a_1,\cdots,a_m)=0$. Let $b_1,\cdots, b_m\in M_n(K)$ and write them as \[b_j=\displaystyle \sum_{i\in \mathbb{Z}_n}a_i^{(j)},\] with $\deg(a_i^{(j)})=i$, for $j\in \{1, \dots, m\}$ and $i\in \mathbb{Z}_n$. Since $f$ is multilinear \begin{align*} f(b_1,\cdots,b_m)&=\displaystyle \sum_{i_1+\cdots+i_m=0}f(a_{i_1}^{(1)},\cdots,a_{i_m}^{(m)}) +\displaystyle \sum_{i_1+\cdots+i_m\neq 0}f(a_{i_1}^{(1)},\cdots,a_{i_m}^{(m)})\\ &=\displaystyle \sum_{i_1+\cdots+i_m=0}f(a_{i_1}^{(1)},\cdots,a_{i_m}^{(m)}). \end{align*} This implies that $Im(f)$ lies in the subset of diagonal matrices. But it is well known that if a diagonal matrix is not scalar, then it is conjugate to a nondiagonal matrix, which cannot occur, since $Im(f)$ is invariant under conjugation. As a consequence, $Im(f)\subseteq K$, i.e., $f$ is a central polynomial. \item Again, the ``only if'' part is trivial. Let us now assume that $f$ satisfies {(S2)}, that is, for any homogeneous $a_1,\cdots,a_m$ satisfying $\sum_{i=1}^m \deg(a_i)= 0$ we have $tr(f(a_1, \dots, a_m))=0$. Again, taking arbitrary $b_1,\cdots, b_m\in M_n(K)$ and writing them as \[b_j=\displaystyle \sum_{i\in \mathbb{Z}_n}a_i^{(j)},\] with $\deg(a_i^{(j)})=i$, for $j\in \{1, \dots, m\}$ and $i\in \mathbb{Z}_n$, the multilinearity of $f$ implies that \[tr(f(b_1,\dots, b_m)) = tr(\sum_{i_1+\cdots +i_m\neq \overline{0}} f(a^{(1)}_{i_1}, \dots, a^{(m)}_{i_m}))=0.\] Indeed, the terms with $i_1+\cdots +i_m= \overline{0}$ have zero trace by (S2), while the remaining terms are homogeneous of nonzero degree and therefore have zero diagonal. This means $Im(f)\subseteq sl_n(K)$. \end{enumerate} \end{proof} From now on, we assume $K$ to be a field of characteristic $0$ or $p$ such that $p$ does not divide $n$. Putting together the above lemma and Theorem \ref{identities}, we obtain \begin{corollary} The multilinear polynomial $f(x_1,\dots, x_m)$ satisfies \emph{(S0)} if and only if $f$ satisfies both statements \emph{(S1)} and \emph{(S2)}. In particular, $f$ is a polynomial identity for $M_n(K)$ if and only if $f$ satisfies both \emph{(S1)} and \emph{(S2)}. Also, $Im(f)=K$ if and only if $f$ satisfies \emph{(S1)} and does not satisfy \emph{(S2)}. \end{corollary} \begin{proof} The proof follows directly from Lemma \ref{S1S2} and Theorem \ref{identities} and from the fact that $sl_n(K)\cap K = \{0\}$. \end{proof} Summarizing the results up to now, we have for a multilinear polynomial $f$: \begin{itemize} \item $f$ satisfies (S1) and (S2) if and only if $f$ is a polynomial identity. \item $f$ satisfies (S1) and does not satisfy (S2) if and only if the image of $f$ is $K$. \item $f$ satisfies (S2) and does not satisfy (S1) if and only if $f$ is not an identity and the image of $f$ lies in $sl_n(K)$. \end{itemize} Our hope is that the last case is equivalent to $Im(f)= sl_n(K)$ and that if $f$ satisfies neither (S1) nor (S2) then $Im(f)=M_n(K)$. For now, we are not able to prove this, so we weaken these statements and consider the linear span of the image of $f$.
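Before proceeding, we illustrate the third case above with a small computation (a sketch we add for concreteness; it is not taken from the references). For the commutator $f=[x_1,x_2]$ on $M_2(K)$, by multilinearity it suffices to test (S1) and (S2) on matrix units, which are homogeneous in the Vasilovsky $\mathbb{Z}_2$-grading:
\begin{verbatim}
import itertools
import numpy as np

def unit(i, j, n=2):                   # matrix unit E_{i,j} (0-indexed)
    m = np.zeros((n, n)); m[i, j] = 1.0
    return m

f = lambda a, b: a @ b - b @ a         # the commutator [x1, x2]

units = list(itertools.product(range(2), repeat=2))
s1, s2 = True, True
for (i1, j1), (i2, j2) in itertools.product(units, repeat=2):
    a, b = unit(i1, j1), unit(i2, j2)
    deg = ((j1 - i1) + (j2 - i2)) % 2  # sum of Z_2-degrees
    val = f(a, b)
    if deg != 0 and not np.allclose(val, 0):
        s1 = False                     # (S1) fails
    if deg == 0 and not np.isclose(np.trace(val), 0):
        s2 = False                     # (S2) fails
print(s1, s2)                          # prints: False True
\end{verbatim}
The output is consistent with the third case: the commutator does not satisfy (S1) (e.g., $[E_{1,1},E_{1,2}]=E_{1,2}\neq 0$) but satisfies (S2), and indeed its image on $M_2(K)$ is exactly $sl_2(K)$.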
\begin{proposition}\label{notS1} The polynomial $f$ does not satisfy \emph{(S1)} if and only if the linear span of $Im(f)$ contains $sl_n(K)$. \end{proposition} \begin{proof} The ``if'' part is trivial. Let us assume (S1) is false. Then there exist homogeneous $a_1, \dots, a_m\in M_n(K)$ such that $\sum_{i=1}^m \deg(a_i)\neq 0$ and $f(a_1, \dots, a_m)\neq 0$. In particular, the matrix $f(a_1, \dots, a_m)$ is nonzero and nondiagonal. Writing each $a_i$ as a linear combination of matrix units, using that $f$ is a multilinear polynomial and opening brackets, we obtain that some matrix unit $E_{i,j}$ with $i\neq j$ lies in the image of $f$. Since all the $E_{i,j}$ with $i\neq j$ are conjugate to each other, we obtain that any zero diagonal matrix lies in the linear span of the image of $f$. Again, since each trace zero matrix is similar to a matrix with zero diagonal by Theorem 2 of \cite{Fillmore}, we obtain that the linear span of $Im(f)$ contains $sl_n(K)$. \end{proof} \begin{corollary} The multilinear polynomial $f(x_1, \dots, x_m)$ does not satisfy \emph{(S1)} and satisfies \emph{(S2)} if and only if the linear span of $Im(f)$ is $sl_n(K)$. \end{corollary} \begin{proof} It is a direct consequence of Proposition \ref{notS1} and Lemma \ref{S1S2} (2). \end{proof} \begin{theorem} The polynomial $f$ does not satisfy \emph{(S1)} and does not satisfy \emph{(S2)} if and only if the linear span of $Im(f)$ equals $M_n(K)$. \end{theorem} \begin{proof} The ``if'' part is trivial. Assuming (S2) is false, there exist $a_1, \dots, a_m\in M_n(K)$ homogeneous with $\sum_{i=1}^m\deg(a_i)=0$ and $tr(f(a_1, \dots, a_m))\neq 0$. This means that $Im(f)$ contains a diagonal matrix with nonzero trace. In particular, since $Im(f)$ is closed under scalar multiplication, we obtain that $Im(f)$ contains diagonal matrices of arbitrary trace. Let now $a\in M_n(K)$ and let $b\in Im(f)$ be a diagonal matrix such that $tr(a)=tr(b)$. Then $tr(a-b)=0$, and assuming $f$ does not satisfy (S1), Proposition \ref{notS1} asserts that $a-b$ lies in the linear span of the image of $f$. Since $a=b+(a-b)$, we obtain that $a$ lies in the linear span of $Im(f)$. \end{proof} \begin{remark}\label{considerations} Summarizing again the results up to now, we have for a multilinear polynomial $f$: \begin{enumerate} \item $f$ satisfies \emph{(S1)} and \emph{(S2)} if and only if $f$ is a polynomial identity. \item $f$ satisfies \emph{(S1)} and does not satisfy \emph{(S2)} if and only if the image of $f$ is $K$. \item $f$ satisfies \emph{(S2)} and does not satisfy \emph{(S1)} if and only if the linear span of the image of $f$ is $sl_n(K)$. \item $f$ satisfies neither \emph{(S1)} nor \emph{(S2)} if and only if the linear span of the image of $f$ is $M_n(K)$. \end{enumerate} \end{remark} Let us now discuss the above situation under the hypothesis that the Lvov-Kaplansky conjecture is true. Of course the Lvov-Kaplansky conjecture is true if and only if the linear span of $Im(f)$ equals $Im(f)$ for each multilinear polynomial $f$. In particular, by the above remark, if the Lvov-Kaplansky conjecture is true, then we may replace the linear span of $Im(f)$ by $Im(f)$ in the last two cases. And of course, if the last two cases are still true when one replaces the linear span of $Im(f)$ by $Im(f)$, then the Lvov-Kaplansky conjecture is also true. We have just proved the following theorem.
\begin{theorem}\label{equivalence} The Lvov-Kaplansky conjecture is true if and only if the following assertions hold: \begin{enumerate} \item If $f$ does not satisfy \emph{(S1)} and satisfies \emph{(S2)}, then $Im(f)=sl_n(K)$. \item If $f$ satisfies neither \emph{(S1)} nor \emph{(S2)}, then $Im(f)=M_n(K)$. \end{enumerate} \end{theorem} Now assume that $f=f(x_1, \dots, x_m)$ is multilinear and $m\leq 2n-1$. Then $f$ is neither an identity nor a central polynomial, and hence, by Remark \ref{considerations}, $f$ does not satisfy (S1); we obtain the following \begin{theorem} The Mesyan conjecture is true if the following assertion holds: \begin{enumerate} \item If $f$ does not satisfy $(S1)$ then $Im(f) \supseteq sl_n(K)$. \end{enumerate} \end{theorem} The above gives rise to the following conjecture, which is stronger than Mesyan's and weaker than Lvov-Kaplansky's. \begin{conjecture} If a multilinear polynomial $f(x_1, \dots, x_m)$ does not satisfy $(S1)$ then $Im(f)\supseteq sl_n(K)$. \end{conjecture} \section{Applications} Now we give some applications of the results presented above. Our examples will be based on the results of \cite{CentronedeMello}. These are interesting examples showing that the knowledge of images of graded polynomials may be useful to understand images of ordinary polynomials. First we will give an alternative proof for the following theorem of \cite{K-BMRJPAA}. \begin{theorem}[Theorem 1 of \cite{K-BMRJPAA}] Let $f(x_1, \dots, x_m)$ be any multilinear polynomial evaluated on $n\times n$ matrices over an infinite field. Assume that $f$ is neither central nor PI. Then $Im(f)$ contains a matrix of the form $\sum_{i=1}^nc_i E_{i,i+1}$, where $c_1\cdots c_n\neq 0$. When $char(K)$ is $0$ or prime to $n$, $Im(f)$ contains a matrix with eigenvalues $\{c,c\varepsilon, \dots, c\varepsilon^{n-1}\}$ for some $0\neq c \in K$. \end{theorem} \begin{proof} From our hypothesis we have $Im(f)\not\subseteq K$. Then, by part (1) of Lemma \ref{S1S2}, we obtain that $f$ does not satisfy (S1). In particular, there exist homogeneous elements $a_1, \dots, a_m$ with $g=\sum_{i=1}^m \deg(a_i) \neq 0$ such that $f(a_1, \dots, a_m) \neq 0$. Hence $E_{i,i+g}\in Im(f)$, for some $g\neq 0$ in $\mathbb{Z}_n$. Since all $E_{i,j}$ with $i\neq j$ are conjugate to each other, for each $h\neq 0$ we obtain that $E_{i,i+h}\in Im(f)$, and this can be realized as an evaluation of $f$ on matrix units $b_1, \dots, b_m$. But all matrix units are homogeneous in the Vasilovsky grading. So if we consider the algebra of $\mathbb{Z}_n$-graded polynomials $K\langle Y|\mathbb{Z}_n \rangle$ and take $y_i\in Y$ such that $\deg(y_i)=\deg(b_i)$ for $i=1, \dots, m$, then $f(y_1, \dots, y_m)$ is a nonzero $\mathbb{Z}_n$-graded polynomial of degree $h\in \mathbb{Z}_n\setminus \{0\}$. By \cite[Lemma 14]{CentronedeMello}, there exists a nonsingular matrix in the image of the graded polynomial $f(y_1, \dots, y_m)$. This is a matrix of the form $\sum_{i=1}^nc_iE_{i,i+h}$ with $c_1, \dots, c_n\neq 0$. The above holds for any $h\neq 0$ in $\mathbb{Z}_n$. In particular, for $h=1$, we obtain a matrix of the form $\sum_{i=1}^n c_iE_{i,i+1}$ with $c_1, \dots, c_n\neq 0$, and the proof is complete.
\end{proof} As another application, we present a proof of the following theorem, which is the first part of Theorem 1 of \cite{MalevJAA}: \begin{theorem} If $f$ is a multilinear polynomial evaluated on the matrix ring $M_2(K)$ (where $K$ is an arbitrary field of characteristic different from 2), then $Im(f)$ is either $\{0\}$, or $K$ (the set of scalar matrices), or $Im(f) \supseteq sl_2(K)$. \end{theorem} \begin{proof} Assume that $f$ is neither a central polynomial nor an identity. Then $Im(f)\not \subseteq K$. By Lemma \ref{S1S2}, $f$ does not satisfy $(S1)$. This means that there exist homogeneous elements $a_1, \dots, a_m\in M_2(K)$ with $\sum_{i=1}^m \deg(a_i) \neq 0$ (i.e., $\sum_{i=1}^m \deg(a_i) =1$ in $\mathbb{Z}_2$) and $f(a_1, \dots, a_m) \neq 0$. As in the previous example, set $\deg(y_i) = \deg(a_i)$. Then $f(y_1, \dots, y_m)$ is a graded polynomial of degree $1$ in $\mathbb{Z}_2$. Now we use Lemma 14 of \cite{CentronedeMello}, which states that, since $f(y_1, \dots, y_m)$ is a nonzero multilinear graded polynomial of nonzero degree, the image of $f$ contains a nonsingular matrix of degree $1$ in $\mathbb{Z}_2$. As a consequence, the image of $f$ contains the set $(M_2(K))_1$, which is the set of all hollow matrices (matrices with zero in the main diagonal). But any trace zero matrix is similar to a hollow matrix. As a consequence, we obtain that the image of $f$ contains all trace zero matrices. \end{proof} As a last application of the results of the previous section, we give a new proof of \cite[Lemma 9]{K-BMR}. This lemma is a key step in the proof of the Lvov-Kaplansky conjecture for $2\times 2$ matrices over a quadratically closed field. This will be an easy consequence of the following Lemma: \begin{lemma} Let $f(x_1,\dots, x_m) \in K\langle X \rangle$ be a multilinear polynomial. Then the image of $f$ evaluated on $M_2(K)$ is $sl_2(K)$ if and only if $f$ satisfies \emph{(S2)} and $f$ does not satisfy \emph{(S1)}. \end{lemma} \begin{proof} The ``only if'' part is trivial. From the proof of the above theorem, if $f$ does not satisfy $(S1)$, then $Im(f) \supseteq sl_2(K)$. Also, by Lemma \ref{S1S2}, if $f$ satisfies $(S2)$, then $Im(f)\subseteq sl_2(K)$. \end{proof} \begin{lemma}[Lemma 9 of \cite{K-BMR}] If $f$ is a multilinear polynomial evaluated on the matrix ring $M_2(K)$, then $Im(f)$ is either $\{0\}$, $K$, $sl_2(K)$, $M_2(K)$, or $M_2(K)\setminus \tilde K$, where $\tilde K$ is the set of all nondiagonalizable and non-nilpotent matrices. \end{lemma} \begin{proof} Assume $f$ is neither central nor PI. Then $f$ does not satisfy (S1). Now we have two cases to consider. If $f$ satisfies (S2) then $Im(f)$ is $sl_2(K)$, by the above lemma. If $f$ does not satisfy (S2), then $Im(f)\supseteq sl_2(K)$ is a proper inclusion. In particular, there exists a diagonal matrix with nonzero trace in $Im(f)$. As a consequence, $Im(f)$ contains all diagonalizable matrices. So $Im(f)$ contains $M_2(K)\setminus\tilde K$. Again we have two cases to consider. If $Im(f)$ contains some element of $\tilde K$ then it contains the whole $\tilde K$, since it is an irreducible invariant cone, and in this case we have $Im(f) = M_2(K)$. Otherwise, we have $Im(f)= M_2(K)\setminus \tilde K$. \end{proof} \section{Funding} This work was supported by São Paulo Research Foundation (FAPESP), grant 2018/23690-6. \bibliographystyle{abbrv}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \hspace{3ex} The optimal transport problem was formulated by Monge in the XVIII century, reformulated by Kantorovich in the XX century, and has recently been rediscovered by C. Villani \cite{CV}. It has since been applied in different contexts, for example in option pricing \cite{HPL2013}. Particle methods, extensively studied by P. Del Moral in \cite{DM2004}, \cite{DM2013} and \cite{DM}, allow one to find such an ``optimal transport''. For this purpose a set of discrete weighted samples, i.e. particles, is used to approximate an importance measure, and then to predict a posterior distribution by propagating the set of particles until we get an estimate. Another approach has been proposed by Daum et al. \cite{DH2013}, \cite{DH2011}, which allows a reduction of the number of particles needed to reach a tolerable level of error in the filtering problem. The main idea behind this method is the evolution in a homotopy parameter $\lambda$ (a ``pseudotime'') from the prior to the target density. They introduced a particle flow, in which particles are gradually transported without the necessity to randomly sample from any distribution. Viewed as an optimal transport problem, this approach allows one to optimally move the set of particles according to Bayes' rule. In this way one can reduce the number of samples needed, since the variance and bias of the estimator are lower, and as a result reduce the computational burden in both the estimation and the prediction steps. In this paper we adapt homotopy transport to the Stein-Stein stochastic volatility model \cite{SS1991} to price a European option, and we extend the method of Daum et al. by reweighing the generated particle trajectories, which allows us to efficiently transport particles from a prior transition density to a posterior one under the impact of the measurements. The idea of the transport-and-reweighing mechanism is to move particles through a sequence of densities that change as little as possible in synthetic time until the posterior distribution is reached. By regenerating particles according to their weights at each time step we are able to direct the flow and further minimize the variance of the estimates. The transport of particles can be understood as a geodesic flow in a convex subset of a Euclidean space. We show that homotopy transport significantly reduces the variance compared to a particle filtering technique, and that path reweighing further reduces both the variance and the bias of the estimators. The rest of the article is organized as follows. Section 2 formulates the problem of computing an expectation in the presence of partially observed variables and shows a solution using the particle filter method. Section 3 formulates the problem defined in section 2 in the context of optimal transport and presents the homotopy transport approach to solve the problem. Section 4 combines homotopy transport with path reweighing, extending the method proposed in section 3. Section 5 provides numerical results. Section 6 concludes. \section{Particle Filtering} \subsection{Problem formulation} \hspace{3ex} Many problems arise in financial applications when one has to compute expectations given partially observed information. A simple example is option pricing with hidden volatility dynamics.
We denote by $\{Y_t\}_{t\geq 0} \in \mathbb{R}^{n_Y}$ the asset returns, which are observed from the dynamics of prices, while the hidden factor $\{X_t\}_{t\geq 0} \in \mathbb{R}^{n_X}$ is unobservable. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and let the information available at time $t$ be $\mathcal{F}_t$, the filtration generated by the process $(Y_t)_{t\geq 0}$. The classical problem to which particle filtering is applied is to extract the sequence of hidden variables $X_t$. It is formalized in the following way: given an initial $\mathbb{R}^{n_X}$-dimensional random variable $x_0$ with distribution $\mathbb{P}_{x_0}$, for $t\in\mathbb{N}$: \begin{equation} \left\{ \begin{array}{c} X_t = f(X_{t-1},\epsilon_t)\\ Y_t = h(X_t,Y_{t-1},\eta_t) \end{array} \right. \end{equation} where the first equation describes the hidden process, with $\epsilon_t:\Omega \rightarrow \mathbb{R}^{n_X}$ i.i.d random variables, and the map $f:\mathbb{R}^{n_X} \rightarrow \mathbb{R}^{n_X}$ is $\mathcal{B}(\mathbb{R}^{n_X})$ - measurable. The second equation is called the measurement model, with $\eta_t:\Omega \rightarrow \mathbb{R}^{n_Y}$ i.i.d. random variables, and the map $h:\mathbb{R}^{n_X}\times\mathbb{R}^{n_Y} \rightarrow \mathbb{R}^{n_Y}$ is $\mathcal{B}(\mathbb{R}^{n_X})\otimes \mathcal{B}(\mathbb{R}^{n_Y})$ - measurable. Given the above stochastic dynamical system, we would like to compute the following conditional expectation: \begin{equation} \label{exp1} \mathbb{E}[z(X_{t})|\mathcal{F}_t]=\frac{1}{\mathcal{Z}}\int \nu(dx_{0:t})\rho_t(x_{0:t},Y_{1:t})z(x_{t}) \end{equation} with the distribution of $X_{0:t}$: \begin{equation} \nu(dx_{0:t})=p_0(dx_0)\prod_{l=1}^t k_l(x_{l-1},x_l)\mu(dx_l) \end{equation} and normalizing constant $\mathcal{Z}$: \begin{equation} \mathcal{Z}=\int \nu(dx_{0:t})\rho_t(x_{0:t},Y_{1:t}) \end{equation} where $(X_t)_{t\geq 0}$ forms a Markov chain in $(\mathbb{R}^{n_X},\mathcal{B}(\mathbb{R}^{n_X}))$ with transition density $k_t:\mathbb{R}^{n_X}\times \mathbb{R}^{n_X} \rightarrow \mathbb{R}_+$ with respect to the measure $\mu(dx)$. The random variables $(Y_t)_{t\geq 0}$ in $(\mathbb{R}^{n_Y},\mathcal{B}(\mathbb{R}^{n_Y}))$ are conditionally independent given $(X_t)_{t\geq 0}$ with transition density (likelihood) $\rho_t:\mathbb{R}^{n_X}\times \mathbb{R}^{n_Y} \rightarrow \mathbb{R}_+$ with reference measure $\gamma$. Intuitively, one could think that we could use the naive Monte Carlo technique to approximate (\ref{exp1}): \begin{equation} \mathbb{E}[z(X_{t})|\mathcal{F}_t] \approx \int(\mathbb{M}^{n_X}\mathbb{P})(dx_{0:t})z(x_t)=\frac{1}{n_X}\sum_{i=1}^{n_X} z(X^{(i)}_t) \end{equation} where the sampling operator is $\mathbb{M}^{n_X}\nu=\frac{1}{n_X}\sum_{i=1}^{n_X}\delta_{X^{(i)}}$, $X \in \mathbb{R}^{n_X}$ and, $\forall i=1,...,n_X$, $X^{(i)}$ are i.i.d. draws from $\nu$. The problem with naive Monte Carlo sampling lies in the fact that we don't know how to sample from the conditional distribution $\mathbb{P}(d(x_{0:t}))=\mathbb{P}(X_{0:t}\in dx_{0:t}|Y_{1:t})$. Moreover, the computation of the normalizing constant $\mathcal{Z}$ is a big challenge. A lot of research has been devoted to this problem, for example \cite{DM2004}, where the problem is transformed from a partially observed one to a fully observed one by introducing the so-called filtering distribution, which links observed and latent variables and is updated recursively.
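To make the setting concrete before turning to the filtering recursion, here is a minimal simulation sketch of a model of the form (1) (our illustration: an Euler-discretized Ornstein--Uhlenbeck volatility factor in the spirit of the Stein--Stein model, with arbitrary coefficients, and with a measurement map that, for simplicity, does not depend on $Y_{t-1}$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T, dt = 250, 1.0 / 250.0
mu_v, kappa_v, sigma_v = 0.2, 5.0, 0.3  # illustrative OU parameters

def f(x, eps):                # hidden transition X_t = f(X_{t-1}, eps_t)
    return x + kappa_v * (mu_v - x) * dt + sigma_v * np.sqrt(dt) * eps

def h(x, eta):                # measurement: returns with volatility |x|
    return np.abs(x) * np.sqrt(dt) * eta

X = np.empty(T + 1); Y = np.empty(T + 1)
X[0], Y[0] = mu_v, 0.0
for t in range(1, T + 1):
    X[t] = f(X[t - 1], rng.standard_normal())
    Y[t] = h(X[t], rng.standard_normal())
# Only Y is observed; the filtering problem is to recover
# the conditional law of X_t given Y_1, ..., Y_t.
\end{verbatim}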
\begin{proposition} The conditional probability (filtering distribution) $\Xi_t=\mathbb{P}(X_t\in \cdot|Y_1,...,Y_t)$ with prior $X_0\sim p_0$ can be computed sequentially: \begin{equation} \label{op1} \Xi_tz=\frac{\int \Xi_{t-1}(dx_{t-1})k_t(x_{t-1},x)\mu(dx)\rho_t(x,Y_t)z(x)}{\int \Xi_{t-1}(dx_{t-1})k_t(x_{t-1},x)\mu(dx)\rho_t(x,Y_t)} \end{equation} with $\Xi_0=p_0$ and $\Xi_tz=\int \mathbb{P}(X_{0:t}\in dx_{0:t}|Y_1,...,Y_t)z(x_{0:t})$ \end{proposition} Denote the corresponding values of the hidden process as $(X_0,...,X_t)$ and the values of the measurement process as $(Y_0,...,Y_t)$. If there exists a probability measure $\mathbb{Q}$ such that $\mathbb{P}\ll\mathbb{Q}$, then for $t=0,...,N$ we have: \begin{equation} \label{RDN1} \mathbb{E}^{\mathbb{P}}[z(X_{t})|\mathcal{F}_t]=\mathbb{E}^{\mathbb{Q}}[z(X_{t})\frac{d\mathbb{P}}{d\mathbb{Q}}(X_{0:t})|\mathcal{F}_t] \end{equation} The importance measure $\mathbb{Q}$ can be chosen arbitrarily as long as the absolute continuity is preserved. In the sequential importance sampling literature it is common to approximate $\mathbb{Q}$, given an importance kernel $\widetilde{K}_t$ such that $K_t\ll \widetilde{K}_t$, as: \begin{equation} \mathbb{Q}(B)=\sum_{i=1}^M \omega^{(i)}_{t-1} \widetilde{K}_t(X^{(i)}_{t-1},A_i), \ \ B\in\mathcal{B}(\mathbb{R}^{n_X}) \end{equation} where $A_i=\{X_{t}\in\mathbb{R}^{n_X}|\mathbb{1}_{B}(X^{(i)}_{t-1},X_{t})=1\}$, $(\omega_{t-1}^{(i)})_{i=1}^M$ is the weight function, and for $i=1,...,M$, $(X^{(i)}_0,...,X^{(i)}_t)$ are independent trajectory realizations. Now assume that the prior and sampling kernels $K_t$ and $\widetilde{K}_t$ have densities $k_t$ and $\widetilde{k}_t$ with respect to the measure $\mu$, $\forall t=1,...,T$. For $t\geq 1$, the Radon-Nikodym derivative in (\ref{RDN1}) is: \begin{equation} \frac{d\mathbb{P}}{d\mathbb{Q}}(X_{0:t})=\frac{1}{\mathcal{Z}}\rho_1(X_1,Y_1)\frac{k_1(X_0,X_1)}{\widetilde{k}_1(X_0,X_1)}...\rho_t(X_{t},Y_t)\frac{k_t(X_{t-1},X_t)}{\widetilde{k}_{t}(X_{t-1},X_t)} \end{equation} where the importance measure is given by: \begin{equation} \mathbb{Q}(dx_{0:t})=p_0(dx_0)\widetilde{k}_1(x_0,x_1)\mu(dx_1)...\widetilde{k}_t(x_{t-1},x_t)\mu(dx_t) \end{equation} Observe that we still cannot compute the normalization constant $\mathcal{Z}$ (otherwise computing the filtering distribution $\Xi_t$ would not be a problem), so we apply the sampling operator $\mathbb{M}^{n_X}$ together with self-normalized weights to approximate the filtering distribution: \begin{equation} \mathbb{E}^{\mathbb{P}}[z(X_{t})|\mathcal{F}_t]\approx \sum_{i=1}^{n_X}\widehat{\omega}^{(i)}_t z(X_t^{(i)}) \end{equation} where the normalized importance weight function is \begin{equation} \label{w11} \widehat{\omega}_{t}^{(i)}(X_{t}^{(i)})=\frac{\omega_{t}^{(i)}(X_{t}^{(i)})}{\sum_{j=1}^M \omega_{t}^{(j)}(X_{t}^{(j)})} \end{equation} and the unnormalized weight is given by: \begin{equation} \omega_{t}^{(i)}(X_{t}^{(i)})=\prod_{l=1}^{t}\rho_{l}(X_{l}^{(i)},Y_{l})\frac{ k_l(X_{l-1}^{(i)},X_{l}^{(i)})}{\widetilde{k}_l(X_{l-1}^{(i)},X_{l}^{(i)})} \end{equation} Observe that the importance weights $\{\widehat{\omega}_{t}^{(i)}\}_{i=1}^{M}$ are positive and $\sum_{i=1}^{M}\widehat{\omega}_{t}^{(i)}=1$. Since particle filters suffer from weight degeneracy as the number of time steps increases, Gordon et al.
(1993) proposed adding a resampling step to the algorithm, which can be described by the following nonlinear equation: \begin{equation} \Xi_t=\Phi_t\Xi_{t-1} \ \ \mbox{with} \ \ \Xi_{0}=p_0 \end{equation} where the nonlinear operator $\Phi_t$ is given by: \begin{equation} (\Phi_t\nu)z=\frac{\int \nu(dx_{t-1})k_t(x_{t-1},x)\mu(dx)\rho_t(x,Y_t)z(x)}{\int \nu(dx_{t-1})k_t(x_{t-1},x)\mu(dx)\rho_t(x,Y_t)} \end{equation} The action of the operator $\Phi$ can be schematically described as: \begin{equation} \Xi_{t-1} \xrightarrow[]{Mutation} \mathcal{M}\Xi_{t-1} \xrightarrow{Reweighing} \Omega_t \mathcal{M}\Xi_{t-1} \end{equation} where the mutation operator $\mathcal{M}$ is given by \begin{equation} (\mathcal{M}\nu)(z)=\int \nu(dx_{t-1})k_t(x_{t-1},x)\mu(dx)z(x) \end{equation} and the reweighing operator $\Omega_t$ has the form \begin{equation} \Omega_t(\nu)z=\frac{\int \nu(dx)\rho_t(x,Y_t)z(x)}{\int \nu(dx)\rho_t(x,Y_t)} \end{equation} After the reweighing step we get the following approximation of the filtering distribution $\Xi_{t-1}$: \begin{equation} \widehat{\Xi}_{t-1}=\sum_{i=1}^{n_X} \widehat{\omega}_{t-1}^{(i)}\delta_{X_{t-1}^{(i)}} \end{equation} where $\{X_{t-1}^{(i)}\}_{i=1}^{n_X} \sim \mathcal{M}\widehat{\Xi}_{t-2}$. We see from the above equations that $n_X$ particles are sampled from the empirical distribution $\widehat{\Xi}_t$, i.e. it is itself defined through $n_X$ particles. The intuition behind the reweighing step is that particles with low weights have a lower probability of being sampled than particles with high importance weights. Consequently, in this step particles with low weights tend to be discarded, while particles with high weights are sampled more frequently. \subsection{Particle Filtering Algorithm} \label{alg0} \hspace{3ex} The algorithm approximates $\Xi_{t}$ by the empirical distribution $\widehat{\Xi}_{t}$ computed by the following recurrence equation: \begin{equation} \widehat{\Xi}_t=\widehat{\Phi}_t\widehat{\Xi}_{t-1} \ \ \mbox{with} \ \ \widehat{\Xi}_{0}=p_0 \end{equation} where $\widehat{\Phi}_t:=\Omega_t \mathbb{M}^{n_X}\mathcal{M}$. It consists of three steps: \begin{equation} \widehat{\Xi}_{t-1} \xrightarrow[]{Mutation} \mathcal{M}\widehat{\Xi}_{t-1} \xrightarrow[]{Sampling} \mathbb{M}^{n_X}\mathcal{M}\widehat{\Xi}_{t-1} \xrightarrow{Reweighing} \Omega_t \mathbb{M}^{n_X}\mathcal{M}\widehat{\Xi}_{t-1} \end{equation} At time $t=0$, we generate $M$ i.i.d. random variables from the prior distribution. For $t=1,...,N-1$, in the first step we propagate $X_t \in \mathbb{R}^{n_X}$ according to the dynamics of the hidden process and update the measurement, to get a pair of random vectors $(X_{t+1},Y_{t+1})$. We then resample the particles according to their probability weights $\widehat{\omega}_{t+1}(X_{t+1})$ and denote the resampled particles by $\widehat{X}_{t+1}$. At the final time step $t$ we compute the estimate of (\ref{op1}): \begin{equation} \widehat{C}^{PF}=\frac{1}{n_X}\sum_{i=1}^{n_X} z(\widehat{X}_t^{(i)}), \end{equation} since after the resampling step the particles carry equal weights. \begin{algorithm}[H] Initialization: $i=1,...,n_X$ - $\#$(simulations), $t=1,...,T$ - $\#$(time steps) \\ Draw $\{X_0^{(i)}\}_{i=1}^{n_X}$ from the prior $p_0(x)$.
Set $\{\omega_0^{(i)}\}_{i=1}^{n_X}=\frac{1}{n_X}$;\ \For{$t=1,...,N$}{ \For{$i=1,...,n_X$}{ Propagate particles using the state equation $X_t^{(i)} = f(X_{t-1}^{(i)},\epsilon_t)$\; Measurement update: $Y_t = h(X_t^{(i)},Y_{t-1},\eta_t)$\; Compute the effective sample size $M_{eff}=\frac{1}{\sum_{i=1}^M (\omega^{(i)}_t)^2}$\; \If{$M_{eff}<M$ or $t<N$}{ Resample using the weights $\widehat{\omega}_t^{(i)}(X^{(i)}_t)=\frac{\omega_t^{(i)}(X^{(i)}_t)}{\sum_{j=1}^M \omega_t^{(j)}(X^{(j)}_t)}$} } Set resampled particles as $\widehat{X}^{(i)}_t$ } \caption{PF Algorithm} \end{algorithm} Despite the advantage of sampling from highly non-linear and non-Gaussian filtering distributions, we need to mention the limitations of the method. In practice we often have to deal with high-dimensional data and, as shown in \cite{BLB2008}, \cite{SBB2008}, \cite{SP2015}, the collapse of the weights occurs unless the sample size grows super-exponentially. Homotopy transport allows us to sample efficiently in a high-dimensional framework, while avoiding the explosion of the sample size. \section{Homotopy Transport} \hspace{3ex} The classical optimal transport problem is to find, over all maps $\mathcal{T}:\mathbb{R}^{n_X} \rightarrow \mathbb{R}^{n_X}$ such that $\mathcal{T}(X)\sim \mathbb{Q}$ for $X\sim \mathbb{P}$ and $\mathcal{T}\in \mathcal{C}^1$, one that optimizes the following criterion: \begin{equation} \begin{array}{c} \inf_{\mathcal{T}} \mathbb{E}[||\mathcal{T}(X)-X||^2] \\ \mbox{s.t.} \ \ \mathbb{Q} = \mathcal{T}_{\sharp} \mathbb{P} \end{array} \end{equation} In other words, we would like to find, among all transformations that push forward the prior measure $\mathbb{P}$ to the measure $\mathbb{Q}$, a continuous one that minimizes the expected transport cost. In the context of the filtering problem we would like to find a transformation $\mathcal{T}$ that transports particles from a sampling measure $\mathbb{P}$ to $\mathbb{Q}$: \begin{equation} \label{E1} \mathbb{E}^{\mathbb{Q}}[z(X_{t})\frac{d\mathbb{P}}{d\mathbb{Q}}(X_{0:t})|\mathcal{F}_t]=\mathbb{E}^{\mathbb{P}}\left[z(\mathcal{T}(X_{t}))|\mathcal{F}_t\right] \end{equation} One can solve this problem using variational methods \cite{MTM2012}. For the sake of exposition, we write the posterior distribution, which appears above in the form of a normalized importance weight, as \begin{equation} \label{poster} \psi(X_t|\textbf{Y}_{t})=\frac{1}{\mathcal{Z}_{t}}p(X_t|\textbf{Y}_{t-1})\rho(Y_t|X_t) \end{equation} where $\textbf{Y}_{t}=(Y_0,...,Y_t)$, the prior is $p(X_t|\textbf{Y}_{t-1})$, the likelihood is $\rho(Y_t|X_t)$ and $\mathcal{Z}_t$ is a normalization factor: $\mathcal{Z}_t=\int p(X_t|\textbf{Y}_{t-1})\rho(Y_t|X_t) dX_t$. Equation (\ref{poster}) is equivalent to the normalized importance weight in (\ref{w11}). Now, if we consider a continuous map $\mathcal{T}:\mathbb{R}^{n_X}\rightarrow \mathbb{R}^{n_X}$, then: \begin{equation} \psi(\mathcal{T}(X_t)|\textbf{Y}_{t})=\frac{1}{\mathcal{Z}_{t}}p(\mathcal{T}(X_t)|\textbf{Y}_{t-1})\rho(Y_t|\mathcal{T}(X_t)) \end{equation} The homotopy gradually modifies the prior density into the posterior density, as a scaling parameter $\lambda \in [0,1]$ increases from $0$ to $1$. In other words, by iterating we transport the homotopy $\psi(X_{t,\lambda}|Y_t)$ to the true posterior $\psi(X_t|Y_t)$, while minimizing the cost of transport. There are several conditions that the homotopy has to satisfy. First, at $\lambda=0$ we should recover the prior, i.e.
Define a new set of density functions, $\psi(X_{t,\lambda}|\textbf{Y}_{t})$, $p(X_{t,\lambda}|\textbf{Y}_{t-1})$ and $\rho(Y_t|X_{t,\lambda})$, obtained from $\psi(X_t|\textbf{Y}_{t})$, $p(X_t|\textbf{Y}_{t-1})$ and $\rho(Y_t|X_t)$ by evaluation at the transported state $X_{t,\lambda}$, and let $\mathcal{Z}_{\lambda}:=\int p(X_{t,\lambda}|\textbf{Y}_{t-1})\rho(Y_t|X_{t,\lambda})^{\lambda} dx_{\lambda}$, so that the homotopy is defined as: \begin{equation} \psi(X_{t,\lambda}|\textbf{Y}_{t})=\frac{1}{\mathcal{Z}_{\lambda}}\underbrace{p(X_{t,\lambda}|\textbf{Y}_{t-1})}_{prior}\underbrace{\rho(Y_t|X_{t,\lambda})^{\lambda}}_{likelihood} \end{equation} In order to simplify the calculations we take the logarithm of the homotopy: \begin{equation} \label{loghom} \Psi(X_{t,\lambda}|\textbf{Y}_{t})=G(X_{t,\lambda})+\lambda L(X_{t,\lambda})-\log \mathcal{Z}_{\lambda} \end{equation} where $\Psi(X_{t,\lambda})=\log\psi(X_{t,\lambda}|\textbf{Y}_{t})$, $G(X_{t,\lambda}) = \log p(X_{t,\lambda}|\textbf{Y}_{t-1})$, $L(X_{t,\lambda})=\log \rho(Y_t|X_{t,\lambda})$. The dynamics of homotopy transport in the artificial time $\lambda$ is known as the $log$-homotopy \cite{DH2013}. The dynamics of the transport is given by the motion of particles in the artificial time $\lambda$, so we look for a flow $\frac{dx}{d\lambda}$ that governs the movement of particles along the log-homotopy. We assume that in the pseudo-time $\lambda$ the particles follow the SDE: \begin{equation} dX_{t,\lambda}=g(X_{t,\lambda})d\lambda+\eta(X_{t,\lambda})dW_{\lambda} \end{equation} where $W_{\lambda}$ is a Brownian motion in the pseudo-time and $g(X_{t,\lambda})$ is the vector field that pushes the particles from the prior to the posterior distribution. We impose the following assumptions: \begin{enumerate}[I.] \item The densities $p(X_{t,\lambda}|Y_{t-1})$ and $\rho(Y_t|X_{t,\lambda})$ are twice differentiable with respect to $X_{t,\lambda}$; \item The function $g(X_{t,\lambda})$ that governs the particle transport is differentiable with respect to $X_{t,\lambda}$; \item The Hessian matrix of the density $\Psi$ is non-singular. \end{enumerate} Now, given the conditional probability density function (\ref{loghom}), we can compute the function $g(X_{t,\lambda})=\frac{dX_{t,\lambda}}{d\lambda}$ using the forward Kolmogorov equation: \begin{equation} \frac{\partial \psi(X_{t,\lambda})}{\partial\lambda}=-tr \left[\frac{\partial}{\partial X_{t,\lambda}}(g(X_{t,\lambda})\psi(X_{t,\lambda})) \right]+\frac{1}{2}tr\left[ \frac{\partial}{\partial X_{t,\lambda}} Q(X_{t,\lambda})\frac{\partial \psi(X_{t,\lambda})}{\partial X_{t,\lambda}}\right] \end{equation} where $Q(X_{t,\lambda})=\eta(X_{t,\lambda})\eta^T(X_{t,\lambda})$ is the diffusion tensor of the process and $tr(\cdot)$ is the trace operator. The forward Kolmogorov equation relates the flow of particles $\frac{dX_{t,\lambda}}{d\lambda}$ to the evolution of the log-homotopy as $\lambda$ goes from $0$ to $1$, under the diffusion process.
\begin{multline} \label{1} \frac{\partial \psi(X_{t,\lambda})}{\partial\lambda}=-tr \left[\psi(X_{t,\lambda})\frac{\partial g(X_{t,\lambda})}{\partial X_{t,\lambda}}+g(X_{t,\lambda})^T\frac{\partial \psi(X_{t,\lambda})}{\partial X_{t,\lambda}} \right]+\frac{1}{2}div\left[ \frac{\partial}{\partial X_{t,\lambda}} Q(X_{t,\lambda})\frac{\partial \psi(X_{t,\lambda})}{\partial X_{t,\lambda}}\right] = \\ = -\psi(X_{t,\lambda})tr\left[\frac{\partial g(X_{t,\lambda})}{\partial X_{t,\lambda}}\right]-g(X_{t,\lambda})^T\frac{\partial \psi(X_{t,\lambda})}{\partial X_{t,\lambda}}+\frac{1}{2}div\left[ \frac{\partial}{\partial X_{t,\lambda}} Q(X_{t,\lambda})\frac{\partial \psi(X_{t,\lambda})}{\partial X_{t,\lambda}}\right] \end{multline} where $div(\cdot)$ is the divergence operator. On the other hand, taking the derivative of equation (\ref{loghom}) with respect to $\lambda$ gives: \begin{equation} \label{11} \frac{\partial \Psi(X_{t,\lambda})}{\partial \lambda}=L(X_{t,\lambda})-\frac{\partial}{\partial \lambda}\log \mathcal{Z}_{\lambda} \end{equation} Since $\Psi(X_{t,\lambda})=\log \psi(X_{t,\lambda})$, the chain rule yields: \begin{equation} \label{12} \frac{\partial \Psi(X_{t,\lambda})}{\partial \lambda}=\frac{1}{\psi(X_{t,\lambda})}\frac{\partial \psi(X_{t,\lambda})}{\partial \lambda} \end{equation} By substituting eq. \eqref{12} into \eqref{11} and rearranging the terms: \begin{equation} \label{2} \frac{\partial \psi(X_{t,\lambda})}{\partial \lambda}=\psi(X_{t,\lambda})\left[L(X_{t,\lambda})-\frac{\partial}{\partial \lambda}\log \mathcal{Z}_{\lambda}\right] \end{equation} Observe that \eqref{1} and \eqref{2} express the same quantity, so by equating them and dividing by $\psi(X_{t,\lambda})$ we get: \begin{multline} \label{13} L(X_{t,\lambda})-\frac{\partial}{\partial \lambda}\log \mathcal{Z}_{\lambda}=-g(X_{t,\lambda})^T\frac{1}{\psi(X_{t,\lambda})}\frac{\partial \psi(X_{t,\lambda})}{\partial X_{t,\lambda}}-\\- tr\left[\frac{\partial g(X_{t,\lambda})}{\partial X_{t,\lambda}} \right]+\frac{1}{2\psi(X_{t,\lambda})}div\left[ \frac{\partial}{\partial X_{t,\lambda}} Q(X_{t,\lambda})\frac{\partial \psi(X_{t,\lambda})}{\partial X_{t,\lambda}}\right] \end{multline} In \cite{DH2015} the authors propose to take the derivative of \eqref{13} with respect to $X_{t,\lambda}$: on the one hand this yields an explicit equation for the flow, and on the other hand it eliminates the normalization constant $\mathcal{Z}_{\lambda}$, which is a source of instabilities. \begin{multline} \label{14} \frac{\partial L(X_{t,\lambda})}{\partial X_{t,\lambda}}=-g(X_{t,\lambda})^T \frac{\partial^2 \Psi(X_{t,\lambda})}{\partial X_{t,\lambda}^2}-\frac{\partial \Psi(X_{t,\lambda})}{\partial X_{t,\lambda}}\frac{\partial g(X_{t,\lambda})}{\partial X_{t,\lambda}}-\frac{\partial}{\partial X_{t,\lambda}}tr\left[\frac{\partial g(X_{t,\lambda})}{\partial X_{t,\lambda}} \right]+\\+\frac{\partial }{\partial X_{t,\lambda}}\left(\frac{1}{2\psi(X_{t,\lambda})}div\left[Q(X_{t,\lambda})\frac{\partial \psi(X_{t,\lambda})}{\partial X_{t,\lambda}}\right]\right) \end{multline} Observe that this is a highly nonlinear PDE. We use the solution found in \cite{DH2013} and \cite{DH2011}, which applies if one can find a vector field $g(X_{t,\lambda})$ and a diffusion tensor $Q(X_{t,\lambda})$ such that the sum of the last three terms in \eqref{14} is equal to zero.
The PDE then simplifies to: \begin{equation} \frac{\partial L(X_{t,\lambda})}{\partial X_{t,\lambda}}=-g(X_{t,\lambda})^T \frac{\partial^2 \Psi(X_{t,\lambda})}{\partial X_{t,\lambda}^2} \end{equation} Using assumption III, i.e. the non-singularity of the Hessian matrix $\frac{\partial^2 \Psi(X_{t,\lambda})}{\partial X_{t,\lambda}^2}$, we obtain the flow $g(X_{t,\lambda})$ explicitly: \begin{equation} g(X_{t,\lambda})=-\left[\frac{\partial^2 \Psi(X_{t,\lambda})}{\partial X_{t,\lambda}^2}\right]^{-1} \left[\frac{\partial L(X_{t,\lambda})}{\partial X_{t,\lambda}} \right]^T \end{equation} \subsection{Homotopy Transport Algorithm} \label{alg1} \hspace{3ex} \textbf{Sampling from the prior.} First we generate $n_X$ i.i.d. random variables $X^{(i)}_t$ from the prior density $p_0(x)$, initialize the pseudo-time $\lambda$ and set the state variables that will be transported: $X_{t,\lambda}^{(i)}=X_{t|t-1}^{(i)}$. \textbf{Transportation Stage.} For $t=2,...,N-1$, compute the derivative of the measurement function with respect to $X_{t,\lambda}$. If $h$ is non-linear, a second order Taylor expansion at $X_{t,\lambda}$ allows speeding up the calculation by linearizing the first derivative. After that, update the pseudo-time by setting $\lambda=\lambda + \Delta \lambda$ and compute the flow $g(X_{t,\lambda}^{(i)})$. Note that the Hessian can be derived by twice differentiating the log-homotopy equation (\ref{loghom}): \begin{equation} \label{Psi} \frac{\partial^2 \Psi(X_{t,\lambda}^{(i)})}{\partial X_{t,\lambda}^2}=\frac{\partial^2 G(X_{t,\lambda}^{(i)})}{\partial X_{t,\lambda}^2}+\lambda \frac{\partial^2 L(X_{t,\lambda}^{(i)})}{\partial X_{t,\lambda}^2} \end{equation} The first term in (\ref{Psi}), $\frac{\partial^2 G(X_{t,\lambda}^{(i)})}{\partial X_{t,\lambda}^2}$, is estimated by using the sample covariance matrix of the particles generated from the prior distribution: \begin{equation} \frac{\partial^2 G(X_{t,\lambda})}{\partial X_{t,\lambda}^2} \approx - \widehat{S}^{-1}_{n_X} \end{equation} Then compute the transportation of the particles from the measure $\mathbb{P}$ to the measure $\mathbb{Q}$: \begin{equation} X^{(i)}_{t,\lambda}=X^{(i)}_{t,\lambda}+\Delta \lambda\, g(X_{t,\lambda}^{(i)}) \end{equation} And finally update the state estimate: \begin{equation} \breve{X}_t=\frac{1}{n_X}\sum_{i=1}^{n_X} X^{(i)}_{t,\lambda} \end{equation} \textbf{Maturity.} At the final time interval $]N-1,N]$ compute the estimator of (\ref{E1}): \begin{equation} \widehat{C}^{HT}=\frac{1}{n_X}\sum_{i=1}^{n_X} z(\breve{X}_t^{(i)}) \end{equation} \begin{algorithm}[H] Initialization: $i=1,...,n_X$ - $\#$(simulations), $t=1,...,N$ - $\#$(time steps) \\ Draw $\{X_0^{(i)}\}_{i=1}^{n_X}$ from the prior $p_0(x)$.\\ Set $\{\omega_0^{(i)}\}_{i=1}^{n_X}=\frac{1}{n_X}$ \For{$t=1,...,N$}{ \For{$i=1,...,n_X$}{ Propagate particles using the state equation $X_t^{(i)} = f(X_{t-1}^{(i)},Y_{t-1}^{(i)},\epsilon_t)$\; Measurement update: $Y_t = h(X_t^{(i)},Y_{t-1}^{(i)},\eta_t)$\; Initialize the pseudo-time $\lambda=0$\; Set $X_{t,\lambda}^{(i)}=X_{t|t-1}^{(i)}$\; \While{$\lambda < 1$}{ Compute the SCM $\widehat{S}_{n_X}$\; Calculate the estimate $\bar{X}_{t,\lambda}=\frac{1}{n_X}\sum_i X_{t,\lambda}^{(i)}$\; Compute the matrix $\widehat{H}=\frac{\partial h(X_{t,\lambda}^{(i)})}{\partial X_{t,\lambda}}$\; Update the time: $\lambda=\lambda+\Delta \lambda$\; Calculate the flow $\frac{dX^{(i)}_{t,\lambda}}{d\lambda}=-\left[\frac{\partial^2 \Psi(X^{(i)}_{t,\lambda})}{\partial X_{t,\lambda}^2}\right]^{-1} \left[\frac{\partial L(X^{(i)}_{t,\lambda})}{\partial X_{t,\lambda}} \right]^T$\; Transport the particles along the flow: $X_{t,\lambda}^{(i)}=X_{t,\lambda}^{(i)}+\Delta \lambda \frac{d X_{t,\lambda}^{(i)}}{d\lambda}$\; } Update the state estimate:\\ $\breve{X}_t=\frac{1}{n_X}\sum_{i=1}^{n_X} X_{t,\lambda}^{(i)}$ } } \caption{Homotopy Transport Algorithm} \end{algorithm}
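To illustrate the transportation stage, here is a minimal numerical sketch (ours) of the inner $\lambda$-loop of the algorithm above, for an assumed scalar state with measurement $y=h(x)+\mathcal{N}(0,R)$: the prior Hessian is replaced by the inverse sample covariance, and the likelihood Hessian is approximated by linearizing $h$, in the spirit of the Taylor expansion mentioned above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def h(x):  return x + 0.1 * x**3        # assumed mildly nonlinear sensor
def dh(x): return 1.0 + 0.3 * x**2

R, y = 0.5, 2.0
X = rng.normal(0.0, 2.0, size=1000)     # particles drawn from the prior

lam, dlam = 0.0, 0.05
while lam < 1.0:
    S = X.var()                         # SCM: d2G/dx2 is approx. -1/S
    H = dh(X)                           # linearized measurement matrix
    dL = H * (y - h(X)) / R             # dL/dx at each particle
    d2L = -H**2 / R                     # linearized d2L/dx2
    lam += dlam                         # update the pseudo-time
    d2Psi = -1.0 / S + lam * d2L        # Hessian of the log-homotopy
    X = X + dlam * (-dL / d2Psi)        # flow step: g = -(d2Psi)^(-1) dL/dx
print(X.mean(), X.var())                # the cloud concentrates near h(x) = y
\end{verbatim}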
\section{Homotopy Transport with Particle Reweighing} \hspace{3ex} Taking into account the difficulties one faces in non-Gaussian and high-dimensional problems, the idea of a particle transport that avoids sampling techniques altogether is very appealing. The next question that arises is whether we can direct the transportation by choosing those particles that have a higher probability of reaching rarely visited areas of the state space. We propose a mixture of homotopy particle transport with a particle reweighing at each time step. The numerical tests that we performed on the toy example of a Stein-Stein stochastic volatility model show that this significantly reduces the variance and the bias of our estimator. The algorithm consists of two steps: first we transport the particles according to their flow, and second we choose those particles that have a higher probability of exploring the state space faster. \begin{equation} \label{RN4.1} \mathbb{E}^{\widetilde{\mathbb{Q}}}[z(X_{t})\frac{d\mathbb{P}}{d\widetilde{\mathbb{Q}}}(X_{0:t})|\mathcal{F}_t]=\mathbb{E}^{\mathbb{P}}\left[z(\mathcal{T}(X_{t}))|\mathcal{F}_t\right] =\mathbb{E}^{\mathbb{Q}}\left[z(\mathcal{T}(X_{t}))\frac{d\mathbb{P}}{d\mathbb{Q}}(X_{0:t})| \mathcal{F}_t\right] \end{equation} where $\mathcal{T}$ is the flow of particles in the pseudo-time $\lambda$ described in Section \ref{alg1}. By setting $\textbf{X}_t=(X_0,...,X_t)$, we can express our Radon-Nikodym derivative in a product form: \begin{equation} \frac{d\mathbb{P}}{d\mathbb{Q}}(X_{0:t})=\frac{d\mathbb{P}}{d\widetilde{\mathbb{Q}}}\times \frac{d\widetilde{\mathbb{Q}}}{d\mathbb{Q}}(X_{0:t}) \end{equation} where the first Radon-Nikodym derivative corresponds to the transport of particles from the measure $\mathbb{P}$ to the measure $\widetilde{\mathbb{Q}}$; the second factor $\frac{d\widetilde{\mathbb{Q}}}{d\mathbb{Q}}$ allows us to reassess the weights of the particles and to choose those that have a high probability of reaching rare corners of the state space. As in Section 2, given an importance kernel $\widetilde{K}_t$ such that $K_t\ll \widetilde{K}_t$, the importance measure $\mathbb{Q}$ that plays the role of resampling, i.e. of choosing the trajectories with higher weights, can be defined as: \begin{equation} \mathbb{Q}(B)=\sum_{i=1}^{n_X} \omega^{(i)}_t \widetilde{K}_t(X^{(i)}_t,A_i), \ \ B\in\mathcal{B}(\mathbb{R}^{n_X}) \end{equation} where the set $A_i=\{\mathcal{T}(X_{t+1})\in\mathbb{R}^{n_X}|\mathbb{1}_{B}(X^{(i)}_t,\mathcal{T}(X_{t+1}))=1\}$. Assuming that the prior and sampling kernels $K_t$ and $\widetilde{K}_t$ have densities $k_t$ and $\widetilde{k}_t$ respectively, the Radon-Nikodym derivative is \begin{equation} \label{RN} \frac{d\widetilde{\mathbb{Q}}}{d\mathbb{Q}}(X_{0:t})=\prod_{l=0}^{t} \rho_{l}(\mathcal{T}(X_{l}),Y_{l})\frac{ \omega_{l-1}(X_{l-1})k_l(X_{l-1},\mathcal{T}(X_{l}))}{\omega_{l-1}(X_{l-1})\widetilde{k}_l(X_{l-1},\mathcal{T}(X_{l}))} \end{equation} where $\omega_t(X_t)=\omega_t^{(i)}(X_t^{(i)})$ if $X_t=X_t^{(i)}$, and $\omega_t(X_t)=1$ otherwise.
The unnormalized weight is then given by: \begin{equation} \omega_{t}^{(i)}(\mathcal{T}(X_{t}^{(i)}))=\prod_{l=1}^t\rho_{l}(\mathcal{T}(X_{l}^{(i)}),Y_{l})\frac{ k_l(X_{l-1},\mathcal{T}(X_{l}^{(i)}))}{\widetilde{k}_l(X_{l-1},\mathcal{T}(X_{l}^{(i)}))} \end{equation} We thus obtain the homotopy transport with particle reweighing estimator: \begin{equation} \widehat{C}^{TRW}=\frac{1}{n_X}\sum_{i=1}^{n_X} z(\mathcal{T}(X_t^{(i)})) \widehat{\omega}_{t-1}^{(i)}(\mathcal{T}(X_{t-1}^{(i)})) \end{equation} \subsection{PF-Enhanced Homotopy Transport Algorithm} \hspace{3ex} The algorithm can be described by the following scheme, $\forall i = 1,...,n_X$: \begin{equation} X_t^{(i)} \xrightarrow[]{Sampling} X_{t+1}^{(i)} \xrightarrow[]{Transportation} \mathcal{T}(X_{t+1}^{(i)})=\breve{X}_{t+1}^{(i)} \xrightarrow{Reweighing} \Phi(\breve{X}_{t+1}^{(i)})= \widehat{X}_{t+1}^{(i)} \end{equation} where $\Phi$ is an operator that denotes the resampling mechanism of the particles. If we assume that there is a continuous kernel $\widetilde{K}_t$, such that $K_t\ll \widetilde{K}_t$, with densities $k_t$ and $\widetilde{k}_t$ respectively, then we can define a weight function $\omega_{t}^{(i)}$: \begin{equation} \omega_{t}^{(i)}(\breve{X}_{t}^{(i)})=\prod_{l=1}^t\rho_{l}(\breve{X}_{l}^{(i)},Y_{l})\frac{ k_l(\widehat{X}_{l-1}^{(i)},\breve{X}_{l}^{(i)})}{\widetilde{k}_l(\widehat{X}_{l-1}^{(i)},\breve{X}_{l}^{(i)})} \end{equation} \subsubsection{Detailed Algorithm} \hspace{3ex} \textbf{Sampling from the prior.} As in Section \ref{alg1}, we start with $n_X$ particles sampled from the prior distribution $p_0$, initialize the pseudo-time $\lambda$ and set the state variables that will be transported: $X_{t,\lambda}^{(i)}=X_{t|t-1}^{(i)}$. \textbf{Transportation Stage.} Follow steps 6--8 of Algorithm 2 in Section \ref{alg1}. \textbf{Path Reweighing Stage.} Compute the normalized importance weight: \begin{equation} \widehat{\omega}_{t}^{(i)}(\breve{X}_{t}^{(i)})=\frac{\omega_{t}^{(i)}(\breve{X}^{(i)}_{t})}{\sum_{j=1}^{n_X} \omega_{t}^{(j)}(\breve{X}_{t}^{(j)})} \end{equation} \textbf{Maturity.} At the final time interval $]N-1,N]$ compute the homotopy transport reweighted estimator: \begin{equation} \widehat{C}^{TRW}=\frac{1}{n_X}\sum_{i=1}^{n_X} z(\widehat{X}_t^{(i)}) \widehat{\omega}_{t-1}^{(i)}(\breve{X}_{t-1}^{(i)}) \end{equation} \begin{algorithm}[H] Initialization: $i=1,...,n_X$ - $\#$(simulations), $t=1,...,N$ - $\#$(time steps) \\ Draw $\{X_0^{(i)}\}_{i=1}^{n_X}$ from the prior $p_0(x)$.\\ Set $\{\omega_0^{(i)}\}_{i=1}^{n_X}=\frac{1}{n_X}$ \For{$t=1,...,N$}{ \For{$i=1,...,n_X$}{ Follow steps 6--8 of Algorithm 2 in Section \ref{alg1}\; Follow steps 7--12 of Algorithm 1 in Section \ref{alg0}\; } } \caption{Homotopy Transport with Particle Reweighing Algorithm} \end{algorithm} \section{Numerical Applications and Results} \hspace{3ex} As a toy example, we test the algorithms on the Stein-Stein stochastic volatility model. Setting the log-returns as $Y_t=\log(S_t)$, the model takes the following form: \begin{equation} \left\{\begin{array}{c} dY_t=(\mu-\frac{X_t^2}{2})dt+X_tdB_t\\ dX_t=\kappa(\theta-X_t)dt+\sigma dW_t \end{array} \right. \end{equation} where $X_t$ is the volatility process, $Y_t$ the log-return process, $\mu$ the drift, $\theta$ the long-run mean of the volatility, $\kappa$ the rate of mean reversion, $\sigma$ the volatility of volatility, and $B_t$ and $W_t$ two independent Brownian motions, in the sense that $\langle dB_t,dW_t\rangle=0$.
Using the stochastic volatility model presented above, we would like to compute estimates of a European option price. For a given interest rate $r$, maturity $T$, strike price $K$, and payoff function $z(\cdot,x)=\max(x-K,0)$, the call price of the option is given by: \begin{equation} C(X_t,Y_t)=B_{t,T}\mathbb{E}^{\mathbb{P}}\left[z(X_T,Y_T)| \mathcal{F}_t\right] \end{equation} where $B_{t,T}$ is the discount factor and $\mathcal{F}_t=\sigma\{(Y_0,...,Y_t)\}$. We chose the Euler--Maruyama discretization scheme, which gives: \begin{equation} \left\{\begin{array}{c} Y_t-Y_{t-1}=(\mu-\frac{X_{t-1}^2}{2})\Delta t+X_{t-1} \sqrt{\Delta t} \epsilon_t\\ X_t-X_{t-1}=\kappa(\theta-X_{t-1})\Delta t+\sigma \sqrt{\Delta t} \eta_t \end{array} \right. \end{equation} where $\Delta t$ is the discretization step and $\epsilon_t$, $\eta_t$ are independent standard Gaussian variates, $\mathcal{N}(0,1)$. We compare the approaches by estimating the standard deviation, the root mean squared error (RMSE), the bias, the relative root mean squared error (RRMSE), the time required to compute each estimate, and the figure of merit (FOM). We run $M_s=20$ Monte Carlo experiments. For $l=1,...,M_s$ the RMSE estimator is given by: \begin{equation} RMSE = \sqrt{\frac{1}{M_s}\sum_{l=1}^{M_s} ||C-\widehat{C}_{l}||^2} \end{equation} where $C$ is the price computed analytically and $\widehat{C}_{l}$ are the Monte Carlo estimates. As a reference price we used the analytical price from the article of Stein and Stein \cite{SS1991}. The bias is computed as \begin{equation} Bias = \sqrt{RMSE^2 - St.dev^2} \end{equation} where $St.dev$ is the standard deviation of the MC estimates. The RRMSE is computed using the following formula: \begin{equation} RRMSE = \frac{RMSE}{\widehat{C}} \end{equation} To measure the efficiency of each method presented in the article, we use the figure of merit (FOM) \cite{RT2009}: \begin{equation} FOM = \frac{1}{R^2\times CPU_t} \end{equation} where $CPU_t$ is the CPU time needed to compute the estimator and $R$ is the relative error, a measure of statistical precision: \begin{equation} R = \frac{St. dev}{\bar{C}} \propto \frac{1}{\sqrt{M}} \end{equation} where $\bar{C}=\frac{1}{M_s}\sum_{l=1}^{M_s} \widehat{C}_{l}$. We used 20\,000 and 40\,000 simulations over 64 time intervals for our MC experiments. Table 1 shows that the homotopy and the reweighted (RW) homotopy algorithms have smaller statistical errors than the traditional PF. Comparing homotopy with RW-homotopy, the FOM indicates that the former is more efficient than the latter, since reweighting the paths takes additional time; on the other hand, RW-homotopy shows smaller errors and standard deviations.
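For reference, here is a minimal sketch (ours) of the plain Monte Carlo column of the comparison below: simulate the discretized dynamics and average the discounted payoffs. We take the drift equal to the interest rate, $\mu=r$ (a risk-neutral pricing assumption on our part), and the parameters from the captions of the tables.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
S0, K, r = 100.0, 90.0, 0.0953
sigma, kappa, theta, V0, T = 0.2, 4.0, 0.25, 0.25, 0.5
n_paths, n_steps = 20000, 64
dt = T / n_steps

Y = np.full(n_paths, np.log(S0))   # log-price
X = np.full(n_paths, V0)           # volatility
for _ in range(n_steps):
    eps = rng.standard_normal(n_paths)
    eta = rng.standard_normal(n_paths)    # independent of eps
    Y += (r - 0.5 * X**2) * dt + X * np.sqrt(dt) * eps
    X += kappa * (theta - X) * dt + sigma * np.sqrt(dt) * eta

payoff = np.maximum(np.exp(Y) - K, 0.0)
print(np.exp(-r * T) * payoff.mean())     # near the reference price 16.05
\end{verbatim}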
\begin{table}[H] \caption{Stein-Stein stochastic volatility option price estimates statistics. $S_0=100, \ K=90, \ r=0.0953, \ \sigma = 0.2, \ \kappa=4, \ \theta=0.25, \ V_0=0.25, \ T=1/2$, dividends $d=0$. True price: $16.05$; $20\,000$ simulations, $M=64$ time intervals.} \centering \begin{tabular}{|c|c|c|c|c|} \hline Stat & MC & PF & Homotopy & RW-Homotopy \\ \hline St. dev. & 0.127495344 & 0.106264197 & 0.102775848 & 0.08360908\\ \hline RMSE & 0.148073563 & 0.115032508 & 0.105302932 & 0.084510606\\ \hline Bias & 0.075304165 & 0.044049955 & 0.022931037 & 0.012311146\\ \hline RRMSE & 0.00137298 & 0.000827032 & 0.000827032 & 0.000444367 \\ \hline CPU time & 0.1327525 & 0.31177 & 0.179 & 0.38819 \\ \hline FOM & 118181.69 & 72715.84 & 135692.61 & 95193.97 \\ \hline \end{tabular} \end{table} \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{PF_vs_Hom_HomPF.png} \caption{Volatility dynamics: PF (blue), Homotopy (red), RW-Homotopy (yellow)} \end{figure} \begin{figure}[H] \includegraphics[width=0.49\linewidth]{Hom_zoom.png} \includegraphics[width=0.49\linewidth]{PFHom.png} \caption{Zoomed volatility dynamics: Homotopy (left), RW-Homotopy (right)} \end{figure} \begin{table}[H] \caption{Stein-Stein stochastic volatility option price estimates statistics. $S_0=100, \ K=90, \ r=0.0953, \ \sigma = 0.2, \ \kappa=4, \ \theta=0.25, \ V_0=0.25, \ T=1/2$, dividends $d=0$. True price: $16.05$; $40\,000$ simulations, $M=64$ time intervals.} \centering \begin{tabular}{|c|c|c|c|c|} \hline Stat & MC & PF & Homotopy & RW-Homotopy \\ \hline St. dev. & 0.070351719 & 0.060799052 & 0.048943672 & 0.045246118\\ \hline RMSE & 0.130446299 & 0.079273246 & 0.04921257 & 0.045762201 \\ \hline Bias & 0.109849318 &0.050869665 & 0.005137504 & 0.006853309\\ \hline RRMSE & 0.001067583 & 0.000392831 & 0.00015101 & 0.000130578\\ \hline CPU time & 0.278895 & 0.54737 & 0.26618 & 0.581495\\ \hline FOM & 184049.069 & 126479.8136 & 403391.758 & 216062.7397\\ \hline \end{tabular} \end{table} Note that the plain Monte Carlo estimate shows a higher FOM than PF, simply because the Monte Carlo estimator takes less time to compute, whereas PF has a lower RMSE and bias. \section{Conclusions and Further Research} \hspace{3ex} The estimation of latent variables has many applications in engineering and finance. We provided a homotopy based algorithm, and an extension of it with reweighted trajectories, that solve the optimal transportation problem. Numerical experiments on European option pricing under stochastic volatility demonstrated the efficiency of the proposed algorithms with respect to the error, the bias and the other statistics considered. Both algorithms outperformed particle filtering. The path reweighing reduced the standard deviations, and in some cases the bias and the RMSE, compared to the homotopy transport algorithm. From our experiments we can observe the following: \begin{itemize} \item Homotopy transport is a fast algorithm, which is strikingly demonstrated by the figure of merit statistics. \item The efficiency of the homotopy transport algorithm increases as the number of particles increases. \item The implementation of homotopy transport requires less effort than a vanilla Monte Carlo algorithm. \item Homotopy transport proved to be an unbiased estimator. \item Homotopy with path reweighing proved to reduce the bias, compared to homotopy transport without reweighing, when the number of particles is small. \end{itemize} While the reweighted homotopy transport approach showed a reduced RMSE and bias in low dimensions, for the mixture of homotopy transport and bootstrap resampling the maximal importance weight can still converge to unity (weight collapse) in high-dimensional problems (\cite{SP2015}). We plan to investigate this issue in a follow-up article. It will also be interesting to test homotopy transport on non-Gaussian examples.
\section{Introduction} \subsection{Aim of the paper} In a series of papers starting with \cite{char} an arithmetic analogue of the concept of (not necessarily linear) ordinary differential equation was developed; cf. the monographs \cite{book} and \cite{foundations} for an exposition of (and references for) this theory and \cite{pj, dmf, local, fourier} for some purely arithmetic applications. (There is a version for partial differential equations \cite{laplace, pde1, pde2, foundations} which we will not discuss here.) The rough idea behind this theory is to replace differentiation operators $$y\mapsto \delta_x y:=\frac{dy}{dx},$$ acting on smooth functions $y=y(x)$ in one variable $x$, by {\it Fermat quotient operators} with respect to an odd rational prime $p$, $$z\mapsto \delta_p z:=\frac{\phi_p(z)-z^p}{p},$$ acting on a complete discrete valuation ring $A$ with maximal ideal generated by $p$ and perfect residue field; here we denoted by $\phi_p$ the unique Frobenius lift on $A$, i.e. the unique ring endomorphism of $A$ whose reduction mod $p$ is the $p$-power Frobenius. Then classical differential equations, $$F(y, \delta_x y,...,\delta_x^n y)=0,$$ where $F$ is a smooth function, are replaced by {\it arithmetic differential equations}, $$F(z,\delta_p z,...,\delta_p^n z)=0,$$ where $F$ is a $p$-adic limit of polynomials with coefficients in $A$. More generally one can consider systems of such equations. One would then like to introduce arithmetic analogues of some remarkable classical ordinary differential equations such as: \medskip \noindent {\it Linear differential equations, Riccati equations, Weierstrass equation satisfied by elliptic functions, Painlev\'{e} equations, Schwarzian equations satisfied by modular forms, Ramanujan differential equations satisfied by Eisenstein series, Euler equations for the rigid body, Lax equations, etc.} \medskip \noindent A first temptation, in developing the theory, would be to consider the function(s) $F$ in each of these classical equations and formally replace, in the corresponding equation, the quantities $\delta_x^i y$ by the quantities $\delta_p^i z$. This strategy of preserving $F$, i.e., the {\it shape of the equations}, turns out to destroy the underlying {\it geometry of the equations} and, therefore, seems to lead to a dead end. Instead, what turns out to lead to an interesting theory (and to interesting applications) is to discover how to change the shape of the equations (i.e. change $F$) so that the geometry of the equations is (in some sense) preserved. The first step is then to ``geometrize" the situation by introducing {\it arithmetic jet spaces} \cite{char}, which are arithmetic analogues of the classical jet spaces of Lie and Cartan, and to seek a conceptual approach towards arithmetic analogues of the classical equations we just mentioned; this has been done, for the equations listed above, in the series of papers and monographs cited at the beginning of this paper. In achieving this one encounters various obstacles and surprises.
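To get a concrete feel for $\delta_p$, here is a small script (ours) over the toy ring ${\mathbb Z}$ (which is $p$-torsion free, though of course not a complete discrete valuation ring): the identity is the Frobenius lift, so $\delta_p z=(z-z^p)/p$ is the classical Fermat quotient, integral by Fermat's little theorem; the script also checks the sum and product identities that make $\phi(z)=z^p+p\,\delta_p z$ a ring homomorphism.
\begin{verbatim}
from math import comb

p = 7  # an odd prime

def delta(z):
    # Fermat quotient on Z: phi = identity, so delta(z) = (z - z**p)/p
    q, rem = divmod(z - z**p, p)
    assert rem == 0          # integrality = Fermat's little theorem
    return q

for a, b in [(3, 5), (-4, 11), (10, 10)]:
    assert a**p + p * delta(a) == a   # phi(a) = a^p + p*delta(a) = a on Z
    assert delta(a * b) == a**p * delta(b) + b**p * delta(a) \
        + p * delta(a) * delta(b)
    assert delta(a + b) == delta(a) + delta(b) - sum(
        comb(p, k) * a**k * b**(p - k) for k in range(1, p)) // p
print("p-derivation identities verified for p =", p)
\end{verbatim}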
For instance, the question, {\it ``What is the arithmetic analogue of linear differential equations?"} is already rather subtle; indeed, linearity of arithmetic differential equations turns out to be not an absolute but, rather, a relative concept; more precisely there is no concept of linearity for {\it one} arithmetic differential equation but there is a concept of an arithmetic differential equation being {\it linear} with respect to another arithmetic differential equation. We refer to \cite{foundations, gln2, gln3} for the problem of linearity. The question we would like to address in this paper is a different one (although a related one), namely: \medskip {\it ``What is the arithmetic analogue of a Hamiltonian system?"} \medskip \noindent A number of remarkable classical differential equations admit Hamiltonian structures; this is the case, for instance, with the following 3 examples: Painlev\'{e} VI equations, Euler equations, and (certain special) Lax equations. Arithmetic analogues of these 3 types of equations have been developed, in 3 separate frameworks, in a series of papers as follows: the Painlev\'{e} case in \cite{BYM}; the Euler case in \cite{euler, canonical}; and the Lax case in \cite{gln2}, respectively. Cf. also \cite{foundations}. One is tempted to believe that these 3 examples are pieces of a larger puzzle. The aim of the present paper is to review these 3 examples and attempt to give hints as to a possible unification of these 3 pictures by proving some new facts (and providing some new comments) that connect some of the dots. The task of setting up a general framework (and a more general array of examples) for an arithmetic Hamiltonian formalism is still elusive; we hope that the present paper contains clues as to what this general formalism could be. \subsection{Structure of the paper} Each of the following sections contains $2$ subsections. In each section the first of the subsections offers a treatment of the classical differential setting while the second subsection offers a treatment of the arithmetic differential setting. Section 2 of the paper is devoted to an exposition of the main general concepts. Sections 3, 4, 5 are devoted to the main examples under consideration: the Painlev\'{e} equations, the Euler equations, and the Lax equations respectively. The main new results of the paper are Theorems \ref{new1} and \ref{new2} which hint towards a link between the formalisms in the Painlev\'{e} and Euler examples. \subsection{Acknowledgment} The present work was partially supported by the Institut des Hautes \'{E}tudes Scientifiques in Bures sur Yvette, and by the Simons Foundation (award 311773). The author would also like to express his debt to Emma Previato for an inspiring collaboration and a continuing interaction. \section{General concepts} \subsection{The classical case} We begin by reviewing some of the main concepts in the theory of classical (ordinary) differential equations. We are interested in the purely algebraic aspects of the theory so we will place ourselves in the context of differential algebra \cite{Kolchin}; our exposition follows \cite{ajm93, foundations, BYM}. Let us start with a ring $A$ equipped with a derivation $\delta^A:A\rightarrow A$ (i.e., an additive map satisfying the Leibniz rule). For simplicity we assume $A$ is Noetherian. Let $X$ be a scheme of finite type over $A$. One defines the {\it jet spaces} $J^n(X)$ of $X$ (over $A$) as follows.
If $X$ is affine, \begin{equation} \label{X} X=Spec\ A[x]/(f),\end{equation} with $x$ an $N$-tuple of indeterminates and $f$ a tuple of polynomials, then one sets \begin{equation} \label{J} J^n(X):=Spec\ A[x,x',...,x^{(n)}]/(f,\delta f, ... ,\delta^nf)\end{equation} where $x',...,x^{(n)}$ are new $N$-tuples of indeterminates and $\delta=\delta^{\text{univ}}$ is the unique (``universal") derivation on the ring of polynomials in infinitely many indeterminates, $$A[x,x',...,x^{(n)},...],$$ extending $\delta^A$, and satisfying \begin{equation} \delta x=x',...,\delta x^{(n)}=x^{(n+1)},... \end{equation} One gets induced derivations $$\delta=\delta^{\text{univ}}:\mathcal O(J^n(X))\rightarrow \mathcal O(J^{n+1}(X)).$$ If $X$ is not necessarily affine then one defines $J^n(X)$ by gluing $J^n(X_i)$ where $X=\cup X_i$ is a Zariski affine open cover. The family $(J^n(X))_{n\geq 0}$ has the structure of a projective system of schemes depending functorially on $X$, with $J^0(X)=X$. If $X$ is smooth and descends to (a scheme over) {\it the ring of $\delta$-constants} of $A$, $$A^{\delta}=\{a\in A;\ \delta a=0\},$$ then $J^1(X)$ identifies with the (total space of the) tangent bundle $T(X)$ of $X$; if we drop the condition that $X$ descend to $A^{\delta}$ then $J^1(X)$ is only a torsor under $T(X)$. If $G$ is a group scheme over $A$ then $(J^n(G))_{n\geq 0}$ forms a projective system of group schemes over $A$; if $A$ is a field of characteristic zero, say, the kernels of the homomorphisms $J^n(G)\rightarrow J^{n-1}(G)$ are isomorphic as algebraic groups to powers of the additive group scheme ${\mathbb G}_a$. By a {\it differential equation} on $X$ we understand a closed subscheme of some $J^n(X)$. By a $\delta^A$-{\it flow} on $X$ we will understand a derivation $\delta^X$ on the structure sheaf of $X$, extending $\delta^A$; giving a $\delta^A$-flow is equivalent to giving a section of the canonical projection $J^1(X)\rightarrow X$, and hence, to giving a differential equation $Z\subset J^1(X)$ for which the projection $Z\rightarrow X$ is an isomorphism. A {\it prime integral} for a $\delta^A$-flow $\delta^X$ is a function $H\in \mathcal O(X)$ such that $\delta^XH=0$. For any $A$-point $$P\in X(A),\ \ \ P:Spec\ A \rightarrow X,$$ one defines the jets $$J^n(P)\in J^n(X)(A),\ \ \ J^n(P):Spec\ A\rightarrow J^n(X),$$ as the unique morphisms lifting $P$ that are compatible with the actions of $\delta^{\text{univ}}$ and $\delta^A$. A {\it solution} (in $A$) for a differential equation $Z\subset J^n(X)$ is an $A$-point $P\in X(A)$ such that $J^n(P)$ factors through $Z$. If $P$ is a solution to the differential equation defined by a $\delta^A$-flow and if $H$ is a prime integral for that $\delta^A$-flow then $\delta^A(H(P))=0$; intuitively $H$ is ``constant" along any solution. If $X$ is affine and $Z\subset J^1(X)$ is the differential equation corresponding to a $\delta^A$-flow $\delta^X$ on $X$ then a point $P\in X(A)$ is a solution to $Z$ if and only if the ring homomorphism $P^*:\mathcal O(X)\rightarrow A$ defined by $P$ satisfies $\delta^A\circ P^*=P^*\circ \delta^X$. For any smooth affine scheme $Spec\ B$ over $A$ we may consider the algebraic deRham complex of abelian groups, \begin{equation} B\stackrel{d}{\longrightarrow} \Omega^1_{B/A} \stackrel{d}{\longrightarrow} \Omega^2_{B/A}\stackrel{d}{\longrightarrow} ..., \end{equation} where $\Omega^i_{B/A}:=\wedge^i \Omega_{B/A}$, $\Omega_{B/A}$ the (locally free) $B$-module of K\"{a}hler differentials. 
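As a concrete illustration of this construction, the following small script (ours, using sympy) computes the equations cutting out $J^1(X)$ and $J^2(X)$ for a Weierstrass curve whose coefficients are $\delta$-constants: one simply applies the universal derivation with $\delta x=x'$, $\delta x'=x''$, and so on.
\begin{verbatim}
import sympy as sp

# coordinates of X and its jets; a, b are delta-constants in A
x, y, x1, y1, x2, y2, a, b = sp.symbols("x y x1 y1 x2 y2 a b")
prolong = {x: x1, y: y1, x1: x2, y1: y2}   # delta^univ on coordinates

def delta(F):
    # universal derivation, via the chain rule; delta(a) = delta(b) = 0
    return sum(sp.diff(F, v) * prolong[v] for v in prolong if F.has(v))

f = y**2 - x**3 - a*x - b                  # X: an affine Weierstrass curve
print(delta(f))                            # J^1: 2*y*y1 - (3*x**2 + a)*x1
print(sp.expand(delta(delta(f))))          # the extra equation for J^2
\end{verbatim}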
Recall that for any $A$-algebra $B$ a {\it Poisson structure/bracket} on $B$ (or on $Spec\ B$) is an $A$-bilinear map $$\{\ ,\ \}:B\times B\rightarrow B$$ which is a derivation in each of the arguments and defines a structure of Lie $A$-algebra on $B$. In what follows we would like to review the classical concept of {\it Hamiltonian} system/equation from a purely algebro-geometric viewpoint; later we will introduce its arithmetic analogues. We will restrict ourselves to the case of affine surfaces since our main examples fit into this setting. Let $S=Spec \ B$ be a smooth affine surface (i.e. smooth scheme of relative dimension $2$) over a Noetherian ring $A$. By a {\it symplectic form} we will understand a basis $\eta$ of the $B$-module $\Omega^2_{B/A}$. (The usual condition that this form be closed is automatically satisfied because $S$ is a surface.) By a {\it contact form} we will understand an element $\nu\in \Omega^1_{B/A}$ such that $d\nu$ is a symplectic form. Given a symplectic form $\eta$ on $S=Spec\ B$ one can define a Poisson structure on $S$ by the formula: $$\{ \ ,\ \}_{\eta}:B\times B\rightarrow B,\ \ \ \{ f, g\}_{\eta}:=\frac{df \wedge dg}{\eta}.$$ Assume now that we are given a derivation $\delta^A:A\rightarrow A$. Recall that by a {\it $\delta^A$-flow} on $S$ we mean a derivation $\delta:=\delta^B:B\rightarrow B$ extending our derivation $\delta^A:A\rightarrow A$. Recall that $\delta^B$ then induces unique additive maps, referred to as {\it Lie derivatives}, $$\delta=\delta^{\Omega^i}:\Omega^i_{B/A}\rightarrow \Omega^i_{B/A},\ \ \ i=0,1,2,$$ such that $\delta^{\Omega^0}=\delta^B$, the maps $\delta$ commute with $d$, and $\delta$ induces a derivation on the exterior algebra $\wedge \Omega_{B/A}$. Recall that a function $H\in B$ is called a {\it prime integral} for $\delta^B$ (or a $\delta^B$-{\it constant}) if $\delta^BH=0$. Let us say that a $\delta^A$-flow $\delta=\delta^B$ on $S$ is {\it Hamiltonian} (or, more accurately, {\it symplectic-Hamiltonian}) with respect to a symplectic form $\eta$ on $S$ if \begin{equation} \label{Lieder} \delta\eta=0.\end{equation} Let us say that a $\delta^A$-flow $\delta=\delta^B$ on $S$ is {\it Hamiltonian} (or, more accurately, {\it Poisson-Hamiltonian}) with respect to a Poisson structure $\{\ ,\ \}$ on $B=\mathcal O(S)$ if $S$ descends to a smooth scheme $S_0=Spec\ B_0$ over $A_0:=A^{\delta}$, with $\{B_0,B_0\}\subset B_0$, and there exists a function (called a {\it Hamiltonian}) $H\in B_0$ such that \begin{equation} \label{poisson} \delta f=\{f,H\},\ \ \text{for all}\ \ f\in B_0.\end{equation} A direct computation with \'{e}tale coordinates shows that if a $\delta^A$-flow $\delta=\delta^B$ on $S$ is {\it Hamiltonian} with respect to the Poisson structure $\{\ ,\ \}_{\eta}$ on $B$ attached to a symplectic form $\eta$ on $S$ that comes from $S_0$ then $\delta$ is Hamiltonian with respect to $\eta$; moreover, if $H$ is a Hamiltonian then, trivially, $H$ is a prime integral for $\delta$. As we shall see, the examples of the Painlev\'{e} and Euler equations are symplectic-Hamiltonian but {\it not} Poisson-Hamiltonian, simply because the surfaces $S$ on which these equations ``live" do not descend to surfaces $S_0$ over the constants. A large class of examples coming from Lax equations are Poisson-Hamiltonian. Both symplectic-Hamiltonian and Poisson-Hamiltonian equations have arithmetic analogues.
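For intuition, here is a small symbolic check (ours, sympy) on the model surface $S_0={\mathbb A}^2=\operatorname{Spec} A_0[x,y]$ over the constants, with $\eta=dx\wedge dy$, so that $\{f,g\}_{\eta}=f_xg_y-f_yg_x$: for a sample Hamiltonian $H$, the flow $\delta f=\{f,H\}$ has $H$ as a prime integral and preserves $\eta$ (the divergence of the Hamiltonian vector field vanishes), illustrating that Poisson-Hamiltonian implies symplectic-Hamiltonian in this situation.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols("x y")
H = x**2 * y + y**3                    # a sample Hamiltonian in A0[x, y]

def bracket(f, g):
    # {f, g} = (df wedge dg) / (dx wedge dy) = f_x g_y - f_y g_x
    return sp.diff(f, x) * sp.diff(g, y) - sp.diff(f, y) * sp.diff(g, x)

dx_flow, dy_flow = bracket(x, H), bracket(y, H)   # delta(x), delta(y)

print(bracket(H, H))                   # 0: H is a prime integral
print(sp.simplify(sp.diff(dx_flow, x) + sp.diff(dy_flow, y)))  # 0: delta(eta)=0
\end{verbatim}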
The discussion above has, of course, a higher dimensional analogue in which $S$ is a smooth affine scheme of arbitrary dimension; the Poisson-Hamiltonian picture is valid word for word; the symplectic-Hamiltonian picture has to be modified by asking that $S$ have relative dimension $2d$, $\eta$ be a closed $2$-form, and $\eta^d$ be invertible. Going back to the case when $S$ is a surface, let $\nu$ be a contact form, let $\delta=\delta^S$ be a $\delta^A$-flow on $S$, consider the symplectic form $\eta:=d\nu$, and define the {\it Euler-Lagrange} form $$\epsilon:=\delta\nu\in \Omega^1_{B/A}.$$ Since $d$ and $\delta$ commute we have that $\delta^S$ is Hamiltonian with respect to $\eta$ if and only if $\epsilon$ is closed, i.e. $d\epsilon=0$. If in addition $\epsilon$ is exact, i.e. $\epsilon=d{\mathcal L}$ for some ${\mathcal L}\in \mathcal O(S)$, we call ${\mathcal L}$ a {\it Lagrangian} for $(\nu,\delta^S)$. A special case that plays a role in the theory is that in which our surface $S$ is the first jet space of a smooth curve $Y$ over $A$, $$S=J^1(Y).$$ In this case a $1$-form $\nu$ on $S$ is called {\it canonical} if $\nu=f\beta$ where $f\in B=\mathcal O(S)$ and $\beta$ is a pull-back of a $1$-form on $Y$. Assume $\nu$ is a canonical contact form, assume $\delta=\delta^S$ is Hamiltonian with respect to $\eta:=d\nu$, assume $\epsilon$ is exact with Lagrangian ${\mathcal L}$, and assume $x\in \mathcal O(Y)$ is an \'{e}tale coordinate on $Y$. Assume in addition that $x,\delta x$ are \'{e}tale coordinates on $S$ (which is ``generically the case" and is automatic, for instance, for the {\it canonical} $\delta^A$-flows to be introduced below). Then there are unique $A$-derivations $\frac{\partial}{\partial x}, \frac{\partial}{\partial \delta x}$ on $\mathcal O(S)$ sending $x,\delta x$ into $1,0$ and $0,1$ respectively. It is then trivial to check that $\delta \left( \frac{\partial {\mathcal L}}{\partial \delta x}\right)=\frac{\partial {\mathcal L}}{\partial x}$ in $\mathcal O(S)$. In particular if $Z\subset J^1(S)$ is the differential equation corresponding to the $\delta^A$-flow $\delta^S$ on $S$ then any solution in $S(A)$ to $Z$ is a solution to the {\it Euler-Lagrange equation} $EL(Z) \subset J^1(S)$ defined by $$\delta^{\text{univ}} \left( \frac{\partial {\mathcal L}}{\partial \delta x}\right)-\frac{\partial {\mathcal L}}{\partial x}\in \mathcal O(J^1(S)).$$ Contact forms $\nu$ that are canonical should be viewed as generalizing the {\it canonical contact forms} on cotangent bundles in differential geometry (and classical mechanics); also our Lagrangians and Euler-Lagrange equation correspond, formally, to the Lagrangians and Euler-Lagrange equation in classical mechanics. Note however the following discrepancy with the usual definition in differential geometry: our $J^1(Y)$ is related to (is a torsor under) the {\it tangent} bundle while in classical differential geometry {\it canonical forms} live on the {\it cotangent} bundle. This discrepancy is resolved, in usual differential geometry, by identifying the tangent and the cotangent bundle via $d\nu$; in our setting (when $J^1(Y)$ is not a trivial torsor) no such identification is available. By the way, as we shall explain, it is the definition of {\it canonical contact form} that we just gave above (and not the usual definition in differential geometry) that will have an arithmetic analogue. 
Finally, we make the following definition: a {\it canonical $\delta^A$-flow} on $S=J^1(Y)$ is a $\delta^A$-flow $\delta^{J^1(Y)}$ on $J^1(Y)$ with the property that the composition of $$\delta^{J^1(Y)}:\mathcal O(J^1(Y))\to \mathcal O(J^1(Y))$$ with the pull back map $$\mathcal O(Y)\to \mathcal O(J^1(Y))$$ equals the universal derivation $$\delta^{\text{univ}}:\mathcal O(Y)\to \mathcal O(J^1(Y)).$$ By the way, notice that one has a natural closed embedding $$\iota:J^2(Y)\to J^1(J^1(Y)).$$ Then one checks that a $\delta^A$-flow $\delta^{J^1(Y)}$ on $J^1(Y)$ is canonical if and only if the section $J^1(Y)\rightarrow J^1(J^1(Y))$ defined by $\delta^{J^1(Y)}$ factors through $\iota$. Also notice that if $x\in \mathcal O(Y)$ is an \'{e}tale coordinate and $\delta=\delta^{J^1(Y)}$ is a canonical flow then $x,\delta x$ are \'{e}tale coordinates on $S$. The concept of canonical $\delta^A$-flow is an algebraic version of a classical concept related to second order ODEs (for instance Painlev\'{e} equations) and has an arithmetic analogue. \subsection{The arithmetic case} Let $p$ be a rational odd prime. If $B$ is a ring a {\it Frobenius lift} on $B$ is a ring endomorphism $\phi=\phi^B:B\rightarrow B$ whose reduction mod $p$ is the $p$-power Frobenius on $B/pB$. Similarly if $X$ is a scheme or a $p$-adic formal scheme a {\it Frobenius lift} on $X$ is an endomorphism $\phi=\phi^X:X\rightarrow X$ whose reduction mod $p$ is the $p$-power Frobenius on the reduction of $X$ mod $p$. Let $A$ be a complete discrete valuation ring with maximal ideal generated by $p$ and perfect residue field $k=A/pA$; we fix this $A$ once and for all in the discussion below. Such an $A$ is uniquely determined up to isomorphism by $k$ and possesses a unique Frobenius lift $\phi=\phi^A:A\rightarrow A$. For any $A$-algebra $B$ and any scheme or $p$-adic formal scheme $X$ over $A$ Frobenius lifts on $B$ or $X$ will be tacitly assumed to be compatible with the Frobenius lift on $A$. For any Noetherian $A$-algebra $B$ and Noetherian scheme $X$ over $A$ we denote by $\widehat{B}$ and $\widehat{X}$ the $p$-adic completions of $B$ and $X$ respectively. We also define the K\"{a}hler differentials on the formal scheme $\widehat{X}$ by \begin{equation} \label{maxim} \Omega_{\widehat{X}}=\lim_{\leftarrow} \Omega_{X_n/A_n}\end{equation} where $A_n=A/p^nA$, $X_n=X\otimes A_n$. If $X$ is smooth over $A$ and $\phi$ is a Frobenius lift on $\widehat{X}$ then $\phi$ naturally induces additive maps \begin{equation} \label{phipep} \frac{\phi^*}{p^i}:\Omega^i_{\widehat{X}}\rightarrow \Omega^i_{\widehat{X}},\end{equation} where $\Omega^i_{\widehat{X}}:=\wedge^i \Omega_{\widehat{X}}$. Given a ring $B$ which is $p$-torsion free (i.e., $p$ is a non-zero divisor in $B$) a map of sets $$\delta=\delta^B:B\rightarrow B$$ will be called a $p$-{\it derivation} if the map $$\phi=\phi^B:B\rightarrow B, \ \ \ \phi(b):=b^p+p\delta b$$ is a ring homomorphism (equivalently a Frobenius lift); we say that $\delta$ and $\phi$ are {\it attached} to each other. We view $p$-derivations as arithmetic analogues of derivations; cf. \cite{char, Jo}. Then we view \ref{phipep} as analogues of Lie derivatives with respect to $p$-derivations. 
Similarly, if $X$ is a $p$-adic formal scheme over $A$, a $p$-{\it derivation} on $X$ (or an {\it arithmetic $\delta^A$-flow} on $X$) is a map of sheaves of sets $\delta=\delta^X:\mathcal O_X \rightarrow \mathcal O_X$ such that the map of sheaves of sets $\phi=\phi^X:\mathcal O_X\rightarrow \mathcal O_X$, $\phi(b)=b^p+p\delta b$, is a map of sheaves of rings (and hence induces a Frobenius lift $\phi=\phi^X:X\rightarrow X$). We again say that $\delta$ and $\phi$ are {\it attached} to each other. As we will see later the above concept of {\it arithmetic $\delta^A$-flow} is not flexible enough to accommodate some of the interesting examples of the theory; in the case of the Painlev\'{e} equations we will need a generalization of the concept of {\it arithmetic $\delta^A$-flow} which will be referred to as {\it generalized arithmetic $\delta^A$-flow}. Let $\delta$ be a $p$-derivation on some $p$-adically complete $p$-torsion free ring $B$. An element $c\in B$ is called a {\it $\delta$-constant} if $\delta c=0$. The set $B^{\delta}\subset B$ of $\delta$-constants is a multiplicative submonoid (but not a subring) of $B$. Let ${\mathbb Z}[B^{\delta}]$ be the subring of $B$ generated by $B^{\delta}$ and let $\Sigma$ be the multiplicative system $\Sigma:=B^{\times}\cap {\mathbb Z}[B^{\delta}]$. An element of $B$ is called a {\it pseudo-$\delta$-constant} if it is a $p$-adic limit in $B$ of elements in the ring of fractions $\Sigma^{-1}{\mathbb Z}[B^{\delta}]$. So the set of pseudo-$\delta$-constants in $B$ is a subring of $B$. One can easily check that if $B/pB$ is perfect (i.e., the $p$-power Frobenius on $B/pB$ is surjective) then any element in $B$ is congruent mod $p$ to an element of $B^{\delta}$ and, consequently, any element of $B$ is pseudo-$\delta$-constant; in particular any element of $A$ is a pseudo-$\delta$-constant. Conversely, if an element $b\in B$ is congruent mod $p$ to an element in $B^{\delta}$ then $\delta b$ is congruent mod $p$ to a $p$-th power in $B$. One can introduce arithmetic analogues of jet spaces as follows; cf. \cite{char}. Let $X$ be a scheme of finite type over $A$ or the $p$-adic completion of such a scheme. Say first that $X$ is affine, $$X=Spec\ A[x]/(f)\ \ \text{or}\ \ \ X=Spf\ A[x]^{\widehat{\ }}/(f),$$ with $x$ and $f$ tuples. Then define the $p$-{\it jet spaces} of $X$ to be the $p$-adic formal schemes \begin{equation} \label{J} J^n(X):=Spf\ A[x,x',...,x^{(n)}]^{\widehat{\ }}/(f,\delta f, ... ,\delta^nf)\end{equation} where $x',...,x^{(n)}$ are new tuples of indeterminates and $\delta=\delta^{\text{univ}}$ is the unique $p$-derivation on $A[x,x',...,x^{(n)},...]$ extending $\delta^A$ and satisfying $\delta x=x'$, ..., $\delta x^{(n)}=x^{(n+1)}$,... We denote, as usual, by $\phi=\phi^{\text{univ}}$ the Frobenius lift attached to $\delta^{\text{univ}}$; it induces ring homomorphisms $$\phi=\phi^{\text{univ}}:\mathcal O(J^n(X))\rightarrow \mathcal O(J^{n+1}(X)).$$ If $X$ is not necessarily affine then, again, one defines $J^n(X)$ by gluing $J^n(X_i)$ where $X=\cup X_i$ is a Zariski affine open cover. (In the gluing process one uses the fact that we are dealing with formal schemes rather than schemes. There is a more global approach, avoiding gluing, that leads to functorially constructed algebraizations of our $p$-jet spaces; cf. \cite{borger1, borger2}. We will not need these algebraized $p$-jet spaces in what follows.) 
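Parallel to the classical prolongation illustrated earlier, here is a small script (ours) computing the first arithmetic prolongation of a one-variable polynomial $f$ with $\delta$-constant integer coefficients: by construction $\delta f=\big(f(x^p+px')-f(x)^p\big)/p$, and the computation checks that this quotient again has integer coefficients, so that it cuts out $J^1(X)$ inside the $p$-jet space.
\begin{verbatim}
import sympy as sp

p = 3
x, x1 = sp.symbols("x x1")   # x1 stands for x'

def delta_f(f):
    # first arithmetic prolongation of f in Z[x]:
    # delta f = (f(x^p + p*x') - f(x)^p) / p
    num = sp.expand(f.subs(x, x**p + p * x1) - f**p)
    quot = sp.expand(num / p)
    coeffs = quot.as_coefficients_dict().values()
    assert all(c == int(c) for c in coeffs)   # integrality of the quotient
    return quot

f = x**2 - 7
print(delta_f(f))   # the equation of J^1(X) in the variables x, x'
\end{verbatim}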
Then $(J^n(X))_{n\geq 0}$ has, again, a structure of projective system of $p$-adic formal schemes depending functorially on $X$, with $J^0(X)=\widehat{X}$. If $G$ is a group in the category of schemes or $p$-adic formal schemes over $A$ then $(J^n(G))_{n\geq 0}$ has, again, a structure of projective system of groups in the category of $p$-adic formal schemes; however, even if $G/A$ is smooth, the kernels of the homomorphisms $J^n(G)\rightarrow J^{n-1}(G)$ are {\it generally not} isomorphic as groups to powers of $\widehat{{\mathbb G}_a}$, although they are always isomorphic as formal schemes to some completed affine space $\widehat{{\mathbb A}^d}$. (By the way, these kernels are commutative if and only if $G$ itself is commutative!) By an {\it arithmetic differential equation} on $X$ we will understand a closed formal subscheme of some $J^n(X)$. An {\it arithmetic $\delta^A$-flow} on $X$ will mean an arithmetic $\delta^A$-flow on $\widehat{X}$. To give an arithmetic $\delta^A$-flow on $\widehat{X}$ is equivalent to giving a section of the canonical projection $J^1(X)\rightarrow \widehat{X}$, i.e. to giving a differential equation $Z\subset J^1(X)$ for which the projection $Z\rightarrow \widehat{X}$ is an isomorphism. A {\it prime integral} for an arithmetic $\delta^A$-flow $\delta^X$ is a $\delta^X$-constant in $\mathcal O(\widehat{X})$, i.e., a function $H\in \mathcal O(\widehat{X})$ such that $\delta^XH=0$. For any $A$-point $P\in X(A)$, one defines the jets $J^n(P)\in J^n(X)(A)$ as the unique morphisms lifting $P$ that are compatible with the actions of $\delta^{\text{univ}}$ and $\delta^A$. A {\it solution} (in $A$) for a differential equation $Z\subset J^n(X)$ is, again, an $A$-point $P\in X(A)$ such that $J^n(P)$ factors through $Z$. If $P$ is a solution to the differential equation defined by an arithmetic $\delta^A$-flow and if $H$ is a prime integral for that arithmetic $\delta^A$-flow then, again, $\delta^A(H(P))=0$; so, again, intuitively $H$ is ``constant along any solution". If $X$ is affine and $Z\subset J^1(X)$ is the arithmetic differential equation corresponding to an arithmetic $\delta^A$-flow $\delta^X$ on $X$ then a point $P\in X(A)$ is a solution to $Z$ if and only if the ring homomorphism $P^*:\mathcal O(\widehat{X})\rightarrow A$ defined by $P$ satisfies $\delta^A\circ P^*=P^*\circ \delta^X$. Let, in what follows, $S=Spec\ B$ be a smooth affine surface. Then, by the discussion in the previous subsection, we have a notion of {\it symplectic} form $\eta\in \Omega^2_{B/A}$ and associated {\it Poisson structure}, $\{\ ,\ \}_{\eta}$. For an arithmetic $\delta^A$-flow $\delta=\delta^{\widehat{B}}:\widehat{B}\rightarrow \widehat{B}$ on $S$ the analogue of Lie derivatives will be the maps \begin{equation} \frac{\phi^*}{p^i}:\Omega^i_{\widehat{S}}\rightarrow \Omega^i_{\widehat{S}},\ \ i=0,1,2.\end{equation} At this point we would like to define what it means for an arithmetic $\delta^A$-flow on $S$ to be {\it Hamiltonian with respect to a symplectic form $\eta$ on $S$}. One is tempted to make the following definition: an arithmetic $\delta^A$-flow $\delta=\delta^S$ on $S$ is {\it Hamiltonian} with respect to the symplectic form $\eta$ on $S$ if \begin{equation} \label{Ham} \frac{\phi^*}{p^2}\eta=\lambda\cdot \eta, \end{equation} where $\lambda\in \mathcal O(\widehat{S})$ is a pseudo-$\delta$-constant. The concept we just defined is, however, not flexible enough to accommodate our examples. 
In particular, for the Painlev\'{e} equations one will need to replace arithmetic $\delta^A$-flows with what we will call {\it generalized arithmetic $\delta^A$-flows}; while for the Euler equations one will need to replace equality in \ref{Ham} by a congruence mod $p$. In view of the above we will {\it not} adopt, in what follows, the above attempted definition of the {\it Hamiltonian} property but rather postpone the discussion of the Hamiltonian-related concepts to the next sections where the main examples of the theory are discussed; there we will encounter arithmetic analogues of the Hamiltonian property, canonical contact forms, canonical (generalized) arithmetic $\delta^A$-flows, Euler-Lagrange forms, etc., each of which will be adapted to their specific context. \section{Painlev\'{e} equations} \subsection{The classical case} As a step in his proof of the Mordell conjecture over function fields of characteristic zero \cite{maninmordell} Manin introduced a differential algebraic map now referred to as the {\it Manin map}. A different, but equivalent, construction of this map was given in \cite{annals}. On the other hand Manin showed in \cite{maninmirror} (cf. also \cite{maninfrob}, p. 71) how the Painlev\'{e} VI equation can be understood as a ``deformation of the Manin map"; he attributes this viewpoint to Fuchs. In \cite{maninmirror} Manin also explained how this viewpoint leads to a Hamiltonian structure for the Painlev\'{e} VI equation. We quickly review here the ``deformation of the Manin map" interpretation of Painlev\'{e} VI in \cite{maninmirror} and refer to \cite{maninmirror} for the Hamiltonian picture. Let $A$ be the algebraic closure of a function field of one variable, $A=\overline{{\mathbb C}(t)}$, and $\delta^A=d/dt$. Let ${\mathcal E}$ be an elliptic curve (i.e., a smooth projective curve of genus one) over $A$ which does not descend to ${\mathbb C}$. Then the second jet space $J^2({\mathcal E})$ is easily seen to possess a non-zero group homomorphism of algebraic groups, unique up to multiplication by an element of $A$, \begin{equation} \label{mm} \psi:J^2({\mathcal E})\rightarrow {\mathbb G}_a,\end{equation} into the additive group ${\mathbb G}_a$ over $A$. The map \ref{mm} is an incarnation of the Manin map, as explained, in a more general setting, in \cite{annals}. Let us view $\psi$ as an element of $\mathcal O(J^2({\mathcal E}))$ and, for any open set $Y\subset {\mathcal E}$, let us view $\mathcal O(Y)$ and $\mathcal O(J^2({\mathcal E}))$ as subrings of $\mathcal O(J^2(Y))$ via pull-backs. Also let us recall that the classical Painlev\'{e} VI equation is a family, depending on $4$ parameters in ${\mathbb C}$, of differential equations. Then Manin's analysis in \cite{maninmirror} shows that each of the differential equations in the Painlev\'{e} VI family can be interpreted (in our language introduced above) as the closed subscheme $Z$ of $J^2(Y)$ defined by \begin{equation} \label{pain} f:=\psi-r\in \mathcal O(J^2(Y)) \end{equation} where $Y$ is the complement in ${\mathcal E}$ of the set ${\mathcal E}[2]$ of $2$-torsion points in ${\mathcal E}(A)$ and $r\in \mathcal O(Y)$ is an appropriate function. 
More precisely $r$ is a suitable ${\mathbb C}$-linear combination of the $4$ translates, by the $4$ points in ${\mathcal E}[2]$, of the $y$-function on ${\mathcal E}\backslash {\mathcal E}[2]$ in a representation $${\mathcal E}\backslash {\mathcal E}[2]=Spec\ A[x,y,y^{-1}]/(y^2-(x^3+ax+b));$$ the complex coefficients of this linear combination are related to the $4$ complex parameters in the corresponding classical Painlev\'{e} equation. Moreover one can easily show that \begin{thm} For any function $r\in \mathcal O(Y)$ the differential equation $Z\subset J^2(Y)$ given by \ref{pain} defines a canonical $\delta^A$-flow on $S:=J^1(Y)$.\end{thm} In particular the equations in the Painlev\'{e} VI family are defined by canonical $\delta^A$-flows on $J^1(Y)$. By the way notice that $J^1({\mathcal E})$, on which Painlev\'{e} equations ``live", is an ${\mathbb A}^1$-fibration over the elliptic curve ${\mathcal E}$; and actually, over $Y$, this fibration is trivial, so we have an isomorphism $$J^1(Y)\simeq Y \times {\mathbb A}^1.$$ For details on the Hamiltonian picture we refer to \cite{maninmirror}. \subsection{The arithmetic case} The construction of the Manin map in \cite{annals} was shown in \cite{char} to have an arithmetic analogue. Then, in \cite{BYM}, an arithmetic analogue of the Painlev\'{e} VI equation was introduced and a {\it Hamiltonian structure} was shown to exist for it. We explain this in what follows. Let $A$ be a complete discrete valuation ring with maximal ideal generated by $p$ and perfect residue field. Recall that elliptic curves over $A$ do not generally admit Frobenius lifts; an elliptic curve that admits a Frobenius lift is automatically with complex multiplication. \begin{thm} \label{charr} \cite{char} Let ${\mathcal E}$ be an elliptic curve over $A$ that admits no Frobenius lift. There exists a non-zero group homomorphism, in the category of $p$-adic formal schemes, \begin{equation} \label{amm} \psi:J^2({\mathcal E})\rightarrow \widehat{{\mathbb G}_a},\end{equation} which is unique up to multiplication by a constant in $A^{\times}$.\end{thm} We view \ref{amm} as an arithmetic analogue of the Manin map \ref{mm}. Given an invertible $1$-form $\omega$ on ${\mathcal E}$ one can normalize $\psi$ with respect to $\omega$; we will need, and review, this normalization later. The normalized $\psi$ can be referred to as the {\it canonical $\delta$-character} on ${\mathcal E}$; cf. \cite{book}, Definition 7.24. One can view $\psi$ as an element of $\mathcal O(J^2({\mathcal E}))$. By the way one has: \begin{thm}\label{jeje}\cite{je} Let ${\mathcal E}$ be an elliptic curve over $A$ that admits no Frobenius lift. Then the following hold: 1) $\mathcal O(J^1({\mathcal E}))=A$. 2) $J^1({\mathcal E})$ admits no Frobenius lift. \end{thm} Assertion 1 in Theorem \ref{jeje} shows that order $2$ in Theorem \ref{charr} is optimal. Assertion 2 in Theorem \ref{jeje} is equivalent to saying that the projection $$J^1(J^1({\mathcal E}))\rightarrow J^1({\mathcal E})$$ does not admit a section in the category of $p$-adic formal schemes; equivalently, there is no arithmetic $\delta^A$-flow on $J^1({\mathcal E})$! This justifies our generalization of the notion of arithmetic $\delta^A$-flow below. To introduce this let us assume, for a moment, that $Y$ is any smooth affine curve over $A$ and assume we are given an arithmetic differential equation \begin{equation} \label{mortify} Z\subset J^r(Y).
\end{equation} Then one can consider the module \begin{equation} \label{asthma77} \Omega_Z = \lim_{\leftarrow} \Omega_{Z_n/A_n}, \end{equation} $A_n:=A/p^nA$, $Z_n:=Z\otimes A_n$, and the module \begin{equation} \label{asthma78} \Omega_J = \lim_{\leftarrow} \Omega_{J_n/A_n}, \end{equation} where $J:=J^r(Y)$. On the other hand put \begin{equation} \label{4.6} \Omega^{\prime}_Z:= \frac{\Omega_{J}}{\langle I_Z \Omega_{J}, dI_Z\rangle} \end{equation} where $I_Z$ is the ideal of $Z$ in $J^r(Y)$. Moreover define $\Omega_Z^{\prime i}$ to be the $i$-th wedge power $\wedge^i\Omega_Z'$. Under quite general hypotheses the modules \ref{maxim} and \ref{4.6} coincide; we will not discuss this here but, rather, refer to \cite{foundations}, Lemma 3.165. Going back to $Z$ as in \ref{mortify}, for each $s\leq r$, there is a natural map $$\pi_{r,s} :\, Z\to J^s(Y).$$ We also have natural maps $$ \frac{\phi^{\text{univ}*}}{p^i} :\, \Omega^i_{J^{r-1}(Y)}\to \Omega^i_{J^{r}(Y)}, $$ inducing maps which we will denote by $$ \frac{\phi^*_Z}{p^i}:\Omega^i_{J^{1}(Y)}\to \Omega^{\prime i}_{Z}. $$ We say that $Z\subset J^2(Y)$ defines a {\it generalized $\delta$-flow} on $J^1(Y)$, if the induced map $$ \pi_{2,1}^*\Omega_{J^1(Y)}\to \Omega^{\prime}_{Z} $$ is injective, and its cokernel is annihilated by a power of $p$. Under quite general conditions, if $Z$ defines an arithmetic $\delta^A$-flow on $J^1(Y)$ then $Z$ defines a generalized arithmetic $\delta^A$-flow on $J^1(Y)$; again we will not need this so we will not discuss these conditions here; but see, again, \cite{foundations}, Lemma 3.165. Now let $S$ be a smooth surface over $A$ or the $p$-adic completion of such a surface. Recall that a {\it symplectic form} on $S$ is an invertible $2$-form on $S$ over $A$; a {\it contact form} on $S$ is a $1$-form $\nu$ on $S$ over $A$ such that $d\nu$ is symplectic; and for $S=J^1(Y)$ with $Y$ a smooth curve over $A$, a $1$-form $\nu$ on $S$ is called {\it canonical} if $\nu=f\beta$, where $f\in \mathcal O(S)$ and $\beta$ is a $1$-form lifted from $Y$. Let $Y$ be a smooth affine curve over $A$ and let $f\in \mathcal O(J^2(Y))$ be a function whose zero locus defines a generalized arithmetic $\delta^A$-flow on $S:=J^1(Y)$. The respective generalized arithmetic $\delta^A$-flow is called {\it Hamiltonian} with respect to the symplectic form $\eta$ on $S$, if $$\frac{\phi^*_Z}{p}\eta =\lambda\cdot \eta$$ in $\Omega^{\prime 2}_{Z}$ for some $\lambda \in A$; note that any element of $A$, hence in particular $\lambda$, is a pseudo-$\delta$-constant; so the definition we just gave is a generalized version of the definition we proposed in \ref{Ham}. Assume, moreover, that $\eta =d\nu$ for some canonical $1$-form $\nu$ on $S$. Then we call \begin{equation} \epsilon:= \frac{\phi^*_Z}{p}\nu-\lambda \nu\in \Omega^{\prime}_{Z} \end{equation} the {\it Euler-Lagrange form} attached to $\nu$. Now let ${\mathcal E}$ be an elliptic curve over $A$ that does not admit a Frobenius lift and let $\psi\in \mathcal O(J^2({\mathcal E}))$ be the canonical $\delta$-character with respect to an invertible $1$-form $\omega$ on ${\mathcal E}$. Consider the symplectic form $$\eta=\omega\wedge \frac{\phi^{\text{univ}*}}{p}\omega$$ on $J^1({\mathcal E})$. Let $Y\subset {\mathcal E}$ be an affine open set possessing an \'{e}tale coordinate; this latter condition is satisfied, for instance, if $Y={\mathcal E}\backslash {\mathcal E}[2]$.
By the way, notice that $J^1({\mathcal E})$ is an $\widehat{{\mathbb A}^1}$-fibration over the elliptic curve ${\mathcal E}$; and actually this fibration is trivial over $Y$, hence we have an isomorphism of formal schemes, $$J^1(Y)\simeq \widehat{Y} \times \widehat{{\mathbb A}^1}.$$ \begin{thm} \label{bbb} \cite{BYM} 1) There exists a canonical contact form $\nu$ on $S:=J^1(Y)$ such that $d\nu=\eta$. 2) For any function $r\in \mathcal O(Y)$ the differential equation $Z \subset J^2(Y)$ given by the zero locus of the function $$f=\psi-\phi^{\text{univ}}(r)\in \mathcal O(J^2(Y))$$ defines a generalized arithmetic $\delta^A$-flow on $S$ which is Hamiltonian with respect to $\eta$. \end{thm} In particular the symplectic form $\eta$ is exact and the Euler-Lagrange form $\epsilon$ is closed. The function $f$ in assertion 2 of the Theorem is our analogue of the Painlev\'{e} VI equation. By the Theorem it defines a generalized arithmetic $\delta^A$-flow on $S$; however, it does not define an arithmetic $\delta^A$-flow on $S$, which is our motivation for generalizing the definition of arithmetic $\delta^A$-flow. Note the discrepancy with the classical case coming from replacing $r$ by $\phi^{\text{univ}}(r)$ in the expression of $f$ in Theorem \ref{bbb}. Another discrepancy comes from the absence, in the arithmetic setting, of an analogue of the $4$ constant parameters in the classical Painlev\'{e} equations. \section{Euler equations} In Manin's picture \cite{maninmirror}, which we have just reviewed, the Painlev\'{e} VI equation ``lives'' on an ${\mathbb A}^1$-fibration over an elliptic curve. On the other hand, the {\it Euler equation} describing the motion of a rigid body with a fixed point, which we are discussing next, ``lives'' on an elliptic fibration over ${\mathbb A}^1$. This already suggests an analogy between the geometries underlying these differential equations and their arithmetic analogues; we will make such analogies/links more precise below. \subsection{The classical case} We begin by reviewing the classical Euler equations from a purely algebraic point of view. Let $A$ be either a field or a discrete valuation ring and assume $2$ is invertible in $A$. Let $x_1,x_2,x_3$ and $z_1,z_2$ be variables and let $a_1,a_2,a_3\in A$ be such that $$(a_1-a_2)(a_2-a_3)(a_3-a_1)\in A^{\times}.$$ We consider the quadratic forms, $$ H_1:=\sum_{i=1}^3 a_ix_i^2\in A[x_1,x_2,x_3],\ \ \ H_2:=\sum_{i=1}^3x_i^2\in A[x_1,x_2,x_3].$$ Also we consider the affine spaces $ {\mathbb A}^2=\operatorname{Spec}\ A[z_1,z_2]$, ${\mathbb A}^3=\operatorname{Spec}\ A[x_1,x_2,x_3] $ and the morphism ${\mathcal H}:{\mathbb A}^3\rightarrow {\mathbb A}^2$ defined by $z_1\mapsto H_1$, $z_2\mapsto H_2$. For $i=1,2,3$ denote by $Z_i\subset {\mathbb A}^3$ the {\it coordinate plane} defined by $x_i=0$ and let $$ L_1=Z_2\cap Z_3,\ \ L_2=Z_3\cap Z_1,\ \ L_3=Z_1\cap Z_2 $$ be the {\it $x_i$-coordinate axes}. Then ${\mathcal H}$ is smooth on the complement of $L_1\cup L_2 \cup L_3$. For any $A$-point $c=(c_1,c_2)\in A^2={\mathbb A}^2(A)$ we set $$E_c:={\mathcal H}^{-1}(c)=\operatorname{Spec}\ A[x_1,x_2,x_3]/(H_1-c_1,H_2-c_2),$$ and we let $i_c:E_c\rightarrow {\mathbb A}^3$ be the inclusion. Consider the polynomial $$ N(z_1,z_2)= \prod_{i=1}^3(z_1-a_iz_2)\in A[z_1,z_2].$$ Then, for $N(c_1,c_2)\in A^{\times}$, $E_c$ is disjoint from $L_1\cup L_2 \cup L_3$ and, in particular, $E_c$ is smooth over $A$: it is an affine elliptic curve.
Moreover $E_c$ comes equipped with a global $1$-form given by \begin{equation} \label{the form on intersections of two quadrics} \omega_c=i_c^*\frac{dx_1}{(a_2-a_3)x_2x_3}=i_c^*\frac{dx_2}{(a_3-a_1)x_3x_1}=i_c^*\frac{dx_3}{(a_1-a_2)x_1x_2}.\end{equation} If one considers the smooth projective model ${\mathcal E}_c$ of $E_c$ then $\omega_c$ extends to an invertible $1$-form on the whole of ${\mathcal E}_c$. In the discussion below a certain plane quartic will play a role; let us review this next. Consider two more indeterminates $x,y$, and consider the polynomial \begin{equation} \label{quartic} F:=((a_2-a_3)x^2+z_1-a_2z_2)((a_3-a_1)x^2-z_1+a_1z_2)\in A[z_1,z_2][x]. \end{equation} For any $c=(c_1,c_2)\in A^2$ set $$ E'_c:=\operatorname{Spec}\ A[x,y]/(y^2-F(c_1,c_2,x)). $$ Then we have a morphism $\pi: E_c\rightarrow E'_c$ given by $ x\mapsto x_3$, $y\mapsto (a_1-a_2)x_1x_2$. If $N(c_1,c_2)\in A^{\times}$, $E'_c$ is smooth over $A$ and $$ \pi^*(\frac{dx}{y})=i_c^*\frac{dx_3}{(a_1-a_2)x_1x_2}=\omega_c.$$ For $A$ a perfect field and $c_1,c_2$ satisfying $N(c_1,c_2)\neq 0$ we have that $E'_c$ is a smooth plane curve. If ${\mathcal E}'_c$ is its smooth projective model then we have an induced isogeny of elliptic curves, ${\mathcal E}_c\rightarrow {\mathcal E}'_c$. Assume now, until further notice, that $A$ is a field of characteristic zero (classically $A={\mathbb C}$, the complex field), viewed as equipped with the trivial derivation $\delta^A=0$, and consider the $A$-derivation $\delta=\delta^B$ on the polynomial ring $B=A[x_1,x_2,x_3]$ given by \medskip \begin{equation} \label{Euler system} \delta x_1 = (a_2-a_3)x_2x_3,\ \ \delta x_2 = (a_3-a_1)x_3x_1,\ \ \delta x_3 = (a_1-a_2)x_1x_2.\end{equation} \medskip We refer to the derivation $\delta$ as the {\it classical Euler flow} on ${\mathbb A}^3$. For any $c=(c_1,c_2)\in A^2$ with $N(c_1,c_2)\neq 0$ denote by $\delta_c$ the derivation on $\mathcal O(E_c)$ induced by the derivation $\delta$ on $B$. We have the following trivially checked classical fact: \begin{thm} \label{classical theorem} \ 1) $H_1$ and $H_2$ are prime integrals for the classical Euler flow, i.e., $$\delta H_1=\delta H_2=0.$$ 2) For any $c=(c_1,c_2)\in A^2$ with $N(c_1,c_2)\neq 0$ the Lie derivative $\delta_c$ on $\Omega^1_{\mathcal O(E_c)/A}$ annihilates the $1$-form $\omega_c$ on $E_c$: $$ \delta_c \omega_c=0. $$\end{thm} Condition 2 can be viewed as a {\it linearization} condition for the $\delta^A$-flow $\delta_c$ on $E_c$. It is equivalent to $\delta_c$ having an extension to a vector field on the compactification ${\mathcal E}_c$. It is the condition in 2 and {\it not} the ``extension to the compactification'' property that will have an arithmetic analogue. The classical Euler flow fits into the Hamiltonian paradigm. We explain this in what follows. Since we will later need a discussion of these concepts in the arithmetic case as well, we revert in what follows to the case when $A$ is either a field or a discrete valuation ring. Let $c_2\in A^{\times}$ and set $$S_{c_2}:=Spec\ A[x_1,x_2,x_3]/(H_2-c_2)\subset {\mathbb A}^3,$$ the {\it sphere of radius $c_2^{1/2}$}.
Then $S_{c_2}$ is the scheme theoretic pullback via ${\mathcal H}$ of the line in ${\mathbb A}^2$ defined by $z_2-c_2$ and hence the map $$H:S_{c_2}\rightarrow {\mathbb A}^1=Spec\ A[z_1]$$ induced by $z_1\mapsto H_1$ is smooth above the complement of the closed subscheme defined by $$N(z_1,c_2)=(z_1-c_2a_1)(z_1-c_2a_2)(z_1-c_2a_3).$$ Now consider the $2$-forms $$\ \ \eta_1=\frac{dx_2\wedge dx_3}{x_1},\ \ \eta_2=\frac{dx_3\wedge dx_1}{x_2},\ \ \eta_3=\frac{dx_1\wedge dx_2}{x_3}$$ defined on the complements in $S_{c_2}$ of $Z_1, Z_2, Z_3$, respectively. These forms glue together, defining a symplectic form $\eta_{c_2}$ on $S_{c_2}$. If one considers the Poisson structure $\{\ ,\ \}$ on $\mathcal O({\mathbb A}^3)=A[x_1,x_2,x_3]$ defined by $$\{x_1,x_2\}=x_3,\ \ \{x_2,x_3\}=x_1,\ \ \{x_3,x_1\}=x_2,$$ then $H_2$ is a {\it Casimir}, i.e., $\{H_2,-\}=0$, so this Poisson structure induces a Poisson structure $\{\ ,\ \}_{c_2}$ on each $\mathcal O(S_{c_2})$. On $\mathcal O(S_{c_2})$ we have $$\{x_1,x_2\}_{c_2}=\frac{dx_1\wedge dx_2}{\eta_{c_2}},\ \ \{x_2,x_3\}_{c_2}=\frac{dx_2\wedge dx_3}{\eta_{c_2}},\ \ \{x_3,x_1\}_{c_2}=\frac{dx_3\wedge dx_1}{\eta_{c_2}},$$ hence the Poisson structure $\{\ ,\ \}_{c_2}$ on $\mathcal O(S_{c_2})$ coincides with the Poisson structure $\{\ ,\ \}_{\eta_{c_2}}$ on $\mathcal O(S_{c_2})$ defined by the symplectic form $\eta_{c_2}$ (because the two Poisson structures coincide on the generators $x_1,x_2,x_3$ of $\mathcal O(S_{c_2})$). In other words, the $S_{c_2}$ are {\it symplectic leaves} for our Poisson structure on $\mathcal O({\mathbb A}^3)$, with corresponding symplectic forms $\eta_{c_2}$. Furthermore, if $\delta$ is the {\it classical Euler flow} \ref{Euler system} then $\delta$ induces a derivation $\delta_{c_2}$ on each $\mathcal O(S_{c_2})$ and the Lie derivative on $2$-forms, $$\delta_{c_2}:\Omega^2_{\mathcal O(S_{c_2})/A}\rightarrow \Omega^2_{\mathcal O(S_{c_2})/A},$$ is trivially seen to satisfy \begin{equation} \label{lie} \delta_{c_2} \eta_{c_2}=0.\end{equation} In other words we have: \begin{thm}\label{other words} The $\delta^A$-flow $\delta_{c_2}$ on $S_{c_2}$ is symplectic with respect to $\eta_{c_2}$. \end{thm} The link between the $2$-forms $\eta_{c_2}$ and the $1$-forms $\omega_c$ is as follows. Consider the $1$-forms $$\omega_1=\frac{dx_1}{(a_2-a_3)x_2x_3},\ \ \omega_2=\frac{dx_2}{(a_3-a_1)x_3x_1},\ \ \omega_3=\frac{dx_3}{(a_1-a_2)x_1x_2}$$ defined on $$ S_{c_2}\backslash (Z_2\cup Z_3), \ \ \ S_{c_2}\backslash (Z_3\cup Z_1), \ \ \ S_{c_2}\backslash (Z_1\cup Z_2),$$ respectively. Recall that for any $c=(c_1,c_2)$ with $N(c_1,c_2)\in A^{\times}$ the restrictions of $\omega_1,\omega_2,\omega_3$ to $E_c$ glue to give the form $\omega_c$ on $E_c$. A trivial computation then gives the following equalities of $2$-forms on $S_{c_2}\backslash (Z_1\cup Z_2\cup Z_3)$ which will play a role later: \begin{equation} \label{fiona} \eta_{c_2}=-d H_1 \wedge \omega_1=-d H_1 \wedge \omega_2=-d H_1 \wedge \omega_3. \end{equation} By the way, the equalities \ref{fiona} imply that for all $c=(c_1,c_2)$ with $N(c_1,c_2)\in A^{\times}$ the form $\omega_c$ on $E_c$ satisfies \begin{equation} \label{PR} \omega_c=-P.R.\left(\frac{\eta_{c_2}}{H_1-c_1}\right), \end{equation} where $$P.R.:\Omega^2_{S_{c_2}/A}(E_c)\rightarrow \Omega^1_{E_c/A}$$ is the Poincar\'{e} residue map \cite{GH}, p.~147; we will not need this interpretation in what follows.
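\medskip The above is also easy to verify by machine. The following is a minimal SymPy sketch (ours, purely illustrative, and not part of \cite{euler}): it checks that $H_2$ is a Casimir, that the classical Euler flow \ref{Euler system} is the Hamiltonian flow of $H_1/2$ for the Poisson structure above, and hence that $H_1$ and $H_2$ are prime integrals, i.e., assertion 1 of Theorem \ref{classical theorem}. \begin{verbatim}
import sympy as sp

x1, x2, x3, a1, a2, a3 = sp.symbols('x1 x2 x3 a1 a2 a3')

# the Poisson structure {x1,x2}=x3, {x2,x3}=x1, {x3,x1}=x2
pairs = {(x1, x2): x3, (x2, x3): x1, (x3, x1): x2}
def bracket(f, g):
    return sp.expand(sum(
        (sp.diff(f, u)*sp.diff(g, v) - sp.diff(f, v)*sp.diff(g, u))*w
        for (u, v), w in pairs.items()))

H1 = a1*x1**2 + a2*x2**2 + a3*x3**2
H2 = x1**2 + x2**2 + x3**2
flow = {x1: (a2-a3)*x2*x3, x2: (a3-a1)*x3*x1, x3: (a1-a2)*x1*x2}

print([bracket(H2, v) for v in (x1, x2, x3)])      # [0, 0, 0]: H2 is a Casimir
print([sp.expand(bracket(v, H1)/2 - flow[v])
       for v in (x1, x2, x3)])                     # [0, 0, 0]: Euler flow = (1/2){-, H1}
print([bracket(H, H1) for H in (H1, H2)])          # [0, 0]: prime integrals
\end{verbatim} In particular the Euler flow is Poisson-Hamiltonian with Hamiltonian $H_1/2$, so it preserves the symplectic form on each leaf $S_{c_2}$, in agreement with Theorem \ref{other words}.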
\subsection{The arithmetic case} In what follows $A$ is a complete discrete valuation ring with maximal ideal generated by an odd prime $p$ and perfect residue field $k=A/pA$. Let $F\in A[z_1,z_2][x]$ be the polynomial in \ref{quartic}. Define the {\it Hasse invariant} to be the coefficient $A_{p-1}\in A[z_1,z_2]$ of $x^{p-1}$ in the polynomial $F^{\frac{p-1}{2}}.$ In addition to the quantities defined in the previous subsection we also consider the following polynomial $$ Q:=x_1x_2\cdot N(H_1,H_2)\cdot A_{p-1}(H_1,H_2)\in A[x_1,x_2,x_3], $$ and the open subscheme of ${\mathbb A}^3$ defined by $$ X=Spec\ A[x_1,x_2,x_3][1/Q].$$ Assume in addition that $c=(c_1,c_2)\in A^2$ satisfies $$ \delta c_1=\delta c_2=0\ \ \ \text{and}\ \ \ N(c_1,c_2)\cdot A_{p-1}(c_1,c_2)\in A^{\times} $$ and let $\delta^{X}$ be any arithmetic $\delta^A$-flow on $\widehat{X}$ satisfying $\delta^X H_1=\delta^X H_2=0$. Then the Frobenius lift $\phi^{X}$ on $\mathcal O(\widehat{X})$ induces a Frobenius lift $\phi_c:=\phi^{E^0_c}$ on $\widehat{E^0_c}$, where $E^0_c$ is the open set of $E_c$ given by $E^0_c := E_c\cap X$. We refer to $\phi_c$ as the {\it Frobenius lift} on $\widehat{E^0_c}$ attached to $\delta^X$. On the other hand, the global $1$-form $\omega_c$ in \ref{the form on intersections of two quadrics} restricted to $E^0_c$ will be referred to as the {\it canonical} $1$-form on $E^0_c$ and will still be denoted by $\omega_c$. \medskip The following provides an arithmetic analogue of the classical Euler flow: assertions 1 and 2 below are arithmetic analogues of assertions 1 and 2 in Theorem \ref{classical theorem}, respectively. \begin{thm} \label{linearization theorem} \cite{euler} There exists an arithmetic $\delta^A$-flow $\delta^X$ on $\widehat{X}$ such that: 1) $H_1$ and $H_2$ are prime integrals for $\delta^X$, i.e., the following holds in $\mathcal O(\widehat{X})$: $$ \delta^XH_1=\delta^XH_2=0;$$ 2) For any point $c=(c_1,c_2)\in A^2$ with $$ \delta c_1=\delta c_2=0,\ \ \ \text{and}\ \ \ N(c_1,c_2)\cdot A_{p-1}(c_1,c_2)\in A^{\times}$$ the Frobenius lift $\phi_c$ on $\widehat{E_c^0}$ attached to $\delta^X$ and the canonical $1$-form $\omega_c$ on $E_c^0$ satisfy the following congruence in $\Omega^1_{\widehat{E^0_c}}$: $$ \frac{\phi_c^*}{p}\omega_c\equiv A_{p-1}(c_1,c_2)^{-1} \cdot \omega_c\ \ \ \text{mod}\ \ \ p.$$ \end{thm} By the way, one can ask if the open set $X$ in Theorem \ref{linearization theorem} can be taken to be the whole of ${\mathbb A}^3$. In contrast with the classical case (Theorem \ref{classical theorem}), the answer to this is negative; indeed we have the following ``singularity theorem'': \begin{thm} \label{singularity} \cite{euler} If $X\subset {\mathbb A}^3$ is an open set such that $\widehat{X}$ possesses an arithmetic $\delta^A$-flow $\delta^X$ with $ \delta^XH_1=\delta^XH_2=0$ and if $\delta a_i\in A^{\times}$ for some $i\in\{1,2,3\}$ then $\widehat{X}$ cannot meet the coordinate axis $\widehat{L_i}$. \end{thm} Another question one can ask is whether it is possible to extend the Frobenius lifts $\phi_c$ in Theorem \ref{linearization theorem} to the compactifications ${\mathcal E}_c$ of $E_c$. In contrast with the classical case (Theorem \ref{classical theorem}), the answer to this is, again, negative, cf. Theorem \ref{masa} below. For this theorem we fix, for every rational prime $p$, a complete discrete valuation ring $R_p$ with maximal ideal generated by $p$ and algebraically closed residue field.
We also fix a number field $F$, with ring of integers $\mathcal O_F$, a rational integer $M$, and, for each $p\gg 0$, an embedding of $\mathcal O_F[1/M]$ into $R_p$. \begin{thm} \label{masa} \cite{canonical} Let $a_1,a_2,a_3\in \mathcal O_F[1/M]$. Then, if $p\gg 0$, there is no triple $(K,X,\phi^X)$ with \begin{itemize} \item $K\in \mathcal O(\widehat{{\mathbb A}^2})=R_p[z_1,z_2]^{\widehat{\ }}$, $K\not\equiv 0$ mod $p$, \item $X\subset {\mathbb A}^3$ an open set over $R_p$, \item $\phi^X$ a Frobenius lift on $\widehat{X}$, \end{itemize} satisfying the following two conditions: 1) $H_1$ and $H_2$ are prime integrals for the arithmetic $\delta^A$-flow $\delta^X$ attached to $\phi^X$, i.e., the following holds in $\mathcal O(\widehat{X})$: $$ \delta^XH_1=\delta^XH_2=0;$$ 2) for all $c\in R_p^2$ with $\delta c=0$, $N(c)K(c)\in R_p^{\times}$, one has $\widehat{E}_c\cap \widehat{X}\neq \emptyset$ and $$\textit{$\phi_c$ extends to an endomorphism of the compactification ${\mathcal E}_c$ of $E_c$,} $$ where $\phi_c$ is the Frobenius lift on $\widehat{E}_c\cap \widehat{X}$ induced by $\phi^X$. \end{thm} Interestingly, the proof of Theorem \ref{masa} is based on a variant of a Diophantine result in \cite{local} which, in turn, is proved using, again, arguments involving arithmetic differential equations. \medskip In what follows we use Theorem \ref{linearization theorem} to derive an arithmetic analogue of the Hamiltonian picture; cf. Theorem \ref{other words}. Let $c_2\in A^{\times}$ be such that $\delta c_2=0$. Then the Frobenius lift $\phi^X$ attached to the arithmetic $\delta^A$-flow $\delta^X$ in Theorem \ref{linearization theorem} induces a Frobenius lift $\phi_{c_2}$ on $\widehat{S^0_{c_2}}$, where $S^0_{c_2}:=S_{c_2}\cap X$. Recall that it follows from equations 6.1 and 6.2 in \cite{euler} that the function $A_{p-1}(H_1,c_2)$ is invertible on $\widehat{S^0_{c_2}}$. Set $$\lambda:=\frac{H_1^{p-1}}{A_{p-1}(H_1,c_2)}\in \mathcal O(\widehat{S_{c_2}^0})$$ and note that $\lambda$ is a pseudo-$\delta$-constant in $\mathcal O(\widehat{S_{c_2}^0})$ because $H_1$ is a $\delta$-constant and all elements of $A$ are pseudo-$\delta$-constants. We will prove the following result which can be interpreted as a relaxation of the condition defining the {\it Hamiltonian} property in \ref{Ham}: \begin{thm} \label{new1} The following holds in $\Omega^2_{\widehat{S_{c_2}^0}}$: $$\frac{\phi_{c_2}^*}{p^2}\eta_{c_2}\equiv \lambda \eta_{c_2}\ \ \ \text{mod}\ \ \ p.$$ \end{thm} {\it Proof}. Consider the form $\theta$ on $\widehat{S^0_{c_2}}$ defined by $$ \theta:=\frac{\phi_{c_2}^*}{p^2}\eta_{c_2}-\frac{H_1^{p-1}}{A_{p-1}(H_1,c_2)}\eta_{c_2}. $$ By \ref{fiona} and $\phi_{c_2}(H_1)=H_1^p$ we have $$ \theta = -H_1^{p-1}d H_1\wedge \frac{\phi_{c_2}^*}{p}\omega_1 +\frac{H_1^{p-1}}{A_{p-1}(H_1,c_2)}dH_1\wedge \omega_1= - H_1^{p-1}dH_1 \wedge \beta, $$ where $$\beta:=\frac{\phi_{c_2}^*}{p}\omega_1- \frac{1}{A_{p-1}(H_1,c_2)} \omega_1. $$ Let $i_c:E^0_c=E_c\cap X\rightarrow S^0_{c_2}$ be the inclusion. Then, if $\delta c_1=0$, $N(c_1,c_2)\in A^{\times}$, $A_{p-1}(c_1,c_2)\in A^{\times}$, by Theorem \ref{linearization theorem}, $$i_c^*\beta= \frac{\phi_{c}^*}{p}\omega_c- \frac{1}{A_{p-1}(c_1,c_2)} \omega_c \equiv 0\ \ \ \text{mod}\ \ \ p.$$ Let us denote by an upper bar the operation of reduction mod $p$.
Since any element in $k$ can be lifted to an element $c_1$ of $A$ killed by $\delta$ (this lift is the Teichm\"{u}ller lift), it follows that $$\overline{i}^*_{\overline{c}} \overline{\beta}=0$$ for all except finitely many $\overline{c}_1\in k$, where $\overline{i}_{\overline{c}}:\overline{E^0_c}\rightarrow \overline{S^0_{c_2}}$ is the inclusion. Recall from \cite{euler} that $H_1,H_2,x_3$ are \'{e}tale coordinates on $X$; so $H_1,x_3$ are \'{e}tale coordinates on $S_{c_2}^0$. Write $$\beta=b_1dH_1+b_2dx_3$$ on $\widehat{S^0_{c_2}}$, with $b_1,b_2\in \mathcal O(\widehat{S^0_{c_2}})$. Since $$\overline{i}^*_{\overline{c}} \overline{\beta}=\overline{i}^*_{\overline{c}}\overline{b}_1 \cdot d \overline{c_1}+\overline{i}^*_{\overline{c}}\overline{b}_2 \cdot d\overline{x_3}=\overline{i}^*_{\overline{c}}\overline{b}_2 \cdot d\overline{x_3},$$ it follows that $$\overline{i}^*_{\overline{c}}\overline{b}_2 =0.$$ Since this is true for all except finitely many $\overline{c}_1$, it follows that $\overline{b_2}=0$. But then $$\overline{\theta}=- \overline{H_1}^{p-1}d\overline{H_1} \wedge \overline{\beta}= - \overline{H_1}^{p-1}d\overline{H_1} \wedge \overline{b}_2 d\overline{x_3}=0.$$ We conclude that the congruence in the statement of the Theorem holds on an open set of $\widehat{S_{c_2}^0}$ and hence on the whole of $\widehat{S_{c_2}^0}$.\qed \medskip We next deduce a result that establishes a {\it link} between the Painlev\'{e} paradigm and the Euler paradigm. Assume we are under the hypotheses and notation of Theorem \ref{linearization theorem}, with $a_1,a_2,a_3\in {\mathbb Z}_p$. (So morally we are in the ``Euler paradigm''.) Assume moreover that $c_1,c_2$ in assertion 2 of that Theorem belong to ${\mathbb Z}_p$ and are such that ${\mathcal E}_c$ does not have a Frobenius lift. Examples of this situation are abundant; cf. the last Remark in \cite{canonical}. Let, furthermore, $\phi_c:\widehat{E^0_c}\rightarrow \widehat{E^0_c}$ be as in Theorem \ref{linearization theorem} and let $\sigma^n_c:\widehat{E^0_c}\rightarrow J^n(E^0_c)$ be the sections of the projections $J^n(E^0_c)\rightarrow \widehat{E^0_c}$ induced by $\phi_c$. On the other hand let $\psi_c\in \mathcal O(J^2({\mathcal E}_c))$ be the canonical $\delta$-character. (The latter belongs, as we saw, to the ``Painlev\'{e} paradigm''.) Assume, for simplicity, that the field $k:=A/pA$ is algebraically closed and let $K_c$ be the function field of ${\mathcal E}_c\otimes k$. We will prove: \begin{thm} \label{new2} The image of $\sigma_c^{2*} \psi_c$ in $K_c$ is a $p$-th power in $K_c$.\end{thm} {\it Proof}. Recall by \cite{book}, Corollary 7.28, that \begin{equation} \label{opop} d\psi_c=\lambda_2 \left(\frac{\phi_c^{\text{univ}*}}{p}\right)^2\omega_c+\lambda_1\frac{\phi_c^{\text{univ}*}}{p}\omega_c+\lambda_0\omega_c\end{equation} in $\Omega_{J^2({\mathcal E}_c)}$, where $\phi_c^{\text{univ}}:J^n({\mathcal E}_c)\rightarrow J^{n-1}({\mathcal E}_c)$ are the universal Frobenius lifts and $\lambda_i\in A$, $\lambda_2=p$. By the way, the above holds without the assumption that $a_i, c_j\in {\mathbb Z}_p$; also the equality $\lambda_2=p$ is precisely the definition of $\psi_c$ being {\it normalized} with respect to $\omega_c$.
With the additional assumption that $a_i, c_j\in {\mathbb Z}_p$ we have that ${\mathcal E}_c$ descends to an elliptic curve ${\mathcal E}_{c/{\mathbb Z}_p}$ over ${\mathbb Z}_p$; then, by \cite{frob}, Theorem 1.10, and \cite{book}, Theorem 7.22, we actually also have $$\lambda_1=-a_p,\ \ \ \lambda_0=1,$$ where $a_p$ is the trace of Frobenius acting on the reduction mod $p$ of ${\mathcal E}_{c/{\mathbb Z}_p}$. Similarly ${\mathcal E}'_c$ descends to an elliptic curve ${\mathcal E}'_{c/{\mathbb Z}_p}$ over ${\mathbb Z}_p$. Since there is a separable isogeny between ${\mathcal E}'_{c/{\mathbb Z}_p}$ and ${\mathcal E}_{c/{\mathbb Z}_p}$ it follows that $a_p$ is also the trace of Frobenius acting on the reduction mod $p$ of ${\mathcal E}'_{c/{\mathbb Z}_p}$. On the other hand, by \cite{silverman}, p.~141-142, $$a_p\equiv 1-|{\mathcal E}'_{c/{\mathbb Z}_p}({\mathbb F}_p)|\ \ \ \text{mod}\ \ \ p.$$ Now by \cite{euler}, Lemma 5.2, $$|{\mathcal E}'_{c/{\mathbb Z}_p}({\mathbb F}_p)|\equiv 1-A_{p-1}(c_1,c_2)\ \ \ \text{mod}\ \ \ p.$$ It follows that $$\lambda_1\equiv -A_{p-1}(c_1,c_2)\ \ \ \text{mod}\ \ \ p.$$ Let us view the maps $\mathcal O(J^n({\mathcal E}_c))\rightarrow \mathcal O(J^{n+1}({\mathcal E}_c))$ induced by the natural projections as inclusions. Then $\phi_c$ equals the composition $\phi_c^{\text{univ}}\circ \sigma_c^1$; hence $$\sigma_c^{2*} \phi^{\text{univ}*}=\sigma_c^{1*} \phi^{\text{univ}*}=\phi_c^*.$$ (Note, by the way, that $\phi_c^2$ is not equal to $(\phi_c^{\text{univ}})^2\circ \sigma_c^2$!) Taking $\sigma_c^{2*}$ in \ref{opop} we get $$d (\sigma_c^{2*}\psi_c)=\sigma_c^{2*} d\psi_c\equiv -A_{p-1}(c_1,c_2)\frac{\phi_c^*}{p}\omega_c+\omega_c\equiv 0\ \ \ \text{mod}\ \ \ p.$$ If $K_c$ is the function field of ${\mathcal E}_c\otimes k$ we have $K_c=k(x,\gamma)$ with $x$ a variable and $\gamma$ quadratic over $k(x)$. Since $k(x,\gamma)=k(x,\gamma^p)$ we may write $$\overline{\sigma_c^{2*}\psi_c}=u+v\gamma^p\in K_c,\ \ \ u,v\in k(x),$$ hence $$0=d (\overline{\sigma_c^{2*}\psi_c})=\left(\frac{du}{dx}+\frac{dv}{dx} \gamma^p\right)dx\in \Omega_{K_c/k}=K_cdx,$$ hence $\frac{du}{dx}=\frac{dv}{dx}=0$, which implies that $u,v\in k(x^p)$, as one can see by considering the partial fraction decompositions of $u$ and $v$. Consequently $\overline{\sigma_c^{2*}\psi_c}\in K_c^p$. \qed \section{Lax equations} \subsection{The classical case} Let $A$ be a Noetherian ring, let $B:=A[x_1,...,x_N]$ be a polynomial ring, and consider the affine space ${\mathbb A}^N=Spec\ B$. Let $L$ be an $A$-Lie algebra, free as an $A$-module, with basis $e_1,...,e_N$, and write $$[e_i,e_j]=\sum_k c_{ijk}e_k,\ \ \ c_{ijk}\in A.$$ Then there is a unique Poisson structure $\{\ ,\ \}$ on $B$ (or on ${\mathbb A}^N=Spec\ B$), called the {\it Lie-Poisson} structure attached to $(L,(e_i))$, such that $$\{x_i,x_j\}=\sum_k c_{ijk}x_k.$$ In particular we may consider the variables $x_1,...,x_N$, with $N=n^2$, to be the entries of a matrix of indeterminates $x=(x_{ij})$; we may consider the affine space ${\mathfrak g}:={\mathbb A}^{n^2}=Spec\ B$, $B:=A[x]$; and we may consider the Lie algebra $L:={\mathfrak g}(A)$ of $n\times n$ matrices with coefficients in $A$, with respect to the commutator, with basis $e_{ij}$, the matrices that have $1$ in position $(i,j)$ and $0$ everywhere else. One may then consider the Lie-Poisson structure $\{\ ,\ \}$ on $B$ (equivalently on ${\mathfrak g}$) attached to $(L,(e_{ij}))$.
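\medskip Before introducing flows, here is a minimal SymPy sketch (ours, purely illustrative) of this Lie-Poisson structure for $n=2$: on coordinates the bracket is $\{x_{ij},x_{kl}\}=\delta_{jk}x_{il}-\delta_{li}x_{kj}$, the coordinates of $[e_{ij},e_{kl}]$, extended by the Leibniz rule, and one checks the standard fact that $\text{tr}(x)$ and $\det(x)$ are Casimirs, hence constant along every Hamiltonian flow of this structure. \begin{verbatim}
import sympy as sp

n = 2
x = sp.Matrix(n, n, lambda i, j: sp.Symbol('x%d%d' % (i, j)))

# Lie-Poisson bracket on A[x_ij]: {x_ij, x_kl} = d_jk x_il - d_li x_kj
def bracket(f, g):
    s = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    c = (x[i, l] if j == k else 0) - (x[k, j] if l == i else 0)
                    s += sp.diff(f, x[i, j]) * sp.diff(g, x[k, l]) * c
    return sp.expand(s)

# tr(x) and det(x) Poisson-commute with all coordinates, so they are Casimirs
for f in (x.trace(), x.det()):
    print([bracket(f, x[k, l]) for k in range(n) for l in range(n)])
# both lines print [0, 0, 0, 0]
\end{verbatim}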
On the other hand let $\delta=\delta^A$ be a derivation on $A$, and let $\delta=\delta^{\mathfrak g}$ be a $\delta^A$-flow on ${\mathfrak g}$, i.e., $\delta^{\mathfrak g}$ is a derivation on $B=A[x]$ extending $\delta^A$. Say that $\delta=\delta^{\mathfrak g}$ is a {\it Lax $\delta^A$-flow} if we have an equality of matrices with $B$-coefficients $$\delta x=[M,x]$$ for some matrix $M=(m_{ij})$ with $B$-coefficients, i.e., $$\delta x_{ij}=\sum_k (m_{ik}x_{kj}-x_{ik}m_{kj}).$$ It is trivial to check that any $\delta^A$-flow on $A[x]$ that is Hamiltonian with respect to the Lie-Poisson structure on $A[x]$ is a Lax $\delta^A$-flow on ${\mathfrak g}$: if the Hamiltonian is $H$ then $M$ can be taken to be the matrix $\frac{\partial H}{\partial x}:=\left(\frac{\partial H}{\partial x_{ij}}\right)$. Let us say that a Lax $\delta^A$-flow on ${\mathfrak g}$ is {\it Hamiltonian} (or more accurately {\it Poisson-Hamiltonian}) if it is Hamiltonian with respect to the Lie-Poisson structure on $A[x]$, equivalently if $$\delta x=[\frac{\partial H}{\partial x},x]$$ for some $H\in B$. On the other hand, assuming for simplicity that $A$ is an algebraically closed field of characteristic zero, any Lax $\delta^A$-flow $\delta$ is {\it isospectral}, by which we understand that: \begin{thm}\label{par} The following diagram is commutative: $$ \begin{array}{rcl} B & \stackrel{\delta}{\longrightarrow} & B\\ {\mathcal P} \uparrow & \ & \uparrow {\mathcal P}\\ A[z] &\stackrel{\delta_0}{\longrightarrow} & A[z]\end{array} $$ where $A[z]=A[z_1,...,z_n]$, a polynomial ring in $n$ variables, $\delta_0:A[z]\rightarrow A[z]$ is the unique derivation extending $\delta^A$ with $\delta_0z_j=0$, and ${\mathcal P}:A[z]\rightarrow B$ is the $A$-algebra homomorphism with ${\mathcal P}(z_j)={\mathcal P}_j(x)$, where $$\det(s\cdot 1_n-x)=\sum_{j=0}^n (-1)^j{\mathcal P}_j(x)s^{n-j}.$$ \end{thm} In the above $1_n$ is the identity matrix, $s$ is a variable, ${\mathcal P}_0=1$, and, for $j=1,...,n$, ${\mathcal P}_j(x)$ are, of course, the coefficients of the characteristic polynomial of $x$: $${\mathcal P}_1(x)=\text{tr}(x),...,{\mathcal P}_n(x)=\det(x).$$ The commutativity of the above diagram implies $\delta({\mathcal P}_j(x))=0$, i.e., the ${\mathcal P}_j(x)$ are prime integrals for any Lax $\delta^A$-flow. This implies that the characteristic polynomial of any solution to a Lax $\delta^A$-flow has $\delta$-constant coefficients; equivalently, the spectrum of any solution consists of $\delta$-constants. (This equivalence will fail in the arithmetic case.) \subsection{The arithmetic case} As usual we consider a complete discrete valuation ring $A$ with maximal ideal generated by an odd rational prime $p$ and perfect residue field, and we view $A$ as equipped with its unique $p$-derivation $\delta=\delta^A$. There are two arithmetic analogues of Lax $\delta^A$-flows: one for which the characteristic polynomial of any solution has $\delta$-constant coefficients; and another one for which the solutions have $\delta$-constant spectrum. These two conditions are not equivalent because, for a monic polynomial $$\sum_{j=0}^n a_j s^j=\prod_{j=1}^n (s-r_j) \in A[s],$$ with all its roots $r_j$ in $A$, the condition that $\delta a_j=0$ for all $j$ is {\it not} equivalent to the condition $\delta r_j=0$ for all $j$; these two conditions are equivalent for $\delta$ a derivation on a field of characteristic zero but {\it not} for $\delta$ our $p$-derivation on $A$.
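To make the last point concrete, recall that on ${\mathbb Z}\subset {\mathbb Z}_p$ the $p$-derivation is the Fermat quotient $\delta a=(a-a^p)/p$. Since $\delta$ is not additive, a monic polynomial can have $\delta$-constant roots without having $\delta$-constant coefficients; here is a minimal sketch (ours) for $p=3$. \begin{verbatim}
p = 3   # any odd prime works the same way

def delta(a):
    # the p-derivation (Fermat quotient) on Z: delta(a) = (a - a**p)/p
    q, r = divmod(a - a**p, p)
    assert r == 0        # Fermat's little theorem: p divides a - a**p
    return q

# the roots r1 = r2 = 1 are delta-constants ...
r1 = r2 = 1
print(delta(r1), delta(r2))      # 0 0

# ... but the coefficients of (s - r1)(s - r2) = s**2 + a1*s + a0 are not:
a1, a0 = -(r1 + r2), r1 * r2
print(delta(a1), delta(a0))      # delta(-2) = (-2 + 8)/3 = 2, delta(1) = 0
\end{verbatim}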
In what follows we explain these two analogues of Lax equations following \cite{foundations}. First let $T\subset G:=GL_n$ be the diagonal maximal torus, $$T=Spec\ A[t_1,t_1^{-1},...,t_n,t_n^{-1}],\ \ \ G=Spec\ A[x,\det(x)^{-1}],$$ with embedding given by $x_{jj}\mapsto t_j$ and $x_{ij}\mapsto 0$ for $j\neq i$, and consider the map $${\mathcal C}:T\times G\rightarrow G,\ \ {\mathcal C}(h,g)=g^{-1}hg.$$ We have the following: \begin{thm}\label{t*} \cite{foundations} There exists an open set $G^*$ of $G=GL_n$ and a unique Frobenius lift $\phi^{G^*}$ on $\widehat{G^*}$ such that the following diagram is commutative: $$ \begin{array}{rcl} \widehat{T^*}\times \widehat{G} & \stackrel{\phi^{T^*}_0\times \phi^G_{0}}{\longrightarrow} & \widehat{T^*}\times \widehat{G}\\ {\mathcal C}\downarrow & \ & \downarrow {\mathcal C}\\ \widehat{G^*} & \stackrel{\phi^{G^*}}{\longrightarrow} & \widehat{G^*}\end{array} $$ where $T^*:=T\cap G^*$, ${\mathcal C}(T^*\times G)\subset G^*$, $\phi^{T^*}_0$ is induced by the unique Frobenius lift on $T$ that sends $t_j\mapsto t_j^p$, and $\phi^G_{0}$ is the Frobenius lift on $\widehat{G}$ that sends $x_{ij}\mapsto x_{ij}^p$. \end{thm} Cf. \cite{foundations}, Theorem 4.50. By the way, in contrast with the classical case (Theorem \ref{par}), and in analogy with the arithmetic Euler paradigm (Theorem \ref{singularity}), we have the following ``singularity theorem'': \begin{thm} \cite{foundations} For $n\geq 3$, $G^*$ in Theorem \ref{t*} cannot be taken to be the whole of $G$. \end{thm} Cf. \cite{foundations}, Theorem 4.54. Also, it was shown in \cite{foundations}, Theorem 4.60, that any solution to the arithmetic $\delta^A$-flow $\delta^{G^*}$ attached to $\phi^{G^*}$, with spectrum contained in $A$, has the property that its spectrum consists of $\delta^A$-constants. More generally, the same property holds if one replaces the Frobenius lift $\phi^{G^*}$ on $\widehat{G^*}$ by any Frobenius lift $\phi^{G^*(\alpha)}$ on $\widehat{G^*}$ that is {\it conjugate} to $\phi^{G^*}$ in the sense that $$ \phi^{G^*(\alpha)}(x) :=\epsilon(x)^{-1} \cdot \phi^{G^*}(x) \cdot \epsilon(x),$$ where $\epsilon(x)=1+p\alpha(x)$, $\alpha(x)$ any $n\times n$ matrix with coefficients in $\mathcal O(\widehat{G^*})$. By the way, for $\alpha$ with coefficients in $A$ (rather than in $\mathcal O(\widehat{G^*})$) the arithmetic $\delta^A$-flows corresponding to $\phi^{G^*}$ and $\phi^{G^*(\alpha)}$ are {\it linear} with respect to each other in the sense of \cite{foundations}; we will not review this concept of linearity here. We refer to loc. cit. for details. On the other hand we have: \begin{thm}\label{t**} \cite{foundations} There exists an open set $G^{**}$ of $G=GL_n$ and a Frobenius lift $\phi^{G^{**}}$ on $\widehat{G^{**}}$ such that the following diagram is commutative: $$ \begin{array}{rcl} \widehat{G^{**}}& \stackrel{\phi^{G^{**}}}{\longrightarrow} & \widehat{G^{**}}\\ {\mathcal P} \downarrow & \ & \downarrow {\mathcal P}\\ \widehat{{\mathbb A}^n} &\stackrel{\phi^{{\mathbb A}^n}_0}{\longrightarrow} & \widehat{{\mathbb A}^n},\end{array} $$ where $\phi^{{\mathbb A}^n}_0$ is induced by the unique Frobenius lift on ${\mathbb A}^n=Spec\ A[z_1,...,z_n]$ which sends $z_j\mapsto z_j^p$. \end{thm} Cf. \cite{foundations}, Theorem 4.56. The polynomials ${\mathcal P}_j(x)$ are then prime integrals for the arithmetic $\delta^A$-flow $\delta^{G^{**}}$ attached to $\phi^{G^{**}}$.
In particular the coefficients of the characteristic polynomial of any solution to the arithmetic $\delta^A$-flow $\delta^{G^{**}}$ are $\delta^A$-constants. More generally, the same property holds if one replaces the Frobenius lift $\phi^{G^{**}}$ on $\widehat{G^{**}}$ by any Frobenius lift $\phi^{G^{**}(\alpha)}$ on $\widehat{G^{**}}$ that is {\it conjugate} to $\phi^{G^{**}}$ in the same sense as before, namely that $$ \phi^{G^{**}(\alpha)}(x) :=\epsilon(x)^{-1} \cdot \phi^{G^{**}}(x) \cdot \epsilon(x),$$ where $\epsilon(x)=1+p\alpha(x)$, $\alpha(x)$ any $n\times n$ matrix with coefficients in $\mathcal O(\widehat{G^{**}})$. Again, for $\alpha$ with coefficients in $A$ (rather than in $\mathcal O(\widehat{G^{**}})$) the arithmetic $\delta^A$-flows corresponding to $\phi^{G^{**}}$ and $\phi^{G^{**}(\alpha)}$ are {\it linear} with respect to each other in the sense of \cite{foundations}. We refer to loc. cit. for details. The $\phi^{G^{**}}$ in Theorem \ref{t**} is not unique; one can further subject it to appropriate constraints that make it unique; we will not go into this here. In view of the above-mentioned ``isospectrality-type'' properties of the arithmetic $\delta^A$-flows $\delta^{G^*}$ and $\delta^{G^{**}}$ in Theorems \ref{t*} and \ref{t**}, respectively, one may see these arithmetic flows as analogues of the classical Lax $\delta^A$-flows. One is then tempted to ask for an arithmetic analogue of the condition that a Lax $\delta^A$-flow be Poisson-Hamiltonian, i.e., an arithmetic analogue of the condition that the matrix $M$ in the classical equation $\delta x=[M,x]$ is of the form $M=\frac{\partial H}{\partial x}$ for some $H\in B$. The matrix $M$ itself does not have an obvious arithmetic analogue, so the problem needs to be approached on a more conceptual level.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} It has been a long time since apparently chiral non-invariant, divergent contributions were first noticed in the loop calculations of nonlinear sigma (NLS) models. It is important to realize that there are two kinds of such contributions. The first kind, which produces the mass term of the pion field, leads to the violation of the soft pion theorem. The second kind is more subtle. It does not violate the soft pion theorem and is claimed to vanish on-shell. It is well understood that the first kind contributions are canceled by those from the Jacobian~\cite{Charap:1971bn, Honerkamp:1971va, Gerstein:1971fm}. (They had been overlooked at that time.) In the dimensional regularization, non-invariant contributions of this kind are absent; this is consistent with the absence of a nontrivial Jacobian in this regularization scheme. As for the second kind contributions, although they have been discussed in the literature~\cite{Tataru:1975ys, Honerkamp:1971sh, Kazakov:1976tj, deWit:1979gw, Appelquist:1980ae}, there still seem to be unclear points, which we are going to discuss in this paper. Prescriptions for how to avoid the second kind have been proposed. T\u{a}taru~\cite{Tataru:1975ys} showed, using dimensional regularization, that the second kind contributions are proportional to the (classical) equations of motion and do not contribute to the S-matrix, following the argument by 't~Hooft~\cite{'tHooft:1973us}. Honerkamp~\cite{Honerkamp:1971sh} and Kazakov, Pervushin, and Pushkin~\cite{Kazakov:1976tj} proposed to use the background field method. This essentially amounts to modifying the theory. Appelquist and Bernard~\cite{Appelquist:1980ae} pointed out that a field redefinition removes such contributions. The most popular and practical method is to consider not the pion field but the currents~\cite{Gasser:1983yg, Gasser:1984gg}. In recent papers Ferrari \textit{et al.}~\cite{Ferrari:2005ii, Ferrari:2005va, Ferrari:2005fc, Bettinelli:2007zn} reconsidered the renormalization problem emphasizing the symmetry point of view, heavily relying on the Ward-Takahashi identities, and gave a subtraction procedure consistent with them. They claim that the use of the dimensional regularization, in which the tadpole contributions are absent, is essential. In this paper, we instead use lattice regularization for the following reasons: (i) Since everything is well defined in the lattice regularization, it is obvious that there is no source of the violation of chiral symmetry (up to a ``spurion'' mass term), if we start with a symmetric partition function. This fact is important for establishing that chiral symmetry is not lost despite the appearance of apparently non-invariant terms (ANTs). Hence the name; they \textit{do not} violate chiral symmetry, though they \textit{appear} to be non-invariant. (ii) In the case of the first kind contributions, the Jacobian plays an essential role. It is interesting to see if the Jacobian plays any role for the second kind. The logarithm of the Jacobian is proportional to $\delta^4(0)$, thus in the dimensional regularization it is trivially set to zero, while in other continuous regularization schemes it is ill-defined. In the lattice regularization, on the other hand, it is regularized and well-defined, so that one can carefully examine the effects of the Jacobian.
One might suspect that the (naive) Jacobian is actually the latent source of the violation of chiral symmetry, and that a properly defined Jacobian should contain momentum-dependent terms in order for the theory to be chiral invariant, which eventually cancel the ANTs produced by loop diagrams. It is therefore important to see what happens with the well-defined, momentum-independent Jacobian in the manifestly chiral invariant theory. (iii) Lattice regularization is completely different from dimensional regularization. It is therefore useful to see if the existence of ANTs is independent of the regularization scheme. To the best of our knowledge, ANTs in four dimensions have never been calculated by using lattice regularization in the literature. (In 2+$\epsilon$ dimensions, Symanzik~\cite{Symanzik:1983gh} obtained ANTs in the lattice regularization.) The purpose of this paper is to establish the existence of ANTs in lattice perturbation theory at one loop, preserving chiral symmetry manifestly. This implies that ANTs are compatible with chiral symmetry. We also see that the Jacobian does not play an important role in generating ANTs and that the appearance of ANTs is independent of regularization schemes. Our calculation is a straightforward generalization of Shushpanov and Smilga~\cite{Shushpanov:1998ms}, who calculated only the self-energy contributions. We consider the four-point (amputated) Green functions at low momenta ($p \ll 1/a$) to order $\mathcal{O}(p^4)$ at the one-loop level. A mass term is introduced in order to regularize the IR singularities. Unlike in the self-energy calculation, the IR regularization with the mass term plays an important role in the calculations of the four-point functions. We find that its divergent part contains ANTs, which cannot be removed by symmetric counterterms. We also find that the Jacobian does not play an essential role. The ANTs vanish on the mass shell. In the next section, we establish the existence of the ANTs by an explicit one-loop calculation. In Sec.~\ref{sec:conclusion}, we summarize the results and give a discussion. Appendix~\ref{sec:formulae} contains some integration formulae. \section{Lattice perturbation theory} \label{sec:lattice} \subsection{Setup} In this section, we give an explicit one-loop calculation for the four-point amputated Green function in the $SU(2)\times SU(2)$ NLS model in four dimensions. In the NLS model, as an effective theory, there are infinitely many terms with increasing numbers of derivatives. We are however interested only in whether there arises an ANT of $\mathcal{O}(p^2)$ or of $\mathcal{O}(p^4)$ at the one-loop level. (Note that, unlike in the dimensional regularization, there are contributions of $\mathcal{O}(p^2)$ from the one-loop diagrams in the lattice regularization.) To see this, we will consider the one-loop contributions only with vertices of $\mathcal{O}(p^2)$ and examine whether the contributions of $\mathcal{O}(p^2)$ and of $\mathcal{O}(p^4)$ can be absorbed in the symmetric terms. There may be other ANTs involving higher derivative vertices, but they are not related to the lower order contributions by the symmetry, and cannot cancel the ANTs that may arise at this lowest order.
In the continuum, the action of $\mathcal{O}(p^2)$ is given by \begin{equation} \mathcal{L}_2=\frac{F^2}{4}\mbox{\rm Tr} \left( \partial_\mu U^\dagger \partial_\mu U \right) -\frac{F^2m^2}{4}\mbox{\rm Tr} \left( U + U^\dagger \right), \label{lowest} \end{equation} where $U$ is an $SU(2)$-valued field and $F$ is the coupling constant. (In the dimensional regularization, it is the pion decay constant in the chiral limit.) We also introduce the mass term to regularize the IR singularities. On the hypercubic lattice with $a$ being the lattice constant, the action may be written as \begin{equation} S^{lat}_2[U]=\frac{F^2a^2}{4}\sum_n \bigg[ \sum_{\mu} \mbox{\rm Tr} \left( 2-U_{n}^\dagger U_{n+\mu}-U^\dagger_{n+\mu}U_n \right) -m^2a^2 \; \mbox{\rm Tr} \left( U_n^\dagger + U_n \right) \bigg], \end{equation} which is obtained by the simple replacement, \begin{equation} \partial_\mu U(x) \rightarrow (U_{n+\mu}-U_n)/a. \end{equation} There are many other discretization methods, but the choice does not make a crucial difference in the following discussions, so we stick to this simplest choice. The partition function is given by \begin{equation} Z=\int \prod_n DU_n \; e^{-S_2^{lat}[U]}, \end{equation} where $DU_n$ stands for the invariant measure under the global $SU(2)_L\times SU(2)_R$ transformations, \begin{equation} U_n \rightarrow g_L U_n g_R^\dagger, \end{equation} where $g_L$ and $g_R$ are $SU(2)_{L,R}$ elements. Note that if the mass term is treated as a ``spurion'' field~\cite{Gasser:1983yg}, and transformed properly, the theory is manifestly invariant under $SU(2)_L\times SU(2)_R$. We introduce pion fields to do perturbation theory. We employ the following parameterization, \begin{equation} U_n=\sigma_n +i\pi^a_n \tau^a /F,\quad \sigma_n=\sqrt{1-\left(\pi^a_n\right)^2/F^2}. \end{equation} There are of course other parameterizations, but the main results are independent of the choice. In terms of the pion fields, the measure is written as \begin{equation} \prod_n DU_n = e^{-S_{Jacob}^{lat}}\prod_{n,a} D\pi_n^a, \end{equation} with~\cite{Boulware:1970zc} \begin{equation} S_{Jacob}^{lat}=-\frac{1}{2}a^4\sum_n\frac{1}{a^4}\mbox{\rm Tr} \ln \left[ \delta_{ab}+\frac{\pi_n^a \pi_n^b}{F^2-(\pi_n^c)^2} \right]. \end{equation} Note that $\delta^4(0)$ is regularized as $1/a^4$ on the lattice. It is important to note that the vertices from the Jacobian are momentum independent. Expanding $S_2^{lat}$ and $S_{Jacob}^{lat}$ in terms of the pion fields $\pi$, we obtain \begin{eqnarray} S^{lat}_2&=&\frac{a^2}{2}\sum_n \left[ \sum_\mu\left(\pi^a_{n+\mu}-\pi^a_n\right)^2 +m^2a^2\left(\pi^a_n\right)^2 \right] -\frac{a^2}{4F^2}\sum_{n,\mu} \left(\pi_n^a\right)^2\left(\pi^b_{n+\mu}\right)^2 \nonumber \\ &&+\frac{a^2}{8F^2}\left(m^2a^2+8\right)\sum_n \left[ \left(\pi_n^a\right)^2 \right]^2 -\frac{a^2}{16F^4}\sum_{n,\mu} \left(\pi_n^a\right)^2\left(\pi_{n+\mu}^b\right)^2 \left[ \left(\pi_n^c\right)^2+\left(\pi_{n+\mu}^c\right)^2 \right] \nonumber \\ &&+\frac{a^2}{16F^4}\left(m^2a^2+8\right)\sum_n \left[ \left(\pi_n^a\right)^2 \right]^3 +\cdots, \\ S_{Jacob}^{lat}&=&\sum_n \left[ -\frac{1}{2}\frac{(\pi^a_n)^2}{F^2} -\frac{1}{4}\frac{\left[(\pi^a_n)^2\right]^2}{F^4} +\cdots \right], \label{jacobisqrt} \end{eqnarray} where we retain only the terms which contribute to the two- and four-point Green functions up to and including $\mathcal{O}(p^4/F^4)$. Note that, because of the discretization, it is difficult to count the power of momenta buried, say, in $1-\cos(ap)$. Instead we count the power of $1/F$.
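As a quick consistency check of the Jacobian expansion in Eq.~(\ref{jacobisqrt}) (a sketch of ours, not part of the derivation), note that the matrix inside the logarithm has eigenvalues $1$, $1$, and $F^2/(F^2-\pi^2)$, so the per-site Jacobian action equals $\frac{1}{2}\ln\left(1-\pi^2/F^2\right)$; expanding in $u=\pi^2/F^2$ reproduces the coefficients $-1/2$ and $-1/4$ above. \begin{verbatim}
import sympy as sp

F = sp.Symbol('F', positive=True)
pion = sp.Matrix(sp.symbols('p1 p2 p3', real=True))
pion2 = (pion.T * pion)[0, 0]

# matrix inside the Jacobian: delta_ab + pi_a pi_b / (F^2 - pi^2)
M = sp.eye(3) + (pion * pion.T) / (F**2 - pion2)
print(sp.simplify(M.det() - F**2/(F**2 - pion2)))   # 0, so Tr ln M = -ln(1 - pi^2/F^2)

u = sp.Symbol('u')   # u = pi^2/F^2
print(sp.series(sp.log(1 - u)/2, u, 0, 3))          # -u/2 - u**2/4 + O(u**3)
\end{verbatim}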
Note that there are no terms with positive powers of $F$. The Feynman rules are obtained in the usual way, treating all the contributions from $S_{Jacob}^{lat}$ as interactions. (They are of higher order in $1/F$.) The propagator is the usual one, \begin{equation} \langle \pi^a_n\pi^b_m\rangle_0 = \delta^{ab}\int_{\Box}\frac{d^4k}{(2\pi)^4} \frac{e^{ik(n-m)a}} {m^2 + \left[k\right]_a^2}, \label{propagator} \end{equation} where $\int_\Box d^4k $ stands for the integration over the hypercube, \begin{equation} \left\{ k\ |\ k_\mu \in \left[ -\pi/a, \pi/a \right], \mu=1, \cdots, 4 \right\}, \end{equation} and we have introduced a useful notation, \begin{equation} \left[k\right]_a^2\equiv \frac{2}{a^2}\sum_\mu \left(1-\cos\left(k_\mu a\right)\right), \end{equation} which goes to $k^2$ in the continuum limit $a\rightarrow 0$. $S_2^{lat}$ leads to the following four-point and six-point vertices: \begin{equation} -\frac{1}{F^2} \bigg\{ \delta^{ab}\delta^{cd} \left( [k_a+k_b]_a^2 +m^2 \right) + \delta^{ac}\delta^{bd} \left( [k_a+k_c]_a^2 +m^2 \right) + \delta^{ad}\delta^{bc} \left( [k_a+k_d]_a^2 +m^2 \right) \bigg\}, \end{equation} and \begin{equation} -\frac{1}{F^4} \Bigg\{ \delta^{ab} \left( [k_a+k_b]_a^2+ m^2 \right) \left[ \delta^{cd}\delta^{ef} \!\!+\!\delta^{ce}\delta^{df} \!\!+\!\delta^{cf}\delta^{de} \right] +14 \ \mbox{similar terms} \Bigg\}, \end{equation} respectively. See FIG.~\ref{4-point_latt} and FIG.~\ref{6-point_latt}. \begin{figure}[tbp] \begin{tabular}{cc} \begin{minipage}[t]{0.45\linewidth} \begin{center} \includegraphics[width=0.7\linewidth,clip]{4-point_latt.eps}% \caption{\label{4-point_latt}Four-point vertex from $S_2^{lat}$. The indices $a,\cdots, d$ stand for the isospins of the pion fields, and $k_a, \cdots, k_d$ are the corresponding incoming momenta.} \end{center} \end{minipage}\quad\quad\quad \begin{minipage}[t]{0.45\linewidth} \begin{center} \includegraphics[width=0.7\linewidth,clip]{6-point_latt.eps}% \caption{\label{6-point_latt} Six-point vertex from $S_2^{lat}$. The indices $a,\cdots, f$ stand for the isospins of the pion fields. The momentum labels are omitted.} \end{center} \end{minipage} \end{tabular} \end{figure} \subsection{Self-energy} Shushpanov and Smilga~\cite{Shushpanov:1998ms} calculated the self-energy contribution from the four-point vertex with a massless propagator. We do the same calculation with a finite mass (see FIG.~\ref{selfenergy_latt}), \begin{equation} -\Sigma^{ab}(p) =-\delta^{ab}\Sigma(p) =-\frac{\delta^{ab}}{2F^2}\int_\Box \frac{d^4k}{(2\pi)^4} \frac{\left[k+p\right]_a^2+\left[k-p\right]_a^2+5m^2} {m^2+\left[k\right]_a^2}. \end{equation} Note that this leading order contribution is of order $1/F^2$. Following their calculations, we find \begin{equation} \Sigma(p)= \left[ \frac{1}{2F^2a^2}\left(1+\frac{m^2a^2}{8}\right)\left[p\right]_a^2 +\frac{3m^2}{4F^2a^2} \right]\mathcal{I}_0 - \frac{1}{8F^2a^2}\left[p\right]_a^2 +\frac{1}{F^2a^4}, \label{selfenergy} \end{equation} where we have introduced $\mathcal{I}_0$, \begin{equation} \mathcal{I}_n\equiv \int_0^\infty ds s^n e^{-\frac{s}{2}(m^2a^2+8)}\left[I_0(s)\right]^4, \label{commonint} \end{equation} and $I_0(s)$ is the modified Bessel function, \begin{equation} I_0(s)=\int_{-\pi}^{\pi}\frac{dk}{2\pi}e^{s\cos k}. \label{modifiedB} \end{equation} The last term of Eq.~(\ref{selfenergy}) is quartically divergent, and it is cancelled by the $\mathcal{O}\left(1/F^2\right)$ contribution from $S_{Jacob}^{lat}$, giving no ANTs.
This cancellation mechanism is well-known~\cite{Charap:1971bn, Honerkamp:1971va, Gerstein:1971fm}. Note that the modified Bessel function behaves for $s \gg 1$ as \begin{equation} I_0(s) = \frac{e^s}{\sqrt{2\pi s}} \left(1+\mathcal{O}\left(\frac{1}{s}\right)\right), \end{equation} and for $0<s \ll 1$ as \begin{equation} I_0(s) = 1+\mathcal{O}(s^2), \end{equation} so that the integral $\mathcal{I}_n$ is finite as long as $m$ is kept finite. Although it is finite, it is not analytic at $m^2a^2=0$. One cannot expand the result in terms of $m^2a^2$. This kind of singularity at $m^2a^2=0$ persists in the calculations of the four-point functions which we discuss in the next subsection. We therefore keep the mass terms in the exponents (which come from the propagators) intact. It is instructive to compare the cutoff integral (for $m \ll \Lambda$) \begin{equation} 2\pi^2\int_0^\Lambda \frac{k^3dk}{(2\pi)^4}\frac{1}{k^2+m^2} \sim c_2 \Lambda^2 + c_0 m^2 \ln \left(\frac{m^2}{\Lambda^2}\right), \end{equation} where $c_2$ and $c_0$ are numerical constants, with the corresponding lattice version, \begin{equation} \int_\Box \frac{d^4k}{(2\pi)^4}\frac{1}{\left[k\right]_a^2+m^2} =\frac{1}{2a^2}\mathcal{I}_0\, . \end{equation} By identifying $\Lambda\sim 1/a$, we see \begin{equation} \mathcal{I}_0 \sim \tilde{c}_2 + \tilde{c}_0 (m^2a^2) \ln (m^2a^2) \end{equation} for $ma \ll 1$, where $\tilde{c}_2$ and $\tilde{c}_0$ are other numerical constants. The second term causes the nonanalyticity of $\mathcal{I}_0$. Similarly for $\mathcal{I}_1$, we have \begin{equation} \mathcal{I}_1 \sim \tilde{d}_2 + \tilde{d}_0 \ln (m^2a^2)\, , \end{equation} with some numerical constants $\tilde{d}_2$ and $\tilde{d}_0$. \subsection{Four-point function} \begin{figure}[tbp] \begin{tabular}{cc} \begin{minipage}[t]{0.45\linewidth} \begin{center} \includegraphics[width=0.7\linewidth,clip]{selfenergy_latt.eps}% \caption{\label{selfenergy_latt}Self-energy contribution from the four-point vertex from $S_2^{lat}$.} \end{center} \end{minipage}\quad\quad\quad \begin{minipage}[t]{0.45\linewidth} \begin{center} \includegraphics[width=0.7\linewidth,clip]{4-p_from6_latt.eps}% \caption{\label{4-p_from6_latt}Contribution from the six-point vertex from $S_2^{lat}$ to the four-point function.} \end{center} \end{minipage} \end{tabular} \end{figure} There are two kinds of contributions to the four-point function besides the ones from $S_{Jacob}^{lat}$: the ones involving a six-point vertex (FIG.~\ref{4-p_from6_latt}) and the ones involving two four-point vertices (FIG.~\ref{fish_latt}). In general, the four-point function in the continuum has the following structure, \begin{equation} \delta_{ab}\delta_{cd}A(p_a,p_b,p_c,p_d) +\delta_{ac}\delta_{bd}A(p_a,p_c,p_b,p_d) +\delta_{ad}\delta_{bc}A(p_a,p_b,p_d,p_c). \label{amp} \end{equation} It has the same structure on the lattice. Since the amplitude is symmetric under crossing, it is sufficient to calculate only the contributions $A_L(p_a,p_b,p_c,p_d)$ on the lattice that correspond to the first term $A(p_a,p_b,p_c,p_d)$ of Eq.~(\ref{amp}). \begin{figure}[t] \includegraphics[width=0.8\linewidth,clip]{fish_latt.eps}% \caption{\label{fish_latt}Three s-, t-, u-channel contributions from the four-point vertex from $S_2^{lat}$ to the four-point function.} \end{figure} From FIG.~\ref{4-p_from6_latt}, we have the contribution \begin{eqnarray} A_L(p_a,p_b,p_c,p_d)^{\mbox{\scriptsize FIG.}~\ref{4-p_from6_latt}} \!\!
&=&-\frac{1}{2F^4}\int_\Box \frac{d^4k}{(2\pi)^4}\frac{1}{m^2+[k]_a^2} \nonumber \\ &&{}\times \Bigg\{ 10[p_a+p_b]_a^2 +\!\!\!\! \sum_{i=a,b,c,d}\!\!\!\left([k+p_i]_a^2+[k-p_i]_a^2\right) \!+\!21m^2 \Bigg\}, \end{eqnarray} and from FIG.~\ref{fish_latt}, \begin{align} &A_L(p_a,p_b,p_c,p_d)^{\mbox{\scriptsize FIG.}~\ref{fish_latt}} \nonumber \\ &= \frac{1}{2F^4} \int_\Box\frac{d^4k}{(2\pi)^4} \frac{1}{m^2+\left[k\right]_a^2} \frac{1}{m^2+\left[p_a+p_b-k\right]_a^2} \nonumber \\ &{} \times \bigg( 3\left(\left[p_a+p_b\right]_a^2+m^2\right)^2 +2\left(\left[p_a\!+\!p_b\right]_a^2\!+\!m^2\right) \left(\left[k\!+\!p_d\right]_a^2\!+\!\left[k\!-\!p_a\right]_a^2\!+\!2m^2\right) \bigg) \nonumber \\ &{}+\frac{1}{2F^4} \int_\Box\frac{d^4k}{(2\pi)^4} \frac{1}{m^2+\left[k\right]_a^2} \frac{1}{m^2+\left[p_a+p_c-k\right]_a^2} \nonumber \\ &{} \times 2\left(\left[p_a-k\right]_a^2+m^2\right) \left(\left[k+p_b\right]_a^2+m^2\right) \nonumber \\ & {}+\frac{1}{2F^4} \int_\Box\frac{d^4k}{(2\pi)^4} \frac{1}{m^2+\left[k\right]_a^2} \frac{1}{m^2+\left[p_a+p_d-k\right]_a^2} \nonumber \\ &{} \times 2\left(\left[p_a-k\right]_a^2+m^2\right) \left(\left[k+p_b\right]_a^2+m^2\right). \end{align} Note that these are of order $1/F^4$. If we set all the external momenta and the mass $m$ to be zero, we have \begin{eqnarray} \left. A_L(p_a,p_b,p_c,p_d)^{\mbox{\scriptsize FIG.}~\ref{4-p_from6_latt}} \right|_{p_i=m=0} &=&-\frac{4}{F^4a^4}\, , \\ \left. A_L(p_a,p_b,p_c,p_d)^{\mbox{\scriptsize FIG.}~\ref{fish_latt}} \right|_{p_i=m=0} &=&\frac{2}{F^4a^4}\, . \end{eqnarray} Their sum exactly cancels the $\mathcal{O}\left(1/F^4\right)$ contribution from the Jacobian, $2/F^4a^4$. Thus the amplitude satisfies the soft-pion theorem. There is no momentum (or mass) independent ANT. Note that all the Jacobian contributions are used up to cancel the momentum (and mass) independent contributions at this order. The vertices from the Jacobian are thus shown not to produce ANTs. A straightforward but tedious calculation leads to the following result for the one-loop contributions, \begin{eqnarray} A_L(p_a,p_b,p_c,p_d)\!\! &=& \! -\frac{3\mathcal{I}_0}{4F^4a^2}(2s+3m^2) +\frac{s}{8F^4 a^2}\left(1-\frac{1}{2}(8+m^2a^2)\mathcal{I}_0\right) \nonumber \\ &&{} +\frac{\mathcal{I}_1}{24F^4} \left[ 9(s+m^2)^2-3(s+m^2)(2s-\Delta)+2 Z(p_a, p_b, p_c, p_d) \right] \nonumber \\ &&{} +\frac{\mathcal{I}_0}{288F^4} \bigg[ 9(s+m^2)(2s-\Delta)-8 Z(p_a, p_b, p_c, p_d) \nonumber \\ &&{} \qquad \qquad \qquad \qquad \qquad \qquad \ \ +48\sum_\mu(p_a)_\mu (p_b)_\mu (p_c)_\mu (p_d)_\mu \bigg], \label{A_L_expanded} \end{eqnarray} where $s=(p_a+p_b)^2$, $t=(p_a+p_c)^2$, and $u=(p_a+p_d)^2$, expanded in powers of the external momenta up to and including $\mathcal{O}(p^4/F^4)$. Here we have introduced the notation, \begin{eqnarray} \Delta &\equiv& s + t + u\, , \nonumber \\ Z(p_a, p_b, p_c, p_d) &\equiv& \frac{1}{2} \left[ s(t+u)+2(t^2+u^2)-2(t+u)\Delta +2(\Delta_{ac}\Delta_{bd}+\Delta_{ad}\Delta_{bc}) -\Delta_{ab}\Delta_{cd} \right]\, , \nonumber \\ \Delta_{ij}&\equiv& p_i^2+p_j^2\, . \end{eqnarray} Some useful formulae to calculate Eq.~(\ref{A_L_expanded}) are given in Appendix~\ref{sec:formulae}. The terms proportional to $1/a^2$ correspond to quadratically divergent ones. The chiral logarithms are contained in $\mathcal{I}_n$. The last term in Eq.~(\ref{A_L_expanded}) is not rotationally invariant. This is not a surprise, because the lattice regularization breaks rotational invariance.
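The chiral logarithm in $\mathcal{I}_1$ can also be exhibited numerically. The large-$s$ behavior $e^{-4s}\left[I_0(s)\right]^4\approx (2\pi s)^{-2}$ suggests the value $\tilde{d}_0=-1/(4\pi^2)$, and the following sketch (ours; it assumes SciPy is available and is limited only by quadrature accuracy) extracts this slope directly from Eq.~(\ref{commonint}). \begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e   # i0e(s) = exp(-s) I_0(s)

def script_I(n, ma2):
    # Eq. (commonint) rewritten with the scaled Bessel function for
    # numerical stability: s^n exp(-(s/2) m^2 a^2) [i0e(s)]^4
    f = lambda s: s**n * np.exp(-0.5*ma2*s) * i0e(s)**4
    return quad(f, 0.0, np.inf, limit=500)[0]

x1, x2 = 1e-2, 1e-4
d0 = (script_I(1, x1) - script_I(1, x2)) / np.log(x1/x2)
print(d0, -1.0/(4*np.pi**2))   # both approximately -0.0253
\end{verbatim}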
In order to see if the result is manifestly chiral invariant, we need to relate the expression to local operators. The terms in the first line of Eq.~(\ref{A_L_expanded}) are proportional to $1/a^2$ (i.e., quadratically divergent) and quadratic in external momenta. It is important to notice that they depend only on $s$, except for the mass $m$. Note that there is only one chiral invariant operator of $\mathcal{O}(p^2)$: Eq.~(\ref{lowest}) in the continuum. It produces terms of exactly the same form as those in the first line, and thus may cancel the divergence. That is, the terms in the first line do not contain ANTs. A vigilant reader may notice that we have already considered the same counterterm to cancel the divergence in the self-energy contribution, so that its coefficient has already been fixed. Here comes an important feature of the perturbation theory: in terms of $U$, there is only one parameter, i.e., the coupling constant $F$. On the other hand, when we introduce the pion field, we have another parameter, the wave function renormalization constant. Introducing the renormalized coupling constant $F_R$ and the renormalized field $\pi^a_{Rn}$, we have \begin{equation} \frac{\pi^a_n}{F}= \left(\frac{1+\delta_\pi}{1+\delta_F}\right) \frac{\pi^a_{Rn}}{F_R}. \end{equation} By tuning only the parameter $\delta_\pi$, one can cancel the divergence in the self-energy contribution. The parameter $\delta_F$ is then determined so as to cancel the divergence in the first line of Eq.~(\ref{A_L_expanded}). Note that we consider counterterms of the continuum form in order to see if ANTs emerge. In momentum space, the difference between the continuum and the lattice regularized forms is of higher order in momenta, and is not rotationally invariant. In order to cancel the divergence coming from this difference, we need more counterterms which are of higher order in momenta. Since they are not rotationally invariant, the existence of such counterterms does not interfere with the following argument for the existence of ANTs, which, as we will see shortly, are rotationally invariant. The terms in the second and third lines of Eq.~(\ref{A_L_expanded}) are quartic in momenta (and the mass). The terms in the second line contain a logarithmic divergence due to $\mathcal{I}_1$, while those in the third line are finite. There are only three chiral invariant operators of $\mathcal{O}(p^4)$ available in the continuum: \begin{eqnarray} \mathcal{O}_{1}&=& \mbox{\rm Tr}\left(\partial_{\mu}U^{\dagger}\partial_{\mu}U\right) \mbox{\rm Tr}\left(\partial_{\nu}U^{\dagger}\partial_{\nu}U\right), \label{invop1} \\ \mathcal{O}_{2}&=& \mbox{\rm Tr}\left(\partial_{\mu}U^{\dagger}\partial_{\nu}U\right) \mbox{\rm Tr}\left(\partial_{\mu}U^{\dagger}\partial_{\nu}U\right), \label{invop2} \\ \mathcal{O}_{3}&=& \mbox{\rm Tr}\left(\partial^2_{\mu}U^{\dagger}\partial^2_{\nu}U\right). \label{invop3} \end{eqnarray} (Note that for $SU(2)$ there are some nontrivial relations which reduce the number of independent operators. For example, $\mbox{\rm Tr}\left[(\partial_\mu U^\dagger \partial_\mu U)^2 \right]$ is proportional to $\mathcal{O}_1$.) If the terms in the second and third lines of Eq.~(\ref{A_L_expanded}) are of the same form as those produced by some linear combinations of these operators, then these divergences may be cancelled by manifestly chiral invariant operators. Let $C_i(p_a,p_b,p_c,p_d)/F^4\ (i=1,2,3)$ denote the contributions of these operators to the amplitude, $A_L(p_a,p_b,p_c,p_d)$, to $\mathcal{O}(p^4/F^4)$.
They are given by \begin{eqnarray} C_{1}(p_a,p_b,p_c,p_d)&=& \left(s-\Delta_{ab}\right)\left(s-\Delta_{cd}\right), \\ C_{2}(p_a,p_b,p_c,p_d)&=& \left(t-\Delta_{ac}\right)\left(t-\Delta_{bd}\right) + \left(u-\Delta_{bc}\right)\left(u-\Delta_{ad}\right), \\ C_{3}(p_a,p_b,p_c,p_d)&=& s^2, \end{eqnarray} respectively. In the massless limit, the terms in the square bracket in the second line of Eq.~(\ref{A_L_expanded}) may be written as \begin{equation} -C_{1}(p_a,p_b,p_c,p_d) +2C_{2}(p_a,p_b,p_c,p_d) +3C_{3}(p_a,p_b,p_c,p_d) +3s\Delta, \label{secondline} \end{equation} and those in the third line as \begin{equation} 4C_{1}(p_a,p_b,p_c,p_d) -8C_{2}(p_a,p_b,p_c,p_d) +18C_{3}(p_a,p_b,p_c,p_d) -9s\Delta. \label{thirdline} \end{equation} It is important to note that the last terms of Eqs.~(\ref{secondline}) and (\ref{thirdline}) cannot be expressed as contributions of chiral invariant operators. We have thus established the existence of ANTs. We remark that the terms which correspond to the logarithmic divergence, Eq.~(\ref{secondline}), are different from those in the continuum. Compare Eq.~(\ref{secondline}) with Eq.~(3.3) in Ref.~\cite{Appelquist:1980ae}. It is interesting to note that the ANTs are rotationally invariant. We also note that they are proportional to $\Delta$, i.e., the ANTs vanish if the (massless) on-shell conditions are imposed for all the external momenta. The terms in the fourth line of Eq.~(\ref{A_L_expanded}) are finite. They are manifestly chiral invariant, though they are not rotationally invariant. Actually, they can be obtained from the chiral invariant operator of the form, \begin{equation} \sum_\mu \mbox{\rm Tr} \left( \partial_\mu U^\dagger \partial_\mu U \partial_\mu U^\dagger \partial_\mu U \right). \end{equation} Even though it is unsettling to have such a rotationally non-invariant term, it has nothing to do with ANTs. \section{Conclusion} \label{sec:conclusion} In this paper, we have established the existence of ANTs in lattice chiral perturbation theory. Since the definition of the partition function regularized on a lattice is manifestly chiral invariant (up to the mass term which regularizes the infrared singularities), and the calculations are consistent with chiral symmetry, the symmetry is not broken at all. Nevertheless the one-loop diagrams generate ANTs. ANTs are compatible with chiral symmetry. Their existence has been known in the literature; our contribution is the first demonstration of it in an explicit lattice calculation. On a lattice the Jacobian is well regularized, and we have shown that it is not responsible for the appearance of ANTs. The role played by the Jacobian is just to cancel the momentum independent, chirally non-invariant contributions of the first kind mentioned in the Introduction. The result of the present paper also supports the statement that the appearance of ANTs is independent of the regularization scheme. We find that the ANTs vanish when all the external momenta are on-shell, consistent with the results obtained with dimensional regularization. This means that the ANTs do not contribute to the S-matrix for two-pion scattering, at least at the one-loop level. Finally, we discuss a few points concerning ANTs which are still unclear to us. Our original motivation for this study is related to setting up the Wilsonian renormalization group calculation for the nonlinear sigma model. The appearance of ANTs would cause a problem for the standard program of the approach, even though they are compatible with chiral symmetry.
It would be desirable to have a better statement of symmetry than just the manifest invariance of the Wilsonian effective action. In other words, one should seek a combination of the Wilsonian program and the Ward-Takahashi identities. It is not clear to us whether the ANTs in general (i.e., at higher orders and/or in $n(>4)$-point functions) also fail to contribute to the S-matrix. Ferrari \textit{et al.}~\cite{Ferrari:2005va} discussed general forms of ANTs in the effective action, which is the generating functional of the one-particle irreducible Green functions. In order to see how these terms contribute to the S-matrix, one needs to examine the effects of one-particle reducible diagrams.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In this paper we write $A,B$ to denote finite subsets of ${\mathbb R}^d$, and $|\cdot|$ stands for their cardinality. We say that $A\subset {\mathbb R}^d$ is $d$--{\it dimensional} if it is not contained in any affine hyperplane of ${\mathbb R}^d$. Equivalently, the real affine span of $A$ is ${\mathbb R}^d$. For objects $X_1,\ldots,X_k$ in ${\mathbb R}^d$, $[X_1,\ldots,X_k]$ denotes their convex hull. The \emph{lattice generated by $A$} is the additive subgroup $\Lambda = \Lambda(A)\subset {\mathbb R}^d$ generated by $A-A$, and $A$ is called \emph{saturated} if it satisfies $A=[A]\cap \Lambda (A)$. Our starting point is a pair of classical results. The first one is from the 1950's, due to Kemperman \cite{Kem56}, and popularized by Freiman \cite{Fre73}: if $A$ and $B$ are finite nonempty subsets of ${\mathbb R}$, then $$ |A+ B|\geq |A|+|B|-1, $$ with equality if and only if $A$ and $B$ are arithmetic progressions of the same difference. The other result, the Brunn-Minkowski inequality, dates back to the 19th century. It says that if $X,Y\subset {\mathbb R}^d$ are compact nonempty sets then $$ \lambda(X+Y)^{\frac1d}\geq \lambda(X)^{\frac1d}+\lambda(Y)^{\frac1d} $$ where $\lambda$ stands for the Lebesgue measure. Moreover, provided that $\lambda (X)\lambda(Y)>0$, equality holds if and only if $X$ and $Y$ are convex homothetic sets. Various discrete analogues of the Brunn-Minkowski inequality have been established in Bollob\'as, Leader \cite{BoL91}, Gardner, Gronchi \cite{GaG01}, Green, Tao \cite{GrT06}, Gonz\'alez-Merino, Henze \cite{MeH}, Hern\'andez, Iglesias and Yepes \cite{HeIgYe17}, Huicochea \cite{Hul18} in any dimension, and Grynkiewicz, Serra \cite{GrS10} in the planar case. Most of these papers use the method of compression, which changes a finite set into a set better suited for sumset estimates, but does not control the convex hull. Unfortunately the known analogues are not as simple in their form as the original Brunn--Minkowski inequality. For instance, a formula due to Gardner and Gronchi \cite{GaG01} says that, if $A$ is $d$--dimensional, then \begin{equation} \label{GraGro} |A+B|^{\frac1d}\geq (d!)^{-\frac1d}(|A|-d)^{\frac1d}+|B|^{\frac1d}. \end{equation} Concerning the case $A=B$, Freiman \cite{Fre73} proved that if the dimension of $A$ is $d$, then \begin{equation} \label{Freimanmulti} |A+A|\ge (d+1)|A|-{d+1 \choose 2}. \end{equation} Both estimates are optimal. In particular, we cannot expect a true discrete analogue of the Brunn--Minkowski inequality if the notion of volume is simply replaced by cardinality. Here we conjecture and discuss a more direct version of the Brunn--Minkowski inequality in which the notion of volume is replaced by the number of full dimensional simplices in a triangulation of the convex hull of the finite set. For any finite $d$--dimensional set $A\subset {\mathbb R}^d$ we write $T_A$ to denote some triangulation of $A$, by which we mean a triangulation of $[A]$ using $A$ as the set of vertices. We denote by $|T_A|$ the number of $d$-dimensional simplices in $T_A$. In dimension two the number $|T_A|$ is the same for all triangulations of $A$, so we denote it by ${\rm tr}(A)$. More precisely, if $\Delta_A$ and $\Omega_A$ denote the numbers of points of $A$ in the boundary $\partial [A]$ and in the interior ${\rm int}[A]$, respectively, then \begin{equation} \label{Eulerpoints} {\rm tr}(A)=\Delta_A+2\Omega_A-2=2|A|-\Delta_A-2. \end{equation} Therefore in dimension two we can formulate the following discrete analogue of the Brunn--Minkowski inequality.
\begin{conj} \label{ruzsabrunnconj} If finite $A,B\subset{\mathbb R}^2$ in the plane are not collinear, then $$ {\rm tr}(A+B)^{\frac12}\geq {\rm tr}(A)^{\frac12}+{\rm tr}(B)^{\frac12}. $$ \end{conj} One case where Conjecture~\ref{ruzsabrunnconj} holds with equality is when $A$ and $B$ are homothetic saturated sets with respect to the same lattice; namely, $A=\Lambda\cap k\cdot P$ and $B=\Lambda \cap m\cdot P$ for a lattice $\Lambda$, polygon $P$ and integers $k,m\ge 1$. This follows from the equality case of the original Brunn-Minkowski inequality, since $A+B=\Lambda\cap (k+m)\cdot P$ and the area of any triangle in a suitable triangulation is $\frac12\det\Lambda$. We also note that Conjecture \ref{ruzsabrunnconj}, together with the equality \eqref{Eulerpoints} and the fact that $\Delta_{A+B}\ge \Delta_A+\Delta_B$, would imply the following inequality of Gardner and Gronchi \cite[Theorem 7.2]{GaG01} for sets $A$ and $B$ saturated with respect to the same lattice: $$ |A+B|\ge |A|+|B|+(2|A|-\Delta_A-2)^{1/2}(2|B|-\Delta_B-2)^{1/2}-1. $$ Unfortunately we have not been able to prove Conjecture~\ref{ruzsabrunnconj} in full generality. Our main results are the following four cases of it: if $[A]=[B]$ (Theorem~\ref{A=B}), in which case we also determine the conditions for equality in Conjecture~\ref{ruzsabrunnconj}; if $A$ and $B$ differ by one element (Theorem~\ref{oneextra}); if either $|A|=3$ or $|B|=3$ (Theorem~\ref{triangle-mixed}); and if neither $A$ nor $B$ has interior points (Theorem~\ref{convex-position}). Actually, the last two theorems verify a stronger conjecture (Conjecture~\ref{ruzsabrunnconj-mixed}) discussed below. We start with the case $[A]=[B]$, which naturally includes the case $A=B$.

\begin{figure} \begin{center} \begin{tikzpicture}[scale=0.6] \foreach \i in {(0,0), (2,8),(8,0)} \draw[fill] \i circle(2pt); \foreach \i in {0,...,2} \draw[fill] (0.5*\i,0.5*\i) circle(2pt); \foreach \i in {5,...,6} \draw[fill] (0.5*\i,0.5*\i) circle(2pt); \draw (0,0)--(2,8)--(8,0)--(0,0)--(1,1) (2.5,2.5)--(3,3); \draw[dotted] (1,1)--(2.5,2.5); \node[below,left] at (0,0) {\small $z_{k-2}$}; \node[below,right] at (0.5,0.5) {\small $z_{k-3}$}; \node[below,right] at (1,1) {\small $z_{k-4}$}; \node[below,right] at (2.5,2.5) {\small $z_{2}$}; \node[below,right] at (3,3) {\small $z_{1}$}; \node[below,right] at (8,0) {\small $z_{k-1}$}; \node[above] at (2,8) {\small $z_{k}$}; \end{tikzpicture} \end{center} \caption{ An illustration of case (b) in Theorem \ref{A=B}. } \end{figure}

\begin{theo} \label{A=B} Let $A,B\subset{\mathbb R}^2$ be finite two dimensional sets. If $[A]=[B]$ then Conjecture~\ref{ruzsabrunnconj} holds. Moreover equality holds if and only if $A=B$, and \begin{description} \item{(a)} either $A$ is a saturated set, or \item{(b)} $A=\{z_1,\ldots,z_k\}$ for $k\geq 4$, where $z_1,\ldots,z_{k-3}\in{\rm int}[z_{k-2},z_{k-1},z_k]$, and $z_1,\ldots,z_{k-2}$ are collinear and equally spaced in this order (see Figure~1). \end{description} \end{theo} Let us mention that Theorem~\ref{A=B} (in fact, its particular case $A=B$) gives a simple proof of the following structure theorem of Freiman \cite{Fre73} for a planar set with small doubling. We recall that according to (\ref{Freimanmulti}), if finite $A\subset {\mathbb R}^2$ is two dimensional, then $|A+A|\geq 3|A|-3$ and, if the dimension of $A$ is at least $3$, then $|A+A|\geq 4|A|-6$. \begin{coro}[Freiman] \label{A=Bstability} Let $A\subset {\mathbb R}^2$ be a finite two dimensional set and $\varepsilon\in (0,1)$.
If $|A|\geq 48/\varepsilon^2$ and $$ |A+A|\leq (4-\varepsilon)|A|, $$ then there exists a line $l$ such that $A$ is covered by at most $$ \frac2{\varepsilon}\cdot\left(1+\frac{32}{|A|\varepsilon^2}\right) $$ lines parallel to $l$. \end{coro} We note that, for $A$ the grid $\{1,\ldots,k\}\times\{1,\ldots,k^2\}$ and large $k$, \begin{equation} \label{square} |A+A|\leq (4-\varepsilon)\,|A| \end{equation} holds with $\varepsilon=\varepsilon_k=\frac{2}{k}$, while $A$ cannot be covered by fewer than $k$ parallel lines. Therefore the constant $2$ in the numerator of $\frac2\varepsilon$ is asymptotically optimal in Corollary~\ref{A=Bstability}. The next case we address is when $A$ and $B$ differ by one element. \begin{theo} \label{oneextra} Let $A\subset {\mathbb R}^2$ be a finite two dimensional set. If $B=A\cup\{b\}$ for some $b\not\in A$ then Conjecture~\ref{ruzsabrunnconj} holds. \end{theo} For our next results we need the notion of \emph{mixed subdivision} (see De Loera, Rambau, Santos \cite{LRF10} for details). For finite $d$--dimensional sets $A,B\subset{\mathbb R}^d$ and triangulations $T_A$ and $T_B$ of $[A]$ and $[B]$, we call a polytopal subdivision $M$ of $[A+B]$ a {\it mixed subdivision} corresponding to $T_A$ and $T_B$ if \begin{description} \item{(i)} every $k$-cell of $M$ is of the form $F+G$ where $F$ is an $i$-simplex of $T_A$ and $G$ is a $j$-simplex of $T_B$ with $i+j=k$; \item{(ii)} for any $d$-simplices $F$ of $T_A$ and $G$ of $T_B$, there is a unique $b\in B$ and a unique $a\in A$ such that $F+b\in M$ and $a+G\in M$. \end{description} We write $\|M\|$ to denote the weighted number of $d$-polytopes, where $F+G$ has weight ${i+j \choose i}$ if $F$ is an $i$-simplex of $T_A$, and $G$ is a $j$-simplex of $T_B$ with $i+j=d$. In particular, all vertices of $M$ are in $A+B$, and the number of $d$-simplices is $\|M\|$ for any triangulation of $M$ with the same set of vertices (see e.g. \cite[Proposition 6.2.11]{LRF10}). The main goal of this paper is to investigate the following problem: for which triangulations $T_A$ and $T_B$ does there exist a corresponding mixed subdivision $M$ of $[A+B]$ such that \begin{equation} \label{MixedBrunnMinkowski} \|M\|^{\frac1d}\geq |T_A|^{\frac1d}+|T_B|^{\frac1d}. \end{equation} In ${\mathbb R}^2$, we write $M_{11}$ to denote the set of parallelograms in a mixed subdivision $M$. In this case (\ref{MixedBrunnMinkowski}) is equivalent to the following stronger version of Conjecture~\ref{ruzsabrunnconj}. \begin{conj} \label{ruzsabrunnconj-mixed} For all finite two dimensional sets $A,B\subset{\mathbb R}^2$ there exist triangulations $T_A$ and $T_B$ of $[A]$ and $[B]$ using $A$ and $B$, respectively, as the set of vertices, and a corresponding mixed subdivision $M$ of $[A+B]$ such that \begin{equation} \label{MixedBrunnMinkowski2} |M_{11}|\geq \sqrt{|T_A|\cdot |T_B|}. \end{equation} \end{conj} Conjecture~\ref{ruzsabrunnconj-mixed} offers a geometric and algorithmic approach to prove Conjecture~\ref{ruzsabrunnconj}.
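In the same algorithmic spirit, Conjecture~\ref{ruzsabrunnconj} itself is straightforward to probe by computer. The following short Python sketch is ours and not part of the paper; the helper names (\texttt{hull}, \texttt{on\_boundary}, \texttt{tr}) are ad hoc. It computes ${\rm tr}$ via \eqref{Eulerpoints} from a standard convex hull routine and searches for counterexamples among random integer configurations.

\begin{verbatim}
# Minimal counterexample search (ours, not from the paper) for
# tr(A+B)^(1/2) >= tr(A)^(1/2) + tr(B)^(1/2), using
# tr(X) = 2|X| - Delta_X - 2, cf. Eq. (Eulerpoints).
import math, random

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def hull(pts):
    # Andrew's monotone chain; returns only the vertices of [X].
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lo, up = [], []
    for p in pts:
        while len(lo) >= 2 and cross(lo[-2], lo[-1], p) <= 0:
            lo.pop()
        lo.append(p)
    for p in reversed(pts):
        while len(up) >= 2 and cross(up[-2], up[-1], p) <= 0:
            up.pop()
        up.append(p)
    return lo[:-1] + up[:-1]

def on_boundary(p, h):
    # Is p on some edge of the convex polygon with vertex list h?
    for i in range(len(h)):
        a, b = h[i], h[(i+1) % len(h)]
        if cross(a, b, p) == 0 and \
           min(a[0], b[0]) <= p[0] <= max(a[0], b[0]) and \
           min(a[1], b[1]) <= p[1] <= max(a[1], b[1]):
            return True
    return False

def tr(X):
    h = hull(X)
    delta = sum(on_boundary(p, h) for p in X)  # Delta_X
    return 2*len(X) - delta - 2                # tr(X)

random.seed(0)
for _ in range(500):
    A = list({(random.randint(0, 6), random.randint(0, 6)) for _ in range(8)})
    B = list({(random.randint(0, 6), random.randint(0, 6)) for _ in range(8)})
    if len(hull(A)) < 3 or len(hull(B)) < 3:
        continue  # the conjecture assumes non-collinear sets
    AB = list({(a[0]+b[0], a[1]+b[1]) for a in A for b in B})
    if math.sqrt(tr(AB)) + 1e-9 < math.sqrt(tr(A)) + math.sqrt(tr(B)):
        print("possible counterexample:", sorted(A), sorted(B))
print("search finished")
\end{verbatim}

Such experiments prove nothing, of course, but they provide a cheap sanity check of \eqref{Eulerpoints} and of the conjectured inequality on small configurations.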
The following example shows that one cannot a priori fix the triangulations $T_A$ and $T_B$ in Conjecture~\ref{ruzsabrunnconj-mixed}:

\begin{figure} \label{AA} \begin{center} \begin{tikzpicture}[scale=0.5] \draw[lightgray] (-1,-2) grid (2,1); \foreach \i in {(0,0), (2,1),(-1,-2)} \draw[fill] \i circle(2pt); \draw (0,0)--(2,1)--(-1,-2)--(0,0); \end{tikzpicture} \hspace{5mm} \begin{tikzpicture}[scale=0.5] \foreach \i in {-5,...,3} { \draw[lightgray] (\i+1,\i)--(\i+1,\i+1)--(\i+2,\i+1); \draw[lightgray] (\i+1,\i+1)--(-5,5)--(\i+1,\i); \draw[lightgray] (\i+1,\i+1)--(5,-5)--(\i+1,\i); \draw[fill] (\i+1,\i) circle(2pt); \draw[fill] (\i+1,\i+1) circle(2pt); } \draw[lightgray] (5,-5)--(5,4)--(-5,5); \draw[fill] (-5,5) circle(2pt); \draw[fill] (5,-5) circle(2pt); \draw[fill] (5,4) circle (2pt); \end{tikzpicture} \end{center} \caption{ An illustration of the example described in Proposition \ref{counterexample}. } \end{figure}

\begin{prop} \label{counterexample} Let $$ A=\{(0,0),(-1,-2),(2,1)\}. $$ For $k\geq 145$, let $$ B=\{p,q,l_0,\ldots,l_k,r_0,\ldots,r_{k-1}\}, $$ where $p=(-1,k+1)$, $q=(k+1,-1)$, $l_i=(i,i)$ for $i=0,\ldots,k$ and $r_i=(i,i+1)$ for $i=0,\ldots,k-1$. Let $T_B$ be the triangulation of $B$ consisting of the triangles $$ [p,l_i,r_i],[q,l_i,r_i], \, i=0,\ldots , k-1\;\text{and }\; [p,l_i,r_{i-1}], [q,l_i,r_{i-1}], \; i=1,\ldots, k. $$ Then, no mixed subdivision of $A+B$ corresponding to $T_B$ and any triangulation $T_A$ of $A$ satisfies \eqref{MixedBrunnMinkowski} for $d=2$. \end{prop} Next, Conjecture~\ref{ruzsabrunnconj-mixed} is verified if either $A$ or $B$ has only three elements. \begin{theo} \label{triangle-mixed} If $|B|=3$, then Conjecture~\ref{ruzsabrunnconj-mixed} holds for any finite two dimensional set $A\subset{\mathbb R}^2$. \end{theo} {\bf Remark } It follows that if $B$ is the sum of sets of cardinality three, then Conjecture~\ref{ruzsabrunnconj} holds for any finite two dimensional set $A\subset{\mathbb R}^2$. For example, this is the case if $m\geq 1$ is an integer and $B=\{(t,s)\in{\mathbb Z}^2:t,s\geq 0\mbox{ and }t+s\leq m\}$, or $B=\{(t,s)\in{\mathbb Z}^2:|t|,|s|\leq m\mbox{ and }|t+s|\leq m\}$.\\ Conjecture~\ref{ruzsabrunnconj} was verified by B\"or\"oczky, Hoffman \cite{BoH15} if $A$ and $B$ are in convex position; namely, $A\subset\partial[A]$ and $B\subset\partial[B]$. Here we even verify Conjecture~\ref{ruzsabrunnconj-mixed} under these conditions. \begin{theo} \label{convex-position} Let $A,B\subset{\mathbb R}^2$ be finite two dimensional sets. If $A\subset\partial[A]$ and $B\subset\partial[B]$ then Conjecture~\ref{ruzsabrunnconj-mixed} holds. \end{theo} Part of the reason why we could not verify Conjecture~\ref{ruzsabrunnconj} in general is that, except for Theorem~\ref{triangle-mixed}, our arguments actually prove the inequality ${\rm tr}(A+B) \ge 2 ({\rm tr}(A) + {\rm tr}(B))$, which is stronger than Conjecture~\ref{ruzsabrunnconj}, but which does not hold for all pairs with $A \subset B$. For example, if $A$ is the set of nonnegative integer points with sum of coordinates at most $k$, and $B$ is the analogous set with sum of coordinates at most $l$, we have ${\rm tr}(A+B) = (k+l)^2$, ${\rm tr}(A)=k^2$ and ${\rm tr}(B)=l^2$. So we have ${\rm tr}(A+B) < 2 ({\rm tr}(A) + {\rm tr}(B))$ if $k\neq l$. Turning to higher dimensions, we note that if $T_A=T_B$, then a mixed subdivision satisfying (\ref{MixedBrunnMinkowski}) does exist.
\begin{theo} \label{TA=TB} For a finite $d$--dimensional set $A\subset{\mathbb R}^d$ and for any triangulation $T_A$ of $[A]$ using $A$ as the set of vertices there exists a corresponding mixed subdivision $M$ of $[A+A]$ such that $$ \|M\|= 2^d|T_A|. $$ \end{theo} Therefore in certain cases, mixed subdivisions point to a higher dimensional generalization of Conjecture~\ref{ruzsabrunnconj}. This is especially welcome knowing that, if $d\geq 3$, then the order of the number of $d$-simplices in a triangulation of the convex hull of a finite $A\subset{\mathbb R}^d$ spanning ${\mathbb R}^d$ might be as low as $|A|$ and as high as $|A|^{\lfloor d/2\rfloor}$ for the same $A$. In particular, one cannot take the number of $d$-simplices as a natural notion of discrete volume if $d\geq 3$.

\section{Proof of Theorem~\ref{A=B}} We will actually prove that \begin{equation} \label{[A]=[B]strong} {\rm tr}(A+B) \ge 2 {\rm tr}(A) + 2 {\rm tr}(B), \end{equation} a stronger inequality than Conjecture~\ref{ruzsabrunnconj}. For a finite two dimensional set $X\subset{\mathbb R}^2$, we define $$ f_X(z)=\left\{ \begin{array}{rl} 1&\mbox{ \ if $z\in\partial[X]$}\\[0.5ex] 2&\mbox{ \ if $z\in{\rm int}\,[X]$} \end{array} \right. , $$ so that $$ {\rm tr}(X)=\left( \sum_{z\in X}f_X(z)\right)-2. $$ \begin{lemma} Let $A,B\subset{\mathbb R}^2$ satisfy $[A]=[B]$. Then inequality \eqref{[A]=[B]strong} holds. Moreover, equality in (\ref{[A]=[B]strong}) yields $A=B$. \end{lemma} \begin{proof} Let $T$ be a triangulation of $[A]=[B]$ using the points in $A\cap B$ as vertices. One nice thing about inequality~\eqref{[A]=[B]strong} is that, since it is linear, it is additive over the triangles of $T$. Therefore, it suffices to show that, for each triangle $t$ of $T$, if $A_t=A\cap t$ and $B_t=B\cap t$, then \begin{equation} \label{[A]=[B]triangle} {\rm tr}(A_t+B_t) \ge 2 {\rm tr}(A_t) + 2 {\rm tr}(B_t), \end{equation} and that equality in (\ref{[A]=[B]triangle}) implies that $A_t=B_t$ consists of the three vertices of $t$ alone. Moreover, inequality \eqref{[A]=[B]triangle} is equivalent to \begin{equation} \label{[A]=[B]f} \sum_{p\in A_t+B_t }f_{A_t+B_t}(p)\ge 2\left(\sum_{p\in A_t }f_{A_t}(p)\right)+ 2\left(\sum_{p\in B_t }f_{B_t}(p)\right) - 6. \end{equation} Let $A_t\cap B_t=\{v_1,v_2,v_3\}$ be the three vertices of the triangle $t=[A_t]=[B_t]$. We claim that if $i,j\in\{1,2,3\}$, $p\in (A_t\cup B_t)\backslash \{v_1,v_2,v_3\}$ and $q\in A_t\cup B_t$, then \begin{equation} \label{tsums} v_i+p=v_j+q\mbox{ \ yields \ } v_i=v_j\mbox{ \ and \ } p=q. \end{equation} We may assume that $v_i$ is the origin and, to get a contradiction, $v_i\neq v_j$. Then the line $l$ passing through $v_j$ and parallel to the side of $t$ opposite to $v_j$ separates $t$ and $v_j+t$, and intersects $t$ only in $v_j\neq p$. Since $v_j+q\in v_j+t$, we get the desired contradiction. It follows from (\ref{tsums}) that the six points $v_i+v_j$, $1\leq i\leq j\leq 3$, and the points of the form $v_i+p$, $i=1,2,3$ and $p\in (A_t\cup B_t)\backslash \{v_1,v_2,v_3\}$ are all different. Since the six points $v_i+v_j$, $1\leq i\leq j\leq 3$, belong to $\partial[A_t+B_t]$, we have \begin{equation} \label{tvertices} \left(\sum_{1\leq i\leq j\leq 3}f_{A_t+B_t}(v_i+v_j)\right)= \left(\sum_{i=1}^3f_{A_t}(v_i)\right)+ \left(\sum_{j=1}^3f_{B_t}(v_j)\right) = 6.
\end{equation} On the other hand, we claim that, if $p\in A_t\backslash \{v_1,v_2,v_3\}$ and $q\in B_t\backslash \{v_1,v_2,v_3\}$, then \begin{equation} \label{tnovertices} \begin{array}{rcl} \sum_{j=1}^3f_{A_t+B_t}(p+v_j)&>&2f_{A_t}(p) \\ \sum_{i=1}^3f_{A_t+B_t}(v_i+q)&>&2f_{B_t}(q). \end{array} \end{equation} Indeed, the inequality readily holds if $p\in\partial[A_t]$ and, if $p\in{\rm int}\,[A_t]$, then $p+v_j\in{\rm int}\,[A_t+B_t]$ for $j=1,2,3$, as well, yielding (\ref{tnovertices}). By combining \eqref{tvertices} and \eqref{tnovertices} we get \eqref{[A]=[B]f} and in turn \eqref{[A]=[B]strong}. Moreover, \eqref{tnovertices} shows that if equality holds in \eqref{[A]=[B]triangle} then $A_t=B_t=\{v_1,v_2,v_3\}$ and, therefore, if equality holds in \eqref{[A]=[B]strong}, then $A=B$.\hfill \mbox{ $\Box$}\\ \end{proof} For a finite two dimensional set $A\subset {\mathbb R}^2$ and a triangulation $T$ of $A$ we denote by $A_T$ the union of $A$ and the set of midpoints of the edges of $T$ (see Figure \ref{fig:midpoints}). \begin{lemma}\label{lem:A=B} Let $A\subset {\mathbb R}^2$ be a finite two dimensional set. The equality $$ {\rm tr}(A+A)=4\cdot{\rm tr}(A) $$ holds if, and only if, for every triangulation $T$ of $[A]$, we have $A_T=\frac12(A+A)$. \end{lemma} \begin{proof} Divide each triangle $t$ of $T$ into four triangles using the vertices of $t$ and the midpoints of the sides of $t$. This way we have obtained a triangulation of $[A]=[A_T]$ using $A_T$ as the vertex set. Therefore $$ {\rm tr}(A+A)={\rm tr}(\mbox{$\frac12$}(A+A)) \geq {\rm tr}(A_T)=4\cdot{\rm tr}(A). $$ Moreover, there is equality if and only if $A_T=\frac12(A+A)$.\hfill \mbox{ $\Box$}\\ \end{proof}

\begin{figure} \begin{center} \begin{tikzpicture}[scale=0.4] \draw (0,7)--(0,13)--(10,14)--(14,2)--(3,0)--(0,7)--(4,10)--(5,4)--(0,7) (0,13)--(4,10)--(10,14)--(9,8)--(14,2)--(5,4)--(9,8)--(4,10) (3,0)--(5,4); \foreach \i in {(0,7), (0,13), (3,0), (4,10), (5,4), (9,8), (10,14), (14,2)} { \draw[fill] \i circle(4pt); } \foreach \i in {(0,10), (5,13.5), (12,8),(8.5,1),(1.5,3.5),(2,8.5),(4.5,7), (2.5,5.5), (2,11.5), (7,12),(9.5,11),(11.5,5),(9.5,3), (7,6), (6.5,9), (4,2)} { \draw \i circle(3pt); } \draw[dotted] (0,10)--(2,11.5)--(2,8.5)--(0,10) (2,8.5)--(4.5,7)--(2.5,5.5)--(2,8.5) (2.5,5.5)--(1.5,3.5)--(4,2)--(2.5,5.5) (4,2)--(9.5,3)--(8.5,1)--(4,2) (9.5,3)--(7,6)--(11.5,5)--(9.5,3) (7,6)--(4.5,7)--(6.5,9)--(7,6) (2,11.5)--(5,13.5)--(7,12)--(2,11.5) (7,12)--(9.5,11)--(6.5,9)--(7,12) (9.5,11)--(12,8)--(11.5,5)--(9.5,11); \end{tikzpicture} \end{center} \caption{A triangulation and its midpoints.}\label{fig:midpoints} \end{figure}

We observe that the equality in Lemma \ref{lem:A=B} is precisely the equality case of Conjecture \ref{ruzsabrunnconj} for $A=B$. Therefore all we have left to prove is that ${\rm tr}(A+A)=4\cdot{\rm tr}(A)$ if and only if $A$ is of the form either (a) or (b) in Theorem~\ref{A=B}. The if part is simple. \begin{lemma} Suppose that either (a) or (b) in Theorem~\ref{A=B} holds for the finite set $A$. Then $$ A_T=\frac{1}{2}(A+A). $$ \end{lemma} \begin{proof} Suppose first that $A=[A]\cap \Lambda$ for a lattice $\Lambda$. We may assume $\Lambda ={\mathbb Z}^2$. Then clearly the midpoints of the edges of every triangulation $T$ of $[A]$ using $A$ as vertex set, together with the points of $A$, are precisely the points of $\frac{1}{2}(A+A)$. Next, if we have property (b), then there is a unique triangulation $T$ of $[A]$ using $A$ as vertex set. For $1\leq i <j\leq k$, $[z_i,z_j]$ is an edge of $T$ unless $j\leq k-2$ and $j>i+1$; for such a pair, since $z_1,\ldots,z_{k-2}$ are equally spaced on a line, $\frac12(z_i+z_j)$ is either a point of $A$ or the midpoint of an edge $[z_l,z_{l+1}]$ with $l=\frac{i+j-1}2$, and hence we have $A_T=\frac{1}{2}(A+A)$ again.
\hfill \mbox{ $\Box$}\\ \end{proof} The next Lemma shows the reverse direction and concludes the proof of Theorem~\ref{A=B}. \begin{lemma} Let $A\subset {\mathbb R}^2$ be a finite two dimensional set. If for every triangulation $T$ of $A$ it holds that $$ A_T=\frac{1}{2}(A+A), $$ then either (a) or (b) from Theorem \ref{A=B} holds. \end{lemma} \begin{proof} We first prove two simple claims. Throughout, we assume that $A_T=\frac{1}{2}(A+A)$ for every triangulation $T$ of $A$. \begin{claim}\label{claim:line} Let $\ell$ be a line intersecting $A$ in at least two points and $A_{\ell}=A\cap \ell$. If $A_{\ell}+A_{\ell}=(A+A)\cap (\ell+\ell)$ then the points in $A_{\ell}$ form an arithmetic progression. In particular, the points on each side of the convex hull of $A$ form an arithmetic progression. \end{claim} \begin{proof} There is a triangulation $T$ of $A$ which contains the edges defined by consecutive points in $A_{\ell}$. Since there are $|A_{\ell}|-1$ midpoints of edges of $T$ on $\ell$, by the hypothesis of the Lemma and of the Claim, we have $$ |A_{\ell}+A_{\ell}|=|(A+A)\cap (\ell+\ell)|=|A_T\cap \ell|=2|A_{\ell}|-1, $$ which implies that $A_{\ell}$ is an arithmetic progression. \hfill \mbox{ $\Box$}\\ \end{proof} Call a set of four points of $A$, no three of which are collinear, an empty quadrangle of $A$ if their convex hull contains no further points of $A$. \begin{claim}\label{claim:quadrangle} Let $x_1,x_2,x_3,x_4\in A$ form an empty quadrangle of $A$. If they are in convex position then the four points form a parallelogram. That is, assuming they are listed in clockwise order, we have $x_1+x_3=x_2+x_4$. \end{claim} \begin{proof} There are two triangulations of $A$ containing the sides of the convex quadrangle, one of them containing the edge $[x_1,x_3]$ and the other one containing $[x_2,x_4]$. Since $A_T$ cannot depend on the triangulation, the midpoints of these two edges must coincide and therefore $x_1+x_3=x_2+x_4$.\hfill \mbox{ $\Box$}\\ \end{proof} The proof of the Lemma is by induction on $k=|A|$. The Lemma clearly holds if $k=3$. Suppose $k=4$. If three of the points are collinear then they are on an edge of the convex hull of $A$ and, by Claim \ref{claim:line}, they form an arithmetic progression. With the fourth one they form a saturated set. If no three of the points are collinear then the four points form an empty quadrangle. If they are in convex position then by Claim \ref{claim:quadrangle} they form a saturated set, otherwise case (b) holds. Let $k>4$. Choose a vertex $v$ of the convex hull of $A$ and let $A'=A\setminus \{v\}$. If all points of $A'$ are collinear then by Claim \ref{claim:line} they are in a progression and, with $v$, they form a saturated set. Suppose that $A'$ is not contained in a line. For every triangulation $T'$ of $A'$ there is a triangulation $T$ of $A$ containing $T'$. The points in $\frac{1}{2}(A'+A')$ are contained in the convex hull of $A'$ and, by the condition of the Lemma, coincide with $A'_{T'}$. By induction either (a) or (b) holds for $A'$. We consider the two cases. {\it Case 1.} $A'$ is a saturated set. {\it Case 1.1.} There is a convex empty quadrangle formed by $v$ and three points of $A'$. Then, by Claim \ref{claim:quadrangle}, $v$ belongs to the lattice generated by $A'$ as well. Moreover, since $A'$ is saturated, $A$ is also saturated and case (a) holds. {\it Case 1.2.} There is no convex empty quadrangle involving $v$ and three points of $A'$. Then it is easily checked that $A'$ has at most one empty convex quadrangle.
If there is none in $A'$ then, up to an affine transformation, $A'$ consists of the point $(0,1)$ or the two points $(0,\pm 1)$, and the remaining points on the line $y=0$. Then either (i) $v$ belongs to the same line $y=0$, which satisfies the condition of Claim \ref{claim:line}, and all points of $A$ on that line are in arithmetic progression, so that $A$ is a saturated set, or (ii) $A'$ contains only the point $(0,1)$ and $v$ is on the line $x=0$, in which case Claim \ref{claim:line} yields that the three points of $A$ on that line are in arithmetic progression and $A$ is a saturated set again, or (iii) $A'$ contains only the point $(0,1)$ and $v$ belongs to neither of the two lines containing $A'$ and case (b) holds (see Figure \ref{fig:induction}). If $A'$ contains one convex empty quadrangle then, up to affinities, $A'$ consists of the four points $(0,0), (1,0), (1,1), (0,1)$ and the remaining ones are on the line $x=y$. Moreover $v$ must belong to the latter line as well, and Claim \ref{claim:line} yields that the points on that line are in arithmetic progression and $A$ is a saturated set (see Figure \ref{fig:induction}).

\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=0.4] \draw[help lines] (-1,0) grid (3,1); \foreach \i in {(-1,0),(0,0),(1,0),(2,0),(0,1)} \draw[fill] \i circle(3pt); \draw (3,0) circle (3pt); \node[above] at (3,0) {$v$}; \node at (0,-2) {$(i)$}; \end{tikzpicture} \hspace{3mm} \begin{tikzpicture}[scale=0.4] \draw[help lines] (-1,-1) grid (3,1); \foreach \i in {(-1,0),(0,0),(1,0),(2,0),(0,1), (0,-1)} \draw[fill] \i circle(3pt); \draw (3,0) circle (3pt); \node[above] at (3,0) {$v$}; \node at (0,-2) {$(i)$}; \end{tikzpicture} \hspace{3mm} \begin{tikzpicture}[scale=0.4] \draw[help lines] (-1,-1) grid (3,1); \foreach \i in {(-1,0),(0,0),(1,0),(2,0),(0,1)} \draw[fill] \i circle(3pt); \draw (0,-1) circle (3pt); \node[above] at (0,-1) {$v$}; \node at (0,-2) {$(ii)$}; \end{tikzpicture} \hspace{3mm} \begin{tikzpicture}[scale=0.4] \draw[help lines] (-3,-1) grid (2,1); \foreach \i in {(-1,0),(0,0),(1,0),(2,0),(0,1)} \draw[fill] \i circle(3pt); \draw (-3,-1) circle (3pt); \node[above] at (-3,-1) {$v$}; \draw (-3,-1)--(0,1)--(2,0)--(-3,-1); \node at (0,-2) {$(iii)$}; \end{tikzpicture} \hspace{3mm} \begin{tikzpicture}[scale=0.4] \draw[help lines] (-2,-2) grid (3,3); \foreach \i in {(0,0),(1,0),(1,1), (0,1), (2,2),(3,3),(-1,-1)} \draw[fill] \i circle(3pt); \draw (-2,-2) circle (3pt); \node[above] at (-2,-2) {$v$}; \end{tikzpicture} \caption{An illustration of Case 1.2.}\label{fig:induction} \end{center} \end{figure}

{\it Case 2.} $A'$ is as in (b). We may assume that the progression of points of $A'$ lies on the line $x=0$. If $v$ is not on this line then it forms a convex empty quadrangle with two extreme points of the progression and one of the vertices $w$ of the triangle. By Claim \ref{claim:quadrangle}, $v$ must be the point $w+(\pm 1,0)$, which gives a configuration not satisfying the condition of the Lemma. Therefore $v$ lies on the line $x=0$, which satisfies the condition of Claim \ref{claim:line}, so that $v$ extends the progression on that line, yielding case (b).\hfill \mbox{ $\Box$}\\\end{proof}

\section{Proof of Theorem~\ref{oneextra}} The inequality between the quadratic and arithmetic means gives that, if $a,k>0$, then $$ (4a+2k)^{\frac12}>a^{\frac12}+(a+k)^{\frac12}. $$ Therefore to prove Theorem~\ref{oneextra}, it is sufficient to verify the following: let $B=A\cup\{b\}$ for $b\not\in A$.
\begin{description} \item[(*)] If ${\rm tr}(A)=a$ and ${\rm tr}(B)=a+k$, then ${\rm tr}(A+B)\geq 4a+2k$. \end{description} We fix a triangulation $T$ of $A$, and let $A_T$ be the union of $A$ and the family of midpoints of the edges of $T$. It follows by \eqref{Eulerpoints} that $$ \Delta_{A_T}+2\Omega_{A_T}-2={\rm tr}(A_T)=4a. $$ To estimate ${\rm tr}(A+B)={\rm tr}(\mbox{$\frac12$}(A+B))$, we isolate a certain subset $V$ of $A$ such that \begin{equation} \label{Vcond} A_T\cap(\mbox{$\frac12$}(V+\{b\}))=\emptyset. \end{equation} Therefore \begin{eqnarray} \nonumber {\rm tr}(A+B)&\geq & 4a+2|\mbox{$\frac12$}(V+\{b\})\cap{\rm int}[B]|+ \\ \label{extrabasic} && |\mbox{$\frac12$}(V+\{b\})\cap\partial [B]|+ |A_T\cap\partial[A]\cap{\rm int}[B]|. \end{eqnarray} We distinguish two cases, with different choices of $V$.\\ \noindent{\bf Case 1 } $b\not\in [A]$. We say that $x\in [A]$ is visible if $[b,x]\cap[A]=\{x\}$. In this case $x\in\partial [A]$. We note that exactly two visible points lie on $\partial[B]$; they are on the two supporting lines to $[A]$ passing through $b$ (see Figure~\ref{fig:case1}). Let $k+1$ be the number of visible points of $A$, and hence $k\geq 1$. Now $k-1$ visible points of $A$ lie in ${\rm int}[B]$, thus \eqref{Eulerpoints} yields that ${\rm tr}(B)= a+k$.

\begin{figure} \begin{center} \begin{tikzpicture}[scale=0.4] \draw (0,7)--(0,13)--(10,14)--(14,2)--(3,0)--(0,7)--(4,10)--(5,4)--(0,7) (0,13)--(4,10)--(10,14)--(9,8)--(14,2)--(5,4)--(9,8)--(4,10) (3,0)--(5,4); \foreach \i in {(0,7), (0,13), (3,0), (4,10), (5,4), (9,8), (10,14), (14,2)} { \draw[fill] \i circle(3pt); } \foreach \i in {(-3,-3), (0,10), (5,13.5), (12,8),(8.5,1),(1.5,3.5),(2,8.5),(4.5,7), (2.5,5.5), (2,11.5), (7,12),(9.5,11),(11.5,5),(9.5,3), (7,6), (6.5,9), (4,2), (-1.5,2), (-1.5,5), (0,-1.5), (5.5,-0.5)} { \draw \i circle(3pt); } \draw[dotted] (0,10)--(2,11.5)--(2,8.5)--(0,10) (2,8.5)--(4.5,7)--(2.5,5.5)--(2,8.5) (2.5,5.5)--(1.5,3.5)--(4,2)--(2.5,5.5) (4,2)--(9.5,3)--(8.5,1)--(4,2) (9.5,3)--(7,6)--(11.5,5)--(9.5,3) (7,6)--(4.5,7)--(6.5,9)--(7,6) (2,11.5)--(5,13.5)--(7,12)--(2,11.5) (7,12)--(9.5,11)--(6.5,9)--(7,12) (9.5,11)--(12,8)--(11.5,5)--(9.5,11) (0,7)--(-3,-3)--(0,13) (3,0)--(-3,-3)--(14,2); \node[above] at (-3,-3) {$b$}; \end{tikzpicture} \end{center} \caption{An illustration of Case 1.} \label{fig:case1} \end{figure}

Let $V$ be the set of visible points of $A$. The condition \eqref{Vcond} is satisfied because $[A]\cap(\mbox{$\frac12$}(V+\{b\}))=\emptyset$. We have $|\mbox{$\frac12$}(V+\{b\})|=k+1$, and $2k-1$ visible points of $A_T$ lie in ${\rm int}[B]$. In particular, $(*)$ follows as \eqref{extrabasic} yields $$ {\rm tr}(A+B)\geq 4a+2k-1+k+1=4a+3k>4a+2k. $$ \noindent{\bf Case 2 } $b\in [A]$. In this case ${\rm tr}(B)= a+k$ with $k\leq 2$ by \eqref{Eulerpoints}, and $b$ is contained in a triangle $\tau=[p,q,r]$ of $T$ (see Figure~\ref{fig:case2}).
\begin{figure} \begin{center} \begin{tikzpicture}[scale=0.5] \draw (0,7)--(0,13)--(10,14)--(14,2)--(3,0)--(0,7)--(4,10)--(5,4)--(0,7) (0,13)--(4,10)--(10,14)--(9,8)--(14,2)--(5,4)--(9,8)--(4,10) (3,0)--(5,4); \foreach \i in {(0,7), (0,13), (3,0), (4,10), (5,4), (9,8), (10,14), (14,2)} { \draw[fill] \i circle(3pt); } \foreach \i in {(0,10), (5,13.5), (12,8),(8.5,1),(1.5,3.5),(2,8.5),(4.5,7), (2.5,5.5), (2,11.5), (7,12),(9.5,11),(11.5,5),(9.5,3), (7,6), (6.5,9), (4,2), (4,12), (7,13), (2,12.5), (4,11)} { \draw \i circle(3pt); } \draw[dotted] (10,14)--(4,12)--(0,13) (4,10)--(4,12); \node[above] at (4,12) {$b$}; \end{tikzpicture} \end{center} \caption{An illustration of Case 2. }\label{fig:case2} \end{figure}

We may assume that $b$ is not contained in the sides $[r,p]$ and $[r,q]$ of $\tau$. We take $V=\{p,q,r\}$, which satisfies \eqref{Vcond}. Since $b$ avoids the two sides of $\tau$ through $r$, we have $\frac12(b+r)\in{\rm int}\,\tau\subset{\rm int}[A]$, and \eqref{extrabasic} yields ${\rm tr}(A+B)\geq 4a+4$. In turn, we conclude Theorem~\ref{oneextra}.\\ \noindent{\bf Remark: } The argument does not work if we only assume that $A\subset B$, because we may have equality in Conjecture \ref{ruzsabrunnconj} in this case.

\section{Proof of Theorem~\ref{triangle-mixed}} \label{secmixed-triang} Let $A\subset {\mathbb R}^2$ be finite and not contained in any line. By a \emph{path} $\sigma$ on $A$ we mean a piecewise linear simple path whose vertices are in $A$, and every point of $A$ in the support of $\sigma$ is a vertex of the path. We write $|\sigma|$ to denote the number of segments forming $\sigma$. We allow the case that $\sigma$ is a point, and in this case $|\sigma|=0$. We say that $\sigma$ is {\it transversal} to a non-zero vector $u$ if every line parallel to $u$ intersects $\sigma$ in at most one point. In this case, the segments in $\sigma$ induce a subdivision of $\sigma+[o,u]$ into $|\sigma|$ parallelograms if $|\sigma|\geq 1$. For the proof of Theorem~\ref{triangle-mixed} the idea is to find an appropriate set of paths on $A$ with total length at least $\sqrt{|T_A|}$. First, we explore the possibilities using only one or two paths. We will see in Remark~\ref{onepath} that one path is not enough, but Proposition~\ref{horvert} shows that using two paths $\sigma_1,\sigma_2$ almost does the job. Observe that for any given non-zero vector $w$, the length of the longest path on $A$ transversal to $w$ equals the number of lines parallel to $w$ intersecting $A$, minus one. \begin{rem} \label{onepath} Given pairwise independent vectors $w_1,\ldots,w_n$, let $f(w_1,\ldots,w_n,s)$ be the largest number such that every finite set $A\subset{\mathbb R}^2$ with ${\rm tr}(A)=s$ admits, for some $w_i$, a path on $A$ transversal to $w_i$ of length at least $f(w_1,\ldots,w_n,s)$. For $n=2$, $f(w_1,w_2,s)\geq\sqrt{s/2}$, with equality provided that $k:=\sqrt{s/2}$ is an integer. An extremal configuration consists of the points $\{iw_1 + jw_2 : i,j\in \{0,\dots,k\} \}$. For $n=3$, $f(w_1,w_2,w_3,s)\geq\sqrt{2s/3}$ and equality holds provided that $s=6k^2$. Assuming without loss of generality that $w_1+w_2+w_3=0$, an extremal configuration is given by the points of the lattice generated by $w_1,w_2$ in the affine regular hexagon $[\pm kw_1,\pm kw_2,\pm kw_3]$. \end{rem} Let $e_1=(1,0)$ and $e_2=(0,1)$, and let $\sigma_1,\sigma_2$ be piecewise linear paths whose vertices are among the points of $A$.
We say that the ordered pair $(\sigma_1,\sigma_2)$ is a \emph{horizontal-vertical} path if \begin{description} \item{(i')} $\sigma_i$ is transversal to $e_{3-i}$ (possibly a point), $i=1,2$; \item{(ii')} the right endpoint $a$ of $\sigma_1$ is the upper endpoint of $\sigma_2$; \item{(iii')} writing ${\mathbb R}_+=\{t\in{\mathbb R}:\, t>0\}$, if $|\sigma_1|,|\sigma_2|>0$, then $$ \left((\sigma_1\backslash\{a\})+{\mathbb R}_+e_2\right)\cap \left((\sigma_2\backslash\{a\})+{\mathbb R}_+e_1\right)=\emptyset. $$ \end{description} We call $\sigma_1$ the horizontal branch, $\sigma_2$ the vertical branch, and $a$ the center. We observe that if $\sigma'_i$ is the image of $\sigma_i$ by reflection through the line ${\mathbb R}(e_1+e_2)$, then the ordered pair $(\sigma'_2,\sigma'_1)$ is also a horizontal-vertical path. For any polygon $P$ and non-zero vector $u$, we write $F(P,u)$ to denote the face of $P$ with exterior normal $u$. In particular, $F(P,u)$ is either a side or a vertex. \begin{prop} \label{horvert} For every finite $A\subset{\mathbb R}^2$ not contained in a line, and for every triangulation $T$ of $[A]$ using $A$ as a vertex set, there exists a horizontal-vertical path $(\sigma_1,\sigma_2)$ whose vertices belong to $A$ and which satisfies $$ |\sigma_1|+|\sigma_2|\geq\sqrt{|T|+1}-\mbox{$\frac12$}. $$ \end{prop} \noindent{\it Proof: } Let us write \begin{eqnarray*} \xi&=&|F([A],-e_1)\cap F([A],-e_2)|\leq 1\\ \Delta_A'&=&\left|(A\cap\partial[A])\backslash(F([A],-e_1)\cup F([A],-e_2))\right|. \end{eqnarray*} By the invariance with respect to reflection through the line ${\mathbb R}(e_1+e_2)$, we may assume that \begin{equation} \label{hor-vert-size} |F([A],-e_2)\cap A|\geq |F([A],-e_1)\cap A|. \end{equation} We set $\{\langle e_1,p\rangle:\,p\in A\}=\{\alpha_0,\ldots,\alpha_k\}$ with $\alpha_0<\ldots<\alpha_k$, $k\geq 1$. For $i=0,\ldots,k$, let $A_i=\{p\in A:\,\langle e_1,p\rangle=\alpha_i\}$, let $x_i=|A_i|$, and let $a_i$ be the topmost point of $A_i$; namely, $\langle e_2,a_i\rangle$ is maximal. In particular, $x_0=|F([A],-e_1)\cap A|$. For each $i=1,\ldots,k$, we consider the horizontal-vertical path $(\sigma_{1i},\sigma_{2i})$ where $$ \sigma_{1i}=\{[a_0,a_1],\ldots,[a_{i-1},a_i]\}, $$ and the vertex set of $\sigma_{2i}$ is $A_i$. In particular, the total length of the horizontal-vertical path $(\sigma_{1i},\sigma_{2i})$ is $$ |\sigma_{1i}|+|\sigma_{2i}|=i+x_i-1. $$ The average length of these paths for $i=1,\ldots,k$ is $$ \frac{\sum_{i=1}^k(|\sigma_{1i}|+|\sigma_{2i}|)}{k}= \frac{\sum_{i=1}^{k}(i+x_i-1)}{k}= \frac{|A|-x_0}{k}+\frac{k}2-\frac12. $$ We observe that $2|A|=|T|+\Delta_A+2$, according to \eqref{Eulerpoints}, and (\ref{hor-vert-size}) yields $$ 2+\Delta_A-2x_0=2+\Delta_A'+|F([A],-e_2)\cap A|-\xi-x_0\geq \Delta_A'+1. $$ Therefore we deduce from the inequality between the arithmetic and geometric mean that \begin{eqnarray} \nonumber \frac{\sum_{i=1}^{k}(|\sigma_{1i}|+|\sigma_{2i}|)}{k}&=& \frac{2|A|-2x_0}{2k}+\frac{k}2-\frac12\\ \label{horvert-detailed} &\geq & \frac{1}{2}\left(\frac{|T|+\Delta_A'+1}{k}+k\right)-\frac12\\ \label{horvert-detailed-sqrt} &\geq& \sqrt{|T|+\Delta_A'+1}-\frac12. \end{eqnarray} Therefore there exists some horizontal-vertical path $(\sigma_{1i},\sigma_{2i})$ satisfying (\ref{horvert-detailed-sqrt}). \hfill \mbox{ $\Box$}\\ The estimate of Proposition~\ref{horvert} is close to being optimal, as the following example shows. \begin{example} \label{horvert-example} Let $k\ge 2$ and $t>0$.
Let $A'$ be the saturated set with $[A']$ having vertices $(0,0), (0,k), (k-1,0)$ and $(k-1,1)$, and let $A=A'\cup \{(k+t,0)\}$. A triangulation of $A$ has $k^2+k-1$ triangles and every horizontal--vertical path $(\sigma_1,\sigma_2)$ on $A$ has total length $$ |\sigma_1|+|\sigma_2|\leq k<\sqrt{|T|+2}-\mbox{$\frac12$}. $$ \hfill \mbox{ $\Box$}\\\end{example} We next proceed to the proof of Theorem \ref{triangle-mixed} by a similar strategy using three paths. Let $B=\{v_1,v_2,v_3\}$ and, for $\{i,j,k\}=\{1,2,3\}$, denote by $u_i$ the exterior unit normal to the side $[v_j,v_k]$ of $[B]$. A set of three paths $(\sigma_1,\sigma_2,\sigma_3)$ meeting at some point $a\in A$ and using the edges of a triangulation $T$ of $A$ is called a {\it proper star} if the following conditions hold: \begin{description} \item{(i)} $\sigma_i$ is transversal to $v_j-v_k$ (possibly $\sigma_i=\{a\}$); \item{(ii)} $\sigma_i$ has an endpoint $b_i\in\partial[A]$ such that $u_i$ is an exterior unit normal to $[A]$ at $b_i$, and $$ \langle a,u_i\rangle=\min\{\langle x,u_i\rangle:x\in\sigma_i\}; $$ \item{(iii)} writing ${\mathbb R}_+=\{t\in{\mathbb R}:\, t>0\}$, if $|\sigma_j|,|\sigma_k|>0$, then $$ \left((\sigma_j\backslash\{a\})+{\mathbb R}_+(v_k-v_i)\right)\cap \left((\sigma_k\backslash\{a\})+{\mathbb R}_+(v_j-v_i)\right)=\emptyset. $$ \end{description} If the semi-open paths $\sigma_i\backslash\{a\}$, $i=1,2,3$, are all non-empty and pairwise disjoint, then (iii) means that they come around $a$ in the same order as the orientation of the triangle $[v_1,v_2,v_3]$ (see Figure \ref{fig:tristar} for an illustration).

\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=0.4] \draw (0,0)--(1,0)--(0,1)--(0,0); \foreach \i in {(0,9), (1,4), (5,-0), (4,7), (6,3), (9,10), (8,6), (13,2)} { \draw[fill] \i circle(2pt); } \foreach \i in {(0,0), (1,0), (0,1)} \draw \i circle (4pt); \draw[fill] (6,3) circle (4pt); \node[below,right] at (6,3) {\small $a$}; \node[left] at (0,0) {\small $v_3$}; \node[below] at (1,0) {\small $v_1$}; \node[left] at (0,1) {\small $v_2$}; \draw[dotted] (0,9)--(1,4)--(5,-0)--(13,2)--(9,10)--(0,9) (4,7)--(6,3)--(8,6)--(4,7); \draw[dotted] (0,9)--(4,7)--(1,4)--(6,3)--(5,-0) (6,3)--(13,2) (5,0)--(13,2)--(8,6)--(9,10)--(4,7); \draw[ thick] (0,9)--(4,7)--(6,3)--(5,0) (6,3)--(8,6)--(9,10); \node at (3,7-0.2) {\small $\sigma_1$}; \node at (5,2) {\small $\sigma_2$}; \node at (8,8) {\small $\sigma_3$}; \end{tikzpicture} \hspace{5mm} \begin{tikzpicture}[scale=0.4] \draw[fill,lightgray] (0,9)--(0,10)--(4,8)--(4,7)--(0,9); \draw[fill,lightgray] (4,8)--(4,7)--(6,3)--(6,4)--(4,8); \draw[fill,lightgray] (6,3)--(7,3)--(6,0)--(5,0)--(6,3); \draw[fill,lightgray] (6,4)--(7,3)--(9,6)--(8,7)--(6,4); \draw[fill,lightgray] (9,6)--(8,7)--(9,11)--(10,10)--(9,6); \draw[gray] (4,7)--(4,8) (8,7)--(9,6); \draw (0,0)--(1,0)--(0,1)--(0,0); \foreach \i in {(0,9), (1,4), (5,-0), (4,7), (6,3), (9,10), (8,6), (13,2)} { \draw \i+(0,1) circle (3pt); \draw \i+(1,0) circle (3pt); \draw[fill] \i circle(2pt); } \foreach \i in {(0,0), (1,0), (0,1)} \draw \i circle (4pt); \draw[fill] (6,3) circle (4pt); \node[below,right] at (6,3) {\small $a$}; \node[left] at (0,0) {\small $v_3$}; \node[below] at (1,0) {\small $v_1$}; \node[left] at (0,1) {\small $v_2$};
\draw[dotted] (0,9)--(1,4)--(5,-0)--(13,2)--(9,10)--(0,9) (4,7)--(6,3)--(8,6)--(4,7); \draw[dotted] (0,9)--(4,7)--(1,4)--(6,3)--(5,-0) (6,3)--(13,2) (5,0)--(13,2)--(8,6)--(9,10)--(4,7); \draw[ thick] (0,9)--(4,7)--(6,3)--(5,0) (6,3)--(8,6)--(9,10); \node at (3,7-0.2) {\small $\sigma_1$}; \node at (5,2) {\small $\sigma_2$}; \node at (8,8) {\small $\sigma_3$}; \end{tikzpicture} \end{center} \caption{A proper star with respect to $v_1,v_2,v_3$ centered at $a$. On the right, parallelograms based on the proper star.} \label{fig:tristar} \end{figure}

The next Lemma shows how to construct an appropriate mixed subdivision of $A+B$ using a proper star. \begin{lemma} \label{proper-mixed} Given a proper star with rays $\sigma_1,\sigma_2,\sigma_3$ such that $|\sigma_1|+|\sigma_2|+|\sigma_3|>0$, there exists a mixed subdivision $M$ for $A+B$ satisfying $$ |M_{11}|=|\sigma_1|+|\sigma_2|+|\sigma_3|. $$ \end{lemma} \noindent{\it Proof: } We may assume that $|\sigma_1|>0$ and $v_3=o$. We partition the triangles of $T_A$ into three subsets $\Sigma_1,\Sigma_2,\Sigma_3$ (some of them might be empty). The idea is that if the semi-open paths $\sigma_i\backslash\{a\}$, $i=1,2,3$, are all non-empty and pairwise disjoint and $\{i,j,k\}=\{1,2,3\}$, then $\Sigma_i$ consists of the triangles cut off by $\sigma_j\cup\sigma_k$. A triangle $\tau$ of $T_A$ is in $\Sigma_1$ if and only if there exists a $p\in({\rm int}\,\tau)\backslash(a+{\mathbb R} v_1)$ such that $$ |(p-{\mathbb R}_+v_1)\cap\sigma_2|+|(p-{\mathbb R}_+v_1)\cap\sigma_3| $$ is finite and odd. Similarly, $\tau\in T_A$ is in $\Sigma_2$ if and only if there exists a $p\in{\rm int}\,\tau$ such that $$ |(p-{\mathbb R}_+v_2)\cap\sigma_1|+|(p-{\mathbb R}_+v_2)\cap\sigma_3| $$ is finite and odd. The rest of the triangles of $T_A$ form $\Sigma_3$. The cells of the mixed subdivision $M$ are as follows. If $\tau\in\Sigma_i$, then the corresponding triangle in $M$ is $\tau+v_i$. In addition, $[B]+a$ is in $M$. For the parallelograms, let $\{i,j,k\}=\{1,2,3\}$: if $e$ is an edge of $\sigma_i$, then $e+[v_j,v_k]$ is in $M$. \hfill \mbox{ $\Box$}\\ For the rest of the section, we fix finite $A\subset{\mathbb R}^2$ and $B=\{v_1,v_2,v_3\}\subset{\mathbb R}^2$ such that both of them span ${\mathbb R}^2$ affinely, and confirm Conjecture~\ref{ruzsabrunnconj-mixed} in this case. The following statement is a simple consequence of the definition of a proper star. \begin{lemma} \label{improve-hor-vert} Assuming $B=\{v_1,v_2,v_3\}$ with $v_1=(1,0)=-u_1$, $v_2=(0,1)=-u_2$ and $v_3=(0,0)$, and hence $u_3=(\frac1{\sqrt{2}},\frac1{\sqrt{2}})$, if $(\sigma_1,\sigma_2)$ is a horizontal-vertical path for $A$ centered at $a\in A$, then \begin{itemize} \item there exists a proper star $(\sigma'_1,\sigma'_2,\sigma'_3)$ centered at $a$ such that $\sigma_1\subset\sigma'_1$, $\sigma_2\subset \sigma'_2$, \item if in addition $a\not\in F([A],u_3)$, then $|\sigma'_3|\geq 1$. \end{itemize} \end{lemma} \noindent{\bf Proof of Theorem~\ref{triangle-mixed} } We may assume that $B=\{v_1,v_2,v_3\}$ with $v_1=(1,0)=-u_1$, $v_2=(0,1)=-u_2$ and $v_3=(0,0)$, and hence $u_3=(\frac1{\sqrt{2}},\frac1{\sqrt{2}})$. In addition, we may assume that $$ |F([A],-u_2)\cap A|\geq |F([A],-u_1)\cap A|.
$$ Using the notation of the proof of Proposition~\ref{horvert}, we set $\{\langle u_1,p\rangle:\,p\in A\}=\{\alpha_0,\ldots,\alpha_k\}$ with $\alpha_0<\ldots<\alpha_k$, and $\Delta_A'=|(A\cap \partial[A])\backslash(F([A],-u_1)\cup F([A],-u_2))|$. For $i=0,\ldots,k$, let $A_i=\{p\in A:\,\langle u_1,p\rangle=\alpha_i\}$, let $x_i=|A_i|$ and let $a_i$ be the topmost point of $A_i$; namely, $\langle u_2,a_i\rangle$ is maximal. According to (\ref{horvert-detailed}) and (\ref{horvert-detailed-sqrt}), we have \begin{equation} \label{horvert-detailedaverage} \frac{\sum_{i=1}^{k}(i+x_i-1)}{k}\geq \frac{|T_A|+\Delta_A'+1}{2k}+\frac{k}2-\frac12 \geq \sqrt{|T_A|+1}-\frac12. \end{equation} Let $I$ be the set of all $i\in \{1,\ldots,k\}$ such that \begin{equation} \label{ixilower} i+x_i-1\geq \left\lceil \frac{|T_A|+\Delta_A'+1}{2k}+\frac{k}2-\frac12\right\rceil=\xi. \end{equation} Since $\xi\geq \sqrt{|T_A|+1}-\frac12$, if strict inequality holds for some $i$ in (\ref{ixilower}), then we have a required proper star by Lemma~\ref{improve-hor-vert}. Thus we assume that $i+x_i-1=\xi$ for $i\in I$. Let $\theta=|I|$. Since $i+x_i-1\leq \xi-1$ if $i\not\in I$, we have $$ \xi-\frac{\sum_{i=1}^{k}(i+x_i-1)}{k}\geq \frac{k-\theta}{k}. $$ We deduce from (\ref{horvert-detailedaverage}) that if $i\in I$, then $$ i+x_i-1\geq\frac{|T_A|+\Delta_A'+1}{2k}+\frac{k}2-\frac12+\frac{k-\theta}{k}= \frac{|T_A|+\Delta_A'+1}{2k}+\frac{k}2+\frac12-\frac{\theta}{k}. $$ If $i\in I$ and $a_i\not\in F([A],u_3)$, then $\xi\geq \sqrt{|T_A|+1}-\frac12$ and Lemma~\ref{improve-hor-vert} yields the existence of a required proper star. Therefore we may assume that $a_i\in F([A],u_3)$ for $i\in I$. Since $|F([A],u_3)\cap F([A],-u_2)|\leq 1$, we deduce that \begin{equation} \label{tminusepsilonthetak} \theta\leq \min\{\Delta_A'+1,k\}. \end{equation} Therefore if $i\in I$, then we conclude, using the inequality between the arithmetic and the geometric mean in the last step, that $$ i+x_i-1\geq \frac{|T_A|+\theta}{2k}+\frac{k}2+\frac12-\frac{\theta}{k}= \frac{|T_A|}{2k}+\frac{k}2+\frac12-\frac{\theta}{2k}\geq\sqrt{|T_A|}. \mbox{ \ }\hfill \mbox{ $\Box$}\\ $$

\section{Proof of Theorem~\ref{convex-position}} \label{secconvex-position} We assume in this section that there are no points of $A$ (resp.\ $B$) in the interior of $[A]$ (resp.\ $[B]$). Recall that $\Delta_X$ denotes the number of points of $X$ in the boundary of $[X]$. It is easy to check that $\partial[A+B]$ contains at least as many points of $A+B$ as $\partial[A]$ and $\partial[B]$ contain of $A$ and $B$ together; that is, \[ \Delta_{A+B} \ge \Delta_A + \Delta_B = {\rm tr}(A) + {\rm tr}(B) +4. \] As a motivation for the proof, we note that Conjecture~\ref{ruzsabrunnconj} follows if the number $\Omega_{A+B}$ of points of $A+B$ in ${\rm int}([A+B])$ is at least \[ \frac{{\rm tr}(A) + {\rm tr}(B) - 2}2 = \frac{\Delta_A + \Delta_B} 2 - 3 . \] Naturally we aim at the stronger Conjecture~\ref{ruzsabrunnconj-mixed}. Given Theorem~\ref{triangle-mixed}, Theorem~\ref{convex-position} follows once we show that if $A$ and $B$ are in convex position with $|A|,|B|\geq 4$, then there exists a mixed subdivision of $[A + B]$ satisfying \begin{equation} \label{convex-position-parallelograms} |M_{11}|\geq \frac{{\rm tr}(A)+{\rm tr}(B)}2. \end{equation} Throughout the proof we assume that $[B]$ has at most as many vertices as $[A]$, and $v$ denotes a unit vector (which we assume pointing upwards) not parallel to any side of $[A+B]$.
We denote by $a_0$ and $a_1$ the leftmost and rightmost vertices of $[A]$, and by $b_0$ and $b_1$ the leftmost and rightmost vertices of $[B]$. To prove (\ref{convex-position-parallelograms}), we say that $A$ and $B$ form a \emph{strange pair} if $[B]$ is a triangle and the three exterior normals to $[B]$ are also exterior normals of edges of $[A]$. We will use that, for $t,s\geq 1$, \begin{equation} \label{tsbig} ts\geq t+s-1, \end{equation} which is equivalent to $(t-1)(s-1)\geq 0$. \noindent {\bf Case 1 } $A$ and $B$ do not form a strange pair. We choose a unit vector $v$ as above in the following way: if $[B]$ is a triangle, then the upper arc of $\partial [B]$ is a side such that $[A]$ has no side with the same exterior unit normal; if $[B]$ has at least four sides, then the two supporting lines of $[B]$ parallel to $v$ touch $[B]$ at non-consecutive vertices. For the existence of the latter pair of supporting lines, we note that while continuously rotating $[B]$, the difference ``number of upper vertices minus number of lower vertices'' changes by either zero or two units at a time when a side of $[B]$ is parallel to $v$, and after rotation by $\pi$ it changes to its opposite. Hence, at some position this difference is zero or one, which implies, since $[B]$ has at least four vertices, that at that position there is at least one upper and one lower vertex, as required. \begin{claim} One of the two following statements holds: \begin{equation} \label{convex-position-case1} \begin{array}{rl} &\left|\Big((A + b_0) \cup (a_1 + B)\Big)\cap{\rm int}[A+B]\right|\geq \frac{\Delta_A + \Delta_B} 2 - 3, \mbox{ or } \\[3mm] &\left|\Big((a_0 + B) \cup (A + b_1)\Big)\cap{\rm int}[A+B]\right|\geq \frac{\Delta_A + \Delta_B} 2 - 3. \end{array} \end{equation} \end{claim} \begin{proof} We may assume that $a_1=b_0=o$ (see Fig. \ref{fig:a0+B}). Observe first that the only repetitions $x+b_0 = a_1 +y$ or $x+b_1 = a_0 +y$ in these configurations occur at the points $a_1+b_0$ and $a_0+b_1$ (which are interior to $[A+B]$ by our hypothesis). To prove (\ref{convex-position-case1}), we verify first that \begin{description} \item{(i)} for every $x\in A\setminus \{a_0,a_1\}$ except perhaps two of them, at least one of $x + b_0$ or $x + b_1$ is interior to $[A+B]$; \item{(ii)} for every $y\in B\setminus \{b_0,b_1\}$ except perhaps two of them, at least one of $a_0 + y$ or $a_1+y$ is interior to $[A+B]$.
\end{description}

\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=0.5] \draw (-6,-6)--(-6,4) (0,-6)--(0,4) (4,-6)--(4,4); \foreach \i in {(0, 0), (-2,2),(-5,1),(-6,-2),(-5,-4),(-1,-3)} \draw[fill] \i circle(2pt); \draw[fill, lightgray] (0, 0)--(-2,2)--(-5,1)--(-6,-2)--(-5,-4)--(-1,-3)--(0,0); \draw (0, 0)--(-2,2)--(-5,1)--(-6,-2)--(-5,-4)--(-1,-3)--(0,0); \foreach \i in {(0,0),(2,2),(4,1),(2,-2)} \draw[fill] \i circle(2pt); \draw[fill,lightgray] (0,0)--(2,2)--(4,1)--(2,-2)--(0,0); \draw (0,0)--(2,2)--(4,1)--(2,-2)--(0,0); \foreach \i in {(-1, 2), (0, 0), (-3, -1), (1, -5), (-1, -3), (-6, -2), (2, 3), (-2, 2), (-3, 3), (0, 4), (1, -1), (-4, 0), (-5, 1), (2, -2), (-5, -4), (-4, -4), (2, 2), (3, -2), (-3, -2), (-2, -1), (-3, -6), (4, 1)} \draw \i circle(2pt); \draw (-5,1)--(-3,3)--(0,4)--(2,3)--(4,1)--(3,-2)--(1,-5)--(-3,-6)--(-5,-4); \node[left] at (-6,-2) {$a_0$}; \node[below] at (0,-6) {$a_1=b_0$}; \node[right] at (4,1) {$b_1$}; \node at (-2.5,0) {$b_0+A$}; \node at (2,0.3) {$B+a_1$}; \end{tikzpicture} \hspace{2mm} \begin{tikzpicture}[scale=0.55] \draw (-6,-6)--(-6,4) (0,-6)--(0,4) (4,-6)--(4,4); \foreach \i in {(0, 0), (-2,2),(-5,1),(-6,-2),(-5,-4),(-1,-3)} \draw[fill] \i circle(2pt); \draw[fill, lightgray] (-2,-1)--(-1,2)--(2,3)--(4,1)--(3,-2)--(-1,-3)--(-2,-1); \draw (-2,-1)--(-1,2)--(2,3)--(4,1)--(3,-2)--(-1,-3)--(-2,-1); \foreach \i in {(0,0),(2,2),(4,1),(2,-2)} \draw[fill] \i circle(2pt); \draw[fill,lightgray] (-2,-1)--(-4,0)--(-6,-2)--(-4,-4)--(-2,-1); \draw (-2,-1)--(-4,0)--(-6,-2)--(-4,-4)--(-2,-1); \foreach \i in {(-1, 2), (0, 0), (-3, -1), (1, -5), (-1, -3), (-6, -2), (2, 3), (-2, 2), (-3, 3), (0, 4), (1, -1), (-4, 0), (-5, 1), (2, -2), (-5, -4), (-4, -4), (2, 2), (3, -2), (-3, -2), (-2, -1), (-3, -6), (4, 1)} \draw \i circle(2pt); \draw (-5,1)--(-3,3)--(0,4)--(2,3)--(4,1)--(3,-2)--(1,-5)--(-3,-6)--(-5,-4)--(-6,-2)--(-5,1); \node[left] at (-6,-2) {$a_0$}; \node[below] at (0,-6) {$a_1=b_0$}; \node[right] at (4,1) {$b_1$}; \node at (-4.2,-2) {$B+a_0$}; \node at (1.2,0) {$b_1+A$}; \end{tikzpicture} \end{center} \caption{An illustration of the proof of Claim \ref{convex-position-case1}.}\label{fig:a0+B} \end{figure}

For (i), we note that if both $x + b_0$ and $x + b_1$ lie in $\partial [A+B]$, then they are the endpoints of a translate of the segment $[b_0,b_1]$, and only two such translates have their endpoints in $\partial [A+B]$ because $A$ and $B$ do not form a strange pair. The argument for (ii) is similar. Now (i) and (ii) say that, counting the interior points of $(A + b_0) \cup (a_1 + B)$ and $(a_0 + B) \cup (A + b_1)$ except $a_0+b_1$ and $a_1+b_0$, we have altogether at least $\Delta_A + \Delta_B - 8$ of them. Including the latter two points we have at least $\Delta_A + \Delta_B - 6$ of them, and at least half of these lie in either $(A + b_0) \cup (a_1 + B)$ or $(a_0 + B) \cup (A + b_1)$, which yields \eqref{convex-position-case1}.\hfill \mbox{ $\Box$}\\\end{proof} Let us now construct a suitable mixed subdivision of $[A+B]$. For every path $\sigma$ in $\partial [A]$, we assume that every point of $A$ in $\sigma$ is a vertex of $\sigma$. According to (\ref{convex-position-case1}), we may assume that \begin{equation} \label{convex-position-case10} \left|(A \cup B)\cap{\rm int}[A+B]\right|\geq \frac{\Delta_A + \Delta_B} 2 - 3. \end{equation} Let $a_{\rm upp}$ ($a_{\rm low}$) be the neighboring vertex of $[A]$ to $o$ on the upper (lower) arc of $\partial [A]$, and let $b_{\rm upp}$ ($b_{\rm low}$) be the neighboring vertex of $[B]$ to $o$ on the upper (lower) arc of $\partial [B]$.
We write $\omega^A_{\rm upp}$ and $\omega^A_{\rm low}$ to denote the one-segment paths determined by $[o,a_{\rm upp}]$ and $[o,a_{\rm low}]$, and $\omega^B_{\rm upp}$ and $\omega^B_{\rm low}$ to denote the paths determined by $[o,b_{\rm upp}]$ and $[o,b_{\rm low}]$. Next let $\sigma^A_{\rm upp}$ ($\sigma^A_{\rm low}$) be the longest path on the upper (lower) arc of $\partial[A]$ starting from $o$ such that every segment $s$ of $\sigma^A_{\rm upp}$ ($\sigma^A_{\rm low}$) satisfies that $s+[o,b_{\rm upp}]$ ($s+[o,b_{\rm low}]$) is a parallelogram that does not intersect ${\rm int}[A]$. Similarly, let $\sigma^B_{\rm upp}$ ($\sigma^B_{\rm low}$) be the longest path on the upper (lower) arc of $\partial[B]$ starting from $o$ such that every segment $s$ of $\sigma^B_{\rm upp}$ ($\sigma^B_{\rm low}$) satisfies that $s+[o,a_{\rm upp}]$ ($s+[o,a_{\rm low}]$) is a parallelogram that does not intersect ${\rm int}[B]$. Since $a_1=b_0=o$ is a common point of $\sigma^A_{\rm upp}$, $\sigma^A_{\rm low}$, $\sigma^B_{\rm upp}$, $\sigma^B_{\rm low}$, we deduce from (\ref{convex-position-case10}) that $$ 1+(|\sigma^A_{\rm upp}|-1)+(|\sigma^A_{\rm low}|-1)+ (|\sigma^B_{\rm upp}|-1)+(|\sigma^B_{\rm low}|-1)\geq \frac{\Delta_A + \Delta_B} 2 - 3, $$ or, equivalently, \begin{equation} \label{convex-position-case100} |\sigma^A_{\rm upp}|+|\sigma^A_{\rm low}|+ |\sigma^B_{\rm upp}|+|\sigma^B_{\rm low}|\geq \frac{\Delta_A + \Delta_B} 2 . \end{equation} We construct the mixed subdivision by considering the subdivisions into suitable parallelograms of $\sigma^A_{\rm upp}+\omega^B_{\rm upp}$ and $\sigma^B_{\rm upp}+\omega^A_{\rm upp}$ that have $\omega^A_{\rm upp}+\omega^B_{\rm upp}$ in common, and the subdivisions into suitable parallelograms of $\sigma^A_{\rm low}+\omega^B_{\rm low}$ and $\sigma^B_{\rm low}+\omega^A_{\rm low}$ that have $\omega^A_{\rm low}+\omega^B_{\rm low}$ in common (see Figure \ref{fig:mixedconvex}).
\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=0.55] \draw (-6,-6)--(-6,4) (0,-6)--(0,4) (4,-6)--(4,4); \foreach \i in {(0, 0), (-2,2),(-5,1),(-6,-2),(-5,-4),(-1,-3)} \draw[fill] \i circle(2pt); \draw[fill, lightgray] (0, 0)--(-2,2)--(-5,1)--(-6,-2)--(-5,-4)--(-1,-3)--(0,0); \draw (0, 0)--(-2,2)--(-5,1)--(-6,-2)--(-5,-4)--(-1,-3)--(0,0); \foreach \i in {(0,0),(2,2),(4,1),(2,-2)} \draw[fill] \i circle(2pt); \draw[fill,lightgray] (0,0)--(2,2)--(4,1)--(2,-2)--(0,0); \draw (0,0)--(2,2)--(4,1)--(2,-2)--(0,0); \foreach \i in {(-1, 2), (0, 0), (-3, -1), (1, -5), (-1, -3), (-6, -2), (2, 3), (-2, 2), (-3, 3), (0, 4), (1, -1), (-4, 0), (-5, 1), (2, -2), (-5, -4), (-4, -4), (2, 2), (3, -2), (-3, -2), (-2, -1), (-3, -6), (4, 1)} \draw \i circle(2pt); \draw (-5,1)--(-3,3)--(0,4)--(2,3)--(4,1)--(3,-2)--(1,-5)--(-3,-6)--(-5,-4); \draw[ultra thick] (0,0)--(-2,2) (0,0)--(-1,-3) (0,0)--(2,2) (2,-2)--(0,0); \draw[ultra thick,dashed] (-2,2)--(-5,1) (-1,-3)--(-5,-4) (2,2)--(4,1)--(2,-2); \draw[pattern=north west lines, pattern color=lightgray] (0,0)--(-2,2)--(0,4)--(2,2)--(0,0) (-2,2)--(0,4)--(-3,3)--(-5,1)--(-2,2) (0,4)--(2,2)--(4,1)--(2,3)--(0,4) (0,0)--(2,-2)--(1,-5)--(-1,-3)--(0,0) ; \draw[pattern=north east lines, pattern color=lightgray] (0,0)--(-2,2)--(0,4)--(2,2)--(0,0) (0,0)--(2,-2)--(1,-5)--(-1,-3)--(0,0) (1,-5)--(2,-2)--(4,1)--(3,-2)--(1,-5) (1,-5)-- (-1,-3)--(-5,-4)--(-3,-6)--(1,-5); \node[left] at (-6,-2) {$a_0$}; \node[below] at (0,-6) {$a_1=b_0$}; \node[right] at (4,1) {$b_1$}; \node at (-2.5,0) {$A$}; \node at (2,0) {$B$}; \end{tikzpicture} \end{center} \caption{An illustration of the parallelograms of the mixed subdivision in Case 1.}\label{fig:mixedconvex} \end{figure}

In particular, since $|\omega^A_{\rm upp}|,|\omega^B_{\rm upp}|,|\omega^A_{\rm low}|,|\omega^B_{\rm low}|\geq 1$, (\ref{tsbig}) and (\ref{convex-position-case100}) yield that \begin{eqnarray*} |M_{11}|&\geq & (|\sigma^A_{\rm upp}|-|\omega^A_{\rm upp}|)|\omega^B_{\rm upp}|+ (|\sigma^B_{\rm upp}|-|\omega^B_{\rm upp}|)|\omega^A_{\rm upp}|+ |\omega^A_{\rm upp}|\cdot |\omega^B_{\rm upp}|+\\ &&+ (|\sigma^A_{\rm low}|-|\omega^A_{\rm low}|)|\omega^B_{\rm low}|+ (|\sigma^B_{\rm low}|-|\omega^B_{\rm low}|)|\omega^A_{\rm low}|+ |\omega^A_{\rm low}|\cdot |\omega^B_{\rm low}|\\ &\geq &(|\sigma^A_{\rm upp}|-|\omega^A_{\rm upp}|)+ (|\sigma^B_{\rm upp}|-|\omega^B_{\rm upp}|)+ |\omega^A_{\rm upp}|+ |\omega^B_{\rm upp}|-1+\\ &&+ (|\sigma^A_{\rm low}|-|\omega^A_{\rm low}|)+ (|\sigma^B_{\rm low}|-|\omega^B_{\rm low}|)+ |\omega^A_{\rm low}|+ |\omega^B_{\rm low}|-1\\ &\geq &\frac{\Delta_A + \Delta_B}2-2=\frac{{\rm tr}(A) + {\rm tr}(B)} 2, \end{eqnarray*} proving (\ref{convex-position-parallelograms}) in Case~1.\\ \noindent {\bf Case 2 } $A$ and $B$ form a strange pair with $|A|,|B|\geq 4$, and $[A]$ and $[B]$ are not similar triangles. We write $\alpha_{\rm upp}$ ($\alpha_{\rm low}$) to denote the number of segments into which the points of $A$ divide the upper (lower) arc of $\partial[A]$. We denote by $b_2$ the third vertex of $[B]$, and by $[x_0,x_1]$ the side of $[A]$ with $x_1-x_0=t(b_1-b_0)$ for some $t>0$. For $i=0,1,2$, let $s_i$ be the number of segments into which the points of $B$ divide the side of $[B]$ opposite to $b_i$.
\begin{claim}\label{claim:paths} There exists a $v$ such that one of the following holds: \begin{align} &\alpha_{\rm upp}\geq 2 \; \text{and} \; \alpha_{\rm upp}+s_0+s_1\geq \frac12(\Delta_A+\Delta_B), \text{or} \label{convex-position-case21}\\ &\alpha_{\rm low},s_2\geq 2 \; \text{and} \; \alpha_{\rm low}+s_2\geq \frac12(\Delta_A+\Delta_B).\label{convex-position-case22} \end{align} \end{claim} \begin{proof} Since $\alpha_{\rm upp}+\alpha_{\rm low}=\Delta_A$ and $s_0+s_1+s_2=\Delta_B$, the claim easily follows if there is a $v$ such that, for each of the sets $A$ and $B$, both the upper arc and the lower arc contain a point of the set strictly between the two supporting lines parallel to $v$. Otherwise, choose a $v$ such that the side $[b_0,b_1]$ of $[B]$ contains at least $3$ points of $B$ (this is possible since $|B|\ge 4$). Then $[x_0,x_1]$ has no other point of $A$ than $x_0,x_1$, and the other side of $[A]$ at $x_i$, $i=0,1$, is parallel to $[b_i,b_2]$. As $[A]$ and $[B]$ are not similar triangles, $[A]$ has further sides, which in turn yields that $[b_i,b_2]\cap B=\{b_i,b_2\}$ for $i=0,1$. In summary, we have $\alpha_{\rm upp}=s_0=s_1=1$ and $\alpha_{\rm low},s_2\geq 2$. Since $\alpha_{\rm low}+s_2>\alpha_{\rm upp}+s_0+s_1$, we conclude (\ref{convex-position-case22}).\hfill \mbox{ $\Box$}\\\end{proof} To prove (\ref{convex-position-parallelograms}) based on (\ref{convex-position-case21}) and (\ref{convex-position-case22}), we introduce some further notation. After a linear transformation, we may assume that $v$ is an exterior normal to the side $[b_0,b_1]$ of $[B]$. We say that $p,q\in \partial[A]$ are opposite if there exists a unit vector $w$ such that $w$ is an exterior normal at $p$ and $-w$ is an exterior normal at $q$. If $p,q\in \partial[A]$ are not opposite, then we write $\overline{pq}$ for the arc of $\partial[A]$ connecting $p$ and $q$ and not containing an opposite pair of points. First we assume that (\ref{convex-position-case21}) holds and $b_2=o$. Since $[x_0,x_1]$ has exterior normal $v$ and $\alpha_{\rm upp}\geq 2$, there exists $a\in A\backslash\{x_0,x_1\}$ such that $v$ is an exterior normal to $\partial[A]$ at $a$. We write $l_{\rm upp}$ and $r_{\rm upp}$ to denote the number of segments into which the points of $A$ divide the arcs $\overline{ax_0}$ and $\overline{ax_1}$, respectively. To construct a mixed subdivision, we observe that every exterior normal $u$ to a side of $[A]$ in $\overline{ax_0}$ satisfies $\langle u,b_0\rangle>0$, and every exterior normal $w$ to a side of $[A]$ in $\overline{ax_1}$ satisfies $\langle w,b_1\rangle>0$. We divide $\overline{ax_0}+[o,b_0]$ into suitable $s_1l_{\rm upp}$ parallelograms, and $\overline{ax_1}+[o,b_1]$ into suitable $s_0r_{\rm upp}$ parallelograms. It follows from (\ref{tsbig}) that \begin{eqnarray*} |M_{11}|&=&s_1l_{\rm upp}+s_0r_{\rm upp}\geq l_{\rm upp}+r_{\rm upp}+s_0+s_1-2= \alpha_{\rm upp}+s_0+s_1-2\\ &\geq& \mbox{$\frac12(\Delta_A+\Delta_B)-2=\frac12({\rm tr}(A)+{\rm tr}(B))$}. \end{eqnarray*} Second, we assume that (\ref{convex-position-case22}) holds. Since $s_2\geq 2$, we may assume that $o\in ([b_0,b_1]\backslash\{b_0,b_1\})\cap B$. For $i=0,1$, we write $s_{2i}$ to denote the number of segments into which the points of $B$ divide $[o,b_i]$. Let $\tilde{x}_0$ and $\tilde{x}_1$ be the leftmost and rightmost points of $A$ such that $-v$ is an exterior normal to $\partial[A]$, where possibly $\tilde{x}_0=\tilde{x}_1$.
Since $[A]$ has sides parallel to the sides $[b_2,b_0]$ and $[b_2,b_1]$ of $[B]$, we deduce that $\tilde{x}_0\neq x_0$ and $\tilde{x}_1\neq x_1$. To construct a mixed subdivision, we set $m_{\rm low}=0$ if $\tilde{x}_0=\tilde{x}_1$, and $m_{\rm low}$ to be the number of segments into which the points of $A$ divide $\overline{\tilde{x}_0\tilde{x}_1}$ if $\tilde{x}_0\neq\tilde{x}_1$. In addition, we write $l_{\rm low}\geq 1$ and $r_{\rm low}\geq 1$ to denote the number of segments into which the points of $A$ divide the arcs $\overline{\tilde{x}_0x_0}$ and $\overline{\tilde{x}_1x_1}$, respectively. We divide $\overline{\tilde{x}_0x_0}+[o,b_0]$ into suitable $l_{\rm low}s_{20}$ parallelograms, and $\overline{\tilde{x}_1x_1}+[o,b_1]$ into suitable $r_{\rm low}s_{21}$ parallelograms. In addition, if $\tilde{x}_0\neq\tilde{x}_1$, then we divide $[\tilde{x}_0\tilde{x}_1]+[o,b_2]$ into suitable $m_{\rm low}$ parallelograms. It follows from (\ref{tsbig}) that \begin{eqnarray*} |M_{11}|&=&l_{\rm low}s_{20}+r_{\rm low}s_{21}+m_{\rm low}\geq l_{\rm low}+r_{\rm low}+m_{\rm low}+s_{20}+s_{21}-2\\ &=& \alpha_{\rm low}+s_2-2 \geq \mbox{$\frac12(\Delta_A+\Delta_B)-2=\frac12({\rm tr}(A)+{\rm tr}(B))$}, \end{eqnarray*} finishing the proof of (\ref{convex-position-parallelograms}) in Case~2.\\ \noindent {\bf Case 3 } $[A]$ and $[B]$ are similar triangles and $|A|,|B|\geq 4$.\\ Let $s_1, s_2$ and $s_3$ denote the number of segments into which the points of $B$ divide the three sides of $[B]$ (the quantities $s_0,s_1,s_2$ of Case~2, re-indexed), and let $s'_1, s'_2, s'_3$ be the number of segments into which the points of $A$ divide the corresponding sides of $[A]$. We have ${\rm tr}(A)=s'_1+s'_2+s'_3-2$ and ${\rm tr}(B)=s_1+s_2+s_3-2$. We may assume that $s_1$ is the largest among the six numbers and that $s'_2\geq s'_3$. Readily \begin{equation} \label{m11case3} |M_{11}|\geq \max\{s_1s_2', s'_1s_2, s'_1s_3\}. \end{equation} If $s_2'\geq 3$, then $$ |M_{11}|\geq 3s_1\geq \mbox{$\frac12$}(s_1+s_2+s_3+s'_1+s'_2+s'_3)>\mbox{$\frac12$}({\rm tr}(A)+{\rm tr}(B)). $$ If $s_2'=2$, then $s_3'\leq 2$ and $$ |M_{11}|\geq 2s_1\geq \mbox{$\frac12$}(s_1+s_2+s_3+s'_1+s'_2+s'_3-4)=\mbox{$\frac12$}({\rm tr}(A)+{\rm tr}(B)). $$ Therefore we assume that $s_2'=s_3'=1$. By symmetry, we may also assume that $s_2\geq s_3$. Since $s'_1\ge 2$ and $s_2\ge 1$ we have $s'_1s_2\ge s'_1+2s_2-2$. Therefore, using $\max\{a,b\}\geq\frac12(a+b)$ and $s_2\geq s_3$, \begin{align*} |M_{11}|&\ge \max \{s_1, s'_1s_2\}\ge \frac{1}{2}(s_1+s'_1s_2)\ge \frac{1}{2}(s_1+s'_1+2s_2-2)\\ &\ge \frac{1}{2}(s_1+s_2+s_3+s'_1-2)=\frac{1}{2}({\rm tr}(A)+{\rm tr}(B)), \end{align*} and we conclude (\ref{convex-position-parallelograms}) in Case~3, as well. \hfill \mbox{ $\Box$}\\ \section{Proof of Theorem~\ref{TA=TB}} Let $A=\{a_1,\ldots,a_n\}$. Naturally, $[A+A]$ has a triangulation $\{F+F:\,F\in T_A\}$, which we subdivide in order to obtain $M$. We define $M$ to be the collection of the sums of the form $$ [a_{i_0},\ldots,a_{i_m}]+[a_{i_m},\ldots,a_{i_k}] $$ where $k\geq 0$, $0\leq m\leq k$, $i_j<i_l$ for $j<l$, and $[a_{i_0},\ldots,a_{i_k}]\in T_A$. To show that we obtain a cell decomposition, let $$ F=[a_{i_0},\ldots,a_{i_k}]\in T_A $$ be a $k$-simplex with $k>0$ where $i_j<i_l$ for $j<l$, and hence $$ F+F=\left\{\sum_{j=0}^k\alpha_ja_{i_j}:\,\sum_{j=0}^k\alpha_j=2\;\&\;\forall\,\alpha_j\geq 0\right\}. $$ We write ${\rm relint}\,C$ to denote the relative interior of a compact convex set $C$.
For some $0\leq m\leq k$, $\alpha_0,\ldots,\alpha_k\geq 0$ with $\sum_{j=0}^k\alpha_j=2$, we have $$ \sum_{j=0}^k\alpha_ja_{i_j}\in{\rm relint}\,\left([a_{i_0},\ldots,a_{i_m}]+[a_{i_m},\ldots,a_{i_k}]\right)\subset F+F $$ if and only if $\sum_{j<m}\alpha_j<1$ and $\sum_{j=0}^m\alpha_j>1$, where we set $\sum_{j<0}\alpha_j=0$. We conclude that $M$ forms a cell decomposition of $[A+A]$. For any $d$-simplex $F\in T_A$, and for any $m=0,\ldots,d$, we have constructed one $d$-cell of $M$ that is the sum of an $m$-simplex and a $(d-m)$-simplex. Therefore $$ \|M\|=|T_A|\sum_{m=0}^d{d \choose m}=2^d|T_A|. $$ \section{Proof of Corollary~\ref{A=Bstability}} In this section, let $A\subset {\mathbb R}^2$ be finite and not collinear. We prove four auxiliary statements about $A$. The first is an application of the case $A=B$ of Conjecture~\ref{ruzsabrunnconj} (see Theorem~\ref{A=B}). \begin{lemma} \label{|A+A|} $$ |A+A|\geq 4|A|-\Delta_A-3 $$ \end{lemma} \noindent{\it Proof: } We readily have $\Delta_{A+A}\geq 2\Delta_A$. Thus (\ref{Eulerpoints}) and Theorem~\ref{A=B} yield $$ |A+A|=\frac12\left({\rm tr}(A+A)+\Delta_{A+A}+2\right)\geq 2{\rm tr}(A)+\Delta_{A}+1=4|A|-\Delta_A-3.\hfill \mbox{ $\Box$}\\$$ We note that the estimate of Lemma~\ref{|A+A|} is optimal, the configuration of Theorem \ref{A=B} (b) being an extremal set. Next we provide the well-known elementary estimate for $|A+A|$ only in terms of boundary points. \begin{lemma} Let $m_A$ denote the maximal number of points of $A$ contained in a side of $[A]$. We have, \label{|A+A|boundary} $$ |A+A|\geq \frac{\Delta_A^2}4-\frac{\Delta_A(m_A-1)}2. $$ \end{lemma} \noindent{\it Proof: } We choose a line $\ell$ not parallel to any side of $[A]$, which we may assume to be vertical, and denote by $s_1,\ldots ,s_k$ the sides of $[A]$ on the upper chain of $[A]$ in left to right order. Let $A_i$ be the set obtained from $A\cap s_i $ by removing its rightmost point. We may assume that $$ |A_1|+\cdots +|A_k|\ge \frac{\Delta_A}{2}. $$ We observe that, for $1\le i<j\le k$, we have $$ |A_i+A_j|=|A_i|\cdot |A_j|\; \text{and}\; (A_i+A_j)\cap (A_{i'}+A_{j'})=\emptyset \; \text{if}\; \{i,j\}\neq \{i',j'\}. $$ It follows that \begin{align*} |A+A|&\ge \sum_{1\le i<j\le k} |A_i+A_j|=\sum_{1\le i<j\le k} |A_i|\cdot |A_j|=(\sum_{i=1}^k |A_i|)^2-\sum_{i=1}^k |A_i|^2\\ &\ge \left(\frac{\Delta_A}{2}\right)^2-(m_A-1)\frac{\Delta_A}{2}.\hfill \mbox{ $\Box$}\\ \end{align*} The following lemma can be found in Freiman~\cite{Fre73}. \begin{lemma}\label{scover} Let $\ell$ be a line intersecting $[A]$ in $m$ points of $A$. If $A$ is covered by exactly $s$ lines parallel to $\ell$, then \begin{equation}\label{eq:weak4s} |A+A|\geq 2|A|+(s-1)m-s. \end{equation} Moreover, \begin{equation}\label{eq:4s} |A+A|\ge (4-\frac{2}{s})|A|-(2s-1). \end{equation} \end{lemma} \begin{proof} We may assume that $\ell$ is the vertical line through the origin, that $a_1,\ldots ,a_s$ are $s$ points of $A$ ordered left to right such that $A=\cup_{i=1}^s (A\cap (\ell +a_i))$ and that $|A\cap (\ell +a_1)|=m$. Let $A_i=A\cap (a_i+\ell)$. Then, \begin{align*} |A+A|&= |A_1+A|+|(A\setminus A_1)+A_s|\\ &\ge \sum_{i=1}^s (|A_1|+|A_i|-1)+\sum_{i=2}^s (|A_i|+|A_s|-1)\\ &=2|A|+(s-1)(|A_1|+|A_s|)-(2s-1), \end{align*} from which \eqref{eq:weak4s} follows. On the other hand, \begin{align*} |A+A|&=\sum_{i=1}^s |2A_i|+\sum_{i=1}^{s-1} |A_i+A_{i+1}|\\ &\ge \sum_{i=1}^s (2|A_i|-1)+\sum_{i=1}^{s-1} (|A_i|+|A_{i+1}|-1)\\ &=4|A|-(|A_1|+|A_s|)-(2s-1).
\end{align*} If the latter estimate is at least the former, then $|A_1|+|A_s|\le 2|A|/s$, and the latter estimate gives \eqref{eq:4s}; otherwise $|A_1|+|A_s|>2|A|/s$, and substituting this into the former estimate again yields \eqref{eq:4s}. \hfill \mbox{ $\Box$}\\\end{proof} \noindent{\bf Proof of Corollary~\ref{A=Bstability} } Let $|A+A|\leq (4-\varepsilon)|A|$ where $\varepsilon\in(0,1)$ and $\varepsilon^2 |A|\geq 48$. To simplify formulae, we set $\Delta=\Delta_A$ and $m=m_A$. We deduce from Lemma~\ref{|A+A|} that $\Delta\geq \varepsilon |A|-3$. Substituting this into Lemma~\ref{|A+A|boundary} yields \begin{eqnarray*} (4-\varepsilon)|A|&\geq& \frac{\Delta^2}4-\frac{\Delta(m-1)}2\geq \frac{\Delta(\varepsilon |A|-3)}4-\frac{\Delta(m-1)}2\\ &=&\frac{\Delta}2\cdot (\mbox{$\frac12\varepsilon |A|-m-\frac12$})\geq \frac{\varepsilon |A|-3}2\cdot (\mbox{$\frac12\varepsilon |A|-m-\frac12$}). \end{eqnarray*} Therefore $$ \mbox{$\frac12\varepsilon |A|-(m-1)$}\leq \frac{8}{\varepsilon}\left(1-\frac{\varepsilon}4\right) \left(1+\frac{3}{\varepsilon |A|-3}\right)+\frac32< \frac{12}{\varepsilon} $$ as $\varepsilon |A|-3\geq \frac{48}{\varepsilon}-3> \frac{12}{\varepsilon}$. In particular, $m-1>\frac12\varepsilon |A|-\frac{12}{\varepsilon}$. Next let $\ell$ be the line determined by a side of $[A]$ containing $m=m_A$ points of $A$, and let $s$ be the number of lines parallel to $\ell$ intersecting $A$. According to \eqref{eq:weak4s}, $$ (4-\varepsilon)|A|\geq 2|A|+(s-1)(m-1)-1> 2|A|+(s-1)\mbox{$(\frac12\varepsilon |A|-\frac{12}{\varepsilon})$}-1, $$ thus first rearranging, and then applying $\varepsilon^2 |A|\geq 48$ yield $$ 2|A|> s\cdot \mbox{$(\frac12\varepsilon |A|-\frac{12}{\varepsilon})$}\geq s\cdot \mbox{$\frac14\varepsilon |A|$}. $$ Therefore $s<\frac8\varepsilon$. We deduce from \eqref{eq:4s} and $s<\frac8\varepsilon$ that $$ (4-\varepsilon)|A|> \mbox{$(4-\frac2s)$}|A|-2s> \mbox{$(4-\frac2s)|A|-\frac{16}{\varepsilon}$}. $$ Rearranging, and then applying $\varepsilon^2 |A|\geq 48$ imply $$ s<\frac2\varepsilon\left(1-\frac{16}{\varepsilon^2|A|}\right)^{-1}< \frac2\varepsilon\left(1+\frac{32}{\varepsilon^2|A|}\right). \hfill \mbox{ $\Box$}\\$$ \section{Proof of Proposition~\ref{counterexample} } Let the points of $A$ be $$ a_0=(0,0),\mbox{ \ }a_1=(-1,-2),\mbox{ \ }a_2=(2,1). $$ If $k\geq 2$, then we show that every mixed subdivision $M$ corresponding to $T_A$ and $T_B$ satisfies \begin{equation} \label{counter11} |M_{11}|\leq 24. \end{equation} We prove (\ref{counter11}) in several steps. First we verify \begin{eqnarray} \label{lia1a2} [a_1,a_2]+l_i&\mbox{ is not an edge of $M$ }& \mbox{ for }i=0,\ldots,k\\ \label{ria1a2} [a_1,a_2]+r_i&\mbox{ is not an edge of $M$ }&\mbox{ for }i=0,\ldots,k-1. \end{eqnarray} For (\ref{lia1a2}), we observe that $a_1+l_{i+1}$ if $i\leq k-1$ or $a_1+l_{i-1}$ if $i\geq 1$ is a point of $A+B$ in $[a_1,a_2]+l_i$ different from the endpoints. Similarly, for (\ref{ria1a2}), we observe that $a_1+r_{i+1}$ if $i\leq k-2$ or $a_1+r_{i-1}$ if $i\geq 1$ is a point of $A+B$ in $[a_1,a_2]+r_i$ different from the endpoints. Next, we have \begin{eqnarray} \label{liria0a2} [a_0,a_2]+[l_i,r_i]&\mbox{is not a parallelogram of $M$}& \mbox{ for }i=0,\ldots,k-1\\ \label{li+1ria0a1} [a_0,a_1]+[r_i,l_{i+1}]&\mbox{is not a parallelogram of $M$}&\mbox{ for }i=0,\ldots,k-1 \end{eqnarray} as $l_{i+1}\in{\rm int}\left([a_0,a_2]+[l_i,r_i]\right)$ and $l_{i}\in{\rm int}\left([a_0,a_1]+[r_i,l_{i+1}]\right)$.
Let us call the edges of $T_B$ of the form either $[l_i,r_i]$ or $[r_i,l_{i+1}]$ for $i=0,\ldots,k-1$ {\it small edges}, and the edges of $T_B$ of the form either $[p,l_i]$, $[q,l_i]$ for $i=0,\ldots,k$, or $[p,r_i]$, $[q,r_i]$ for $i=0,\ldots,k-1$ {\it long edges}. In other words, long edges of $T_B$ contain either $p$ or $q$, while small edges of $T_B$ contain neither. Concerning long edges, we prove that the number of parallelograms of $M$ of the form \begin{equation} \label{longedge} \mbox{$e_A+e_B$ for an edge $e_A$ of $T_A$ and a long edge $e_B$ of $T_B$ is at most $12$.} \end{equation} If $e_A$ is an edge of $T_A$, then there exist at most two cells of $M$ whose side is $p+e_A$. Since $T_A$ has three edges, there are at most six parallelograms of $M$ of the form $e_A+e_B$ where $e_A$ is an edge of $T_A$ and $e_B$ is an edge of $T_B$ with $p\in e_B$. Since the same estimate holds if $q\in e_B$, we conclude (\ref{longedge}). Finally, we prove that the number of parallelograms of $M$ of the form \begin{equation} \label{smalledge} \mbox{$e_A+e_B$ for an edge $e_A$ of $T_A$ and a small edge $e_B$ of $T_B$ is at most $12$.} \end{equation} The argument for (\ref{smalledge}) is based on the claim that if $e_A+e_B$ is a parallelogram of $M$ for an edge $e_A$ of $T_A$ and a small edge $e_B$ of $T_B$, then there is a long edge $e'_B$ of $T_B$ such that \begin{equation} \label{smalledge0} \mbox{$e_A+e'_B$ is a neighboring parallelogram of $M$.} \end{equation} We have $e_A\neq[a_1,a_2]$ according to (\ref{lia1a2}) and (\ref{ria1a2}). If $e_A=[a_0,a_1]$, then $e_B=[l_i,r_i]$ for some $i\in\{1,\ldots,k-1\}$ according to (\ref{li+1ria0a1}). Now $r_i+e_A$ intersects the interior of $[A+B]$ as $r_i\in{\rm int}\,[B]$, thus it is the edge of another cell of $M$, as well. This other cell is either a translate of $[A]$, which is impossible by (\ref{lia1a2}), (\ref{ria1a2}), and as $r_i\not\in p+[A],q+[A]$, or of the form $e_A+e'_B$ for an edge $e'_B\neq e_B$ of $T_B$ containing $r_i$. However, $e'_B\neq[r_i,l_{i+1}]$ by (\ref{li+1ria0a1}), therefore $e'_B$ is a long edge. On the other hand, if $e_A=[a_0,a_2]$, then $e_B=[r_i,l_{i+1}]$ for some $i\in\{1,\ldots,k-1\}$ according to (\ref{liria0a2}), and (\ref{smalledge0}) follows as above. Now if $e_A+e'_B$ is a parallelogram of $M$ for an edge $e_A$ of $T_A$ and a long edge $e'_B$ of $T_B$, then there is at most one neighboring parallelogram of the form $e_A+e_B$ for a small edge $e_B$ of $T_B$, because $e_A+e_B$ intersects neither $e_A+p$ nor $e_A+q$. In turn, (\ref{smalledge}) follows from (\ref{longedge}) and (\ref{smalledge0}). Moreover, we conclude (\ref{counter11}) from (\ref{longedge}) and (\ref{smalledge}). Finally, it follows from (\ref{counter11}) that if $k\geq 145$, then $$ |M_{11}|\leq 24<\sqrt{4k}=\sqrt{|T_A|\cdot|T_B|}.\hfill \mbox{ $\Box$}\\ $$
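As a quick numerical illustration of the bound in Lemma~\ref{|A+A|} (an illustration of ours, separate from the proofs above), the following Python sketch brute-forces $|A+A|$ and $\Delta_A$ for a small planar configuration; the staircase triangle below attains the bound with equality.
\begin{verbatim}
# Brute-force check of |A+A| >= 4|A| - Delta_A - 3 on a small planar
# configuration; Delta_A counts the points of A on the boundary of its
# convex hull.  Illustration only -- not part of the proofs.
def sumset_size(A):
    return len({(a[0] + b[0], a[1] + b[1]) for a in A for b in A})

def boundary_points(A):
    pts = sorted(set(A))
    def cross(o, p, q):
        return (p[0]-o[0])*(q[1]-o[1]) - (p[1]-o[1])*(q[0]-o[0])
    def chain(points):               # monotone chain; strict pops keep
        out = []                     # collinear boundary points
        for p in points:
            while len(out) >= 2 and cross(out[-2], out[-1], p) < 0:
                out.pop()
            out.append(p)
        return out
    return set(chain(pts)) | set(chain(pts[::-1]))

A = [(0,0), (1,0), (2,0), (0,1), (1,1), (0,2)]    # staircase triangle
lhs, rhs = sumset_size(A), 4*len(A) - len(boundary_points(A)) - 3
assert lhs >= rhs
print(lhs, rhs)                      # 15 15 -- the bound is attained
\end{verbatim}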
\section{Implementation of Quantum Arithmetic} \label{sec:adders} Integer quantum adders and multipliers are the base underlying circuits for all the circuit constructions described in this paper. Because an integer multiplier can be constructed from repeated controlled integer adders, the integer addition circuit can be considered the fundamental circuit of all the constructions. The basic modular multiplier constructed from modular adders requires only one type of adder circuit, while the Barrett and Montgomery modular multipliers require additional adders in the reduction circuitry. These adders have different widths compared to the adders used in the main multiplier, and therefore, it may be advantageous to utilize several different types of adders in these circuits. Because the adder is such a fundamental circuit in all our circuit constructions, the design of each adder used will have a significant impact on the design and resources required by our modular multiplier circuits. Many quantum adders have been proposed, and we summarize the ones used in our circuit constructions in \tab{adders:desc}. The adders fall into two main categories: adders based on reversible implementations of classical adders, and adders that operate in a transformed Fourier basis. Each of these adders presents a different trade-off with respect to the total gates required, the circuit depth, and the number of ancilla used. These resource trade-offs translate directly to the multiplier designs; however, the form of the adders can also impact the resource requirements of the multipliers. For example, the Fourier adders allow gates from multiple adders to be overlapped, which can reduce the overall depth of the multiplier. \begin{table}[h!] \caption{Quantum adder circuits used in multiplier constructions. The resource requirements are assuming the in-place addition of a classical value onto a quantum register of width $n$, and are given to leading order only. The resources for the Fourier transform basis adder assume the decomposition of the rotation gates required to a specified accuracy ($\epsilon$) using a technique such as described in~\cite{Ross2014}. } \vspace{5pt} \centering \begin{tabular}{l|c|c|c} \textbf{Adder type} & \textbf{\Toffoli//\T/ depth} & \textbf{\Toffoli//\T/ gates} & \textbf{Qubits required} \\ \hline Majority ripple~\cite{Cuccaro2004} & $2n$ & $2n$ & $2n+1$ \\ Prefix-ripple [\sect{circuits:prefix_adders}] & $n$ & $3n$ & $2n+1$ \\ Carry look-ahead~\cite{Draper2006} & $4\log_2(n)$ & $10n$ & $4n - \log_2(n) - 1$\\ Fourier transform basis~\cite{Draper2000} & $3\log_2(1/\epsilon)$ & $3n\log_2(1/\epsilon)$ & $n$ \\ \hline \end{tabular} \label{tab:adders:desc} \end{table} \input{prefix_adder.tex} \input{undo_adder.tex} \input{multipliers.tex} \input{fourier_adder.tex} \section{Quantum Barrett Multiplication} \label{sec:barrett} For classical modular arithmetic the Barrett reduction~\cite{Barrett1987} is beneficial because, for repeated reductions using the same modulus, it reduces the number of divisions by that modulus and replaces them with simpler multiply and shift operations. The standard method to reduce the number $t$ modulo $N$ would be to calculate $q = \lfloor t/N\rfloor$ and then calculate the reduced value as $t - qN$. The main idea behind the Barrett reduction is to calculate a fixed-point fractional factor representing the division by the modulus and then use this factor many times.
The only divisions involving $t$ can be picked to be constant powers of the machine word size and therefore can be implemented with shifts. Because of the limited precision of the fixed-point factor, the Barrett reduction may not reduce $t$ completely to a value $<N$. However, we can set the precision of the factor so that at most one additional subtraction by $N$ is required. Because of this we can view the Barrett reduction as an approximate reduction technique. With our quantum circuit we implement all operations on qubits using binary fixed-point logic. Therefore, shift operations by a power of $2$ just redefine the fixed point in our calculation. Additionally, the fixed-point fractional factor is a fixed classical value and can be pre-calculated offline. The quantum circuit to calculate $q$ is reduced to a quantum multiplication by a classical value. As is the case for the classical Barrett reduction, the quantum circuit calculates an approximate reduction factor, and we will want to carefully set the width of each of the individual operations in the reduction to bound the error and reduce the total number of gates required. Completing the reduction is one of the challenges in constructing a reversible circuit for it, and is one of the main differences between our implementation and that of~\cite{Zalka1998}. We show how to perform a complete reduction, whereas in~\cite{Zalka1998} they allow the case of a partial reduction and argue that doing so has an insignificant impact on the fidelity of the circuit. \subsection{The Barrett Reduction} \label{sec:barrett:classical} The Barrett reduction~\cite{Barrett1987} of an arbitrary number $t$ is defined as, \begin{equation} \REDC{t}{N} = (t - \tilde{q}N)\bmod N, \end{equation} where \begin{equation} \label{eq:barrett:red:qhat} \tilde{q} = \left\lfloor\left\lfloor\frac{t}{b^{k-1}}\right\rfloor \frac{\mu}{b^{k+1}}\right\rfloor \quad\mu = \left\lfloor\frac{b^{2k}}{N}\right\rfloor \end{equation} and the $\bmod N$ involves at most one subtraction of $N$ because the parameters $b$ and $k$ can be picked to ensure that $t - \tilde{q}N < 2N$. Typically $b$ is picked to be a small power of $2$ and $k$ must be picked so that $b^{k-1} \leq N < b^{k}$. The only operation involving the value $t$ is a multiplication. If $b$ is picked to be a power of two then the factors of $b^{k-1}$ and $b^{k+1}$ appearing in the denominators can be implemented as binary shifts, via redefinition of the fixed-point binary number. For our multiplier the value to be reduced is $t = Xy$. Further, we pick $k$ based on the value $N$, and we pick $b=2$, therefore $k$ is just the bit-width of $N$, which we will denote as $n$. The value $Xy$ is calculated as the sum over $i$: $\sum_i y_i\,(2^i X\bmod N)$. Because the shifted value is reduced before adding to the running sum, the total number of bits required for the product is $n+\log_2(n)$. In the calculation of $\tilde{q}$ we only use the most significant bits of $Xy$ and $\mu$. To understand the impact of truncation it is useful to look at the full-precision quotient $q$, defined as: \begin{equation} q = \left\lfloor\frac{X y}{2^{n-1}} \nu\right\rfloor \quad\nu = \frac{1}{2^{n+1}} \left(\frac{2^{2n}}{N}\right), \end{equation} where $\nu$ corresponds to the shifted $\mu$ and because $2^{n-1} < N < 2^{n}$ we have $1/2 < \nu < 1$.
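As a concrete illustration of the classical reduction, the following Python sketch (ours; a sketch under the parameterization $b=2$, $k=n$, with a full-precision $\mu$) evaluates \eqn{barrett:red:qhat} and applies the final conditional subtractions:
\begin{verbatim}
# Classical Barrett reduction with b = 2 and k = n = bit-width of N.
# Illustrative sketch only; with full-precision mu the standard analysis
# guarantees at most two correcting subtractions, and the truncation
# widths chosen in the text reduce this to one.
def barrett_reduce(t, N):
    n = N.bit_length()
    mu = (1 << (2 * n)) // N                 # mu = floor(2^{2n} / N)
    q = ((t >> (n - 1)) * mu) >> (n + 1)     # approximate quotient
    r = t - q * N
    while r >= N:                            # final correction(s)
        r -= N
    return r

N = 251                                      # odd 8-bit modulus
assert all(barrett_reduce(t, N) == t % N for t in range(N * N))
\end{verbatim}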
We can now write $q$ to separate the calculated values from the truncated ones, \begin{equation} q = \left\lfloor\frac{\widetilde{Xy}+(Xy)_t}{2^{n-1}} (\tilde{\nu}+\nu_t)\right\rfloor, \end{equation} where $\tilde{a}$ denotes the retained approximate value, and $a_t$ is the truncated portion of a value. Separating the computed terms from the truncated ones we have: \begin{equation} q = \left\lfloor\frac{\widetilde{Xy}}{2^{n-1}} \tilde{\nu} + \frac{(Xy)_t \nu}{2^{n-1}} + \frac{\widetilde{Xy} \nu_t}{2^{n-1}}\right\rfloor = \left\lfloor\tilde{q} + \frac{(Xy)_t \nu}{2^{n-1}} + \frac{\widetilde{Xy} \nu_t}{2^{n-1}}\right\rfloor. \end{equation} If we bound the sum of the truncated terms to be less than $2$ then only a single extra adjustment will be required. We could use the upper bits of $Xy$ to calculate $\tilde{q}$; however, it will be useful for our quantum implementation to have an independent $\widetilde{Xy}$, therefore we would like to minimize the width of this calculation. If we use the $n_k$ upper bits of each term in the sum then we can bound the first truncated term as: \begin{equation} \label{eqn:barrett:classical:xyt} \frac{(Xy)_t \nu}{2^{n-1}} < n\, \frac{2^{n_t}}{2^{n-1}} = \frac{2^{\log_2(n)}\, 2^{n_t}}{2^{n-1}}, \end{equation} where $n_t = n - n_k$ is the number of bits truncated from each term, and we have used the upper bound $\nu < 1$. If we pick $n_t$ such that $\log_2(n)+n_t \leq n-1$ then the error term will be $<1$. This implies that each term must be $n_k = \log_2(n) + 1$ bits, and the total approximate sum of $n$ values will require $2\log_2(n)+1$ bits. For the second truncated term, if we use $n_v$ bits for $\nu$ we can bound this term as, \begin{equation} \frac{\widetilde{Xy} \nu_t}{2^{n-1}} < \frac{n 2^n 2^{-n_v}}{2^{n-1}} = \frac{2^{\log_2(n)+n-n_v}}{2^{n-1}}. \end{equation} We then need to pick $n_v > \log_2(n)+1$ to ensure that this term is less than $1$. The resulting bit widths for $\widetilde{Xy}$ and $\nu$ result in a $(2\log_2(n)+2)$-by-$(\log_2(n)+1)$-bit multiplication to calculate $\tilde{q}$. We can truncate the $\widetilde{Xy}$ values used to calculate $\tilde{q}$ to $\log_2(n)+1$ bits and therefore we need a register of length $2\log_2(n)+2$ to hold $\tilde{q}$. \subsection{Quantum Modular Multiplier with Barrett Reduction} In \alg{barrett:quantum:circuit} we describe the algorithm to compute the modular product of two numbers using the Barrett reduction described in the previous section. A quantum circuit to calculate the out-of-place modular product of either a quantum register with a classical constant or two quantum registers can be constructed directly from this algorithm. In the following discussion we will just describe the case of a quantum product with a classical constant. We will return to the implications of a full quantum multiply in \sect{barrett:quantum-quantum}. The Barrett multiplier uses one input register, one output register, two work registers, and one single-qubit flag. The out-of-place multiplier performs the following operation: \begin{equation} \label{eqn:barrett:quantum:start} [n]{y} [n+m]{0} [2m]{0} [2m]{0} [1]{0} \lra{}{y}{yX\bmod N}{0}{0}{0}. \quad (m = \log_2(n)+1) \end{equation} In Steps~\ref{alg-line:barrett:quantum:xy} and~\ref{alg-line:barrett:quantum:xyapp} of the algorithm we calculate the full and approximate products and produce the state \begin{equation} {y} {Xy} {\smash{\widetilde{Xy}}} {0} {0}.
\end{equation} The full product $(Xy)$ requires $\ord{n^2}$ basic gates and constitutes the majority of the operations in the entire circuit. Calculating an approximate product eliminates the need to re-compute the full product later when we need to clear the reduction factor $\tilde{q}$. In Steps~\ref{alg-line:barrett:quantum:qhat} and~\ref{alg-line:barrett:quantum:reduce} of the algorithm we calculate the approximate reduction factor and use it to reduce the full product in-place, producing the state: \begin{equation} {y} {Xy-\tilde{q}N} {\smash{\widetilde{Xy}}} {\tilde{q}} {0}. \end{equation} As discussed above, the state reduced using $\tilde{q}$ may be greater than $N$ and therefore one more reduction by $N$ may be required. We perform this reduction; however, doing so results in a one-bit flag indicating whether the additional reduction was required. At this point we have the following state: \begin{equation} {y} {Xy\bmod N} {\smash{\widetilde{Xy}}} {\tilde{q}} {adj}, \end{equation} and we have the reduced product; however, two registers contain garbage, and the one-bit adjustment flag needs to be cleared. The two registers, containing $\widetilde{Xy}$ and $\tilde{q}$, can be cleared simply by reversing steps~\ref{alg-line:barrett:quantum:xyapp} and~\ref{alg-line:barrett:quantum:qhat} of the algorithm; however, the adjustment bit must be cleared in some other way. If we add back $\tilde{q} N$ to the product register $S$ then this register contains ${Xy-adjN}$. If we subtract $\widetilde{Xy}$ from this register we obtain ${(Xy)_t - adjN}$. But in \eqn{barrett:classical:xyt} we bounded the truncation term $(Xy)_t < 2^{n-1} < N$ and therefore if $(Xy)_t - adjN < 0$ this indicates that an adjustment has occurred. This fact can be used to clear the adjustment bit. We also note that we only need to compute the high-order bits for the addition done in step~\ref{alg-line:barrett:quantum:addqn} of the algorithm. \begin{algorithm} \caption{\BarPro{X,y}{N}} \label{alg:barrett:quantum:circuit} \begin{algorithmic}[1] \Require{Modulus $N$, integers $X,y < N$} \Ensure{Integer $P = yX\bmod N$} \State {$S \gets Xy$} \Comment{calculate the full product} \label{alg-line:barrett:quantum:xy} \State {$\widetilde{Xy} \gets app(Xy)$} \Comment{calculate an approximate product} \label{alg-line:barrett:quantum:xyapp} \State {$\tilde{q} \gets \widetilde{Xy} \tilde{\nu}$} \Comment{calculate the approximate reduction factor} \label{alg-line:barrett:quantum:qhat} \State {$S \gets S - \tilde{q}N$} \Comment{reduce S s.t.
$S<2N$} \label{alg-line:barrett:quantum:reduce} \If {$S \ge N$}\label{alg-line:barrett:quantum:compare} \State {$S \gets S - N$} \Comment{reduce by N if required} \State {$adj \gets 1$} \Comment{reduction produces one bit of garbage} \EndIf \State{$S \gets S + \tilde{q}N$} \Comment{$S = Xy - adjN$} \label{alg-line:barrett:quantum:addqn} \If {$S_{[n+\log_2(n):n-\log_2(n)]} - \widetilde{Xy} < 0$}\label{alg-line:barrett:quantum:appcompare} \State {$adj \gets adj \oplus 1$} \Comment{clear adjustment flag} \EndIf \State {$S \gets S - \tilde{q}N$} \Comment{reset to modular product} \label{alg-line:barrett:quantum:subqn} \State {$\tilde{q} \gets 0$} \Comment{reverse~\ref{alg-line:barrett:quantum:qhat} to clear $\tilde{q}$} \label{alg-line:barrett:quantum:qhat-rev} \State {$\widetilde{Xy} \gets 0$} \Comment{reverse~\ref{alg-line:barrett:quantum:xyapp} to clear $\widetilde{Xy}$} \label{alg-line:barrett:quantum:xyapp-rev} \end{algorithmic} \end{algorithm} From \eqn{barrett:quantum:start} we see that the out-of-place modular multiplication of a quantum register by a constant requires $2n+5m+1 = 2n + 5\log_2(n)+6$ total qubits. This is compared to the $2n$ qubits required by the standard method that utilizes modular adders. The additional overhead of $5\log_2(n)+6$ is small for realistic sized multipliers. In \tab{barrett:quantum:adders} we show the overhead in terms of the size and total number of addition operations required per step in \alg{barrett:quantum:circuit}. For comparison, the standard modular-addition-based multiplier would require $3n$ adders, each of width $n$. The total number of gates is linear in the number and width of the adders; therefore, the product of the adder width times the number of adders gives, to first order, the number of gates required. For the Barrett multiplier this product is: $n^2 + 14n\log_2(n) + n + 17(\log_2(n))^2 + \log_2(n)$, compared to the standard multiplier with $3n\cdot n = 3n^2$. For realistic sized multipliers the $n^2$ term dominates for both adders and the Barrett multiplier provides close to a factor of $3$ fewer gates than the standard method. The circuit depth of the multipliers will depend on the implementation of the adders used as well as how the individual steps of the multiplier overlap. \begin{table} \centering \begin{tabular}{r|c|c} \hline & \multicolumn{2}{c}{Width of Adder} \\ \cline{2-3} Step & $n+\log_2(n)$ & $\log_2(n)$ \\ \hline $1$ & $n$ & \\ $2$ & & $2n$ \\ $3$ & & $4\log_2(n)$ \\ $4$ & $3\log_2(n)$ & \\ $6$ & $1$ & \\ $9$ & $3\log_2(n)$ & \\ $13$ & $3\log_2(n)$ & \\ $14$ & & $4\log_2(n)$ \\ $15$ & & $2n$ \\ \hline & $n+9\log_2(n)+1$ & $4n + 8\log_2(n)$ \\ \end{tabular} \caption{Full-width and log-width adders required by the steps of \alg{barrett:quantum:circuit}. The $n$ adders of width $n + \log_2(n)$ required to calculate the full product dominate the required resources.} \label{tab:barrett:quantum:adders} \end{table} \subsubsection{Quantum Barrett Multiplication with Two Quantum Inputs} \label{sec:barrett:quantum-quantum} The Barrett reduction method can be extended to a full-quantum multiplier, i.e., one where both inputs are contained in quantum registers. For this multiplier we can either add the shifted terms $2^ix$ directly or reduce them modulo $N$ before adding them. Since $x$ is a quantum value, reducing them would require the shift and reduce circuit described in \sect{mod-mult:mod-shift}, but the operation of the Barrett reduction is the same as the case when one input is classical.
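Returning to the flag-clearing argument above, the register arithmetic of \alg{barrett:quantum:circuit} can be mirrored classically. The Python sketch below (ours; it tracks register values only and ignores all quantum structure) checks both that a single conditional subtraction completes the reduction and that the sign test of step~10 recovers the adjustment flag:
\begin{verbatim}
# Classical walk-through of the register arithmetic in the Barrett
# multiplier.  Truncation widths follow the text: each partial product
# keeps its top n_k = log2(n)+1 bits.  Sketch only.
def barrett_steps(X, y, N):
    n = N.bit_length()                       # here n = 8
    m = (n - 1).bit_length() + 1             # n_k = log2(n) + 1
    nt = n - m                               # bits truncated per term
    terms = [((X << i) % N) for i in range(n) if (y >> i) & 1]
    S = sum(terms)                           # full product register
    Xy_app = sum((t >> nt) << nt for t in terms)   # approximate product
    mu = (1 << (2 * n)) // N
    q = ((Xy_app >> (n - 1)) * mu) >> (n + 1)      # reduction factor
    S -= q * N
    adj = S >= N                             # step 5: extra reduction?
    if adj:
        S -= N
    result = S
    S += q * N                               # step 9: S = Xy - adj*N
    assert (S - Xy_app < 0) == adj           # step 10 clears the flag
    return result

N = 251
for X in range(1, N):
    for y in range(1, N):
        assert barrett_steps(X, y, N) == (X * y) % N
\end{verbatim}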
If we assume that the accumulated shifts must be reversed at the end of the multiplier then $2n$ shifts are required per out-of-place multiplication. Therefore the full-quantum Barrett multiplier requires ${\sim}3n$ additions compared to the $3n+2n=5n$ additions that would be required by the corresponding standard quantum-quantum modular multiplier. Adding the shifted terms directly would eliminate the $2n$ reduction steps, but would require a $2n$-bit product register and would require higher precision in the Barrett reduction. As in the case of the standard modular multiplier, constructing an in-place full-quantum multiplier would require calculating a multiplicative inverse, which would dominate the total cost of the multiplier. \section{Conclusions and Future Work} \label{sec:conclusion} We have presented three novel techniques for high-performance quantum modular multiplication, adapting fast classical frameworks for division, Montgomery residue arithmetic, and Barrett reduction to efficient reversible procedures. Our techniques are independent of the lower-level implementation of binary quantum addition, and therefore can make use of any current or future quantum addition implementations. As an illustration of this we have constructed and numerically analyzed quantum circuits resulting from three different binary adder implementations. Each modular multiplication technique implements exact, out-of-place modular multiplication with the asymptotic depth and complexity of a single non-modular multiplication, representing a factor of three improvement over the standard quantum modular multiplication technique comprising repeated modular addition. The added gate count and ancilla requirements of our constructions are only \ord{\log n}. The asymptotic depth and gate count of exact, in-place, controlled modular multiplication (comprising two out-of-place modular multipliers and $3n$ controlled-shift gates) is therefore that of $2n$ quantum adders, comparable to that previously achieved with inexact binary multipliers \cite{Zalka1998} and improving on the $6n$ adders required by the typical modular-addition approach. A unique advantage of the modular multipliers introduced in this work is their particular amenability to quantum Fourier-basis arithmetic. All three proposed circuits require only $2n+\ord{\log n}$ qubits when implemented with Fourier-basis operations, asymptotically matching the low ancilla requirements of Beauregard's Fourier-basis modular-addition-based multiplier~\cite{Beauregard2003}. Both the Barrett and Montgomery reduction techniques circumvent the need for repeated comparison operations, and therefore the corresponding \QFT/ and \QFTd/ circuits, which dominate the depth and complexity in the modular addition approach, are not required. Taking advantage of the gate parallelization afforded by Fourier-basis arithmetic, both circuits can then be constructed with an asymptotic depth of $14n$ two-qubit gates. This compares favorably with the $1000n$-gate latency of the fastest prior exact Fourier-basis modular multiplier~\cite{Pavlidis2014}, and is comparable to the $12n$-gate latency of the fastest inexact circuit~\cite{Kutin2006}. Crucially, both prior circuits also expand the ancilla cost of Beauregard's circuit, asymptotically requiring $9n$ and $3n$ qubits, respectively.
Direct comparison between quantum Fourier-basis and binary arithmetic circuits is generally difficult for fault-tolerant systems, as the resource cost of arbitrarily-angled Fourier-basis rotations and $\Toffoli/$ gates depends highly on the underlying quantum computing hardware and error correction strategy employed. It remains an open question whether the efficiency and speedup afforded by Fourier-basis circuits will be applicable to real quantum systems. However, in \sect{resources} we have shown that with reasonable architectural assumptions, Fourier-basis modular multipliers can be constructed with performance comparable to the fastest binary adder. The space-time tradeoff between the two types of addition circuits is roughly equivalent, with the Fourier-basis adders requiring fewer qubits but more total gates. In this work, we have primarily discussed circuits for \emph{quantum-classical} modular multiplication, where one of the two input multiplicands is a classical value known at ``compile time.'' Certain important quantum applications, such as the breaking of elliptic-curve cryptographic keys~\cite{Proos:2003}, instead require the implementation of in-place, \emph{quantum-quantum} modular multiplication. In the circuits we have presented, only the initial \GATE{Multiplication} and final \GATE{Uncomputation} accumulators depend on the individual input multiplicands, while the \GATE{Reduction} stage differentiating each circuit acts only on the computed product register. An \emph{out-of-place} quantum-quantum modular multiplier is then easily derived by adapting these accumulators to a second quantum input, as described in \sect{mod-mult}. However, in order to construct an in-place modular multiplier from this operator, we now require the multiplicative modular inverse of a quantum input state. Reversible circuits implementing the extended Euclidean algorithm have been demonstrated but overwhelmingly dominate the complexity of the operation~\cite{Proos:2003}. We have shown in the context of modular multiplication that with the adaptation of numerical or representational techniques we can mitigate the overhead of reversible reduction operations. As Euclid's algorithm predominantly comprises the sequential calculation of quotients and remainders, the techniques applied here present a similar and potentially significant opportunity for improving the implementation of this operation and the corresponding class of problems. Finally, the modular multipliers we have introduced are not limited to the arithmetic techniques discussed in this paper. For example, the techniques for inexact computation~\cite{Zalka1998,Kutin2006}, fast large-number multiplication~\cite{Zalka1998}, or parallel arithmetic over a superlinear number of qubits~\cite{Gossett1998,VanMeter2005,Pham2013} could be applied independently to our proposed frameworks. Similarly, the techniques could be extended to different domains; for example, implementations of both Barrett and Montgomery reduction over the Galois field $GF(2^m)$ (critical to elliptic curve cryptography) are well-known classically. \section{Quantum Modular Multiplication with Division} \label{sec:div} Our first implementation of a quantum modular multiplier is the most straightforward: after an initial multiplication, we implement a reversible division operation comprising trial subtractions and controlled re-additions.
Standalone modular reduction being irreversible, we first define a quantum division operator, \begin{equation} \eq{div:div} [n+m]{t} \lra[\ensuremath{\text{\textsc{Q-}}}\DIV{N}] [m]{t \bdiv N} [n]{t \bmod N} = [m]{q} [n]{t - qN}, \end{equation} where [n+m]{t} is the $(n+m)$-bit result of the initial multiplication, and $(\bdiv)$ indicates integer division such that $(t\bdiv N) = \flr{t/N} = q$ is the computed quotient. Classically, modular reduction is constructed from a division operation by simply discarding the quotient and preserving the remainder. In the reversible case, the quotient must instead be uncomputed, exacerbating the computational discrepancy between multiplication and division and sourcing the principal bottleneck in typical quantum implementations of modular multiplication. Here, we utilize information present in both the input and output registers of the out-of-place modular multiplier in order to clear [m]{q} while circumventing a full Bennett-style reverse computation. The depth of the $\ensuremath{\text{\textsc{Q-}}}\DIV{}$ operator is then poly-logarithmic in $n$, so that the out-of-place modular multiplication operation, \begin{equation} [n+m]{0}[n]{x} \lra [n+m]{t}[n]{x} \lra[\ensuremath{\text{\textsc{Q-}}}\DIV{N}] [m]{q} [n]{t \bmod N} [n]{x} \lra [m]{0} [n]{t \bmod N} [n]{x} \end{equation} is asymptotically dominated by the initial computation of {t}. \subsection{Multiplication Stage} \label{sec:div:mult} We must first compute a state {t} such that $t$ is congruent to the non-modular product $X*y$. For the purpose of modulo-$N$ multiplication, we can reduce partial products prior to their accumulation, replacing $(Xy)$ with, \begin{equation} t \defeq \sum_{k=0}^{n-1} y_k\qty(2^k X \bmod N), \end{equation} such that $t\equiv Xy\pmod{N}$ and is bounded by $t<nN$. We then require at most $n+m=\clog[2]{(Nn)}=n+\clog[2]{n}$ bits to hold [n+m]{t}, so that the initial multiplication stage, \begin{equation} [n+m]{0}[n]{y} \lra[\ensuremath{\text{\textsc{Q-}}}\ensuremath{\text{\textsc{MAC}}}(X\mid N)] [n+m]{t}[n]{y}, \end{equation} consists of $n$ in-place, width-$(n+m)$ quantum adders, conditioned on the bits of [n]{y}. \subsection{Division Stage} \label{sec:div:div} Using [n+m]{t} as the input to the quantum division operation, we require at most $m\defeq\clog[2]{n}$ bits for the quotient register [m]{q}. We compute the quotient bitwise, beginning with its MSB, $q_{m-1}$. After unconditionally subtracting $2^{m-1}N$, the sign bit (MSB) of the register indicates whether the input $t<2^{m-1}N$. Using the sign to condition the in-place re-addition of $2^{m-1}N$ onto the remaining bits of the register, we return its modulo-$(2^{m-1}N)$ reduction. We are left with the single sign bit indicating that the subtraction was undone, that is, that $q_{m-1}=0$. After inverting the sign, we have therefore performed the operation, \begin{equation} [n+m]{t} \lra {q_{m-1}} [n+m-1]{ t - q_{m-1}2^{m-1} N } = {q_{m-1}} [n+m-1]{ t\bmod (2^{m-1}N) }, \end{equation} where the resulting value in the accumulator is bounded by $2^{m-1}N$, requiring only the $(n+m-1)$ LSBs of the register and allowing $q_{m-1}$ to be preserved in the MSB location. We iterate this process for $k=m-1,...,0$, each time reducing the register modulo-$2^{k}N$ and computing the bit $q_k$ of [m]{q}. After the final ($k=0$) reduction, we have constructed [m]{q} over the $m$ MSBs of the input register, while leaving the remainder $[n]{t\bmod N}$ in the remaining $n$ bits.
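A classical sketch (ours) of this bitwise procedure makes the iteration explicit; it is ordinary restoring division, with one quotient bit recorded per trial subtraction:
\begin{verbatim}
# Classical analogue of Q-DIV: m trial subtractions of 2^k * N, MSB-first,
# each undone when the difference is negative.  In the quantum circuit the
# retained bit is the (inverted) sign; here we record q_k directly.
def q_div(t, N, m):
    q = 0
    for k in range(m - 1, -1, -1):
        t -= N << k                  # trial subtraction of 2^k * N
        if t < 0:
            t += N << k              # conditional re-addition ("undo")
        else:
            q |= 1 << k              # quotient bit q_k = 1
    return q, t                      # remainder t mod N is left behind

N, n = 251, 8
m = (n - 1).bit_length()             # m = ceil(log2 n) = 3
for t0 in range(n * N):              # t < n*N as produced by the multiplier
    assert q_div(t0, N, m) == (t0 // N, t0 % N)
\end{verbatim}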
The complete \ensuremath{\text{\textsc{Q-}}}\DIV{N} operation is shown as a quantum circuit in \fig{div:div}. \begin{figure}[ht] \begin{center} \includecircuit[8]{division.pdf} \end{center} \caption{Quantum division operation described in \sect{div:div}. At each step $k$, we perform a trial subtraction and conditional re-addition of $2^{k}N$, computing one (inverted) bit of the quotient $q$ while reducing the input state modulo-$(2^kN)$. The subtraction and re-addition of each stage can be merged into a single in-place quantum select-undo adder (see \apx{adders:select-undo}).} \label{fig:div:div} \end{figure} As described in \apx{adders:select-undo}, an in-place quantum adder generally comprises a pair of out-of-place adders. In the select-undo quantum adder construction introduced in the appendix, control is efficiently added to the in-place adder through the second adder in the pair, which selectively undoes the addition performed by the primary adder. In this structure, we require a control qubit only for the second adder. We can use the select-undo adder to merge the trial subtractions and subsequent conditional re-additions of the division circuit, performing the trial subtractions out-of-place and using the sign (MSB) of the difference to conditionally undo the result. As each subtraction by $2^kN$ affects only the bits more significant than $k$, the division operator comprises in total $m$ in-place, $n$-qubit select-undo adders. Assuming logarithmic-depth prefix adders (\apx{circuits:prefix_adders}), the overall depth of this stage is $\ord{m\log n}=\ord{\log^2 n}$, with \ord{n\log n} total quantum gates. Notably, the division stage constructed here assumes nothing of the initial multiplicands used to construct $t$, taking only the state {t} and the classical modulus as inputs. This stage is therefore independent of whether we are implementing a quantum-classical or quantum-quantum modular multiplier. \subsection{Uncomputation Stage} \label{sec:div:uncompute} We now must uncompute the quotient register. Unlike the division stage, this requires the incorporation of the individual multiplicands or partial products used in the calculation of ${t}$. However, we can avoid the complete reversal of the steps computing $[m]{q}$ by utilizing information contained in both the input [n]{y} and output [n]{t\bmod N} registers. Our strategy requires that we first multiply the quotient register by the modulus in-place: \begin{equation} [m]{q} \lra[\ensuremath{\text{\textsc{Q-}}}\ensuremath{\text{\textsc{MUL}}}(N)] [m]{qN}, \end{equation} where modulo-$2^m$ reduction is implicit in the size of the $m$-bit register. As described in \apx{appen:mult}, for odd $N$ we can reversibly multiply $qN\pmod{2^m}$ in-place, with $m-1$ quantum adders of sizes $1,...,m-1$. We then add the $m$ LSBs of ${t\bmod N}$ to the quotient register, \begin{equation} \eq{div:uncompute:quotient} [m]{qN} \lra [m]{qN + (t\bmod N)} = [m]{qN + (t-qN)} = [m]{t}, \end{equation} leaving us with the result computed in the initial multiplication stage, truncated to its $m$ LSBs. We can clear ${t}$ by a reverse of the multiplication stage, truncating all operations to their first $m$ bits. Though we now require only $m$-bit addition, our use of reduced partial products in computing ${t}$ requires that we perform $n$ total additions, each controlled by a bit of [n]{y}.
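The identities driving this uncomputation are easily checked classically. In the Python sketch below (ours), all arithmetic is taken modulo $2^m$, and the reversibility of the in-place multiplication rests on $N$ being odd:
\begin{verbatim}
# Quotient uncomputation, classically: multiplying q by N modulo 2^m and
# adding the m LSBs of the remainder reconstructs the m LSBs of t,
# as in eq. (div:uncompute:quotient).  Sketch only.
N, n = 251, 8
m = (n - 1).bit_length()
M = 1 << m
for t in range(n * N):
    q, r = divmod(t, N)
    assert (q * N + r) % M == t % M    # q*N + (t mod N) = t  (mod 2^m)

# The in-place multiply is reversible precisely because N is odd:
Ninv = pow(N, -1, M)                   # N^{-1} mod 2^m exists
assert all((q * N % M) * Ninv % M == q for q in range(M))
\end{verbatim}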
Given logarithmic-depth quantum adders, the depth of this stage would then be \ord{n\log\log n}, dominating the \ord{\log^2n} depth of the \ensuremath{\text{\textsc{Q-}}}\DIV{} operation. We therefore make use of the work bits necessary for the multiplication and division stage adders to parallelize the narrower adders required of the uncomputation stage. Dividing the \ord{n} work bits into $\ord{n/m}=\ord{n/\log n}$ separate accumulation registers, we can distribute and sum the $n$ addends in groups of $\ord{m}$. The independent accumulators can then be combined with $\ord{\log(n/m)}=\ord{m}$ quantum-quantum adders in a binary tree structure in order to compute the complete sum $[m]{t}$. After using the result to clear the quotient register, we must uncompute the individual accumulators by reversing the parallelized adds. The overall depth of this procedure is then $\ord{m\log m}=\ord{\log n\log\log n}$, no longer dominating the \ord{\log^2 n}-depth division stage. \begin{figure}[ht] \begin{center} \includecircuit[9]{div-full.pdf} \end{center} \caption{Out-of-place modular multiplier constructed from the $\ensuremath{\text{\textsc{Q-}}}\DIV{N}$ operation. The final sequence of subtractions in the uncomputation stage is shown in series, but can be parallelized across work qubits in order to minimize the depth of this stage.} \label{fig:div:full} \end{figure} \subsection{Arithmetic in Fourier Transform Basis} \label{sec:fourier:num-rep} Central to quantum Fourier-basis arithmetic is the Fourier number state representation. Defining the single-qubit state, \begin{align} \fket[1]{\alpha} &\defeq \cos(\alpha\pi/2){0} - i \sin(\alpha\pi/2){1} \nonumber\\ &= \Ry/(\alpha\pi) {0}, \end{align} where $\Ry/(\alpha) = e^{-i\alpha \sigma_y/2}$ represents a single-qubit $y$-axis rotation, the Fourier representation of an $n$-bit number $y$ is equivalent to the product state, \begin{equation} \eq{fourier:product-state} \fket[n]{y} \defeq \bigotimes_{k=0}^{n-1} \fKet[1]{ \frac{y}{2^{k}} }. \end{equation} Note that this differs from the typical definition~\cite{Nielsen2000}, namely $\fket[n]{y}^{'} \defeq \sum_{j=0}^{2^{n}-1} e^{-i jy\pi/2^{n}} [n]{j}$. The latter can be recovered as $(\S/\H/\S/^\dagger)^{\otimes n}\fket[n]{y}$, where \S/ and \H/ indicate single-qubit phase and Hadamard gates, respectively. \subsubsection{Fourier-Basis Summation} \label{sec:adders:fourier-add} Uncontrolled, in-place addition of a classical parameter $X$ to an $n$-bit quantum Fourier register \fket[n]{y} requires $n$ unconditional $\Ry/$-rotations, \begin{align} \eq{fourier:qft-add} \fket[n]{y+X} &= \bigotimes_{k=0}^{n-1} \fKet[1]{ \frac{y+X}{2^{k}} }\nonumber\\ &= \bigotimes_{k=0}^{n-1} \Ry/\qty( \frac{X\pi}{2^{k}} ) \fKet[1]{ \frac{ y}{2^k} } \nonumber\\ &= \qty[ \prod_{k=0}^{n-1} \Ry/^{(k)} \qty( \frac{X\pi}{2^{k}} )] \fket[n]{y}, \end{align} where $\Ry/^{(k)}(\alpha)[n]{y}$ indicates a $\Ry/(\alpha)$ rotation of the $k$th qubit of $[n]{y}$. Controlled Fourier addition of a classical parameter then requires conditioning each of these rotation gates on a single control qubit, inhibiting the parallelization of a standalone adder. Instead, the commutability of the Fourier rotations (and lack of carries to propagate) admits large-scale parallelization for multiple additions.
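To make the carry-free character of Fourier addition explicit, the following toy Python model (ours; it tracks only the rotation angle of each qubit, not amplitudes) adds a constant by independent per-qubit angle increments and decodes the binary sum afterwards. Because the increments commute, successive additions can be freely interleaved, which is the parallelization exploited below:
\begin{verbatim}
# Toy angle-tracking model of Fourier-basis addition (illustration only).
# Qubit k of the Fourier register carries the angle (y / 2^k) mod 2, in
# units of pi; the modulus reflects the physical period of the state.
n = 8

def encode(y):
    return [(y / 2**k) % 2 for k in range(n)]

def add_const(angles, X):            # n independent rotations -- no carries
    return [(a + X / 2**k) % 2 for k, a in enumerate(angles)]

def decode(angles):                  # bit k is the integer part of angle k;
    s = 0                            # the fractional part encodes the lower
    for k, a in enumerate(angles):   # bits, which is why reading one bit
        s |= int(a) << k             # needs a QFT^dagger in a real circuit
    return s

y, X = 0b10110101, 0b01101110
assert decode(add_const(encode(y), X)) == (y + X) % 2**n
\end{verbatim}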
In particular, as in \apx{appen:mult} we can construct an out-of-place multiply-accumulate operator, \begin{equation} \fket{z}[n]{y} \lra[\ensuremath{\Phi\text{-}}\ensuremath{\text{\textsc{MAC}}}(X)] \fket{Xy+z} [n]{y}, \end{equation} with each bit of a binary input register [n]{y} controlling the Fourier-basis addition of a classical value to a Fourier-basis accumulation register. The rotations required by this sum can be rearranged and executed in parallel, with a total depth of at most $\max(w,n)$ gates, where $w$ is the width of the accumulation register. The addition of a computational-basis register {y} to a Fourier-basis register \fket{x}, \begin{equation} \fket[n]{x } [n]{y} \lra \fket[n]{y+x} [n]{y}, \end{equation} requires constructing the $\Ry/^{(k)}(y\pi/2^k)$ rotations bitwise, performing the set of conditional rotations $\Ry/^{(k)}(2^ly_l\pi/2^k)$ for each bit $y_l$ of $[n]{y}$. Equivalently, the quantum-quantum Fourier-basis adder is simply a special case of the Fourier multiply-accumulate operation, for which the multiplier is one. Finally, observing \eqn{fourier:qft-add}, the addition of a classical value $X\ll2^n$ will involve asymptotically small rotations on bits more significant than $k\sim\clog[2]{X}$. As in~\cite{Barreiro2011a}, these operations can therefore be truncated to \ord{\log{nX}} gates with negligible loss of, or possibly improved, fidelity. \subsubsection{In-place Fourier-Basis Multiplication} \label{sec:fourier:mult} Given a binary input state [n]{y}, we can also perform an in-place multiply using Fourier-basis adders. Observing the $k$th bit of the Fourier state $\fket{Xy}$ (again assuming odd $X$), \begin{equation} \fkett[1]{ \frac{Xy}{2^k} } = \fkett[1]{y_k + \sum_{j<k} \frac{y_j2^jX}{2^{k}}} = \pm\prod_{j<k}\Ry/\bigg(\frac{y_j2^jX\pi}{2^{k}}\bigg){y_k}, \end{equation} we find the binary input bit {y_k}, rotated by the less significant qubits in the register. Again beginning with the MSB, we perform in-place additions of $X/2$ controlled by each bit $y_k$ of the input {y} and acting on the more significant bits of the register. The resulting state is the Fourier representation of the product: \begin{equation} [n]{y} \lra[\ensuremath{\Phi\text{-}}\ensuremath{\text{\textsc{MUL}}}(X)] \fket[n]{Xy}. \end{equation} The quantum Fourier transform, ${y} \lra \fket{y}$, is then the special case $\ensuremath{\Phi\text{-}}\ensuremath{\text{\textsc{MUL}}}(1)$. Crucially, the $\ensuremath{\Phi\text{-}}\ensuremath{\text{\textsc{MUL}}}(X)$ can be parallelized identically to the standalone \QFT/, with a depth of $2n+\ord{1}$ on a variety of computational topologies~\cite{Cleve2000, Moore2001, Pham2013}. \section{Modular Multiplication with Quantum Fourier Arithmetic} \label{sec:fourier} Because of the elimination of division from the modular reduction step, our Barrett and Montgomery modular multiplier constructions are uniquely amenable to arithmetic in the quantum Fourier number basis. In the Fourier basis, number states are represented by the quantum Fourier transform (\QFT/) of their binary state, \begin{equation} \eq{fourier:state} \fket[n]{y} \defeq \QFT/[n]{y} = \bigotimes_{k=0}^{n-1} \bigg\{ \cos(\frac{y\pi}{2^{k+1}}){0} - i \sin(\frac{y\pi}{2^{k+1}}){1} \bigg\}, \end{equation} where we have commuted and dropped Hadamard gates from the \QFT/ definition (see \apx{fourier:num-rep} for details). Arithmetic in this representation circumvents the ancillary work qubits and data dependencies required by the carry bits needed for binary-basis arithmetic, absorbing these bits into the continuous state of each qubit.
Fourier-basis addition decomposes into commuting rotations acting on each qubit in the register independently, enabling large-scale parallelization of arithmetical operations with minimal ancilla and on a variety of computational topologies (e.g. a nearest-neighbor architecture)~\cite{Cleve2000,Moore2001,Pham2013}. The bottleneck of quantum Fourier-basis arithmetic is in the implementation of quantum control. The continuous state $\fket[1]{y/2^k}$ of the $k$th qubit of a Fourier-basis register \fket[n]{y} contains information about the bit $y_k$ as well as each bit $y_{j<k}$ less significant than $y_k$. In order to condition a quantum operation on $y_k$, we therefore require a $k$-bit \QFTd/ to extract the single bit as a $Z$-eigenstate. Quantum Fourier arithmetic is generally characterized by these repeated transformations to and from the Fourier basis. This limitation introduces a significant computational discrepancy between quantum Fourier-basis multiplication and division. Each reduction step composing a typical division operation comprises a trial subtraction followed by a controlled re-addition conditioned on the sign bit (MSB) of the difference. For a quantum register in Fourier representation, each such comparison requires a full-register \QFTd/ in order to extract the sign as a binary state, and a subsequent \QFT/ to return the rest of the register to the Fourier basis prior to the re-addition. In addition to the overhead of the \QFT/s themselves, each transform acts as a computational barrier point, inhibiting the parallelization of sequential adders. A single modular adder constructed as in \sect{mod-mult:mod-add} then requires two \QFTd/-\QFT/ pairs (for the two embedded comparisons), totaling $4n$ full-register \QFT/-like operations for an out-of-place quantum modular multiplier constructed from Fourier modular adders~\cite{Draper2000,Beauregard2003}. \subsection{Quantum Fourier Division} \label{sec:fourier:div} The utility of Fourier-basis arithmetic for quantum modular multiplication can be improved by separating the component multiplication and division stages. As described in \sect{div:mult}, the initial multiplication stage comprises only controlled additions to an accumulation register. It can therefore be implemented with parallelized Fourier adders such that from a binary input state [n]{y} we accumulate the Fourier-basis state \fket[n+m]{t}, \begin{equation} [n+m]{0}[n]{y} \lra[\ensuremath{\Phi\text{-}}\ensuremath{\text{\textsc{MAC}}}(X\mid N)] \fket[n+m]{t}[n]{y}, \end{equation} with a total parallelized depth of $(n+m)$ controlled rotation gates. The quantum division operator (\ensuremath{\text{\textsc{Q-}}}\DIV{}) defined in \sect{div} consists of $m=\clog[2]{n}$ reduction steps. Using Fourier-basis arithmetic, each trial subtraction must be followed by $n$-bit \QFTd/ and \QFT/ operations in order to extract the resulting sign bit. After $m$ steps, the remainder $(t\bmod N)$ is held in Fourier representation, while the quotient, computed by inverting the $m$ extracted sign qubits, is constructed in binary: \begin{equation} \fket[n+m]{t} \lra[\ensuremath{\Phi\text{-}}\DIV{N}] [m]{q} \fket[n]{t\bmod N}. \end{equation} In order to construct the in-place modular multiplier described in \sect{mod-mult:in-place}, both outputs of the out-of-place operator must share the same representation. We therefore apply one more $n$-bit \QFTd/ to the remainder \fket[n]{t\bmod N}, leaving both it and the input [n]{y} in binary representation.
\begin{figure}[ht] \centering \includecircuit[7]{fourier-div.pdf} \caption{\label{fig:fourier:div}$\ensuremath{\Phi\text{-}}\DIV{N}$ circuit, incorporating the intermediate \QFTd/ and \QFT/ operators required to extract the sign after each trial subtraction. The quotient, constructed by inverting the extracted sign bits, is computed in binary representation, while the remainder is output in the Fourier basis.} \end{figure} Finally, we uncompute the quotient register as in \sect{div:uncompute}: \begin{equation} [m]{ q} \lra[\ensuremath{\Phi\text{-}}\ensuremath{\text{\textsc{MUL}}}(N)] \fket[m]{qN} \lra \fket{qN + (t\bmod N)} = \fket[m]{t} \lra [m]{0}. \end{equation} Using the $\ensuremath{\Phi\text{-}}\ensuremath{\text{\textsc{MUL}}}(N)$ operator defined in \apx{fourier:mult}, we first multiply the register by $N$ (modulo-$2^m$) in-place, while simultaneously transforming the result to its Fourier representation. We can then add the remainder and uncompute the resulting $\fket[m]{t}$ register with a combined total of $(n+m)$ width-$m$ Fourier adders, controlled by the $m$ LSBs of the output register and all $n$ bits of [n]{y}. The gates composing the Fourier uncomputation stage can be overlapped with gates in either the \ensuremath{\Phi\text{-}}\DIV{N} operation or the final \QFTd/. \subsubsection{Analysis} The circuit dimensions of the out-of-place modular multiplier constructed from the $\ensuremath{\Phi\text{-}}\DIV{}$ operation are broken down in \tab{fourier:div:costs}. The total two-qubit gate count of the multiplier is, \begin{equation} \#(\text{gates}) = n^2m + 3n^2/2 + 3nm - n/2 + m^2/2 - m/2, \end{equation} parallelized to a depth of \begin{equation} 4nm+2n+\ord{m}. \end{equation} Though this is a significant speedup over Fourier modular multiplication via modular addition~\cite{Draper2000,Beauregard2003}, the overall depth scales as \ord{n\log n}, offering no speedup over the binary-basis multiplier constructed with logarithmic-depth adders. However, we require no work qubits for the Fourier adders, so that the total operator uses only the $(2n+m)$ qubits needed to hold \fket[n+m]{t} alongside the input state [n]{y}. The in-place multiplier doubles the gate count and depth requirements, and adding a quantum control adds \ord{n} gates. \begin{table}[H] \centering \caption{\label{tab:fourier:div:costs}% Total two-qubit gates and parallelized depth of an $n$-bit out-of-place Fourier modular multiplier constructed from the $\ensuremath{\Phi\text{-}}\DIV{N}$ operator (where $m=\clog[2]{n}$). % } \vspace{6pt} \begin{tabular}{l l l l l l l} \textbf{Stage} & & \textbf{Adds}& \textbf{$\QFT/$s} & \textbf{Width} & \textbf{Gates} & \textbf{Depth} \\ \hline\\[-9pt] \multicolumn{2}{l}{Multiplication} & $n$ & $0$ & $n+m$ & $n^2+nm$ & $n+m$ \\ \multicolumn{2}{l}{Division (\ensuremath{\Phi\text{-}}\DIV{})} & $m$ & $2m$ & $n$ & $n^2m+nm$ & $4nm$ \\ \multicolumn{2}{l}{Output \QFTd/} & $0$ & $1$ & $n$ & $(n^2-n)/2$ & $2n$ \\ Uncompute: &$[m]{q}\lra\fket[m]{qN}$ & $m-1$ & $0$ & $1,...,m-1$ & $(m^2-m)/2$ & $2m$ ${}^\ddagger$ \\ & $\hphantom{[m]{q}}\lra\fket[m]{t}$ & $m$ & $0$ & $m$ & $m^2$ & $m$ ${}^\ddagger$ \\ & $\hphantom{[m]{q}}\lra[m]{0}$ & $n$ & $0$ & $m$ & $nm$ & $n$ ${}^\ddagger$ \\ \hline\\[-9pt] \multicolumn{5}{l}{${}^\ddagger$Executed in parallel with \ensuremath{\Phi\text{-}}\DIV{} and output \QFTd/} \end{tabular} \end{table} The gate count of the \ensuremath{\Phi\text{-}}\DIV{}-multiplier can be further reduced if we bound the precision of the controlled rotations comprising each \QFT/.
By eliminating rotations by angles below a determined threshold, the number of gates required for an $n$-bit \QFT/ is decreased from $n^2/2$ to $\ord{n\log n}$, while the overall fidelity of a noisy circuit is likely improved~\cite{Barenco1996}. Applying this optimization to the \QFT/s embedded in the \ensuremath{\Phi\text{-}}\DIV{} operation, the total gate count of the multiplier becomes, \begin{equation} \#(\text{gates, bound precision}) = n^2 + \ord{n\log^2n}, \end{equation} where the magnitude of the second-order term is determined by the desired precision and fidelity of physical operations. In this case, the asymptotic gate count of the modular multiplier is dominated by the $n^2$ gates of the initial multiplication. Unfortunately, this optimization does not reduce the depth of the \QFT/ or of the overall modular multiplier. \subsection{Quantum Fourier Montgomery Multiplication} \label{sec:fourier:redc} Beginning with the Fourier state \fket[n+m+1]{t} (where we extend the Fourier-basis multiplier described in \sect{fourier:div} to incorporate the single ancilla necessary to hold the MSB of the quantum Montgomery estimate), we can reconstruct the quantum Montgomery reduction operator using Fourier-basis arithmetic. The bottleneck of the \ensuremath{\Phi\text{-}}\DIV{} circuit was the extraction of the sign bit after each trial subtraction, necessitating $n$-bit \QFTd/ and \QFT/ operations due to the dependency of the MSB on the less significant bits of the Fourier register. The Montgomery reduction algorithm sidesteps the trial subtractions entirely, instead requiring additions controlled by the LSBs of the register. \subsubsection{Estimation Stage} \label{sec:fourier:redc-est} As in the binary case, the Fourier Montgomery estimation stage is constructed from controlled Fourier subtractions and implicit right-shifts. For integral $t$, the LSB \fket{t_0} of the Fourier state \fket[n+m+1]{t} ($k=0$ term in \eq{fourier:state}) is equivalent (up to phase) to that of its binary representation. We use this bit to condition a subtraction of $N$ from the register, ignoring the irreversible component of the subtraction affecting the control qubit. In the continuous Fourier representation, this is equivalent to subtracting the half-integer $(N/2)$ from the truncated register, \begin{equation} \fket[n+m+1]{t} = \fket[n+m]{t/2} {t_0} \lra \fket[n+m]{(t/2)-t_0\cdot(N/2)}{t_0}, \end{equation} where {t_0} is then equivalently the LSB $u_0$ of $[m]{u}$. By design, the subtraction occurs only when $t$ is odd ($t_0=1$), so that the difference is always an integer. The LSB of the truncated Fourier-basis state can therefore be used to condition the next subtraction of $(N/2)$ from the remaining bits of the register. As shown in the first stage of \fig{fourier:redc}, after $m$ such iterations we are left with the Montgomery estimate in Fourier representation, while simultaneously extracting the $m$ bits of [m]{u} in binary: \begin{equation} \fket[n+m+1]{t} \lra[\ensuremath{\Phi\text{-}}\MonEst{N,2^m}{}] \fket[n+1]{(t-u N)/2^m} [m]{u}. \end{equation} Remarkably, we have thus far required no transformations to or from the computational basis. \subsubsection{Correction Stage} \label{sec:fourier:redc-cor} Unfortunately, the quantum Montgomery reduction procedure does require a single comparison-based modular correction.
In the correction stage, we add $N$ to the estimate if it is negative, requiring a single $(n+1)$-bit \QFTd/ to extract the sign bit of the register, followed by an $n$-bit \QFT/ to return the remaining bits of the register to their Fourier representation. As demonstrated in \fig{fourier:redc}, the binary sign bit is then used to control the addition of $N$, after which it is conditionally flipped by the LSB of the result and concatenated with [m]{u} to form [m+1]{\ensuremath{\smash{\tilde{u}}}} identically to the binary case. Combined with the estimation stage, we have constructed the Fourier Montgomery reduction operator, \begin{equation} \fket[n+m+1]{t} \lra[\ensuremath{\Phi\text{-}}\REDC{N,2^m}{}] \fket[n]{t2^{-m}\bmod N} [m+1]{t N^{-1}\bmod 2^{m+1}}, \end{equation} where the reduction is returned in Fourier representation, and the garbage state [m+1]{\ensuremath{\smash{\tilde{u}}}} in binary. We can then concatenate the \QFTd/ operation with the Fourier adders comprising the preceding estimation stage. The resulting sequence of controlled rotations is identical in structure to that of a single \QFTd/ over all $n+m+1$ qubits, and can likewise be parallelized to a depth of $2(n+m)-1$. Similarly, the controlled addition of $N$ can be concatenated with the preceding \QFT/ and parallelized as an $(n+1)$-qubit \QFT/ to a depth of $2n-1$ controlled rotation gates. \begin{figure}[H] \centering \includecircuit[8]{fourier-mon-redc.pdf} \caption{\label{fig:fourier:redc}\ensuremath{\Phi\text{-}}\REDC{N,2^m}{} circuit, with the requisite \QFTd/ and \QFT/ operators required to extract the sign bit {s_\pm}. The estimation stage ($-N/2$) adders and \QFTd/ can then be sequenced like a single $\QFTd/$ over $n+m+1$ qubits.} \end{figure} \subsubsection{Uncomputation Stage} As in the case of the $\ensuremath{\Phi\text{-}}\DIV{N}$ procedure, in order to construct an in-place quantum Montgomery multiplier from the \ensuremath{\Phi\text{-}}\REDC{N,2^m}{} operator, we require a final $n$-bit \QFTd/ on the output so that it is returned in binary along with the input state [n]{y}. Simultaneously, we uncompute the [m+1]{\ensuremath{\smash{\tilde{u}}}} register. After transforming the register to its Fourier representation with an $(m+1)$-bit \QFT/, we can replicate the sequence of subtractions in the binary procedure (\sect{montgomery:uncompute}) with $n$ controlled Fourier adders, each truncated to $(m+1)$ bits. Being independent of the output state, the $n$-gate depth of this uncomputation is dominated by the $(2n-3)$-gate depth of the concurrent \QFTd/, fixing the total depth of this stage to the latter. \subsubsection{Analysis} Circuit characteristics of the out-of-place Fourier Montgomery multiplier are broken down by stage in \tab{fourier:montgomery:costs}. In total, we require, \begin{equation} \#(\text{gates}) = 5n^2/2 + 3nm - n/2 + m^2/2 - m/2, \end{equation} parallelized to a depth of, \begin{equation} 7n+\ord{m}. \end{equation} Comparing the Montgomery circuit to the division-based circuit, we see that the depth of the division circuit is higher by a factor of $m$. This is a result of the extra \QFT/s required in the reduction portion of the division circuit. As with the \ensuremath{\Phi\text{-}}\DIV{} operator, if we bound the precision of the rotation gates composing each \QFT/, the total gate count is reduced to $n^2 + \ord{nm}$, asymptotically equivalent to that of just the initial multiplication stage.
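For concreteness, the leading-order counts above can be evaluated directly. The short Python sketch below (function and variable names ours) tabulates the closed-form gate counts and depths of the \ensuremath{\Phi\text{-}}\DIV{}- and \ensuremath{\Phi\text{-}}\REDC{}{}-based multipliers side by side, making the factor-of-$m$ depth gap explicit.
\begin{verbatim}
import math

def fourier_multiplier_costs(n):
    """Leading-order two-qubit gate counts and depths of the out-of-place
    Fourier multipliers, with m = ceil(log2(n)); O(m) depth terms omitted.
    Illustrative only."""
    m = math.ceil(math.log2(n))
    div = dict(gates=n**2 * m + 3 * n**2 // 2 + 3 * n * m - n // 2
                     + m**2 // 2 - m // 2,
               depth=4 * n * m + 2 * n)
    redc = dict(gates=5 * n**2 // 2 + 3 * n * m - n // 2
                      + m**2 // 2 - m // 2,
                depth=7 * n)
    return m, div, redc

for n in (32, 1024, 4096):
    m, div, redc = fourier_multiplier_costs(n)
    print(f"n={n:5d} (m={m:2d})  Phi-DIV: {div['gates']:.2e} gates, "
          f"depth {div['depth']:.1e};  Phi-REDC: {redc['gates']:.2e} gates, "
          f"depth {redc['depth']:.1e}")
\end{verbatim}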
\begin{table}[H] \centering \caption{\label{tab:fourier:montgomery:costs}% Total two-qubit gates and parallelized depth of an $n$-bit out-of-place Fourier Montgomery multiplier (where $m=\clog[2]{n}$). % } \vspace{6pt} \begin{tabular}{l l l l l l l} \textbf{Stage} & & \textbf{Adds}& \textbf{$\QFT/$s} & \textbf{Width} & \textbf{Gates} & \textbf{Depth} \\ \hline\\[-9pt] \multicolumn{2}{l}{Multiplication:} & $n$ & $0$ & $n+m$ & $n^2+nm$ & $n+m$ \\ \multicolumn{2}{l}{Estimation:} & $m$ & $0$ & $n+\{m,...,1\}$ & $nm+(m^2+m)/2$ & \multirow{2}{*}{$\Big\}\;2n+2m$} \\ Correction & \QFTd/: & $0$ & $1$ & $n+1$ & $(n^2+n)/2$ & \\ & \QFT/: & $0$ & $1$ & $n$ & $(n^2-n)/2$ & \multirow{2}{*}{$\Big\}\;2n$} \\ & Add $N$: & $1$ & $0$ & $n$ & $n$ & \\ \multicolumn{2}{l}{Output transform:} & $0$ & $1$ & $n$ & $(n^2-n)/2$ & \multirow{3}{*}{$\bigg\}\;2n{}^\ddagger$} \\ Uncompute & \QFTd/: & $0$ & $1$ & $m+1$ & $(m^2+m)/2$ & \\ & $t\rightarrow0$: & $n$ & $0$ & $m$ & $nm$ & \\ \hline\\[-9pt] \multicolumn{7}{l}{${}^\ddagger$Uncomputation steps executed in parallel with output \QFTd/} \end{tabular} \end{table} \subsection{Quantum Fourier Barrett reduction} As with Montgomery reduction, the classical benefit of Barrett reduction is the replacement of division with integer multiplication (\alg{barrett:quantum:circuit}). The quantum Barrett reduction procedure is therefore similarly well-suited for arithmetic in the quantum Fourier basis. As in the division- and Montgomery-reduction-based multiplication procedures, we begin with Fourier calculation of \fket[n+m]{t} (\alg{barrett:quantum:circuit}, \step{barrett:quantum:xy}). In the Barrett case, we also accumulate the approximate product $\xyapp$ (\step{barrett:quantum:xyapp}) with a simultaneous $\ensuremath{\Phi\text{-}}\ensuremath{\text{\textsc{MAC}}}(\widetilde{X}\mid N)$ operation. The approximate product is then used to compute $\qhat$ (\step{barrett:quantum:qhat}), requiring its transformation to binary representation prior to its Fourier-basis multiplication by $\mu$. Similarly, $\qhat$ is used as a quantum multiplicand in \step{barrett:quantum:reduce,barrett:quantum:addqn,barrett:quantum:subqn}, and so must be transformed to its binary representation prior to use in these steps. The quantum Barrett reduction procedure requires two comparison operations (\alg{barrett:quantum:circuit}, \step{barrett:quantum:compare} and \step{barrett:quantum:appcompare}), requiring the usual \QFTd/ and \QFT/ operations in order to extract sign bits. However, while the first requires full-register transformations, the second comparison is limited to just the most significant bits of the accumulation register (the number of bits required is equal to the size of $\qhat$). After a full-register \QFTd/ to extract the sign, we therefore only need to transform these MSBs of the register to the Fourier basis for the re-addition of $\xyapp$ and subtraction of $qN$ (\step{barrett:quantum:appcompare,barrett:quantum:subqn}). We finally transform these bits back to their binary representation so that both the output and input states of the modular multiplier are binary. The final uncomputation of $\qhat$ and $\xyapp$ (\step{barrett:quantum:qhat-rev,barrett:quantum:xyapp-rev}) requires the reversal of their initial computations and transformations.
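For reference, the classical recipe these steps adapt is compact enough to state in full. The Python sketch below follows the common textbook form of Barrett reduction; the precomputed factor $\mu=\flr{4^n/N}$ and the shift amounts used here are only analogous to the $\mu$ and $\qhat$ of \alg{barrett:quantum:circuit}, not a transcription of it.
\begin{verbatim}
def barrett_reduce(t, N, mu, n):
    """Classical Barrett reduction: t mod N for t < N^2, without division.

    mu = floor(4^n / N) is precomputed once per n-bit modulus; each
    reduction then needs only multiplies, shifts, and small corrections.
    """
    q_hat = (t * mu) >> (2 * n)   # underestimates floor(t/N) by at most 2
    r = t - q_hat * N
    while r >= N:                 # at most a couple of corrective steps
        r -= N
    return r

N = 91                            # a 7-bit modulus
n = N.bit_length()
mu = (1 << (2 * n)) // N          # one-time classical precomputation
assert all(barrett_reduce(t, N, mu, n) == t % N for t in range(N * N))
\end{verbatim}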
As in the uncomputation stages of the \ensuremath{\Phi\text{-}}\DIV{} and \ensuremath{\Phi\text{-}}\REDC{}{} operations, the Fourier uncomputation of $\xyapp$ requires subtractions controlled by each bit of the input state, and therefore maintains the initial multiplication's depth of $n$ controlled rotations. Combined with the three full-register \QFT/s required for comparisons, the overall Fourier Barrett multiplication operation has a depth of $7n+\ord{m}$ controlled rotation gates, identical in leading order to the Montgomery reduction operator. As with both \ensuremath{\Phi\text{-}}\DIV{} and \ensuremath{\Phi\text{-}}\REDC{}{}, if we bound the precision of the component \QFT/s, the total leading-order gate count is just that of the initial multiplier. \begin{figure}[h!] \centering \includecircuit[14]{fourier-barr.pdf} \caption{Barrett multiplication circuit using Fourier arithmetic. The numbers in the figure correspond to the steps of \alg{barrett:quantum:circuit}.} \label{fig:fourier:barrett} \end{figure} \section{Introduction} \label{sec:intro} \subsection{Efficient Modular Multiplication Implementations} Circuits implementing reversible modular arithmetic operations are of significant contemporary interest due to their prevalence in important quantum algorithms~\cite{Shor1994}. Resource-efficient implementations of these operations are critical to their eventual execution on a quantum computer. In this paper, we describe three novel techniques which asymptotically reduce the resources required for exact, reversible modular multiplication to those necessary for non-modular integer multiplication. Existing proposals for efficient reversible modular multipliers largely fall into one of two categories: (1) the composition of reversible modular adders, each comprising the equivalent of three reversible non-modular adders; or (2) approximated division, in which reversible modular reduction is simplified by allowing the return of an incorrect output for some subset of input values~\cite{Zalka1998, Kutin2006}. Compared to those in the first category, our circuits achieve a factor-of-three reduction in asymptotic gate count and circuit depth while requiring only $\ord{\log n}$ additional qubits. They perform comparably to those in the second category, but without employing arithmetical approximations. Regardless of the impact of such approximations on the overall fidelity of quantum algorithms, our constructions demonstrate that accuracy need not be sacrificed in order to obtain an efficient modular multiplication circuit. Classically, large-integer modular multiplication is a critical component of cryptographic functions such as the RSA public-key encryption system~\cite{RSA1978}. The standard strategy for modular multiplication is to first calculate the integer product of the input values, and then reduce the result via division. However, due to the inefficiency of integer division on most processors, systems implementing extensive modular arithmetic typically employ specialized methods to eliminate the division requirement of modular reduction. In particular, Montgomery residue arithmetic~\cite{Montgomery1985} and Barrett reduction~\cite{Barrett1987} are commonly used techniques in applications requiring many reductions under the same modulus. In the Montgomery method, the problem is converted to an ``$N$-residue'' representation in which modulo-$N$ reduction instead requires calculations under an easier modulus (e.g. a power of two).
Barrett reduction takes advantage of the constant modulus by pre-computing a fixed-point reduction factor once, so that individual reductions can be computed using only multiplication and bit-shift operations. In this work, we describe three novel reversible modular multipliers, employing (1) a new implementation of the standard division-based reduction procedure, as well as efficient reversible adaptations of (2) classical Montgomery multiplication and (3) Barrett reduction. Our designs principally comprise common abstract circuit primitives, such as reversible adders and subtracters, enabling their implementation within various architectural and arithmetical models. In particular, the Montgomery and Barrett multipliers introduced are uniquely amenable to arithmetic in the quantum Fourier transform basis, sidestepping the bottlenecks plaguing many previous implementations utilizing this representation~\cite{Draper00,Beauregard2003}. We discuss the relative trade-offs of various implementation methods for each modular multiplication strategy, and perform a detailed resource analysis of each. \subsection{Prior Modular Multiplication Implementations} \label{sec:intro:prior-art} The first quantum circuits for modular multiplication were developed as part of a typical arithmetical hierarchy~\cite{Beckman1996, Vedral1996, Fowler2004}, \begin{equation*} (\text{Integer Adder}) \lra (\text{Modular Adder}) \lra (\text{Modular Multiplier}), \end{equation*} in which a quantum modular multiplier is constructed from a sequence of modular adders, which in turn consist of multiple integer addition steps. The complexity of modular multiplication is then driven by the complexity of reversible implementations of integer addition. The first quantum integer addition circuits were based on the classical ripple-carry adder, which has circuit depth that is linear in the bit-width of its addends~\cite{Vedral1996, Cuccaro2004}. The circuit depth and total size of modular multiplication using the ripple-carry adder is $\ord{n^2}$ for $n$-bit inputs. Logarithmic-depth addition circuits were subsequently proposed~\cite{Draper2006} that reduce the depth of modular multiplication to $\ord{n\log_2n}$, with the total number of gates remaining $\ord{n^2}$. Quantum adders that operate in the quantum Fourier transform basis~\cite{Draper00} have also been used to implement modular addition, multiplication, and exponentiation. By moving integer addition to the Fourier basis, Beauregard~\cite{Beauregard2003} presented a modular multiplication circuit that has depth \ord{n^2} and total circuit count $\ord{n^3}$, but requires fewer qubits than previously described circuits. However, Fourier-basis arithmetic employs arbitrarily angled rotation gates, which may require more resources to implement on typical architectures than the reversible AND (i.e. \Toffoli/) gates required by the other adders. For this reason, it is difficult to make a direct comparison of Fourier-basis and binary circuits. Using Fourier-basis addition, Kutin has devised a unique and efficient procedure for approximate quantum modular multiplication with linear algorithmic depth and a linear number of qubits~\cite{Kutin2006}. This circuit uses an approximate multiplication technique introduced by Zalka~\cite{Zalka1998}, which relies on assumptions regarding the distribution of systematic errors; the argument is made that the resulting errors do not appreciably affect the overall error rate of the circuit.
An exact, linear-width quantum modular multiplication procedure was proposed by Pavlidis and Gizopoulos~\cite{Pavlidis2014}. Applying Fourier-basis arithmetic to construct exact reversible versions of a large array of arithmetic operations, they introduce a novel quantum Granlund-Montgomery division procedure enabling exact quantum modular multiplication with linear depth and linear width. Still, the leading terms in this construction remain prohibitive, requiring $9n$ qubits, $800n^2$ total quantum gates, and a parallelized depth of $1000n$ rotation gates, with almost 90\% of these costs devoted to the division procedure. As in classical systems~\cite{Wallace1964}, a trade-off exists between algorithmic depth and width. Allowing a super-linear supply of ancillary qubits, various procedures for sub-linear time quantum multiplication have been introduced. Gossett's carry-save multiplier achieves logarithmic depth but requires $\ord{n^2}$ qubits~\cite{Gossett1998}. Similarly, modular exponentiation can be performed by arranging the component multipliers in a logarithmic-depth binary-tree structure at the cost of a quadratic circuit width~\cite{VanMeter2005}. Combining these models with constant-depth teleportation-based fan-out, Pham and Svore have introduced a 2D nearest-neighbor carry-save architecture allowing modular exponentiation in $\ord{\log^2(n)}$ depth with $\ord{n^4}$ qubits~\cite{Pham2013}. Classically, multipliers exploiting the fast Fourier transform (FFT) and convolution theorem have long dominated in asymptotic depth~\cite{Karatsuba1962, Schonhage1971, Furer2009}. Accordingly, the fastest known quantum multipliers are direct applications of these constructions~\cite{Zalka1998, Kowada2006}. However, a signature of both classical and quantum convolution-based multipliers is a prohibitive initial cost: for example, Zalka's quantum Sch\"onhage-Strassen multiplier~\cite{Zalka1998} has an asymptotic depth of $\sim2^{16}n^{0.2}$. We summarize these results in \tab{intro:comparison}. Because of the difficulty of making side-by-side comparisons of Fourier-basis and binary circuits, we have separated the two and characterized each in terms of their principal operation (\Toffoli/ gates in the case of binary arithmetic, total controlled rotations in the Fourier case). The characteristics for binary modular multipliers utilizing modular adders are determined from the particular adders employed, assuming three adds per modular addition and $2n$ modular adds per modular multiplication. All other circuits are presented as reported in the reference. We will discuss the various trade-offs that exist, and where our proposed circuits fall among these, in \sect{resources}. \begin{table}[H] \centering \caption{\label{tab:intro:comparison}% Resource comparison of in-place quantum modular multipliers. Only the leading order term is shown for each count. } \vspace{6pt} \begin{tabular}{r l l l l} \multicolumn{5}{c}{\textbf{Binary Arithmetic:} } \\[2pt] {Proposal} & {Architecture} & {Qubits} & {Gates${}^\dagger$}& {Depth${}^\dagger$} \\ \hline\\[-9pt] {${}^\star$Cuccaro et al.}~\cite{Cuccaro2004} & Modular Addition (Ripple-Carry) & $3n$ & $12n^2$ & $12n^2$ \\ {${}^\star$Draper et al.}~\cite{Draper2006} & Modular Addition (Prefix) & $5n$ & $60n^2$ & $24n\log_2n$ \\ {Zalka}~\cite{Zalka1998} & Sch\"onhage-Strassen (FFT) & $24...96n$ & $2^{16}n$ & $2^{16}n^{0.2}$ \\ {Pham-Svore}~\cite{Pham2013} & Carry-Save (nearest-neighbor) & $16n^2$ & $384n^2\log_2n$ & $56\log_2n$ \\ \multirow{6}{*}{\textbf{This work} $\left.\begin{array}{r}\\\\\\\\\\\end{array}\right\{$} & \textbf{Exact Division (Prefix)} & $\boldsymbol{5n}$ & $\boldsymbol{20n^2}$ & $\boldsymbol{8n\log_2n}$ \\ & \textbf{Montgomery Reduction (Prefix)} & $\boldsymbol{5n}$ & $\boldsymbol{20n^2}$ & $\boldsymbol{8n\log_2n}$ \\ & \textbf{Barrett Reduction (Prefix)} & $\boldsymbol{5n}$ & $\boldsymbol{20n^2}$ & $\boldsymbol{8n\log_2n}$ \\ & \textbf{Exact Division (Ripple)} & $\boldsymbol{3n}$ & $\boldsymbol{4n^2}$ & $\boldsymbol{4n^2}$ \\ & \textbf{Montgomery Reduction (Ripple)} & $\boldsymbol{3n}$ & $\boldsymbol{4n^2}$ & $\boldsymbol{4n^2}$ \\ & \textbf{Barrett Reduction (Ripple)} & $\boldsymbol{3n}$ & $\boldsymbol{4n^2}$ & $\boldsymbol{4n^2}$ \\ \hline\\[-9pt] \multicolumn{5}{l}{${}^\star$Reference proposes an adder only. We assume $3$ adders per modular add, $2n$ modular adds per multiply.}\\ \multicolumn{5}{l}{${}^\dagger$Total gate counts and depths provided in \Toffoli/ gates.}\\[12pt] \end{tabular} \end{table} \begin{table}[H] \centering \caption{\label{tab:intro:fourier-comparison}% Resource comparison of Fourier-basis in-place quantum modular multipliers. Gate counts are reported in two ways: (1) the total number of rotations assuming infinite-precision control, and (2) the number of gates after removing those with exponentially-small rotation angles~\cite{Barenco1996}. Only the leading order term is shown for each count. } \vspace{6pt} \begin{tabular}{r l l l l l} \multicolumn{5}{c}{\textbf{Fourier-basis arithmetic:} } \\[2pt] {Proposal} & {Architecture} & {Qubits} & {Gates (1)${}^\ddagger$} & {Gates (2)${}^\ddagger$} & {Depth${}^\ddagger$} \\ \hline\\[-9pt] {Beauregard}~\cite{Beauregard2003} & Modular Addition & $2n$ & $4n^2$ & $\ord{n^3\log n}$ & $8n^2$ \\ \multirow{2}{*}{{Kutin}~\cite{Kutin2006} $\Big\{$} & Approximate Division & $3n$ & $3n^2$ & $2n^2$ & $6n$ \\ & Approximate (nearest-neighbor) & $3n$ & $5n^2$ & $4n^2$ & $11n$ \\ {Pavlidis}~\cite{Pavlidis2014} & Granlund-Montgomery Division & $9n$ & $800n^2$ & $\sim250n^2$ & $1000n$ \\ \multirow{3}{*}{\textbf{This work} \ \ \ $\Bigg\{$} & \textbf{Exact Division} & $\boldsymbol{2n}$ & $\boldsymbol{2n^2\log_2n}$ & $\boldsymbol{2n^2}$ & $\boldsymbol{8n\log_2n}$ \\ & \textbf{Montgomery Reduction} & $\boldsymbol{2n}$ & $\boldsymbol{5n^2}$ & $\boldsymbol{2n^2}$ & $\boldsymbol{14n}$ \\ & \textbf{Barrett Reduction} & $\boldsymbol{2n}$ & $\boldsymbol{5n^2}$ & $\boldsymbol{2n^2}$ & $\boldsymbol{14n}$ \\ \hline\\[-9pt] \multicolumn{5}{l}{${}^\ddagger$Total gate counts and depths in arbitrarily-angled controlled-rotations.} \end{tabular} \end{table} \subsection{Outline of the Paper} The remainder of the paper is organized into six sections and one appendix. In the next section (\sect{mod-mult}) we provide background required by all the circuit constructions described in this paper. We also use these constructions to describe the most common implementation of modular multiplication, building up the multiplier as a sequence of modular adders. In \sect{div} we describe the first of our modular multiplication circuits: a circuit that uses a new implementation of the standard division technique.
In \sect{montgomery} we describe a circuit that uses the Montgomery reduction technique, and in \sect{barrett} we describe the implementation of a modular multiplier based on Barrett reduction. In \sect{fourier} we describe details of the implementation of our schemes that are specific to Fourier basis arithmetic. In \sect{resources} we present a resource analysis comparing the resources required by each of our new circuits to those required by the base implementation constructed from modular adders. We perform the resource analysis for circuits utilizing a variety of adders, including carry-ripple, carry-lookahead, and Fourier-basis adders. Finally, in the appendix we describe details related to the integer adders and multiplier circuits used in the constructions, including several new implementations. \section{Quantum Modular Multiplication} \label{sec:mod-mult} In this section we describe the basic methods and circuits that are used to construct quantum modular multipliers. We also describe the standard circuit that is most often used for quantum modular multiplication, which performs the modular multiplication using modular adders. This circuit will provide the baseline of comparison for our new methods. Most quantum algorithms require ``quantum-classical'' modular multipliers, in which one multiplier input is a fixed classical parameter. The circuits introduced in this paper are therefore described in this form. We will also describe the modifications required to implement full ``quantum-quantum'' modular multipliers. These circuits would be useful, for example, in quantum circuits for elliptic-curve discrete logarithms~\cite{Proos:2003}. In the quantum-classical case, our goal is described by the quantum operation, \begin{equation} \eq{mod-mult:mod-mult} [n]{y} \lra [n]{Xy\bmod N}, \end{equation} where $X$ is a classically-determined multiplicand, $n\defeq\clog[2]{N}$ is the number of bits required to hold the modulus $N$, and we have used subscripts to indicate the size of the requisite quantum registers (ignoring any intermediate ancilla qubits). In the quantum-quantum case, $x$ is held in an additional quantum register, which is preserved through the operation: \begin{equation} [n]{x}[n]{y} \lra [n]{xy\bmod N}[n]{y}. \end{equation} In general, we will reserve uppercase variable names for constant classical parameters. \subsection{Modular Multiplication} A non-modular quantum multiplier can be constructed using the techniques of grade-school arithmetic: we compute and accumulate a sequence of partial products. In binary arithmetic, each partial product is computed by multiplying a single bit of the multiplier $y$ by the shifted multiplicand $2^kX$, so that the product $yX$ is computed by accumulating $\sum_k{y_k(2^kX)}$ for each bit $y_k$ of $y$. A binary quantum multiplier therefore consists of a sequence of quantum additions, controlled by the bits of a quantum input {y}. A modular multiplier can be constructed by combining the steps of non-modular multiplication and integer division operators, where the modular product is returned as a remainder from the latter. In general, an information-preserving divider will compute both quotient and remainder terms. For reversible modular reduction, we therefore must take additional steps to uncompute the quotient.
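A classical model of this multiply-then-divide decomposition makes both the structure and the leftover quotient explicit. The Python sketch below is illustrative only (function and variable names ours), not one of the reversible constructions themselves:
\begin{verbatim}
def modmul_via_division(X, y, N, n):
    """Classical model of the multiply-then-divide modular multiplier.

    Accumulates the reduced partial products y_k * (2^k X mod N) -- the
    same controlled additions performed on a quantum accumulator -- and
    then reduces once by division.  The remainder is the modular product;
    the quotient is the garbage a reversible circuit must uncompute.
    """
    t = 0
    for k in range(n):              # n additions controlled by bits of y
        if (y >> k) & 1:
            t += (X << k) % N       # addend classically precomputed
    q, r = divmod(t, N)             # single division/reduction stage
    return q, r                     # t < n*N, so q fits in ~log2(n) bits

q, r = modmul_via_division(X=13, y=11, N=21, n=5)
assert r == (13 * 11) % 21          # remainder is the modular product
assert q < 5                        # quotient register is only m bits wide
\end{verbatim}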
Applying Bennett's technique for constructing reversible circuits from classical operations~\cite{Bennett1973}, in which we pseudo-copy the result into an ancilla register and reverse the entire computation, we implement, \begin{alignat}{2} { 0}{y}{0} \lra &&{Xy}{y}&{0} \nonumber\\ \lra &&{q}{Xy-qN}{y}&{0} \nonumber\\ \lra &&{q}{Xy-qN}{y}&{Xy-qN} \nonumber\\ \lra &&{Xy}{y}&{Xy-qN} \nonumber\\ \lra &&{ 0}{y}&{Xy-qN}, \end{alignat} where ${Xy-qN}={Xy\bmod N}$. Unfortunately, the above computation is not very resource-efficient. In addition to the extra ancilla registers required to store results during the uncomputation, both the multiplication and division components of the modular multiplier must be implemented twice: once to compute the result, and again in reverse to uncompute and clear the ancilla space. Due to the complexity of reversible division-based modular reduction, early modular multipliers were instead constructed with reductions embedded within the multiplication, one after each addition of a partial product~\cite{Beckman1996, Vedral1996, Beauregard2003, VanMeter2005}. If we classically reduce each partial product modulo $N$ prior to its addition, then at most one value of $N$ needs to be subtracted with each step. We can therefore equivalently implement reversible circuits for modular addition, so that no garbage is left after any step of the multiplier. The disadvantage of this approach is that each reversible modular addition requires multiple integer adders, increasing the overhead of the modular multiplier linearly by this factor. In \fig{quantum:mult:mult_class} we illustrate the two methods for modular multiplication: using modular adders, or using an integer multiplication followed by a division step. The circuits introduced in this paper derive from the latter, division-style reduction strategy. We are able to construct modular multipliers with a single integer multiplication step and a division step whose size and depth are logarithmic in the size of the inputs. While quantum circuits for modular multiplication have previously been developed employing this reduction style~\cite{Zalka1998,Kutin2006,Pavlidis2014}, the efficient reversible implementation of the modular reduction stage in this construction has remained a bottleneck. In the case of~\cite{Zalka1998,Kutin2006}, approximations are used that deterministically introduce errors for some inputs, which are argued to have an insignificant impact on the error rate of the circuit. In~\cite{Pavlidis2014}, a precise division-style reduction circuit is introduced. However, the overhead of their circuit is similar to that of the compute-copy-uncompute approach described above. \begin{figure} \centering \inputtikz{adder-class} \caption{A 4-bit quantum modular multiplier that multiplies a quantum register by a classical constant. We either reduce modulo $N$ after every addition (red blocks), or once for the entire multiplier (blue block). In both cases the multiplier consists of $n$ conditional additions of $n$-bit constants. } \label{fig:quantum:mult:mult_class} \end{figure} \subsection{In-place Modular Multiplication} \label{sec:mod-mult:in-place} The operation shown in \eq{mod-mult:mod-mult} is an example of an \emph{in-place} operation, where one of the input registers is overwritten by the output. To construct an in-place modular multiplier we can use the standard method that utilizes the reverse of the inverse function.
For example, the out-of-place quantum-classical modular multiplier performs the transformation: ${y}{0} \lra{} {y}{Xy\bmod N}$. The inverse function with the modular product as input performs the transformation: ${Xy\bmod N}{0} \lra{} {Xy\bmod N}{y}$. Both these functions produce the same output and therefore we can chain them as: \begin{equation} \label{eqn:quantum:mult:inplace} {y}{0} \lra[(*X)_{fwd}] {y}{Xy\bmod N} \lra[\SWAP/] {Xy\bmod N}{y} \lra[(*X^{-1})_{rev}] {Xy\bmod N}{0}. \end{equation} For the quantum-classical multiplier the inverse of the modular multiplication by $X$ is a modular multiplication by $X^{-1}\bmod N$. Since $X$ is classical, this inverse can be calculated offline using the extended Euclidean algorithm. The in-place modular multiplier is then just two out-of-place multipliers. \subsection{Controlled Modular Multiplication} \label{sec:mod-mult:control} For most applications of the modular multiplier we will need to condition its operation on an external control qubit. There are three main ways that this can be done. The first way is to add the external control to all gates in the circuit. This adds significant overhead to the circuit and requires every gate to share a common control qubit, either serializing the circuit or forcing a pseudo-copy of the control to a set of ancillas to allow gates to execute in parallel. The second technique is to incorporate the global control into the individual controls already required by each adder in the multiplier. For each addition we can use a \Toffoli/ gate to combine the external control with its local control in the input register (i.e., bit $y_i$ from \fig{quantum:mult:mult_class}) onto an ancilla bit, and then use this bit to control the adder. A third, related way to control arbitrary quantum operations is to shift the input state into a subspace of the overall computational Hilbert space that is returned to its initial state after the operation. This methodology generally requires an additional qubit register serving as an auxiliary ``quantum cache''~\cite{Zhou2011}; however, in the case of in-place quantum multiplication, we can bypass this requirement at the cost of $\ord{n}$ additional \Fredkin/~\cite{Fredkin1982} (controlled-\SWAP/) gates. For the out-of-place multiplier, if the quantum input register {y} is zero, we expect the output of the multiplication to be zero regardless of $X$. The multiplications by $X$ and $X^{-1}$ will therefore have the same output. Further, because we implement multiplication using controlled additions into an ancilla register, if {y}={0} the value in the accumulation register is left unchanged. We can therefore swap the input into the accumulation register, so that we compute, \begin{equation} {y}{0} \lra[\SWAP/] {0}{y} \lra[*X] {0}{y+0\cdot X} = {0}{y}. \end{equation} The fact that the additions are done modulo $N$ does not matter because, for $y<N$, no reductions are ever required. If we then omit the subsequent $\SWAP/$ step of \eqn{quantum:mult:inplace}, the two out-of-place multiplications will cancel one another and {y} will be returned at the end of the operation: \begin{equation} \label{eqn:quantum:mult:swap-control} {y}{0} \lra[\SWAP/] {0}{y} \lra[(*X)_{fwd}] {0}{y} \lra[(no\ \SWAP/)] {0}{y} \lra[(*X^{-1})_{rev}] {0}{y} \lra[\SWAP/] {y}{0}.
\end{equation} In total, we require three sets of \Fredkin/ gates for this implementation of the controlled multiplier: two negatively-controlled sets to swap the input register into and out of the ``cache'' register, and an additional positively-controlled set to control the $\SWAP/$ at the center of \eqn{quantum:mult:inplace}. Each can be implemented with one \Toffoli/ gate and two \CNOT/ gates per bit. \subsection{Modular Addition} \label{sec:mod-mult:mod-add} There is an established method for constructing a reversible modular adder out of reversible integer adders. The method that we describe is a slight modification to those described in~\cite{Beckman1996,Fowler2004}. This method requires $3$ in-place integer adders for an in-place modular quantum-classical or quantum-quantum addition. A circuit for this adder, in its quantum-classical form, is shown in \fig{quantum:mult:mod_adder}. To make this a full quantum-quantum adder, we would replace the additions involving $X$ with full quantum integer additions. We have broken each integer adder into forward and reverse sections. An in-place adder requires both sections; therefore the modular adder is comparable in complexity to three in-place adders. Note that this adder is controlled by a single external qubit ${p}$, as would be required if we used it as part of a multiplier. Both input values to the modular adder are assumed to be reduced, i.e. $<N$. The modular adder works by first adding $X$ in-place to the second quantum register containing $y$. We then subtract $N$ out-of-place. If $X+y \ge N$, indicating that a reduction is necessary, then the result of this subtraction will be non-negative; otherwise it is negative. For two's complement addition, a negative result will have the most-significant-bit (msb) set, and this can be used as a control to the reverse segment of the subtraction to either overwrite the input or reverse the action of the forward subtraction. However, we must copy the msb in order to control the reverse subtraction, and this produces a bit of garbage. Other than this bit of garbage, we have the result of the modular addition in the $y$ register. To clear the garbage bit, we subtract $X$ from the result and note that if we performed the reduction in the second step we now have $X+y-N-X=y-N<0$ in the register, and if we did not perform the reduction we have $X+y-X=y\ge0$ in the register. Therefore we can use the sign of this subtraction to clear the garbage bit. We then uncompute the subtraction of $X$ to complete the modular addition. \begin{figure} \centering \inputtikz{mod-adder} \caption{A controlled quantum-classical modular adder constructed from three integer adders. Adders with thick lines on the right side indicate forward out-of-place additions, and adders with thick lines on the left side are the corresponding reverse functions. A pair of these comprises a full in-place adder. The $msb$ function extracts the most significant bit from the register. } \label{fig:quantum:mult:mod_adder} \end{figure}
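This bookkeeping can be summarized with a short classical model (a sketch only; in the reversible circuit each step is realized with the paired forward/reverse adders of \fig{quantum:mult:mod_adder}):
\begin{verbatim}
def controlled_modular_add(X, y, N, p=1):
    """Classical model of the controlled modular adder: three integer
    additions plus two sign tests compute (X + y) mod N for X, y < N."""
    if not p:                      # external control qubit |p>
        return y
    s = y + X                      # (1) in-place addition of X
    d = s - N                      # (2) out-of-place trial subtraction of N
    g = int(d >= 0)                # copied msb: 1 iff a reduction is needed
    s = d if g else s              # keep the reduced value when X + y >= N
    # (3) clearing the garbage bit: subtracting X leaves y - N < 0 when the
    # reduction occurred and y >= 0 otherwise, so the sign reproduces g.
    assert g == int(s - X < 0)
    return s

assert all(controlled_modular_add(X, y, 21) == (X + y) % 21
           for X in range(21) for y in range(21))
\end{verbatim}
\subsection{Quantum-Quantum Modular Multiplication} \label{sec:mod-mult:mod-shift} \label{sec:mod-mult:quantum-quantum} A full quantum-quantum multiplier (\fig{quantum:mult:mult}) computes the product {xy} from two quantum input values {x} and {y}. As in the case of the quantum-classical multiplier, the quantum-quantum multiplier consists of a sequence of controlled additions into an accumulation product register.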
If we use modular adders in the quantum-quantum multiplier, then the resulting product will be reduced modulo $N$. Otherwise an additional reduction step will be required. As discussed previously, each reduced partial product is of the form $y_i(2^ix)\bmod N$. For the quantum-classical modular multiplier we can calculate $2^iX\bmod N$ off-line; however, for the full quantum-quantum multiplier ${x}$ is a quantum value and therefore $2^ix\bmod N$ must be calculated with a quantum circuit. Each addition in the multiplier uses a value that is twice the previous value; therefore we just need to shift the value by one position for each addition. A circuit that performs a shift and reduce is shown in \fig{quantum:mult:mod_shift}. The circuit shifts the input forward by one position by relabeling the input and inserting a zero at position 0. Since this produces a value $<2N$, at most one reduction is required. We can perform this reduction by subtracting $N$ and checking to see if the result is negative. If it is, we reverse the subtraction; otherwise we complete the in-place subtraction. We can clear the comparison bit by noting that since $N$ is always chosen to be odd, and the pre-reduced shifted value is even, the output value is odd only when we have done a reduction. Therefore, the least significant bit can be used to clear the comparison bit. For the quantum-classical modular multiplier, we could classically pre-compute the modular inverse $X^{-1}\bmod N$ required for the reversed step. In the quantum-quantum case, we instead require the inverse of a quantum value, requiring a reversible modular inversion routine. Reversible inversion circuits employing the extended Euclidean algorithm have been demonstrated~\cite{Proos:2003}; however, their cost is much higher than that of a single modular multiplication, and incorporating them is beyond the scope of this paper. We will therefore describe and determine resources for only the out-of-place implementation of quantum-quantum modular multipliers. \begin{figure} \centering \inputtikz{adder} \caption{A 4-bit quantum multiplier constructed from a sequence of controlled additions.} \label{fig:quantum:mult:mult} \end{figure} \begin{figure} \centering \inputtikz{mod-shift} \caption{Modular shift and reduce circuit. Shown here for a 4-qubit register. The input value is shifted one position by relabeling the inputs and inserting a zero value at the LSB. At most one reduction of $N$ is required, and after this reduction the most significant bit is cleared, which replaces the input ancilla used at the LSB. } \label{fig:quantum:mult:mod_shift} \end{figure} \section{Quantum Montgomery Multiplication} \label{sec:montgomery} \emph{Montgomery residue arithmetic}~\cite{Montgomery1985} is a technique for efficient multi-precision modular arithmetic, ubiquitous in applications such as hardware cryptography~\cite{Menezes1996}. In the Montgomery framework, integers are mapped to a residue representation in which multiplication modulo $N$ requires calculations under a computationally friendly auxiliary radix $R$. While the initial mapping of inputs to the Montgomery representation is not free, the constant overhead is quickly overcome by the efficiency of Montgomery multiplication when multiple calculations are required. In our application, the advantage of moving to a power-of-two auxiliary modulus is in flipping the orientation of the reduction stage.
While the $\ensuremath{\text{\textsc{Q-}}}\DIV{}$ procedure outlined in \sect{div} consists of quantum additions conditioned on the most significant bits of the product register, the corresponding Montgomery operator requires only the least significant bits. This rearrangement has particularly profound advantages in the application of quantum Fourier-basis arithmetic. As in \sect{div}, we are able to reduce the asymptotic complexity of quantum modular multiplication to that of a single non-modular multiplication. Remarkably, it turns out that the overhead incurred in mapping to and from the Montgomery representation can be relegated entirely to classical precomputation, making our construction immediately advantageous in terms of quantum overhead. \subsection{\label{sec:montgomery:cl}Classical Montgomery Residue Arithmetic} Montgomery residue arithmetic provides an efficient computational framework for extended modular arithmetic under a fixed modulus $N$. Given an auxiliary radix $R=b^m$ such that $\gcd(b,N)=1$, we define the unique $N$-residue representation of an integer $x\in\mathbb{Z}_N$, \begin{equation} x' \defeq xR \bmod N. \end{equation} The $N$-residues $\{ x' \mid 0\le x<N \}$ form a complete residue system, in which an arbitrary integer $x$ represents the residue class containing $xR^{-1}\bmod N$. Distributivity ensures that the usual addition, subtraction, and negation operations behave normally in this representation: \begin{equation} (x \pm y)' \equiv x' \pm y' \pmod N \end{equation} However, the $N$-residue representation of the product $xy$ is, \begin{equation} \qty(xy)' \equiv \qty(xR)\qty(yR)R^{-1} \equiv x'y'R^{-1} \pmod{N}. \end{equation} Multiplication in the $N$-residue representation therefore differs from the standard case, requiring the incorporation of an additional factor of $R^{-1}\bmod N$. The new operation is known as the \emph{Montgomery product}~\cite{Montgomery1985}, \begin{equation} \MonPro{x',y'}{N,R} \defeq x'y'R^{-1}\bmod N. \end{equation} We can now perform modulo-$N$ arithmetic in the $N$-residue system. After initially mapping input values to their respective residue representations, we proceed with calculations as in the standard case, but with each modulo-$N$ multiplication mapped to the corresponding Montgomery product. The result of each operation acting on $N$-residues is then also an $N$-residue, and so extended arithmetic can be performed within the residue system without further conversion steps until we finally map results back to their standard $\mathbb{Z}_N$ representations. \subsubsection{Montgomery Reduction} \label{sec:montgomery:cl:redc} Mirroring standard modular multiplication, the Montgomery product can be decomposed into a non-modular multiplication and a \emph{Montgomery reduction}, \begin{equation} \REDC{t}{N,b^m} \defeq t b^{-m}\bmod N, \end{equation} where $b^m=R$ is the chosen auxiliary radix. Given $\gcd(N,b)=1$, \REDC{t}{N,b^m} provides a unique representation of $t\bmod N$.
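A few lines of Python make this residue bookkeeping concrete. In the sketch below, $u$ is obtained directly from a modular inverse; the binary algorithm developed below (\alg{montgomery:cl:redc}) instead extracts the same $u$ bit by bit, with no explicit inverse. The worked values are ours, chosen only for illustration.
\begin{verbatim}
def redc(t, N, m):
    """Montgomery reduction REDC(t) = t * 2^(-m) mod N (radix b = 2).

    u = t * N^(-1) mod 2^m is taken here from the modular inverse
    directly.  Requires t < N * 2^m and odd N.
    """
    u = (t * pow(N, -1, 1 << m)) % (1 << m)
    s = (t - u * N) >> m             # Montgomery estimate, in (-N, N)
    return s + N if s < 0 else s     # single corrective addition

N, m = 21, 5                         # R = 2^m = 32, gcd(N, 2) = 1
R = 1 << m
x, y = 13, 17
xr, yr = (x * R) % N, (y * R) % N    # map both inputs to N-residues
assert redc(xr * yr, N, m) == (x * y * R) % N      # MonPro gives a residue
assert redc((x * y * R) % N, N, m) == (x * y) % N  # map back to Z_N

# Quantum-classical shortcut (anticipating \sect{montgomery:mult}): with
# X' = X*R mod N precomputed classically, MonPro(X', y) returns
# X*y mod N directly in standard representation.
assert redc(xr * y, N, m) == (x * y) % N
\end{verbatim}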
The reduction can be computed efficiently for $t<b^mN$ due to the identity, \begin{equation} \eq{montgomery:cl:redc-equiv} t b^{-m} \equiv \qty(t-u N)/b^m \pmod{N}, \end{equation} where, \begin{equation} \eq{montgomery:cl:redc-U} u \defeq t N^{-1}\bmod b^m, \end{equation} such that, for co-prime $N$ and $b$, $u$ uniquely solves, \begin{equation} \eq{montgomery:cl:redc-solves} t-u N \equiv 0 \pmod{b^m}, \end{equation} ensuring that $(t-u N)$ is divisible by $b^m$.\footnote{Note that in the above we have deviated slightly from the typical construction (for example, as presented in~\cite{Montgomery1985}), in which the estimate is bounded by the range $[0,2N)$. By rearranging signs slightly, we have shifted the bounds to the $(-N,N)$ range presented, which will serve to (very slightly) simplify the quantum construction we introduce below.} The right-hand side of \eq{montgomery:cl:redc-equiv} is bounded by $-N<(t-u N)/b^m<t/b^m$, such that its maximal value decreases toward zero for increasing $m$. Taking $m \ge \clog[b]{(t/N)}$, this limit becomes, \begin{equation} \eq{montgomery:cl:est-bound} -N \le (t-u N)/b^m < N, \end{equation} enabling the computation of the Montgomery residue $t R^{-1}\equiv(t-u N)/b^m\pmod{N}$ with a single comparison and corrective addition. We refer to this term as the $m$-digit Montgomery estimate of $t$, \begin{equation} \eq{montgomery:cl:est} \MonEst{t}{N,b^m} \defeq ( t - u N )/b^m, \end{equation} and subdivide Montgomery reduction into independent \emph{estimation} and \emph{correction} stages (as in \alg{montgomery:cl:redc}). Choosing $b=2$, the division by $b^m$ in \eq{montgomery:cl:est} becomes computationally trivial in binary architectures. Due to \eq{montgomery:cl:redc-solves}, we can then compute the estimate while circumventing the explicit calculation of $u$. Re-expressing \eq{montgomery:cl:redc-solves} as a multiply-accumulate operation, \begin{equation} \eq{montgomery:cl:redc-mac} t-u N = t - \sum_{k=0}^{m-1}2^ku_kN \equiv 0 \pmod{2^m}, \end{equation} we find that each subtraction of $2^ku_kN$ is the final operation affecting the $k$th bit of the accumulator, and therefore must clear that bit. Beginning with the LSB $(k=0)$, we can therefore ignore $u_k$ and simply subtract $2^kN$ if bit $k$ of the accumulator needs to be cleared. That is, each bit $u_k$ of $u$ is equivalently the $k$th bit of the accumulator immediately prior to the corresponding conditional subtraction of $2^ku_kN$. As each step ultimately clears a new bit of the accumulator, we can also right-shift the register by a single bit without loss of information. As described in \alg{montgomery:cl:redc}, \alglines{montgomery:cl:redc}{est-start}{est-end}, after $m$ such iterations we have computed the Montgomery estimate $(t-u N)/2^m$, requiring only the $m$ conditional subtractions necessary to compute $(-u N)$ and $m$ computationally trivial right-shifts, and sidestepping the full-register comparison operations required of standard modular reduction. Combined with a single modulo-$N$ correction (\alglines{montgomery:cl:redc}{mod-start}{mod-end}, in which we finally add $N$ if the estimate is negative), we have constructed the binary algorithm for Montgomery reduction presented in \alg{montgomery:cl:redc}. \begin{algorithm}[h!] \caption{Classical Montgomery reduction algorithm, \REDC{t}{N,2^m}} \label{alg:montgomery:cl:redc} \begin{algorithmic}[1] \Require{Modulus $N$, integers $t$, $m$ s.t. $t < N2^m$} \Ensure{$S = t2^{-m}\bmod N$, $u = t N^{-1}\bmod2^m$} \State {$S \gets t$} \For {$k\gets 0$ to $m-1$} \label{alg-line:montgomery:cl:redc:est-start} \Comment {Estimation stage} \State {$u_k\gets S\bmod 2$} \label{alg-line:montgomery:cl:redc:LSB} \State {$S \gets S - u_k\cdot N$} \label{alg-line:montgomery:cl:redc:add} \State {$S \gets S / 2$} \label{alg-line:montgomery:cl:redc:shift} \EndFor \label{alg-line:montgomery:cl:redc:est-end} \Statex{} \If {$S < 0$} \label{alg-line:montgomery:cl:redc:mod-start} \Comment {Correction stage} \State {$S \gets S+N$} \EndIf \label{alg-line:montgomery:cl:redc:mod-end} \Statex \State \Return {$S$} \end{algorithmic} \end{algorithm} \subsection{Quantum Montgomery Reduction} \label{sec:montgomery:qu:redc} Given the initial construction of [n+m]{t} outlined in \sect{div:mult}, we now introduce a reversible Montgomery reduction operator in imitation of the \ensuremath{\text{\textsc{Q-}}}\DIV{} operator defined in \eq{div:div}, \begin{equation} \eq{montgomery:qu:redc} {0} [n+m]{t} \lra[\ensuremath{\text{\textsc{Q-}}}\REDC{N,2^m}{}] [n]{ t2^{-m} \bmod N } [m+1]{ t N^{-1} \bmod 2^{m+1} }. \end{equation} While the Montgomery reduction $(t2^{-m}\bmod N)$ is not unique for an unknown $t\ge N$, the \ensuremath{\text{\textsc{Q-}}}\REDC{N,2^m}{} operation is bijective iff $t<2^{m+1}N$ and $\gcd(N,2) = 1$ by the Chinese remainder theorem. As in the classical procedure, we split the quantum Montgomery reduction operation into distinct \emph{estimation} and \emph{correction} stages. Mirroring \sect{div}, we will then couple the \ensuremath{\text{\textsc{Q-}}}\REDC{}{} operator to the standard initial multiplication stage (computing {t}) and to a final \emph{uncomputation} stage (clearing {t N^{-1}\bmod2^{m+1}}) in order to construct a quantum Montgomery multiplication operator, \ensuremath{\text{\textsc{Q-}}}\MonPro{}{}. \subsubsection{Estimation Stage} \label{sec:montgomery:qu:redc-est} We first compute the Montgomery estimate, $(t-u N)/2^m$. Because this alone is not a unique representation of $t$, we preserve the $m$ bits of $u$ that naturally fall out of the classical estimation procedure, arriving at the bijective mapping, \begin{equation} \eq{montgomery:qu:redc-est} {0} [n+m]{t} {}\lra{} [n+1]{ (t - u N)/2^m } [m]{ u }. \end{equation} By \eq{montgomery:cl:est-bound}, we require $n+1$ bits to represent the estimate state [n+1]{(t - u N)/2^m}, necessitating a single ancillary bit in addition to the $n+m$ bits holding $t<nN$. We proceed as in \alg{montgomery:cl:redc}. Prior to the $k$th iteration of the estimation stage, the LSB of the accumulation register is equivalently the $k$th bit of $u$, or $u_k$ (\alg{montgomery:cl:redc}, \algline{montgomery:cl:redc}{LSB}). Classically, $u_k$ is then used to condition the subtraction of $N$, so as to clear the LSB of the accumulator (\algline{montgomery:cl:redc}{add}). This represents a two-to-one operation, necessitating the creation of a garbage bit in the reversible case. However, after each iteration, we also know that the newly cleared LSB will be immediately shifted out (\algline{montgomery:cl:redc}{shift}); we can therefore consider only the effect of subtraction on the remaining bits of the register. The subtraction occurs only when the accumulator is odd (and $N$ is odd by design), so no borrow will be generated by the first bit of the subtraction.
It is then equivalent to subtract $\flr{N/2}=(N-1)/2$ from the truncated register, conditioned on the LSB ${s_0} = {u_k}$: \begin{equation} \eq{montgomery:qu:redc-est-step} [w]{S} = [w-1]{s_{w-1}...s_1} {s_0} = [w-1]{\flr{S/2}} {u_k} \lra [w-1]{\flr{S/2} - u_k\cdot\flr{N/2}} { u_k }. \end{equation} In this way, we avoid accumulating new ancilla bits with each reduction step. Iterating through $k=0,...,m-1$, we compute the Montgomery estimate $(t - u N)/2^m$ with $m$ controlled subtractions. The garbage information created in this sequence is simply the $m$ bits of $u$, which are computed in place of the $m$ least significant bits of the input state [n+m]{t}. As shown in \fig{montgomery:qu:redc-est}, the sequence of in-place subtractions mirrors that of the division case (\sect{div}, \fig{div:div}), but with adders controlled by the least significant bits of the product register. Both reduction procedures have the same depth (in quantum adders): while Montgomery reduction sidesteps the trial subtractions required in the $\ensuremath{\text{\textsc{Q-}}}\DIV{}$ operation, the implementation of controlled, in-place quantum adders requires a pair of out-of-place adders identical to the paired adders of the division operation. \begin{figure}[H] \centering \includecircuit[8]{montgomery-est.pdf} \caption{Estimation stage of the quantum Montgomery reduction algorithm (\sect{montgomery:qu:redc-est}), \ensuremath{\text{\textsc{Q-}}}\MonEst{ N,2^m}{}. Note the parallel to the division-based procedure (\fig{div:div}); where the latter computes the quotient $q$ with trial subtractions and re-additions conditioned on the MSBs of the accumulation register, Montgomery reduction allows for the computation of $u$ from the LSBs of the register.} \label{fig:montgomery:qu:redc-est} \end{figure} \subsubsection{Correction Stage} \label{sec:montgomery:qu:redc-cor} The second piece of the classical Montgomery reduction algorithm is a single modulo-$N$ correction (\alg{montgomery:cl:redc}, \alglines{montgomery:cl:redc}{mod-start}{mod-end}). For $-N < (t-u N)/2^m < N$, this correction requires a single controlled addition of $N$ conditioned on the sign of the estimate. Labeling the sign bit {s_\pm}, we perform, \begin{equation} [n+1]{(t-u N)/2^m} \lra {s_\pm} [n]{ (t-u N)/2^m + s_\pm \cdot N}, \end{equation} where by \eq{montgomery:cl:redc-equiv} the final term is equivalently $[n]{t2^{-m}\bmod N}$. We are now left with the sign bit ${s_\pm}$ as garbage. For odd $N$, the conditional addition of $N$ must change the LSB of the register. Defining the LSBs before and after the modular reduction, \begin{align} p_\oplus &\defeq \qty( t2^{-m}\bmod N ) \bmod 2,\\ s_\oplus &\defeq \qty( t - u N )/2^m \bmod 2, \end{align} we therefore find $s_\pm \oplus p_\oplus = s_\oplus$. By negating the sign bit ${s_\pm}$, conditioned on ${p_\oplus}$, with a single \CNOT/ gate (as in \fig{montgomery:qu:redc-cor}), we return it to the pre-addition LSB ${s_\oplus}$. \begin{figure}[H] \centering \includecircuit[5]{montgomery-mod.pdf} \caption{Quantum circuit demonstrating the correction stage of a quantum Montgomery reduction algorithm (\sect{montgomery:qu:redc-cor}), assuming $-N<(t-u N)/2^m<N$.} \label{fig:montgomery:qu:redc-cor} \end{figure} We can then re-express, \begin{equation} s_\oplus \equiv \qty(t - u N) /{2^m} \equiv (\ensuremath{\smash{\tilde{u}}} - u) / 2^m \pmod{2}, \end{equation} where, \begin{equation} \eq{montgomery:qu:redc-UU} \ensuremath{\smash{\tilde{u}}} \defeq t N^{-1} \bmod 2^{m+1}, \end{equation} descends from an $(m+1)$-bit Montgomery estimate $(t-\ensuremath{\smash{\tilde{u}}} N)/2^{m+1}$. In this form, {s_\oplus} can be concatenated with [m]{u} and equivalently described, \begin{equation} {s_\oplus} [m]{u} = [m+1]{2^ms_\oplus +u} = [m+1]{ \ensuremath{\smash{\tilde{u}}} } = [m+1]{t N^{-1}\bmod 2^{m+1}}, \end{equation} completing the \ensuremath{\text{\textsc{Q-}}}\REDC{N,2^m}{} operation introduced in \eq{montgomery:qu:redc}. \subsection{Uncomputation} \label{sec:montgomery:uncompute} In order to construct a quantum Montgomery multiplier from the $\ensuremath{\text{\textsc{Q-}}}\REDC{N,2^m}{}$ operator, we must finally uncompute the auxiliary output state [m+1]{\ensuremath{\smash{\tilde{u}}}}, mirroring the uncomputation of the quotient in \sect{div}. Given the partial products composing $t$, we can express $\ensuremath{\smash{\tilde{u}}}$ as, \begin{equation} t N^{-1} \equiv \sum_{k=0}^{n-1} y_k \qty( 2^k X \bmod N )N^{-1} \pmod{2^{m+1}}. \end{equation} We can therefore classically precompute the addends $((2^kX\bmod N)N^{-1}\bmod 2^{m+1})$ for $k=0,...,n-1$, and use $n$ controlled $(m+1)$-bit subtractions, conditioned on the bits of [n]{y}, to clear the register. Identically to \sect{div:uncompute}, the narrow register enables parallelization of the quantum adders to an overall depth of \ord{\log_2^2n}. \begin{figure}[ht] \begin{center} \includecircuit[9]{montgomery-full.pdf} \end{center} \caption{Out-of-place quantum Montgomery multiplier \ensuremath{\text{\textsc{Q-}}}\MonPro{t}{N,2^m}, comprising the initial computation of [n+m]{t}, estimation ({\sect{montgomery:qu:redc-est}}) and correction (\sect{montgomery:qu:redc-cor}) stages of \ensuremath{\text{\textsc{Q-}}}\REDC{N,2^m}{}, and a final uncomputation of [m+1]{\ensuremath{\smash{\tilde{u}}}}.} \label{fig:montgomery:qu:monpro} \end{figure} \subsection{Modular Multiplication via Quantum Montgomery Reduction} \label{sec:montgomery:mult} In the case of quantum-classical multiplication, we can use the quantum Montgomery multiplier developed here to compute a product where the quantum register remains in the standard representation. Observing the Montgomery product of an $N$-residue $X'$ and a value $y$ in standard representation, \begin{equation} \MonPro{X',y}{N,R} = (XR\bmod N)yR^{-1}\bmod N = Xy\bmod N, \end{equation} we find the modular product $Xy\bmod N$ returned in standard representation. Classically, this would represent a disadvantage for most operations: we would then need to remap the result to an $N$-residue representation for subsequent calculations. In the quantum-classical case, however, we can precompute the residue of the classical multiplicand off-line, and use it to compute the residue, \begin{equation} \eq{montgomery:mult:prod} t' \defeq{} \sum_{k=0}^{n-1} y_k (2^kX'\bmod N), \end{equation} from {y} identically to the usual product [n+m]{t}. Applying \ensuremath{\text{\textsc{Q-}}}\REDC{N,2^m}{} to [n+m]{t'}, we compute the modular product [n]{Xy\bmod N} in standard representation while relegating the conversion overhead of Montgomery multiplication to classical precalculation. \subsubsection{Montgomery Multiplication with Two Quantum Inputs} \label{sec:montgomery:all-quantum} We can also define a quantum Montgomery multiplier taking as input two $n$-bit quantum values.
\subsubsection{Montgomery Multiplication with Two Quantum Inputs} \label{sec:montgomery:all-quantum} We can also define a quantum Montgomery multiplier taking as input two $n$-bit quantum values. This requires adapting the initial multiply so that it accepts two quantum inputs, from which we can compute [n+m]{t} with $n$ quantum shift-and-reduce operations, as defined in \sect{mod-mult}. The \ensuremath{\text{\textsc{Q-}}}\REDC{N,2^m}{} algorithm described in \sect{montgomery:qu:redc} acts only on [n+m]{t} and is independent of the individual multiplicands, and therefore goes unchanged. However, the final uncomputation of [m+1]{\ensuremath{\smash{\tilde{u}}}} requires some modification: we can no longer precompute the addends $\{(2^kx\bmod N)N^{-1}\bmod2^{m+1},k=0,...,n-1\}$. Instead, noting \eq{montgomery:cl:redc-solves}, we perform an in-place multiplication of [m+1]{\ensuremath{\smash{\tilde{u}}}} by the classical value $N$ (requiring $m = \clog[2]{n}$ adders). We are then left with the truncated product [m+1]{t}, which we can clear by reversing the $n$ adders of the initial multiplication (but truncated to $m+1$ bits). The reversed sequence will further undo the shift of [n]{x} in the initial multiply: \begin{alignat}{2} [n+m+1]{0}[n]{x}[n]{y} & \lraover[\ensuremath{\text{\textsc{Q-}}}\REDC{N,2^m}{}]{\ensuremath{\text{\textsc{Q-}}}\ensuremath{\text{\textsc{MAC}}}} [1]{0} [n+m]{t}&& [n]{2^nx\bmod N} [n]{y} \nonumber\\ & \lraover{\ensuremath{\text{\textsc{Q-}}}\REDC{N,2^m}{}} [n]{xy2^{-m}\bmod N} [m+1]{\ensuremath{\smash{\tilde{u}}}}&& [n]{2^nx\bmod N} [n]{y} \nonumber\\ & \lraover[\ensuremath{\text{\textsc{Q-}}}\REDC{N,2^m}{}]{\ensuremath{\text{\textsc{Q-}}}\ensuremath{\text{\textsc{MUL}}}(N)} [n]{xy2^{-m}\bmod N} [m+1]{t}&& [n]{2^nx\bmod N} [n]{y} \nonumber\\ & \lraover[\ensuremath{\text{\textsc{Q-}}}\REDC{N,2^m}{}]{\ensuremath{\text{\textsc{Q-}}}\ensuremath{\text{\textsc{MAC}}}^\dagger} [n]{xy2^{-m}\bmod N} [m+1]{0}&& [n]{2^nx\bmod N} [n]{y}. \end{alignat} \subsection{Quantum Multiplication} \label{sec:appen:mult} Given an in-place quantum adder, we can construct the quantum multiply-accumulate circuit, \begin{equation} [w]{z}[n]{y} \lra[\ensuremath{\text{\textsc{Q-}}}\ensuremath{\text{\textsc{MAC}}}(X)] [w]{Xy+z} [n]{y}. \end{equation} Using the input {z} as an initialized accumulator register, we add $2^kX$ in-place for each bit $y_k$ of [n]{y}, requiring $n$ controlled, in-place quantum adders. The \ensuremath{\text{\textsc{Q-}}}\ensuremath{\text{\textsc{MAC}}}{} operator can be generalized to any sum controlled by the bits of {y}. In particular, as is described in the text (\sect{div:mult}), we can accumulate the congruent value, \begin{equation} t \defeq \sum_{k=0}^{n-1} y_k\qty(2^k X \bmod N), \end{equation} such that $t\equiv Xy\pmod{N}$, requiring at most $w\le\clog[2]{nN}\le n+\clog[2]{n}$ bits to represent. As before, we require $n$ in-place controlled additions to the accumulation register [w]{z}, now of the reduced partial products $2^kX\bmod N$. If the accumulator is smaller than is required to hold $t$ (i.e. $w<\clog[2]{(nN)}$), these adders are truncated and the resulting product is computed modulo-$2^w$. For classical $X$, we can similarly construct an in-place quantum multiplier, \begin{equation} [w]{0}[n]{y} \lra[\ensuremath{\text{\textsc{Q-}}}\ensuremath{\text{\textsc{MUL}}}(X)] [n+w]{Xy}, \end{equation} where the product is computed over the input register {y} and is implicitly modulo-$2^{n+w}$. For odd $X$, we can express the product, \begin{equation} Xy = y + y(X-1) = y + \sum_{k=0}^{n-1} y_k\cdot2^{k+1}\qty(\frac{X-1}{2}), \end{equation} where $(X-1)/2$ is an integer.
Each addition of $2^{k+1}(X-1)/2$ is then conditioned on the $k$th bit of $y$, and affects only the bits more significant than $k$ in the resulting sum. Given a register initialized to [n]{y}, we can therefore perform each of these additions in-place in descending order ($k=n-1,...,0$), so that each bit $y_k$ of {y} controls an addition affecting only the more significant bits of the register, before $y_k$ itself is affected by any addition in the sequence. For even $X$, the addend $(X-1)/2$ is not an integer. Instead, we compute the equivalent product $(X/2^{\lambda})y$, where $\lambda=v_2(X)$ is the two-adic order of $X$. The full product $Xy$ can then be produced by simply concatenating this result with $\lambda$ zero-initialized ancilla bits. The in-place multiplier relies on the $k$ trailing zeros of each partial product $2^kX$, and so is not compatible with the partially-reduced multiply introduced above. However, given the distribution of trailing zeros in the set of reduced partial products, it is likely that the result can be computed over about $\log_2(n)$ bits of the input state. \section{Acknowledgments} The authors would particularly like to acknowledge Kevin Obenland at MIT Lincoln Laboratory, whose invaluable discussions, insight, and expertise in both the design of high-performance reversible arithmetic and the efficient computational analysis of reversible circuits were critical to the success of this work. We also graciously acknowledge the support of the NSF iQuISE IGERT and the Intelligence Advanced Research Projects Activity (IARPA). \newpage{} \bibliographystyle{unsrt} \subsection{Prefix Adders} \label{sec:circuits:prefix_adders} The calculation of the carry bit-string for an adder can be thought of as a prefix operation, i.e., $C = c_{n-1}\circ\cdots\circ c_{2}\circ c_1\circ c_0$, where each $c_i$ is represented by the tuple $(p_i,g_i)$ and $p_i$/$g_i$ indicates that a carry is propagated/generated at position $i$. The prefix composition function is defined as: $(p_{ij},g_{ij}) = (p_i\land p_j, g_j\lor(g_i\land p_j))$. For a multi-bit adder with inputs $a_{[n-1:0]}$ and $b_{[n-1:0]}$, the single-bit inputs to the prefix network are calculated as: $p_i = a_i\oplus b_i$ and $g_i = a_i\land b_i$. The generate value from the first bit to bit $i$ (defined as $g_{0i}$) is the carry out of position $i$. A parallel network can then be used to compute the prefix bits. For classical, non-reversible adders, many networks have been proposed and used to create adders. Two example parallel-prefix networks are shown in \fig{adder:prefix:prefix_structure}. For a description of these adders and others, consult any textbook on computer arithmetic, for example~\cite{Ercegovac2004}. Reversible adders suitable for quantum computing can also be constructed from parallel-prefix networks; however, because of the constraints of reversible logic, the networks that require the fewest resources and have the lowest depth may differ from those in the classical, non-reversible case. For example, the adder described in~\cite{Draper2006} is based on the network structure shown in \fig{adder:prefix:prefix_structure}(a). The depth of this adder is logarithmic in the number of bits and is well suited for a reversible implementation because it uses low fan-in/out nodes and requires fewer gates than many of the other proposed log-depth networks.
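As a small illustration of this formulation, the sketch below (our own; the function names are hypothetical) composes the $(p_i,g_i)$ tuples with the operator defined above and checks the resulting carries against ordinary integer addition.
\begin{verbatim}
# Prefix-carry computation: p_i = a_i XOR b_i, g_i = a_i AND b_i, combined
# with the operator (p_ij, g_ij) = (p_i & p_j, g_j | (g_i & p_j)).

def compose(lo, hi):
    """Combine a block (lo) with the adjacent more-significant block (hi)."""
    pl, gl = lo
    ph, gh = hi
    return (pl & ph, gh | (gl & ph))

def carries(a_bits, b_bits):
    """Carry out of each bit position, via a sequential prefix scan."""
    pg = [(a ^ b, a & b) for a, b in zip(a_bits, b_bits)]
    out, acc = [], None
    for x in pg:
        acc = x if acc is None else compose(acc, x)
        out.append(acc[1])             # g_{0i} is the carry out of position i
    return out

# Exhaustive check against integer addition for 4-bit inputs.
n = 4
for a in range(1 << n):
    for b in range(1 << n):
        c = carries([(a >> i) & 1 for i in range(n)],
                    [(b >> i) & 1 for i in range(n)])
        for i in range(n):
            lo_mask = (1 << (i + 1)) - 1   # carry out of the low i+1 bits
            assert c[i] == (((a & lo_mask) + (b & lo_mask)) >> (i + 1)) & 1
\end{verbatim}
Because the composition operator is associative, the same scan can be evaluated as a logarithmic-depth tree (as in the Brent-Kung network of \fig{adder:prefix:prefix_structure}(a)) without changing the result.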
Linear-depth parallel-prefix networks can also be defined, for example the network shown in \fig{adder:prefix:prefix_structure}(b) has depth $n/2+1$ for $n$ bits. This adder is similar to a sequential ripple adder, but because we calculate two-bit propagate values across adjacent pairs of bits, the carries can be rippled across two bits at each step. The odd-position carries are not in the critical path of the circuit and can be computed in a later step. We call this adder the prefix-ripple adder. We have implemented a reversible circuit based on the network of \fig{adder:prefix:prefix_structure}(b), and the repeating segment used to calculate pairs of carries is shown in \fig{adder:prefix:half_prefix}. The first section of the circuit calculates the 2-bit propagate values and can be executed in parallel across all bits in the adder. $n/2$ sequential \Toffoli/ gates are then used to ripple the carry to all even bit-positions of the adder. The last step is to calculate the odd-position carries. The carry at an odd position $i$ can be calculated in parallel with the calculation of an even-position carry in a later step, and therefore only adds a \Toffoli/ depth of one to the entire circuit. A full circuit built from the two-bit segments of \fig{adder:prefix:half_prefix} would require $4$ \Toffoli/ gates for every two bits in the full out-of-place addition of two quantum values. The circuit also requires an additional $n$-bit ancilla register to hold the carries. Comparing the prefix-ripple adder to the ripple adder in~\cite{Cuccaro2004}, the new adder has half the \Toffoli/ depth but requires a factor of $2$ more \Toffoli/ gates, and an extra $n$-bit ancilla register. If the prefix-ripple adder is used to add a classical value to a quantum one, then the first \Toffoli/ gate in the two-bit segment is reduced in degree and the cost becomes $1.5$ \Toffoli/ gates per bit. Additionally, the total number of qubits for this classical/quantum adder is $2n+1$, which is equivalent to that required by the adder in~\cite{Cuccaro2004} used in the same way. \begin{figure} \centering \includegraphics[scale=0.68]{fig-prefix-adders} \caption{Network structure for two different prefix adders. The shaded nodes produce both propagate (p) and generate (g) bits from the inputs and the unshaded nodes only produce the generate bits. The Brent-Kung adder has depth $2\log_2(n)-1$ for $n$ bits and the prefix-ripple adder has depth $n/2+1$. } \label{fig:adder:prefix:prefix_structure} \end{figure} \begin{figure} \centering \inputtikz{half-prefix} \caption{Circuit to calculate two bits of the carry for the prefix-ripple adder. This adder requires $4$ \Toffoli/ gates when both inputs are quantum values. The section labeled \emph{2-bit propagate} can be executed in constant time across all the bits in the adder. The \emph{rippled carry} step requires $n/2$ steps for the entire adder and calculates the second carry bit in the pair. The first carry bit of the pair is calculated after rippling the carry and can be done in constant time for all bits in the adder. } \label{fig:adder:prefix:half_prefix} \end{figure} \section{Resource Evaluation} \label{sec:resources} In this section, we present a detailed resource analysis of the modular multiplier designs introduced in the previous sections, alongside that of the standard modular-adder approach (\sect{mod-mult}).
We focus our analysis on the quantum-classical modular multiplier (where one input is a classical parameter); as described in \sect{mod-mult:quantum-quantum}, in-place multiplication with two quantum inputs would require a circuit to calculate the multiplicative inverse of one of the inputs, and this inverse circuit would dominate the resources of the multiplier. If the particular details of this analysis are not of interest, a summary of important takeaways can be found in \sect{resources:summary}. We explicitly generate circuits for each multiplier design with various sizes, different classical constants, and different classical moduli. We utilize the four adders discussed in the appendix: the Fourier basis adder~\cite{Draper00}, the logarithmic-depth prefix (or carry-lookahead) adder~\cite{Draper2006}, the linear-depth majority ripple adder~\cite{Cuccaro2004}, and the prefix-ripple adder defined in \apx{circuits:prefix_adders}. Resource analysis is performed assuming two different hardware models: the first treating all gates equally, and the second accounting for the relative complexity of each gate in the context of a fault-tolerant logical circuit using quantum error-correcting codes. With each model, we determine an overall circuit \emph{size}, or combined cost of all gates, and \emph{depth}, or total latency of the circuit allowing for unbounded parallelization. We do not include locality constraints in our hardware models. The impact of locality is primarily dependent on the particular architecture and the addition technique employed, and is roughly the same for each multiplier design. \subsection{Evaluation Methodology} \label{sec:resources:methodology} For each evaluation, we construct a full quantum circuit consisting of gates from a set natural to the adder that is being used. For example, the Fourier basis adders predominantly require \CRy/ controlled rotation gates, while all circuits utilize \X/ gates, with zero, one, or two controls. We use a C++ program to generate each circuit. The program can generate sequences of arbitrary size, to a practical limit. The user specifies the classical modulus and multiplicand, which are folded into the circuit to produce an optimized circuit unique to those classical inputs. Circuit depth for each hardware model is then determined by running the generated circuit through a scheduler, which places gates into parallel time slots based on the specified latency of each gate in that model. The scheduler reorders gates according to standard commutation rules: gates that do not share any qubits can be trivially reordered, and two gates that do share qubits can be reordered if they both commute either with the Pauli \Z/ operator or the Pauli \X/ operator. Finally, each circuit is verified for a variety of input values with a separate gate-level simulation program. \begin{table}[ht] \centering \begin{tabular}{r|c|c} \hline &\multicolumn{2}{c}{Hardware Model} \\ \cline{2-3} Gate & Equal-latency & Fault-tolerant \\ \hline $\CX/$ & 1 & 1\\ $\H/$ & 1 & 1\\ $\T/$ & 1 & 10 \\ $\Toffoli/$ & 1 & 40 \\ $\GATE{Pauli}$ & 1 & 1\\ $\Ry/(\alpha)$ & 1 & $66\log_2(n) + 33$\\ $\CRy/(\alpha)$ & 1 & $\sim66\log_2(n) + 35$\\ \hline \end{tabular} \caption{Unit costs of gates for two hardware models. The first, ``equal-latency'' model assumes the same latency for all gates. The second, ``fault-tolerant'' model enforces gate-dependent latencies, representing a rough cost in primitive gates available to an error-correcting code.
See the text for a description of the cost of the single-qubit $\Ry/(\alpha)$ and controlled $\CRy/(\alpha)$ rotation gates. } \label{tab:resources:hwmodel} \end{table} Our resource evaluations assume two different hardware models. In the first model (which we will label ``equal-latency''), all gates, including controlled arbitrary-angle rotation gates, have the same latency. This provides a useful characterization of our circuits, and a comparison to other circuits in the literature. However, it is not realistic for architectures that implement limited gate sets. The second model (``fault-tolerant'') is motivated by the gates that are typically available on logical qubits implemented using a quantum error correction code. For standard Calderbank-Shor-Steane (CSS) codes~\cite{Steane1996}, Clifford gates such as $\CX/$ and $\H/$ are easy to implement whereas gates such as $\T/$ and $\Toffoli/$ require the use of more complicated circuitry and potentially the use of magic-state distillation, which can be costly. For this reason, we assign a cost of $10$ to the $\T/$ gate and a cost of $40$ to the $\Toffoli/$ gate. In order to construct controlled rotation gates fault-tolerantly, we decompose each into the sequence of two \CNOT/ gates and two unconditional rotations depicted in \fig{resources:cy-decomp}. As shown, the unconditional $\Ry/(\alpha/2)$ rotation in the decomposed gate commutes with the remainder of the operation, as well as adjacent single-qubit and controlled rotations targeting that qubit. We can therefore collect and merge these commuting gates into a single unconditional rotation per qubit, as shown in \fig{resources:cys-decomp}. We then expect that, in the fault-tolerant model, each decomposed controlled rotation has an amortized cost of one rotation gate in addition to the two \CNOT/ gates. Also shown in \fig{resources:cys-decomp}, the control qubit of the decomposed gate is only involved in the \CNOT/ operations; given the low latency of \CNOT/s relative to rotation gates in this model, the latter can be further parallelized between the \CNOT/s of adjacent gates sharing a control qubit. \begin{figure}[!h] \centering \includecircuit[2.2]{cr-decomp.pdf} \caption{Decomposition of the $\CRy/(\alpha)$ controlled rotation gate into \CNOT/s and single-qubit rotations. Because the control qubit is not active during the single-qubit rotations, this decomposition also allows for greater parallelization than a composite two-qubit gate of the same latency.} \label{fig:resources:cy-decomp} \end{figure} \begin{figure}[!h] \centering \includecircuit[6]{crs-decomp.pdf} \caption{The outer rotation gate of the decomposed $\CRy/$ gates (\fig{resources:cy-decomp}) can be commuted through adjacent gates, and combined into a single rotation per qubit. This freed control qubit enables further parallelization of adjacent gates, taking advantage of the low latency of the \CNOT/ gate relative to the rotation.} \label{fig:resources:cys-decomp} \end{figure} Arbitrary single-qubit rotations cannot be implemented directly on CSS codes, and instead must be approximated with sequences from the finite set of available gates. A sequence of $\T/$ and $\H/$ gates approximating a single-qubit rotation to an accuracy $\epsilon$ can be constructed with $3\log_2(1/\epsilon)$ steps, where each step contains an $\H/$ gate and a $\T/$ gate raised to an integer power~\cite{Ross2014}.
Noting that the complexity of implementing a $\T/^k$ gate on a CSS code should be no more than that of a single $\T/$ gate, we assume that the cost of each $(\H/\T/^k)$ step is the same as that of one $\H/$ gate and one $\T/$ gate. The necessary accuracy $\epsilon$ for fault-tolerance is a function of the total number of rotation gates in the multiplier. To first order, the Barrett and Montgomery multipliers require $2n^2$ \CRy/ rotations, each of which can be implemented using two $\CX/$ gates and one effective single-qubit rotation. We therefore choose $\epsilon = 1/(2n^2)$, requiring $3\log_2(2n^2) = 6\log_2(n) + 3$ steps to approximate the single-qubit rotation. Incorporating the costs of $\T/$ and $\H/$ gates determined above, we estimate a total cost of $66\log_2(n) + 33$ per rotation, or $66\log_2(n)+35$ for the \CRy/ gate in \fig{resources:cy-decomp}. The cost of each gate in the two hardware models is summarized in \tab{resources:hwmodel}. Given the finite precision with which we are approximating rotation gates, it is also reasonable to explicitly limit the precision of the controlled rotation gates we can implement. In particular, we can drop rotations with an angle $\theta$ such that $\theta^2 \approx\epsilon$, as these are sufficiently well approximated by the identity operation. This simplification can dramatically reduce the overall gate count, while in the presence of decoherence possibly improving operator fidelity~\cite{Barenco1996}. For example, the set of rotation gates required by the \QFT/ is described by $\{\theta=\pi/2^k\mid k=1,...,n\}$, and so can be truncated after $k\sim\log_2{n}+2$ as described in~\cite{Barenco1996}. We use this approximation in the analysis of all Fourier-basis circuits. For each analysis, we construct sequences with increasing input register widths $8\le n\le2048$. To avoid pathological cases, for a given $n$ we generate and average the behavior of eight unique quantum circuits, with (when applicable) classical moduli $2^{n-1}<N<2^n$ and multiplicands $0<X<N$ randomly selected from uniform distributions. We expect a slight reduction in circuit size from the worst case due to this randomization; for example, each binary adder contains a number of gates conditioned on the bits of the classical addend, half of which can be expected to be eliminated for a randomly chosen addend (with negligible variance for large $n$). \subsection{Adder Comparison} \label{sec:resources:adders} We first evaluate the different quantum addition techniques we will use to benchmark our modular multipliers. In \fig{resources:plots:adders}, we show the results of simulations of each of the adders using the two hardware models described above. Each circuit is generated to perform in-place, controlled quantum-classical addition, with a quantum input register size $0<n\le2048$ randomly chosen from a logarithmic distribution. In the equal-latency model (i.e. assuming all gates are equivalent), we see the expected logarithmic scaling of the prefix adder depth, and linear scaling of the ripple-carry and prefix-ripple constructions. As in~\cite{Cuccaro2004}, the asymptotic latency of the ripple-carry adder is $\sim2.0n$. The prefix-ripple circuit improves upon the ripple-carry, approaching a depth of $\sim1.0n$ in our data set, but doubles the total number of gates required. This observed depth matches what we would expect for a circuit dominated by one \Toffoli/ gate per two bits (where both the forward and reverse circuits required for the in-place operation contribute $n/2$).
The Fourier-basis adder simply consists of $n$ controlled rotation gates in this model, and affords no parallelization due to the shared control qubit. The latencies of the three binary adders are dominated by \Toffoli/ gates, and so are increased proportionally in the fault-tolerant model. However, the total gate count of the prefix adder, having a greater overall proportion of \Toffoli/ gates, is increased more significantly than the ripple and prefix-ripple circuits. The decomposition of rotation gates required in the case of Fourier-basis addition in this model results in \ord{n\log n} total gates, dominating the linear sizes of the binary adders. However, the latency of the fault-tolerant Fourier-basis adder is only increased by a factor of two at large $n$. As described above, the logarithmic-depth single-qubit rotations can be parallelized such that the asymptotic latency is dominated by the $2n$ \CNOT/ gates sharing a control qubit. Below $n=652$, the latency of two single-qubit rotations is greater than $2n$ and so must dominate the circuit depth. This transition is clearly visible in the fault-tolerant latency plot of \fig{resources:plots:adders}. The depth of the Fourier-basis adder is comparable to the logarithmic-depth prefix adder prior to the dominance of the $2n$ \CNOT/s, and consistently below that of the ripple-carry and prefix-ripple adders. \begin{figure}[!htb] \centering \includeplot{adder} \caption{Resources required for standalone quantum adder implementations. The details of the adders are described in the Appendix. The depth is the latency of the parallelized circuit and the size is the total number of gates in the circuit. The inflection point in the depth of the fault-tolerant Fourier-basis adder is where the $2n$ \CNOT/ gates from the adder's control qubit begin to dominate the logarithmic-depth fault-tolerant rotations. Logarithmic depth could be preserved by fanning out the control, at the cost of an additional $\sim n$-qubit register, but this is unnecessary in any of our constructions.} \label{fig:resources:plots:adders} \end{figure} \subsection{Modular Multiplier Components} \begin{figure}[!ht] \begin{center} \includecircuit[8]{generic-full.pdf} \end{center} \caption{Generalized three-stage structure of our three proposed out-of-place modular multipliers, where $m\approx\log_2n$. The size of each circuit is asymptotically dominated by the initial $n$-addition \multiplication/ stage, with the \reduction/ stage requiring only \ord{\log n} adders, and the \uncomputation/ stage adders only spanning \ord{\log n} qubits. As in \sect{mod-mult:in-place}, the in-place multiplier then requires the structure to be repeated in reverse.} \label{fig:resources:generic-mult} \end{figure} The quantum division, Montgomery multiplication, and Barrett reduction circuits introduced in the preceding sections share a common high-level three-stage structure, diagrammed in \fig{resources:generic-mult}. The initial \multiplication/ stage, identical in the three designs, consists of $n$ controlled adders arranged as an $(n+\log_2n)$-qubit quantum accumulator. The goal of each design has been to reduce the overall complexity of quantum modular multiplication to that of this step. We therefore begin by generating circuits for the \multiplication/ operation in isolation with each of the quantum addition techniques outlined above. The sizes and depths of the generated circuits are displayed in \fig{resources:plots:multiply}.
Observing these results in comparison to \fig{resources:plots:adders}, in each case the total gate count predictably approaches that of $n$ isolated adders. When constructed with any of the three binary addition circuits, the overall circuit latency of the multiplier is increased by the same factor, indicating that the scheduler did not find significant opportunity to parallelize gates between adjacent adders. The commutable rotations composing the Fourier-basis multiplier, however, enable parallelization across the entire accumulator. In the equal-latency model, this allows $n$ simultaneous controlled rotation gates in each time step, circumventing the control bottleneck of the single adder and reducing the total latency to $(n+\log_2n)$. In the fault-tolerant model, the sequential adders further allow the commutation and combination of single-qubit rotations, as in \fig{resources:cys-decomp}. Accordingly, the circuit latency observed in \fig{resources:plots:multiply} is close to $66n\log_2(n)$, or that of $n$ single-qubit rotations alone. Even after this incurred cost in the fault-tolerant model, the Fourier-basis \multiplication/ circuit has the lowest depth among the adders. Unfortunately, the total gate count in this model is \ord{n^2\log n}, asymptotically dominating the $\ord{n^2}$ gates required by the three binary circuits. \begin{figure}[!htb] \centering \includeplot{multiply} \caption{Resource requirements of the initial \multiplication/ circuit common to the three modular multipliers, comprising $n$ sequential $(n+\log_2n)$-qubit quantum adders. In both hardware models, we see the expected asymptotic speedup of the logarithmic-depth prefix adder over the linear-depth ripple and prefix-ripple circuits (upper plots), and the corresponding increase in circuit size (lower plots). In the equal-latency model, the depth of the parallelized Fourier-basis multiplier is linear in $n$, while the prefix adder scales as \ord{n\log n} and the ripple and prefix-ripple as \ord{n^2}. In the fault-tolerant model, the \ord{\log n} depth of controlled rotation gates results in the higher \ord{n^2\log n} asymptotic scaling of the Fourier-basis multiplication, while its parallelized depth remains the least of the four.} \label{fig:resources:plots:multiply} \end{figure} The division, Montgomery, and Barrett circuits differ in their respective \reduction/ stages, in which the modular reduction (or Montgomery reduction) is computed from the non-modular product, alongside an \ord{\log n}-qubit quotient-like term (which is necessarily produced by a generic information-preserving modular reduction). The key to the performance of these circuits is in limiting the number of additions required in this stage to \ord{\log n}. The resources required for the \reduction/ stage of each proposed multiplier, in both the Fourier-basis and binary (prefix) implementations, are compared in \fig{resources:plots:reduce}. Because the prefix adder's depth depends logarithmically on register size, the small number of additions relative to the accumulator size makes its latency advantage over the other binary circuits even more pronounced in this stage, allowing the reduction to be performed in poly-logarithmic time, compared to the $\ord{n\log n}$ depth achieved with the ripple and prefix-ripple adders. For each binary adder, the circuit size and latency of this stage are overwhelmingly dominated by the initial \multiplication/ stage in \fig{resources:plots:multiply}.
Conversely, as described in \sect{fourier}, the need for at least one comparison operation within each multiplier's \reduction/ stage necessitates the invocation of intermediary \QFT/s in a Fourier-basis implementation, making this the highest-latency stage of the Fourier-basis modular multipliers. The Barrett and Montgomery circuits, requiring a constant number of \QFT/s, have linear depth in the equal-latency model. The \ord{\log n} comparisons in the \ensuremath{\Phi\text{-}}\DIV{} circuit increase this latency to \ord{n\log n}, asymptotically dominating the linear depth of the \multiplication/ stage. However, assuming finite-precision rotations, the gate counts of the Barrett and Montgomery circuits and of the division circuit are \ord{n\log n} and \ord{n\log^2 n}, respectively, all scaling comparably to the binary \reduction/ circuits and asymptotically smaller than the Fourier-basis \multiplication/ stage. \begin{figure}[!htb] \centering \includeplot{reduce} \caption{\reduction/ stage of proposed modular multipliers, constructed with prefix (hollow marks) and Fourier-basis (solid marks) adders. In the prefix case, the circuit size and latency of each circuit is comparable to the \uncomputation/ stage and asymptotically dominated by the initial \multiplication/. The Fourier-basis \reduction/ stages also require asymptotically fewer gates than the corresponding \multiplication/ stages, but have greater depth due to the remaining comparison operations. Like the \multiplication/ stage, the Barrett and Montgomery circuits have linear depth in the equal-latency model, whereas the division circuit requires \ord{\log n} comparisons and therefore has \ord{n\log n} depth.} \label{fig:resources:plots:reduce} \end{figure} The final step of each modular multiplier is a $(\log n)$-bit quantum accumulator, consisting of $n$ adders controlled by the bits of the input state [n]{x}. The \uncomputation/ step in \fig{resources:generic-mult} serves to uncompute the $(\log_2n)$-qubit quotient-like term produced by the preceding \reduction/ stage. As described in \sect{div:uncompute}, with the binary circuits we can parallelize adders over work qubits already necessary for the preceding stages, so as to match the poly-logarithmic circuit depth achieved with prefix adders in the \reduction/ stage. The resource requirements for the parallelized \uncomputation/ stage constructed with each addition method are shown in \fig{resources:plots:uncompute}. In the case of the binary adders, we immediately find that the size and depth of this step are then asymptotically insignificant in comparison to the initial \multiplication/ stage (\fig{resources:plots:multiply}). In the equal-latency model, we find that the ripple-carry circuit has by a small margin the lowest latency for $n<2^6$, at which point it becomes comparable to the prefix-ripple circuit. The latter performs marginally better in the fault-tolerant model. In all cases, the prefix circuit has the greatest size and latency of the binary adders. Observing \fig{resources:plots:adders}, in the range of $n$ being analyzed, we expect the depth of a $(\log_2n)$-qubit adder to be comparable with any of the three circuits, while the additional qubits required by the prefix adder reduce the number of additions we can perform simultaneously in the parallelized accumulator. We expect that for even larger multiplier sizes the prefix adder would again provide the fastest circuit.
In the Fourier-basis case, we cannot further parallelize adders, resulting in a linear depth in the equal-latency model and \ord{n\log n} depth in the fault-tolerant model. In both models, the gate count of the Fourier-basis \uncomputation/ remains asymptotically dominated by the \multiplication/ stage. \begin{figure}[!htb] \centering \includeplot{uncompute} \caption{\uncomputation/ stage accumulator: each of the proposed modular multipliers requires the clearing of a $(\log_2n)$-qubit quotient register, with additions conditioned on each bit of the $n$-qubit input register. Using binary adders, these can either be executed sequentially or parallelized over the work qubits necessary for the preceding stages. The Fourier-basis implementation does not afford this parallelization, resulting in its linear depth in the equal-latency model.} \label{fig:resources:plots:uncompute} \end{figure} \subsection{Multiplier Comparison} We now benchmark the optimized resource requirements of each of the quantum modular multipliers, constructed with both binary and Fourier-basis adders. For each design, we construct circuits for in-place modular multiplication, incorporating quantum control with the third method outlined in \sect{mod-mult:control} (at the cost of $3n$ controlled-\SWAP/ operations, decomposed into \Toffoli/ and \CNOT/ gates). We begin with the binary-basis multipliers. Given the results of the previous section, we use the prefix adder for all \ord{n}-qubit quantum adders, and the prefix-ripple adder for the $(\log_2n)$-qubit \uncomputation/-stage accumulator. The results for each multiplier, for both hardware models, are shown in \fig{resources:mults-binary}, where we have normalized by the expected asymptotic scaling ($\ord{n\log_2n}$ depth and $\ord{n^2}$ size). As promised, we find that the asymptotic latency and gate count of each of the three modular multipliers proposed in this work are twice those of the initial \multiplication/ stage alone (where the factor of two results from the two out-of-place circuits required for in-place modular multiplication). Circuits generated with the standard modular-addition approach, also shown in \fig{resources:mults-binary}, increase both the size and depth by another factor of three, corresponding to the three quantum additions required for each modular adder. \begin{figure}[!htb] \centering \includeplot{prefix} \caption{\label{fig:resources:mults-binary}In-place, controlled modular multipliers constructed with binary quantum addition circuits. We use the logarithmic-depth prefix circuit for the \ord{n}-qubit adders, but the prefix-ripple circuit for the $(\log_2n)$-qubit \uncomputation/-stage accumulator. Both size and depth are normalized by their expected asymptotic scaling.} \end{figure} \begin{figure}[!bht] \centering \textbf{Qubits Required for Modular Multipliers}% \\\vspace{2ex} \begin{minipage}[c]{0.5\textwidth} \includeplot{width} \end{minipage}\hfill \begin{minipage}[c]{0.5\textwidth} \caption{Qubit cost of modular multipliers with prefix (hollow marks) or Fourier-basis (solid marks) addition. Asymptotically, each proposed circuit requires $5n$ qubits with prefix adders, identically to the standard modular-addition approach. The Fourier circuits require just $2n$ qubits asymptotically, matching the low-ancilla but much more costly circuit in~\cite{Beauregard2003}. At small $n$, the additional $\ord{\log n}$ qubits required by our circuits become apparent.
The Barrett multiplier further requires an \ord{\log n}-qubit register for {\xyapp}, causing it to dominate the qubit counts at low $n$.\label{fig:resources:plots:mod-mult-qubits}} \end{minipage} \end{figure} In \fig{resources:plots:fourier}, we show the resources consumed by the same multipliers constructed with Fourier-basis operations. Here, the circuit depths of the different modular multipliers deviate significantly, driven by the intermediary \QFT/s necessary for comparison or modular reduction. The standard modular-addition technique, as introduced in~\cite{Beauregard2003}, requires a total of $8n$ \QFT/s, resulting in the \ord{n^2} depth observed in the equal-latency model in \fig{resources:plots:fourier}. The \reduction/ stage of the division-based circuit drives its \ord{n\log n} depth, while both the Barrett and Montgomery circuits display linear depth at large $n$, plateauing below the $14n$ worst-case latency determined in \sect{montgomery}. At large $n$, the total number of gates required by all three proposed circuits coalesces to about $2n^2$. This asymptotic size is twice that observed in \fig{resources:plots:multiply}, indicating that all three circuits are dominated in size by their non-modular multiplication operations. As a result of the imposed finite precision of Fourier rotations, the number of gates required by the division algorithm's \reduction/ stage is eventually dominated by the $n^2$ gates in the initial \multiplication/ stage (however, as seen in \fig{resources:plots:fourier}, the size of the division-based circuit remains greater in the range of circuits observed). In the fault-tolerant model, the total gate count of each circuit is increased by a factor of $\sim66\log_2(n)$, corresponding to the cost of decomposing rotation gates. However, after scheduling, we find that the total latency is only increased by $\sim40\log_2(n)$, demonstrating the additional parallelization enabled by the decomposition of controlled rotation gates described in \sect{resources:methodology}. \begin{figure}[!htb] \centering \includeplot{draper} \caption{In-place, controlled modular multipliers constructed with Fourier-basis adders. Both size and depth are normalized by the expected best-case asymptotic scaling.} \label{fig:resources:plots:fourier} \end{figure} \begin{figure}[!htb] \centering \includeplot{montgomery} \caption{Proposed modular multipliers constructed with both Fourier-basis arithmetic (solid marks) and prefix addition (hollow marks). Both size and depth are normalized by the expected best-case asymptotic scaling.} \label{fig:resources:plots:prefix-fourier} \end{figure} Finally, we plot the prefix and Fourier-basis implementations of the three proposed multipliers together in \fig{resources:plots:prefix-fourier}, and their corresponding qubit costs in \fig{resources:plots:mod-mult-qubits}. A principal motivator for Fourier-basis quantum arithmetic is reducing the number of required qubits. Motivating its introduction in~\cite{Beauregard2003}, the Fourier-basis modular-addition-based circuit requires at most $2n+3$ qubits for any $n$. The number of qubits required for the Fourier-basis implementation of each of our proposed circuits also approaches $2n$ asymptotically, requiring only a logarithmic increase in ancilla. Notably, prior speedups to Fourier-basis modular multiplication have come at the expense of additional \ord{n}-qubit registers~\cite{Kutin2006,Pavlidis2014}.
Comparatively, the ripple and prefix-ripple circuits each consume $\sim3n$ qubits, requiring an additional $n$-bit register to contain carry bits. The prefix circuit, while having the lowest latency of the binary adders, requires $5n$ qubits. In each case, the asymptotic ancilla cost for modular multiplication is that of $n$-qubit addition with the chosen adder. While the $2n^2$ gate count and linear depth of the Fourier-basis Barrett and Montgomery multiplication circuits in the equal-latency model clearly outperform the $\sim2^5n^2$ gates and \ord{n\log n} latency observed for the multipliers constructed from prefix adders, this comparison is likely not valid in conjunction with quantum error correction. In the more realistic fault-tolerant model, the Barrett and Montgomery circuits implemented with Fourier-basis addition and all three proposed multipliers constructed with prefix adders have \ord{n\log n} asymptotic latency. In our simulations, the latencies of the Fourier-basis circuits were about $75\%$ greater than those from the prefix-adder circuits, and fell below the latencies of circuits constructed in the typical modular-addition approach with either adder. The small latency increase indicates a potentially reasonable tradeoff for the qubit reduction enabled by the Fourier-basis circuits, given reasonable architectural assumptions. Comparatively, the latencies of the modular-addition circuits are increased from \ord{n\log n} to \ord{n^2\log n} when implemented with fault-tolerant Fourier-basis arithmetic instead of prefix adders. Further, while the gate counts of all of the Fourier-basis circuits grow faster than the binary circuits by a factor of \ord{\log n}, the discrepancy remains within a factor of three in the large range of $n$ observed. \subsection{Summary} \label{sec:resources:summary} We find a number of important takeaways from the above experimental analysis of the quantum division, Barrett reduction, and Montgomery reduction circuits for quantum modular multiplication. First, empirical analysis confirms the dominance of the non-modular multiplication process in the complexity of the three modular multiplication procedures introduced in this work. Benchmarking the individual stages of the modular multipliers with a variety of binary quantum adders, we have found that the gate count and circuit latency of the modular reduction and uncomputation components become insignificant at large $n$ compared to the initial multiplication step of each circuit. Additionally, examining the different addition circuits further enabled the characterization of multiplier stages in relation to their component adders, as well as corresponding optimizations. For example, in the range of register sizes considered, the normally-fast prefix adder turns out to perform the worst of the adders in the case of the width-sensitive parallelized \uncomputation/ accumulator. For the combined, in-place modular multiplier, the total asymptotic gate count and circuit latency observed for each design were twice those of a single $n$-bit quantum accumulator (where the factor of two results from the two out-of-place modular multipliers required for one in-place circuit). In total, this represents a factor of three reduction in complexity from the typical modular-addition approach to quantum modular multiplication, in which each $n$-bit addition required for product calculation is coupled with two more to perform a modular reduction step.
Accordingly, the number of ancilla qubits required for each modular multiplier is principally determined by the requirements of the $n$-bit addition circuit implemented, with all three of the proposed circuits requiring only a logarithmic number of ancilla beyond those required for the modular-addition-based circuit. Further, in contrast to many previous proposals for fast modular multiplication~\cite{Zalka1998,Kutin2006,Pavlidis2014}, the proposed algorithms do not rely on inexact mathematical calculations or assumptions, and outperform the modular-adder design for input sizes as low as $n=2^5$. Second, the modular multipliers introduced here, and the Barrett and Montgomery circuits in particular, present a unique amenability to implementation with quantum Fourier-basis arithmetic. All three of the proposed modular multipliers can be implemented with quantum Fourier-basis adders with $2n+\ord{\log n}$ qubits, matching asymptotically the low-qubit modular-addition circuit proposed in~\cite{Beauregard2003}. Assuming equal cost of each quantum gate, the total gate count of all three circuits approaches $2n^2$ at large $n$, again determined by the single multiplication procedure. Further, the Barrett and Montgomery reduction circuits circumvent the costly Fourier-basis comparisons that dominate circuit latency of the modular addition circuit (and, to a lesser extent, the division-based circuit introduced here). Experimentally, both circuits were demonstrated with latencies just below the analytically-determined $14n$-gate worst-case depth. Comparatively, the Fourier-basis multiplier constructed from modular adders has \ord{n^2} latency and requires \ord{n^2\log n} gates, while the faster circuit introduced in~\cite{Pavlidis2014} requires $9n$ qubits and has a depth of $1000n$, and the inexact multiplier introduced in~\cite{Kutin2006} has the slightly smaller $12n$-gate depth but requires $3n$ total qubits. Finally, our analysis demonstrates the competitiveness of Fourier-basis arithmetic for realistic (fault-tolerant) quantum modular multiplication. The arbitrary rotations composing Fourier-basis operations cannot be implemented directly on CSS codes, but instead must be decomposed into a sequence of available operations. Given a reasonable set of architectural assumptions and the performance bounds for approximating arbitrary quantum gates presented in~\cite{Ross2014}, we nonetheless find that Fourier-basis implementations of the proposed Barrett and Montgomery multipliers can be constructed that perform comparably to the equivalent implementations with fault-tolerant logarithmic-depth binary adders. After optimizing specifically for the decomposed Fourier rotation gates, with the assistance of a computerized scheduler, the Fourier-basis multipliers had less than twice the latency of the binary circuits in our model, in exchange for the $60\%$ reduction in required qubits. Gate counts for fault-tolerant Fourier-basis circuits continued to dominate their binary counterparts by a logarithmic factor, but remained within a factor of three in the $8\le n\le2048$ range of input sizes modeled. \subsection{Select Undo Adder} \label{sec:adders:select-undo} The adders described in this section thus far have assumed unconditional addition of two values. However, normally we require an adder that includes a global control. For example, this is the case when the adder is used in a multiplier circuit.
Typically, a reversible in-place addition first calculates ${x}{y}{0}\lra{}{x}{y}{x+y}$ out-of-place and then, because the subtraction ${x}{x+y}{0}\lra{}{x}{x+y}{y}$ produces the same set of outputs, we can run it in reverse after the addition to produce our in-place addition: ${x}{y}{0}\lra{}{x}{x+y}{0}$. Because ${y}$ and ${x+y}$ are at positions $2$ and $3$ respectively after the addition, but the subtraction requires that they are in the opposite order, we must swap the two registers. However, in most cases we can just relabel the two registers instead of swapping them. \begin{figure} \centering \inputtikz{undo-adder} \caption{In-place select-undo adder. When the select bit is $0$ the second reverse adder uncomputes the first addition. When the select bit is $1$ the second addition acts as a reverse subtraction clearing the ${y}$ input.} \label{fig:quantum:adder:undo_adder} \end{figure} The similarity between addition and subtraction suggests a way to perform controlled in-place addition. A controlled adder should perform the in-place addition as described above when the control bit is set, but when the bit is reset, it should undo the action of the addition. In the first case we are performing a reverse subtraction, and in the latter case a reverse addition. We can easily construct a circuit that selects between these two cases based on the state of a control bit. A subtraction is just an addition where one of the inputs has been negated. In two's-complement arithmetic this can be done by selectively flipping each bit with a \CNOT/ gate and setting the input carry to one. The SWAP between the two out-of-place adders must now be controlled, requiring two \CNOT/ gates and one \Toffoli/ per bit. This controlled adder, which we call the \emph{select undo} adder, is illustrated in \fig{quantum:adder:undo_adder}. For this adder neither of the out-of-place adders is controlled and, depending on the type of adder employed, this may lead to a reduction in the number of \Toffoli/ gates required. The main extra cost of the select-undo adder is the $n$ \Toffoli/ gates required between the two out-of-place adders. However, the SWAP that they are used to implement is between two existing registers and therefore no extra ancillas are required.
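To summarize the control flow, here is a small register-level toy model (our own sketch with hypothetical names; ordinary integer arithmetic stands in for the reversible adders) verifying that the select bit chooses between the undo and reverse-subtraction behaviors described above.
\begin{verbatim}
# Register-level model of the select-undo adder on w-bit registers.
W = 8
M = (1 << W) - 1

def select_undo_add(c, x, y):
    """Controlled in-place adder: (x, y) -> (x, x+y mod 2^w) if c, else (x, y)."""
    anc = (x + y) & M                # first out-of-place adder: ancilla = x + y
    if c:                            # controlled SWAP of y and the ancilla
        y, anc = anc, y              # (one Toffoli and two CNOTs per bit)
    # Second out-of-place adder, run in reverse. When c = 1 its x input is
    # negated (two's complement), so the reverse addition becomes a reverse
    # subtraction, clearing the ancilla in both branches.
    xin = (~x + 1) & M if c else x
    anc = (anc - ((xin + y) & M)) & M
    assert anc == 0                  # no garbage left in either branch
    return x, y

for c in (0, 1):
    for x in (0, 3, 200, 255):
        for y in (0, 5, 250):
            assert select_undo_add(c, x, y) == (x, (x + y) & M if c else y)
\end{verbatim}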
\section{Introduction} Physical laws are often expressed by means of differential equations. The original discoveries of differential equations associated with real-world physical processes typically require a good understanding of the physical laws, and supportive evidence from empirical observations. We consider the inverse problem: from experimental data, how can one directly recognize the underlying PDE? We combine tools from machine learning and numerical PDEs to explore the given data and automatically identify the underlying dynamics. Let $ \{u_{i}^n | i=1, \dots, N_1 \text{ and } n= 1, \dots, N_2\}$ be the given discrete time-dependent data, where the indices $i$ and $n$ represent the discrete spatial and time domains, respectively. The objective is to find the differential equation, i.e., an operator $\mathcal{F}$: \[ u_t = \mathcal{F}(x,u,u_x,u_{xx}) \text{ such that } u(x_i,t_n) \approx u_i^n. \] Recently there have been a number of important works on learning dynamical systems or differential equations. Two pioneering works can be found in \cite{bongard2007automated,schmidt2009distilling}, where symbolic regression was used to recover the underlying physical systems from experimental data. In \cite{brunton2016discovering}, Brunton et al. considered the discovery of nonlinear dynamical systems with sparsity-promoting techniques. The underlying dynamical systems are assumed to be governed by a small number of active terms in a prescribed dictionary, and sparse regression is used to identify these active terms. Various extensions of this sparse regression approach can be found in \cite{kaiser2018sparse,loiseau2018constrained,mangan2017model,rudy2017data}. In \cite{schaeffer2017learning}, Schaeffer considered the problem of learning PDEs using a spectral method, and focused on the benefit of using $L^1$ minimization for sparse coefficient recovery. Highly corrupted and undersampled data are considered in \cite{tran2017exact,schaeffer2018extracting} for the recovery of dynamical systems. In \cite{schaeffer2018extracting}, Schaeffer et al. developed a random sampling theory for the selection of dynamical systems from undersampled data. This nice series of works focused on the benefit and power of using $L^1$ minimization to resolve dynamical systems or PDEs with certain sparse patterns \cite{schaeffer2013sparse}. A Bayesian approach was considered in \cite{zhang2018robust}, where Zhang et al. used dimensional analysis and sparse Bayesian regression to recover the underlying dynamical systems. Another related problem is to infer the interaction function in a system of agents from trajectory data. In \cite{bongini2017inferring,lu2018nonparametric}, nonparametric regression was used to predict the interaction function, and a theoretical guarantee was established. There are also approaches using deep learning techniques. In \cite{long2017pde}, Long et al. proposed a PDE-Net to learn differential operators by learning convolution kernels. In \cite{raissi2017physics}, Raissi et al. used neural networks to learn and predict the solution of the equation without finding its explicit form. In \cite{raissi2018hidden}, neural networks were further used to learn certain parameters in the PDEs from the given data. In \cite{qin2018data}, Residual Neural Networks (ResNet) are used as building blocks for equation approximation.
In \cite{khoo2018switchnet}, neural networks are used to solve wave-equation-based inverse scattering problems by providing maps between the scatterers and the scattered field (and vice versa). Related works showing the advantages of deep learning include \cite{khoo2018switchnet,lusch2018deep,qin2018data,raissi2018deep}. In this paper, we propose a new algorithm based on the convergence analysis of numerical PDE schemes. We assume that the governing PDE is a linear combination of a subset of a prescribed dictionary containing different differential terms, and the objective is to find the correct set of coefficients. We use finite difference methods, such as the 5-point ENO scheme, to approximate the spatial derivatives in the dictionary. While we utilize $L^1$ minimization to aid the efficiency of the approach, the main idea is to validate and correct the results by the Time Evolution Error (TEE). This approach, which we call Identifying Differential Equations with Numerical Time evolution (IDENT), is explored for data with non-periodic boundary conditions, noisy data, and nonlinear PDEs with varying coefficients. For noisy data, we propose an order-preserving denoising method called Least Square Moving Average (LSMA) to effectively denoise the given data. To tackle varying coefficients, we expand the coefficients in terms of finite element bases. This procedure, called Base Element Expansion (BEE), again uses the fundamental idea of convergence in finite element approximation. From a theoretical perspective, we establish a performance guarantee based on an incoherence property, and define a new noise-to-signal ratio for the PDE identification problem. Contributions of this paper include: \begin{enumerate}\vspace{-0.2cm} \item{establishing a new direction of using numerical PDE techniques for PDE identification, } \vspace{-0.2cm} \item{proposing a flexible approach which can handle different boundary conditions, is more robust against noise, and can identify nonlinear PDEs with varying coefficients, }\vspace{-0.2cm} \item{establishing a recovery theory of Lasso for weighted $L^1$ minimization, which leads to a new definition of the noise-to-signal ratio for PDE identification,}\vspace{-0.2cm} \item{systematically analyzing the noise and downsampling, and proposing a new denoising method called Least Square Moving Average (LSMA). } \end{enumerate} This paper is organized as follows: The main algorithm is presented in Section \ref{sec:ident}, aspects of denoising and downsampling effects are in Section \ref{sec:noise}, and PDEs with varying coefficients are in Section \ref{sec:varying}, followed by a concluding remark in Section \ref{sec:summary} and some details in the Appendix. Specifically, the set-up of the problem is presented in subsection \ref{ssec:setup}; details of the IDENT algorithm are in subsection \ref{ssec:algo}; a recovery theory for Lasso and the new noise-to-signal ratio are in subsection \ref{ssec:recovery}; and the first set of numerical experiments is in subsection \ref{ssec:num_nonoise}. In Section \ref{sec:noise}, on denoising and downsampling, the LSMA denoising method is introduced in subsection \ref{subsec:lsma}, numerical experiments for noisy data are presented in subsection \ref{subsecnoisyexperiment}, and downsampling effects are considered in subsection \ref{subsec:down}. In Section \ref{sec:varying}, we consider nonlinear PDEs with varying coefficients and introduce BEE, motivated by finite element approximation.
\section{Identifying Differential Equations with Numerical Time evolution (IDENT)} \label{sec:ident} We start with general notations in Section \ref{ssec:notation} and the set-up of the problem in Section \ref{ssec:setup}, then present our IDENT algorithm with the time evolution error check in Section \ref{ssec:algo}. A recovery theory is established in Section \ref{ssec:recovery}, and the first set of numerical experiments is presented in Section \ref{ssec:num_nonoise}. \subsection{Notations} \label{ssec:notation} We use bold letters to denote vectors, such as $\mathbf{a},\mathbf{b}$. The support of a vector $\mathbf{x}$ is the set of indices at which it is nonzero: ${\rm supp}(\mathbf{x}) := \{j : x_j \neq 0\}$. We use $A^T$ and $A^*$ to denote the transpose and the conjugate transpose of the matrix $A$. We use $x \rightarrow \varepsilon^+$ to denote $x > \varepsilon$ and $x\rightarrow \varepsilon$. Let $\mathbf{f} = \{f(x_i,t_n) | i=1,\ldots,N_1, n=1,\ldots,N_2\} \in \mathbb{R}^{N_1N_2}$ be samples of a function $f: \mathcal{D} \times [0,\infty) \rightarrow \mathbb{R}$ with spatial spacing $\Delta x$ and time spacing $\Delta t$. The integers $N_1$ and $N_2$ are the total numbers of spatial and time discretization points, respectively. We assume PDEs are simulated on the grid with time spacing $\delta t$ and spatial spacing $\delta x$, while data are sampled on the grid with time spacing $\Delta t$ and spatial spacing $\Delta x$. The vector $L^p$ norm of $\mathbf{f}$ is $\|\mathbf{f}\|_p = (\sum_{i=1}^{N_1}\sum_{n=1}^{N_2}|f(x_i,t_n)|^p )^{1/p} $. Denote $\|\mathbf{f}\| = \|\mathbf{f}\|_2$. The function $L^p$ norm of $\mathbf{f}$ is $\|\mathbf{f}\|_{L^p} = (\sum_{i=1}^{N_1}\sum_{n=1}^{N_2}|f(x_i,t_n)|^p \Delta x \Delta t )^{1/p} $. Notice that $\|\mathbf{f}\|_{L^p} = \|\mathbf{f}\Delta x^{1/p} \Delta t^{1/p}\|_p$. \subsection{The set-up of the problem} \label{ssec:setup} We consider the parametric model of PDEs where $\mathcal{F}(x,u,u_x,u_{xx})$ is a linear combination of monomials such as $1$, $u$, $u^2$, $u_x$, $u_x^2$, $uu_x$, $u_{xx}$, $u_{xx}^2$, $uu_{xx}$, $u_x u_{xx}$ with coefficients $\mathbf{a} = \{a_j\}_{j=1}^{10}$: \begin{equation}\label{E:constant_a} u_t = a_1 + a_2 u + a_3 u^2 + a_4 u_x + a_5 u_x^2+ a_6 u u_x + a_7 u_{xx} + a_8 u^2_{xx} + a_9 u u_{xx} + a_{10} u_x u_{xx}. \end{equation} We refer to each monomial as a feature, and let $N_3$ be the number of features, i.e., $N_3 = 10$ in \eqref{E:constant_a}. The right hand side can be viewed as a second-order Taylor expansion of $\mathcal{F}(u,u_x,u_{xx})$. It can easily be generalized to higher-order Taylor expansions, and to operators $\mathcal{F}(u,u_x,u_{xx},u_{xxx}, \partial_x^4 u,\ldots)$ depending on higher-order derivatives. This model contains a rich class of differential equations, e.g., the heat equation, transport equation, Burgers' equation, KdV equation, and Fisher's equation that models gene propagation.
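For example, Burgers' equation with viscosity, $u_t = -uu_x + \varepsilon u_{xx}$, falls in this class with $a_6 = -1$, $a_7 = \varepsilon$, and all other coefficients equal to zero.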
Evaluating \eqref{E:constant_a} at discrete time and space $(x_i,t_n)$, $i=1,\ldots,N_1$, $n = 1,\ldots,N_2$, yields the discrete linear system $$ F \textbf{a} = \mathbf{b},$$ where $$\mathbf{b} = \{u_t(x_i,t_n)| i=1,\ldots,N_1, n = 1,\ldots,N_2\} \in \mathbb{R}^{N_1N_2},$$ and $F$ is the $N_1N_2 \times N_3 $ feature matrix of the form {\footnotesize \begin{equation} \label{E:constant_F} F = \left( \begin{array}{cccccccccc} \vdots & \vdots & \vdots & \vdots &\vdots & \vdots &\vdots & \vdots &\vdots & \vdots \\ 1 & u(x_i,t_n) & u^2(x_i,t_n) & u_x(x_i,t_n) & u_x^2(x_i,t_n) & uu_x(x_i,t_n) & u_{xx}(x_i,t_n) & u_{xx}^2(x_i,t_n) & u u_{xx}(x_i,t_n) & u_x u_{xx}(x_i,t_n) \\ \vdots & \vdots & \vdots & \vdots &\vdots & \vdots &\vdots & \vdots &\vdots & \vdots \\ \end{array} \right). \end{equation} } We use $F[j]$ to denote the $j$th column vector, associated with the $j$th feature evaluated at $(x_i,t_n)$, $i=1,\ldots,N_1$, $n=1,\ldots,N_2$. The \textbf{objective} of PDE identification is to recover the unknown coefficient vector $\mathbf{a} \in \mathbb{R}^{N_3}$ from given data. Real-world physical processes are often governed by only a few features on the right-hand side of \eqref{E:constant_a}, so it is reasonable to assume that the coefficients are sparse. For differential equations with varying coefficients, we consider PDEs of the form \begin{equation}\label{E:general_a} u_t = a_1(x) + a_2 (x) u + a_3 (x) u^2 + a_4(x) u_x + a_5(x) u_x^2+ a_6(x) u u_x + a_7(x) u_{xx} + a_8 (x) u^2_{xx} + a_9 (x) u u_{xx} + a_{10}(x) u_x u_{xx}, \end{equation} where each $a_j(x)$ is a function on the spatial domain of the PDE. We expand the coefficients in terms of finite element bases $\{ \phi_l\}_{l=1}^{L} $ such that \begin{equation}\label{E:L} a_j (x) \approx \sum_{l=1}^{L} a_{j,l} \phi_l(x) \text{ for } j=1,\dots,N_3, \end{equation} where $L$ is the number of finite element bases used to approximate $a_j(x)$. Let $y_1<y_2<\cdots <y_L$ be a partition of the spatial domain. We use typical finite element basis functions: each $\phi_l(x)$ is continuous, linear within each subinterval $(y_i, y_{i+1})$, and satisfies $\phi_l(y_i)=\delta_{li}$, i.e., $1$ if $i=l$ and $0$ otherwise. If the $a_j(x)$'s are Lipschitz functions and the finite element bases are defined on a grid with spacing $O(1/L)$, the approximation error of the $a_j(x)$'s satisfies \begin{equation}\label{E:approxError} \|a_j - \sum_{l=1}^{L} a_{j,l} \phi_l\|_{L^p} \le O(1/L), \ p \in (0,\infty). \end{equation} In the case of varying coefficients, the feature matrix $F$ is of size $N_1 N_2 \times N_3 L$, {\footnotesize \begin{align} \label{E:general_F} F = \left( \begin{array}{ccc|ccc|c|ccc} \vdots & & \vdots & \vdots & & \vdots & & \vdots & & \vdots \\ \phi_1(x_i) & \dots & \phi_{L}(x_i) & u(x_i,t_n)\phi_1(x_i) & \dots & u(x_i,t_n)\phi_{L}(x_i) & \dots & u_xu_{xx}(x_i,t_n)\phi_1(x_i) & \dots &u_x u_{xx}(x_i,t_n)\phi_{L}(x_i) \\ \vdots & & \vdots & \vdots & & \vdots & & \vdots & & \vdots \\ \end{array} \right), \end{align} } and the vector to be identified is \[ \textbf{a} = \left(a_{1,1},\dots,a_{1,L} | a_{2,1}, \dots, a_{2,L} | \dots \dots \dots | a_{N_3,1}, \dots, a_{N_3,L} \right)^T \in \mathbb{R}^{N_3 L}. \] The feature matrix $F$ has a block structure. We use $F[j,l]$ to denote the column of $F$ associated with the $j$th feature and the $l$th basis. To be clear, $F[j]$ is the $j$th column of \eqref{E:constant_F}, and $F[j,l]$ is the $\big((j-1)L+l\big)$th column of \eqref{E:general_F}.
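For illustration, the block feature matrix \eqref{E:general_F} may be assembled as in the following sketch. The hat-function implementation, the column ordering (spatial index varying fastest), and the assumption that $u$, $u_x$, $u_{xx}$ are precomputed on the grid are our own simplifying choices:
\begin{verbatim}
import numpy as np

def hat_basis(x, nodes, l):
    """Piecewise-linear hat function phi_l on the given nodes:
    phi_l(nodes[l]) = 1 and phi_l vanishes at all other nodes."""
    phi = np.zeros_like(x)
    if l > 0:
        left = (x >= nodes[l-1]) & (x <= nodes[l])
        phi[left] = (x[left] - nodes[l-1]) / (nodes[l] - nodes[l-1])
    if l < len(nodes) - 1:
        right = (x >= nodes[l]) & (x <= nodes[l+1])
        phi[right] = (nodes[l+1] - x[right]) / (nodes[l+1] - nodes[l])
    return phi

def feature_matrix(u, ux, uxx, x, L):
    """Assemble the (N1*N2)-by-(10*L) block feature matrix: the column
    for feature j and basis l is (j-th monomial) * phi_l(x).
    u, ux, uxx are N1-by-N2 arrays (space by time); x has length N1."""
    monomials = [np.ones_like(u), u, u**2, ux, ux**2, u*ux,
                 uxx, uxx**2, u*uxx, ux*uxx]
    nodes = np.linspace(x[0], x[-1], L) if L > 1 else None
    cols = []
    for m in monomials:
        if L == 1:                      # constant-coefficient case
            cols.append(m.flatten(order='F'))
        else:                           # varying-coefficient case
            for l in range(L):
                phi = hat_basis(x, nodes, l)[:, None]  # broadcast in time
                cols.append((m * phi).flatten(order='F'))
    return np.column_stack(cols)
\end{verbatim}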
Evaluating \eqref{E:general_a} at $(x_i,t_n)$, $i=1,\ldots,N_1$, $n = 1,\ldots,N_2$, yields the discrete linear system $$ F \mathbf{a} = \mathbf{b}+ \boldsymbol{\eta}, $$ where $\boldsymbol{\eta} = \{\eta(x_i,t_n) | i=1,\ldots,N_1, n=1,\ldots,N_2\} \in \mathbb{R}^{N_1 N_2}$ represents the approximation error of the $a_j(x)$'s by finite element bases such that $$\eta(x_i,t_n) = \left(\sum_{l=1}^L a_{1,l}\phi_l(x_i) - a_1(x_i)\right) + \ldots + \left(\sum_{l=1}^L a_{10,l}\phi_l(x_i) - a_{10}(x_i)\right) u_xu_{xx} (x_i,t_n).$$ In the case that $u,u_x,u_{xx}$ are uniformly bounded, $$\|\boldsymbol{\eta}\|_{L^p} \le O(1/L), \ p \in (0,\infty),$$ and $\boldsymbol{\eta} = 0$ when all coefficients are constants. \subsection{The proposed algorithm: IDENT} \label{ssec:algo} In this paper, we assume that only the discrete data $\{u_{i}^n | i=1, \dots, N_1\text{ and } n= 1, \dots, N_2\}$ and the boundary conditions are given. If data are perfectly generated and there is no measurement noise, $u_{i}^n = u(x_i,t_n)$ for every $i$ and $n$. We outline the proposed IDENT algorithm in this section assuming the given data are noise-free. \textbf{The first step} of IDENT is to construct the empirical version of the feature matrix $F$ and the vector $\mathbf{b}$ of time derivatives from the given data. The derivatives are approximated by finite difference methods, which gives flexibility in dealing with different types of PDEs and boundary conditions (e.g., non-periodic). We approximate the time derivative $u_t$ by a first-order backward difference scheme: $$u_t (x_j,t_n) \approx \widehat{u_t} (x_j,t_n) := \frac{u(x_j,t_n) - u(x_j,t_{n-1})}{\Delta t},$$ which yields the error $$ \widehat{u_t} (x_j,t_n) = {u_t} (x_j,t_n) + O(\Delta t). $$ Let $\widehat\mathbf{b}$ be the empirical version of $\mathbf{b}$ constructed from data: $$\widehat{\mathbf{b}} = \{\widehat{u_t}(x_i,t_n): i=1,\ldots,N_1, n = 1,\ldots,N_2\} \in \mathbb{R}^{N_1N_2}.$$ We approximate the spatial derivative $u_x$ through the five-point ENO method proposed by Harten, Engquist, Osher and Chakravarthy \cite{ENO87}. Let $\widehat{u_x} (x_j,t_n)$ and $\widehat{u_{xx}} (x_j,t_n)$ be the approximations of ${u_x} (x_j,t_n)$ and ${u_{xx}} (x_j,t_n)$ by the five-point ENO method, which yield the errors $$ \widehat{u_x} (x_j,t_n) = {u_x} (x_j,t_n) + O(\Delta x^4), \quad \widehat{u_{xx}} (x_j,t_n) = {u_{xx}} (x_j,t_n) + O(\Delta x^3). $$ Putting the $\widehat{u_x} (x_j,t_n)$'s and $\widehat{u_{xx}} (x_j,t_n)$'s into the feature matrix $F$ in \eqref{E:general_F} gives rise to the empirical feature matrix, denoted by $\widehat{F}$. For example, the second column of $\widehat{F}$ is given by $\{u_i^n | i=1,\ldots,N_1 \text{ and } n=1,\ldots,N_2\}$, the approximation of $\{u(x_i,t_n) | i=1,\ldots,N_1 \text{ and } n=1,\ldots,N_2\}$, ordered as $$ (u_1^1, u_2^1, \dots,u_{N_1}^1, u_1^2, \dots, u_{N_1}^2, \dots, u_1^{N_2}, \dots,u_{N_1}^{N_2})^T \in \mathbb{R}^{N_1 N_2}. $$ These empirical quantities give rise to the linear system \begin{equation} \label{E:linear1} \widehat F \mathbf{a}= \widehat\mathbf{b} + \mathbf{e} , \quad \mathbf{e} = \mathbf{b} -\widehat{\mathbf{b}} + (\widehat{F}-F)\mathbf{a} +\boldsymbol{\eta}, \end{equation} where the terms $\mathbf{b} -\widehat{\mathbf{b}}$, $(\widehat{F}-F)\mathbf{a}$ and $\boldsymbol{\eta}$ arise from the errors in approximating time and spatial derivatives, and from the finite element expansion of varying coefficients, respectively.
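Before turning to the error bound, a minimal computational sketch of this first step is given below. The backward difference in time follows the text; for the spatial derivatives we substitute plain central differences as a simplified stand-in for the five-point ENO reconstruction of \cite{ENO87}, whose adaptive stencil selection is omitted, and the boundary treatment is a crude assumption of ours:
\begin{verbatim}
import numpy as np

def empirical_derivatives(U, dx, dt):
    """Approximate u_t, u_x, u_xx from data U (N1-by-N2, space by time).
    u_t uses the first-order backward difference of the text; u_x and
    u_xx use central differences as a simplified stand-in for the
    five-point ENO reconstruction (no adaptive stencil selection)."""
    Ut = (U[:, 1:] - U[:, :-1]) / dt          # defined for n = 2,...,N2
    Ux = np.gradient(U, dx, axis=0)           # second-order in space
    Uxx = np.zeros_like(U)
    Uxx[1:-1, :] = (U[2:, :] - 2*U[1:-1, :] + U[:-2, :]) / dx**2
    Uxx[0, :], Uxx[-1, :] = Uxx[1, :], Uxx[-2, :]  # crude boundary copy
    # Keep the common time levels n = 2,...,N2 for all quantities.
    return Ut, Ux[:, 1:], Uxx[:, 1:]

# The empirical vector b-hat is then the flattened Ut, e.g.:
# Ut, Ux, Uxx = empirical_derivatives(U, dx, dt)
# b_hat = Ut.flatten(order='F')   # spatial index varies fastest
\end{verbatim}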
The total error $\mathbf{e}$ satisfies \begin{equation} \label{E:errorbound} \|\mathbf{e}\|_{L^2} \le \varepsilon \text{ such that } \varepsilon = O(\Delta t + \Delta x^3 + 1/L). \end{equation} \textbf{The second step} is to find possible candidates for the non-zero coefficients of $ \mathbf{a}$. We utilize $L^1$-regularized minimization, also known as Lasso \cite{tibshirani1996regression} or group Lasso \cite{yuan2006model}, solved by the Alternating Direction Method of Multipliers \cite{boyd2011distributed}, to obtain a sparse or block-sparse vector. We minimize the following energy: \begin{equation}\label{eqgLasso} \widehat{\mathbf{a}}_{\text{G-Lasso}}(\lambda) ={\textstyle \arg \min_{\mathbf{z}}} \left\{ \frac{1}{2} \| \widehat\mathbf{b} - \widehat F_{\infty} \textbf{z} \|^2_2 + \lambda \sum_{j=1}^{N_3} \left(\sum_{l=1}^L |z_{j,l}|^2 \right)^{\frac{1}{2}} \right\}, \end{equation} where $\lambda$ is a balancing parameter between the fitting term and the regularization term. The matrix $\widehat{F}_\infty$ is obtained from $\widehat{F}$ with each column divided by its maximum magnitude, namely, $\widehat{F}_\infty[j,l] = \widehat{F}[j,l]/\|\widehat{F}[j,l]\|_{\infty}$. We use Lasso for the constant-coefficient case where $L=1$, and group Lasso for the varying-coefficient case $L>1$. A set of possible active features is selected by thresholding the normalized coefficient magnitudes: \begin{equation} \label{algthresholding} \widehat\Lambda_{\tau} : = \left\{j : \|\widehat{F}[j]\|_{L^1}\left\| \sum_{l=1}^L \frac{\widehat{\mathbf{a}}_{\text{G-Lasso}}(\lambda)_{j,l}}{\|\widehat{F}[j,l]\|_\infty} \phi_l \right\|_{L^1} \ge \tau \right\}, \end{equation} with a fixed thresholding parameter $\tau\ge 0$. \textbf{The final step} is to identify the correct support using the Time Evolution Error (TEE). (i) From the candidate coefficient index set $\widehat\Lambda_{\tau}$, consider every subset $\Omega \subseteq \widehat\Lambda_{\tau}$. For each $\Omega= \{j_1, j_2, \ldots, j_k\}$, find the coefficients $\widehat{\textbf{a}} = (0, 0, \widehat{a}_{j_1}, 0, \dots, \widehat{a}_{j_k}, \dots )$ by a least-squares fit such that $\widehat{\mathbf{a}}_{\Omega} = \widehat{F}_{\Omega}^\dagger \widehat \mathbf{b}$ and $\widehat{\mathbf{a}}_{\Omega^\complement} = \mathbf{0}$. (ii) Using these coefficients, construct the differential equation \[ u_t = \mathcal{F} \widehat{\textbf{a}},\] i.e., the right-hand side assembled from the features in $\Omega$ with coefficients $\widehat{\mathbf{a}}$, and numerically evolve it in time, starting from the given initial data, for each $\Omega$. It is crucial to use a smaller time step $\widetilde{\Delta t}\ll \Delta t$, where $\Delta t$ is the time spacing of the given data. We use the first-order forward Euler time discretization with time step $\widetilde{\Delta t}=O(\Delta x^r)$, where $r$ is the highest order of the spatial derivatives associated with $\widehat{\mathbf{a}}$. (iii) Finally, calculate the time evolution error for each $\widehat{\textbf{a}}$: \[ \text{ TEE} (\widehat{\textbf{a}}) := \sum_{i=1}^{N_1} \sum_{n=1}^{N_2} |\bar{u}_i^n - u_i^n| \Delta x \Delta t, \] where $\bar{u}_i^n$ is the numerically time-evolved solution at $(x_i,t_n)$ of the PDE with the support $\Omega$ and coefficients $\widehat{\textbf{a}}$. We pick the subset $\Omega$ and the corresponding coefficients $\widehat{\mathbf{a}}$ that give the smallest TEE, and denote the recovered support by $\widehat{\Lambda}$. The corresponding PDE is the output of the algorithm. Algorithm \ref{algident} summarizes this procedure.
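The TEE step can be sketched as follows, under simplifying assumptions: forward Euler time stepping, zero Dirichlet boundary conditions, and an assumed helper \texttt{rhs\_from\_coeffs} that evaluates the right-hand side assembled from the active features. This is an illustration of the procedure, not the implementation used in our experiments:
\begin{verbatim}
import itertools
import numpy as np

def tee_select(candidates, F_hat, b_hat, U, rhs_from_coeffs,
               dx, dt_data, dt_fine):
    """For every subset of the candidate index set, least-squares fit
    the coefficients, evolve the resulting PDE from the initial data
    with forward Euler on the finer step dt_fine, and return the
    subset with the smallest time evolution error (TEE)."""
    N1, N2 = U.shape
    best = (np.inf, None, None)
    for k in range(1, len(candidates) + 1):
        for subset in itertools.combinations(candidates, k):
            cols = list(subset)
            coef, *_ = np.linalg.lstsq(F_hat[:, cols], b_hat, rcond=None)
            v = U[:, 0].copy()                 # given initial data
            err = 0.0
            steps = int(round(dt_data / dt_fine))
            for n in range(1, N2):
                for _ in range(steps):         # refined time stepping
                    v = v + dt_fine * rhs_from_coeffs(v, cols, coef, dx)
                    v[0] = v[-1] = 0.0         # zero Dirichlet BCs
                err += np.sum(np.abs(v - U[:, n])) * dx * dt_data
            if err < best[0]:
                best = (err, subset, coef)
    return best   # (TEE, recovered support, recovered coefficients)
\end{verbatim}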
\begin{algorithm} \caption{Identifying Differential Equations with Numerical Time evolution (IDENT) } \label{algident} \textbf{Input}: The discrete data $\{u_i^n | i=1, \dots, N_1 \text{ and } n=1,\dots,N_2\}$. \\ \textbf{[Step 1]} Construct the empirical feature matrix $\widehat{F}$ and the empirical vector $\widehat\mathbf{b}$ using ENO schemes. \\ \textbf{[Step 2]} Find a set of possible active features by the $L^1$ minimization \eqref{eqgLasso} followed by thresholding. \\ \textbf{[Step 3]} Pick the coefficient vector $\widehat{\textbf{a}}$ with minimum Time Evolution Error (TEE).\\ \textbf{Output:} The identified coefficients $\widehat{\mathbf{a}}$ where $\widehat{\mathbf{a}}_{\widehat{\Lambda}} = \widehat{F}_{\widehat{\Lambda}}^{\dagger} \widehat{\mathbf{b}}$. \end{algorithm} We note that it is possible to skip the $L^1$ minimization step and use TEE to recover the support by considering all possible combinations of features from the beginning; however, the computational cost would be very high. The $L^1$ minimization reduces the number of combinatorial trials and makes IDENT more computationally efficient. On the other hand, while $L^1$ minimization is effective in finding a sparse vector, $L^1$ alone is often not enough: (i) Zero coefficients in the true PDE may become non-zero in the minimizer of $L^1$. (ii) If active terms are chosen by thresholding, results are sensitive to the choice of the thresholding parameter, e.g., $\tau$ in \eqref{algthresholding}. (iii) The balancing parameter $\lambda$ can affect the results. (iv) If some columns of the empirical feature matrix $\widehat{F}$ are highly correlated, Lasso is known to select a larger support than the ground truth \cite{fannjiang2012coherence}. TEE refines the results from Lasso and relaxes the dependence on these parameters. There are two fundamental ideas behind TEE: \begin{enumerate}\vspace{-0.1cm} \item{For nonlinear PDEs, it is impossible to isolate each term separately to identify each coefficient. Any realization of a PDE must be understood as a set of terms. } \vspace{-0.1cm} \item{If the underlying dynamics are identified by the true PDE, numerical time evolution under any refinement of the time discretization should not deviate from the given data. This is the fundamental idea behind the consistency, stability and convergence of a numerical scheme. } \end{enumerate} Therefore, the main effect of TEE is to let the numerical error arising from wrongly identified differential terms grow under time evolution. This method can be applied to linear or nonlinear PDEs. The effectiveness of TEE can be demonstrated with an example. Assume that the solution $u$ is smooth and decays sufficiently fast at infinity, and consider the following linear equation with constant coefficients: \[ \frac{\partial u}{\partial t} = a_0 u + a_1 \frac{\partial u}{\partial x} +\cdots + a_m \frac{\partial^m u}{\partial x^m}. \] Taking the Fourier transform of the equation and solving the resulting ODE in time yields the transformed solution: \[ \hat{u}(\xi,t)=\hat{u}(\xi,0) e^{a_0 t} e^{a_1\mathbf{i} \xi t} e^{-a_2 \xi^2 t }\cdots e^{a_m (\mathbf{i}\xi)^m t}, \] where $\mathbf{i}=\sqrt{-1}$ and $\xi$ is the variable in the Fourier domain.
If a term with an even-order derivative, such as $a_2 \frac{\partial^2 u}{\partial x^2}$, is mistakenly included in the PDE, it will make every frequency mode grow or decay exponentially in time; if a term with an odd-order derivative, such as $a_1 \frac{\partial u}{\partial x}$, is mistakenly included in the PDE, it will introduce a wrong-speed oscillation of the solution. In either case, the deviation from the correct solution grows quickly in time, providing an efficient way to distinguish the wrong terms. Our numerical experiments show that TEE is an effective tool for correctly identifying the coefficients. Our first set of experiments is presented in subsection \ref{ssec:num_nonoise}. \subsection{Recovery theory of Lasso, and new Noise-to-Signal Ratio (NSR) } \label{ssec:recovery} In this subsection, we establish a performance guarantee of Lasso for the identification of PDEs with constant coefficients. In Step 2 of IDENT, Lasso is applied as the $L^1$ regularization in \eqref{eqgLasso}. We consider the incoherence property proposed in \cite{donoho2001uncertainty}, and follow the ideas in \cite{fuchs2004sparse,tropp2004just,tropp2006just} to establish a recovery theory. While the details of the proof are presented in Appendix \ref{A:recovery}, here we state the result, which leads to the new definition of the noise-to-signal ratio. For PDEs with constant coefficients, we set $L=1$ in \eqref{E:L} and consider the standard Lasso: \begin{equation} \label{eqLasso} \tag{Lasso} \widehat{\mathbf{a}}_{\text{Lasso}}(\lambda) ={\textstyle \arg \min_{\mathbf{z}}} \left\{ \frac{1}{2} \| \widehat\mathbf{b} - \widehat F_\infty \textbf{z} \|^2_2 + \lambda \|\mathbf{z}\|_1 \right\}. \end{equation} If all columns of $\widehat{F}$ are uncorrelated, $\mathbf{a}$ can be robustly recovered by Lasso. Let $\widehat{F} = [\widehat{F}[1] \ \widehat{F}[2] \ \ldots \ \widehat{F}[N_3]]$, where $\widehat{F}[j]$ stands for the $j$th column of $\widehat{F}$ in \eqref{E:constant_F}. To measure the correlation between the $j$th and the $l$th columns of $\widehat{F}$, we use the pairwise coherence $$ \mu_{j,l}(\widehat{F}) = \frac{|\langle \widehat{F}[j] , \widehat{F}[l] \rangle|}{\|\widehat{F}[j]\|_2 \|\widehat{F}[l]\|_2}$$ and the mutual coherence of $\widehat{F}$ as in \cite{donoho2001uncertainty}: \[ \mu(\widehat{F}) = \max_{j \neq l} \mu_{j,l}(\widehat{F}) = \max_{j\neq l} \frac{|\langle \widehat{F}[j] , \widehat{F}[l] \rangle|}{\|\widehat{F}[j]\|_2 \|\widehat{F}[l]\|_2}. \] Since normalization does not affect the coherence, we have $ \mu_{j,l}(\widehat{F}_\infty)= \mu_{j,l}(\widehat{F})$ and $\mu(\widehat{F}_\infty)=\mu(\widehat{F})$. The smaller $\mu(\widehat{F})$, the less correlated the columns of $\widehat{F}$ are, and $\mu(\widehat{F}) = 0$ if and only if the columns are orthogonal. Lasso will recover the correct coefficients if $\mu(\widehat{F})$ is sufficiently small. \begin{theorem} \label{thmLasso} Let $\mu =\mu(\widehat{F})$, $w_{\max} = \max_j \|\widehat{F}[j]\|_\infty \|\widehat{F}[j]\|_{L^2}^{-1}$ and $w_{\min} = \min_j \|\widehat{F}[j]\|_\infty \|\widehat{F}[j]\|_{L^2}^{-1}$. Suppose the support of $\mathbf{a}$ contains no more than $s$ indices, $\mu(s-1) < 1$ and $$\frac{\mu s}{1-\mu(s-1)} < \frac{w_{\min}}{w_{\max}}.$$ Let \begin{equation} \label{thmlambda} \lambda = \frac{[1-(s-1)\mu] }{w_{\min}[1-\mu(s-1)] - w_{\max} \mu s }\cdot \frac{\varepsilon^+}{\Delta x \Delta t}.
\end{equation} Then \begin{enumerate} \item[1)] the support of $\widehat{\mathbf{a}}_{\text{Lasso}}(\lambda)$ is contained in the support of $\mathbf{a}$; \item[2)] the distance between $\widehat{\mathbf{a}}_{\text{Lasso}}(\lambda)$ and $\mathbf{a}$ satisfies \begin{equation} \label{thmdist} \max_j \|\widehat{F}[j]\|_{L^2} \left|\|\widehat{F}[j]\|_\infty^{-1}\widehat{\mathbf{a}}_{\text{Lasso}}(\lambda)_j-a_j\right| \le \frac{w_{\max} + \varepsilon/\sqrt{\Delta t \Delta x}}{w_{\min}[1-\mu(s-1)] -w_{\max} \mu s } \varepsilon; \end{equation} \item[3)] if \begin{equation} \label{thmnsr} \min_{j:\ a_j\neq 0} \|\widehat{F}[j]\|_{L^2} | a_j| > \frac{w_{\max}+\varepsilon/\sqrt{\Delta t \Delta x}}{w_{\min}[1-\mu(s-1)] -w_{\max} \mu s } \varepsilon, \end{equation} then the support of $\widehat{\mathbf{a}}_{\text{Lasso}}(\lambda)$ is exactly the same as the support of $\mathbf{a}$. \end{enumerate} \end{theorem} Theorem \ref{thmLasso} shows that Lasso gives rise to the correct support when the empirical feature matrix $\widehat{F}$ is incoherent, i.e., $\mu(\widehat{F}) \ll 1$, and all underlying coefficients are sufficiently large compared to the noise. When the empirical feature matrix is coherent, i.e., some columns of $\widehat{F}$ are correlated, it has been observed that $\widehat{\mathbf{a}}_{\text{Lasso}}(\lambda)$ is usually supported on ${\rm supp}(\mathbf{a})$ and on the indices that are highly correlated with ${\rm supp}(\mathbf{a})$ \cite{fannjiang2012coherence}. We select possible features by the thresholding in \eqref{algthresholding}, which in the case of constant coefficients is equivalent to $\widehat\Lambda_{\tau} : = \left\{j : \|\widehat{F}[j]\|_{L^1} \|\widehat{F}[j]\|_{\infty}^{-1} |\widehat{\mathbf{a}}_{\text{Lasso}}(\lambda)_j| \ge \tau \right\}$. After this, TEE is an effective complement to Lasso for distinguishing the correct features from the wrong ones. The details of the proof of Theorem \ref{thmLasso} can be found in Appendix \ref{A:recovery}. This analysis also gives rise to a \textbf{new noise-to-signal ratio}: \begin{equation} \label{eqnsr} \text{Noise-to-Signal Ratio (NSR)} := \frac{\|\widehat{F} \mathbf{a} -\widehat\mathbf{b}\|_{L^2}}{\min_{j:\ a_j\neq 0} \|\widehat{F}[j]\|_{L^2} | a_j| }. \end{equation} The definition is derived from \eqref{thmnsr}: the signal level is given by the minimum, over the active features, of the product of the coefficient and the column norm in the feature matrix, $\min_{j:\ a_j\neq 0} \|\widehat{F}[j]\|_{L^2} | a_j|$. This term represents the dynamics contributed by the feature, so it is important to consider this product rather than the magnitude of the coefficient alone. We also use this new definition of NSR to measure the level of noise in the following sections, as it gives a more consistent representation across different PDEs. \subsection{First set of IDENT experiments }\label{ssec:num_nonoise} We present the first set of numerical experiments to illustrate the performance of IDENT. Here data are sampled from exact or simulated solutions of PDEs with constant coefficients. We use zero Dirichlet boundary conditions throughout the paper; the modification to periodic or other boundary conditions is trivial, and numerical schemes with periodic boundary conditions can achieve higher accuracy in the noise-free case. Thanks to TEE, the results are not very sensitive to the choice of $\lambda$ in Lasso, and we set $\lambda =500$ in all experiments.
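Before presenting the experiments, we note that both the mutual coherence and the new NSR \eqref{eqnsr} are straightforward to compute from the empirical quantities. A minimal sketch, with our own helper names and the convention $\|\cdot\|_{L^2} = \sqrt{\Delta x \Delta t}\,\|\cdot\|_2$, is:
\begin{verbatim}
import numpy as np

def mutual_coherence(F):
    """Largest pairwise coherence between distinct columns of F."""
    G = F / np.linalg.norm(F, axis=0)   # unit L2-normalized columns
    C = np.abs(G.T @ G)                 # pairwise coherence pattern
    np.fill_diagonal(C, 0.0)
    return C.max()

def noise_to_signal(F_hat, b_hat, a, dx, dt):
    """New NSR: the L2 residual norm divided by the weakest active
    dynamic, min_{j: a_j != 0} ||F[j]||_{L2} |a_j|, where function
    L2 norms carry the grid weight sqrt(dx*dt)."""
    w = np.sqrt(dx * dt)
    resid = w * np.linalg.norm(F_hat @ a - b_hat)
    active = np.flatnonzero(a)
    signal = min(w * np.linalg.norm(F_hat[:, j]) * abs(a[j])
                 for j in active)
    return resid / signal
\end{verbatim}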
\begin{figure} \centering \begin{tabular}{ccc} (a) Given data & (b) Coherence pattern & (c) Result from Lasso \\ \includegraphics[width = 2.05in]{Figure/clean_TruB_Given.pdf} & \includegraphics[width = 2.05in]{Figure/clean_TruB_Coherence.png} & \includegraphics[width = 2.05in]{Figure/clean_TruB_L1v2.pdf} \end{tabular} \caption{Experiment with Burger's equation \eqref{E:burger}. (a) The given data are sampled from the true analytic solution. (b) The coherence pattern of $\widehat{F}$. (c) Normalized coefficient magnitudes from Lasso. Two possible features are identified: $u$ and $uu_x$. } \label{Fig-BurgerExactDemo} \end{figure} The first experiment is on Burger's equation with Dirichlet boundary conditions: \begin{align} & u_t + \left( \frac{u^2}{2}\right)_x =0, \ x \in [0,1] \label{E:burger} \\ &u(x,0)= \sin 4\pi x \text{ and }u(0,t) = u(1,t) = 0. \nonumber \end{align} The given data are sampled from the true analytic solution, shown in Figure \ref{Fig-BurgerExactDemo} (a), with $\Delta x = 1/56$ and $\Delta t = 0.004$, for $t \in [0,0.05]$. Figure \ref{Fig-BurgerExactDemo} (b) displays the coherence pattern of the empirical feature matrix: the absolute values of $\widehat{F}_{\rm unit}^* \widehat{F}_{\rm unit}$, where $\widehat{F}_{\rm unit}$ is obtained from $\widehat{F}$ with each column normalized to unit $L^2$ norm. This pattern shows the correlation between any pair of columns in $\widehat{F}$. (c) shows the normalized coefficient magnitudes $\{\|\widehat{F}[j]\|_{L^1} \|\widehat{F}[j]\|_{\infty}^{-1} |\widehat{\mathbf{a}}_{\text{Lasso}}(\lambda)_j|\}$ after the $L^1$ minimization. The magnitudes of $u$ and $uu_x$ are not negligible, so they are picked as the set of possible active features $\widehat\Lambda_{\tau}$. Then, TEE is computed for all subsets $\Omega \subseteq \widehat\Lambda_{\tau}$, i.e., $u_t = a u$, $u_t = b uu_x$ and $u_t = cu +d uu_x$, where the coefficients $a,b,c,d$ are calculated by least squares: \begin{center} \begin{tabular}{ | c | c | c | } \hline Active terms & Coefficients of active terms by least squares & TEE \\ \hline $u$ & $0.27$ & $78.76$ \\ \textcolor{red}{ $uu_x$} & \textcolor{red}{$ -0.99$} & \textcolor{red}{$0.48$} \\ $[u \ uu_x]$ & $ [0.10 \ -0.99]$ & $1.40$ \\ \hline \end{tabular} \end{center} The red row, with only the $uu_x$ term, has the smallest TEE and is therefore identified as the result of IDENT. Since the true PDE is $u_t = - u u_x$, the computed result shows a small coefficient error. The second experiment is on Burger's equation with a diffusion term: \begin{align} & u_t + \left( \frac{u^2}{2}\right)_x =0.1u_{xx},\ x \in [0,1] \label{E:burger_diff} \\ &u(x,0)= \sin 4\pi x \text{ and }u(0,t) = u(1,t) = 0. \nonumber \end{align} The given data are simulated with a first-order explicit method with $\delta x = 1/256$ and $\delta t = (\delta x)^2$ for $t \in [0,0.1]$. Data are downsampled from the numerical simulation by a factor of $4$ such that $\Delta x = 4\delta x$ and $\Delta t = 4\delta t$. (We explore the effects of downsampling in more detail in Section \ref{sec:noise}.) \begin{figure} \centering \begin{tabular}{ccc} (a) Given data & (b) Coherence pattern & (c) Result from Lasso \\ \includegraphics[width = 2.05in]{Figure/clean_BD_Given.pdf} & \includegraphics[width = 2.05in]{Figure/clean_BD_Coherence.png} & \includegraphics[width = 2.05in]{Figure/clean_BD_L1v2.pdf} \end{tabular} \caption{Experiment with Burger's equation with a diffusion term \eqref{E:burger_diff}.
(a) The given data are numerically simulated and downsampled. (b) shows that $u$ and $u_x u_{xx}$ are highly correlated with $u_{xx}$ and $u u_x$, respectively. From (c), four terms $u, uu_x, u_{xx} $ and $u_x u_{xx}$ are selected for TEE. } \label{Fig-BurgerDiffDemo} \end{figure} Figure \ref{Fig-BurgerDiffDemo} (a) shows the given data, (b) displays the coherence pattern of $\widehat{F}$, and (c) shows the normalized coefficient magnitudes $\{\|\widehat{F}[j]\|_{L^1} \|\widehat{F}[j]\|_{\infty}^{-1} |\widehat{\mathbf{a}}_{\text{Lasso}}(\lambda)_j|\}$. In this case, the coherence pattern in (b) shows that $u$ and $u_x u_{xx}$ are highly correlated with $u_{xx}$ and $u u_x$, respectively, and therefore all four terms $u,uu_x,u_{xx},u_x u_{xx}$ are identified as meaningful ones by Lasso in (c). Considering TEE for each subset refines these results: \begin{center} \begin{tabular}{ | c | c | c | } \hline Active terms & Coefficients of active terms by least squares & TEE \\ \hline $u$ & $-16.08$ & $3709.77$ \\ $u u_x$ & $-0.34$ & $67092.21$ \\ $u_{xx}$ & $0.10$ & $4345.98$ \\ $u_xu_{xx}$ & $-0.0008$ & $\infty$ \\ $[u \ uu_x]$ & $ [-16.17 \ -0.50]$ & $2120.14$ \\ $[u \ u_{xx}]$ & $ [-21.42 \ -0.03]$ & $1.49 \times 10^{26}$ \\ $[u \ u_x u_{xx}]$ & $ [-16.44 \ -0.003]$ & $\infty$ \\ \textcolor{blue}{$[uu_x \ u_{xx}]$} & \textcolor{blue}{$ [-1.00 \ 0.10]$} & \textcolor{blue}{$82.33$} \\ $[uu_{x} \ u_x u_{xx}]$ & $ [-12.03 \ -0.07]$ & $\infty$ \\ $[u_{xx} \ u_x u_{xx}]$ & $ [-0.10 \ -0.006]$ & $371.08$ \\ $[u \ uu_x \ u_{xx}]$ & $ [-0.10\ -1.00 \ 0.10]$ & $83.73$ \\ $[u \ uu_x \ u_x u_{xx}]$ & $ [-15.86\ -1.03 \ -0.003]$ & $\infty$ \\ $[u \ u_{xx} \ u_x u_{xx}]$ & $ [-0.58\ 0.10 \ 0.006]$ & $367.68$ \\ \textcolor{red}{ $[uu_x \ u_{xx} \ u_x u_{xx}]$} & \textcolor{red}{$ [-1.00 \ 0.10 \ -1.35\times 10^{-5}]$} & \textcolor{red}{$82.29$} \\ $[u \ uu_x \ u_{xx} \ u_x u_{xx}]$ & $ [-0.11 \ -1.00 \ 0.10 \ -2.8\times 10^{-5}]$ & $83.85$ \\ \hline \end{tabular} \end{center} The red row is the result of IDENT, while the blue row is the ground truth. The TEE of $[uu_x \ u_{xx} \ u_x u_{xx}]$ is the smallest, and is comparable with the TEE of the true equation with $[ uu_x \ u_{xx}]$. The one wrongly identified term in red, $u_x u_{xx}$, has the coefficient $ -1.35 \times 10^{-5}$, whose magnitude is negligible. The level of error in the identification is also related to the total error analyzed in \eqref{E:enoise}. Without TEE, if all four terms from the $L^1$ minimization are used, an additional wrong term $u$ is identified with the coefficient $-0.11$; this is comparable in magnitude to the true coefficients $-1$ and $0.1$, and cannot be ignored. Theorem \ref{thmLasso} proves that the identified coefficients from Lasso converge to the ground truth as $\Delta t \rightarrow 0$ and $\Delta x \rightarrow 0$ (see Equations \eqref{E:errorbound} and \eqref{thmdist}), when there is no noise and the empirical feature matrix has a small coherence. Figure \ref{Fig-BurgerCoeffVersusDeltax} shows the recovered coefficients from Lasso versus $\Delta t$ and $\Delta x$ for Burger's equation \eqref{E:burger} and Burger's equation with diffusion \eqref{E:burger_diff}. In Figure \ref{Fig-BurgerCoeffVersusDeltax} (a), data are sampled from the analytic solution of Burger's equation \eqref{E:burger} with spacing $\Delta x = 2^{k}$, $k=-12, -11, \dots, -5$, and $\Delta t = \Delta x$ for $t \in [0,0.05]$.
Figure \ref{Fig-BurgerCoeffVersusDeltax} (a) shows the result from Lasso, namely, $\{ \|\widehat{F}[j]\|_{\infty}^{-1} \widehat{\mathbf{a}}_{\text{Lasso}}(\lambda)_j\}$, versus $\log_2 \Delta x$. Notice that the coefficient of $u u_x$ converges to $-1$ and all other coefficients converge to $0$ as $\Delta t$ and $\Delta x$ decrease. For Figure \ref{Fig-BurgerCoeffVersusDeltax} (b), data are sampled from the numerical simulation of Burger's equation with diffusion \eqref{E:burger_diff}, where the PDE is solved by a first-order method with $\delta x = 2^{-10}$ and $\delta t = (\delta x)^2$ for $t \in [0,0.1]$. Data are sampled with $\Delta x = 2^{-10},2^{-9},2^{-8},2^{-7},2^{-6}$, and $\Delta t = (\Delta x)^2$ for $t \in [0,0.1]$. Figure \ref{Fig-BurgerCoeffVersusDeltax} (b) shows the recovered coefficients from Lasso versus $\log_2 \Delta x$. Here the coefficients of $u u_x$ and $u_{xx} $ converge to $-1$ and $0.1$ respectively, and all other coefficients, except that of $u$, converge to $0$ as $\Delta t$ and $\Delta x$ decrease. The coefficient of $u$ does not converge to $0$ because data are generated by a first-order numerical scheme with error $O[\delta t + (\delta x)^2]$; hence the Lasso error $\|\mathbf{e}\|_{L^2}$ does not decay to $0$ as $\Delta t$ and $\Delta x$ decrease. We further discuss this aspect of data generation in Section \ref{sec:noise}. \begin{figure} \centering \begin{tabular}{cc} (a) Burger's equation & (b) Burger's equation with diffusion \\ \includegraphics[width = 2.7in]{Figure/clean_TruB_LassoConv} & \includegraphics[width = 2.7in]{Figure/clean_BD_LassoConv} \end{tabular} \caption{Identified coefficients from Lasso (Step 2 only) versus $\log_2\Delta x$. In (a), as $\Delta t$ and $\Delta x$ decrease (from right to left), the coefficient of $u u_x$ correctly converges to $-1$, and all other terms correctly converge to 0. In (b), as $\Delta t$ and $\Delta x$ decrease (from right to left), while the coefficients of $u_{xx}$ and $u u_x$ correctly converge to 0.1 and $-1$ respectively, one wrong term $u$ does not converge to 0, due to the error from data generation. } \label{Fig-BurgerCoeffVersusDeltax} \end{figure} \section{Noisy data, Downsampling and IDENT}\label{sec:noise} As noted above, identification results depend on the accuracy of the given data. In this section, we explore the effects of inaccuracy in data generation, noise, and downsampling. We derive an error formula incorporating the errors arising from these three aspects, which provides theoretical guidance on the difficulty of identification. The given data $\{\widetilde u_i^n\}$ may contain noise, such that $$\widetilde u_i^n = u_i^n + \xi_i^n,$$ where the noise $\xi_i^n$ arises from inaccuracy in data generation and/or from measurement error. Consider an $r$th-order PDE with the highest-order spatial derivative $\partial_x^r u$. Suppose data are numerically simulated by a $q$th-order method with time step $\delta t$ and spatial spacing $\delta x$, and the measurement error is independently drawn from the normal distribution with mean $0$ and variance $\sigma^2$. Then $$\xi_i^n = O(\delta t + \delta x^q + \sigma).$$ We use the five-point ENO method to approximate the spatial derivatives in the empirical feature matrix $\widehat{F}$ in Section \ref{sec:ident}. In general, one could interpolate the data with a $p$th-order polynomial and use the derivatives of the polynomial to approximate $u_x$ and $u_{xx}$, etc.
In this case, the error for the $k$th-order spatial derivative $\partial_x^k u$ is $O(\Delta x^{p+1-k})$. The error for Lasso given by \eqref{E:linear1} is $\mathbf{e} = \mathbf{b} -\widehat{\mathbf{b}} + (\widehat{F}-F)\mathbf{a} +\boldsymbol{\eta} $, where $\mathbf{b} -\widehat{\mathbf{b}}$ is from the approximation of $u_t$, $(\widehat{F}-F)\mathbf{a}$ is from the approximation of the spatial derivatives of $u$, and $\boldsymbol{\eta}$ arises from the finite element basis expansion of the varying coefficients. If $u,u_x,u_{xx},\ldots$ are bounded, these terms satisfy \begin{align*} \| (\widehat{F}-F)\mathbf{a}\|_\infty \le O\left(\Delta x^{p+1-r} + \frac{\delta t + \delta x^q +\sigma}{\Delta x^r}\right) \text{ and } \|\mathbf{b} -\widehat{\mathbf{b}}\|_\infty \le O\left(\Delta t + \frac{\delta t + \delta x^q +\sigma}{\Delta t}\right), \end{align*} and $\|\boldsymbol{\eta}\|_\infty = O\left( 1/ L \right)$, so that \begin{equation} \label{E:enoise} \|\mathbf{e}\|_{L^2} \le \varepsilon, \text{ with } \varepsilon = O\left(\Delta t + \Delta x^{p+1-r}+ \underbrace{\frac{\delta t + \delta x^q}{\Delta t}+ \frac{\delta t + \delta x^q }{\Delta x^r}}_{\text{errors from data generation}} + \underbrace{\frac{\sigma}{\Delta t} + \frac{\sigma}{\Delta x^r}}_{\text{measurement noise}} + \frac 1 L\right). \end{equation} This error formula suggests the following: \begin{description} \item[Sensitivity to measurement noise] Finite differences are sensitive to measurement noise, since Gaussian noise with mean $0$ and variance $\sigma^2$ contributes $O(\sigma/\Delta t + \sigma/\Delta x^r)$ to the error formula. Higher-order PDEs are more sensitive to measurement noise than lower-order PDEs. Denoising the given data is helpful to Lasso in general. \item[Downsampling of data] In applications, the given data are often downsampled such that $\Delta t = C_t \delta t$ and $\Delta x = C_x \delta x$, where $C_t$ and $C_x$ are the downsampling factors in time and space. Downsampling can help to reduce the error, depending on the balance among the terms in \eqref{E:enoise}. \end{description} We further explore these effects below. We propose an order preserving denoising method in Section \ref{subsec:lsma}, experiment with IDENT on noisy data in Section \ref{subsecnoisyexperiment}, and discuss the downsampling of data in Section \ref{subsec:down}. \subsection{An order preserving denoising method: Least-Squares Moving Average} \label{subsec:lsma} Our error formula in \eqref{E:enoise} shows that a small amount of noise can quickly increase the difficulty of the recovery, especially for higher-order PDEs. Denoising is helpful in general. We propose an order preserving method which keeps the order of the approximation to the underlying function while smoothing out possible noise. Let the data $\{d_i\}$ be given on a one-dimensional uniform grid $\{x_i\}$ and define its five-point moving average as $\tilde{d_i} = \frac{1}{5}\sum_{l=0,\pm1, \pm2} d_{i+l}$ for all $i$. At each grid point $x_i$, we determine a quadratic polynomial $p(x)= a_0 + a_1 (x-x_i) + a_2 (x-x_i)^2$ fitting the local data, which preserves the order of smoothness up to the degree of the polynomial.
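Anticipating the Least-Squares Moving Average (LSMA) fit formalized in the next paragraph, the sketch below carries out this local quadratic fitting on a uniform grid. It uses the identity $\frac15\sum_{l=0,\pm1,\pm2} p(x_{j+l}) = a_0 + a_1(x_j-x_i) + a_2\big[(x_j-x_i)^2 + 2\Delta x^2\big]$ for a quadratic $p$ centered at $x_i$; the boundary handling (keeping the raw data near the ends) is our own simplifying choice:
\begin{verbatim}
import numpy as np

def lsma_denoise(d, dx):
    """Least-Squares Moving Average denoising on a 1D uniform grid.
    At each interior point, fit a local quadratic whose five-point
    moving averages match those of the data in the least-squares
    sense; return the fitted center values p(x_i)."""
    n = len(d)
    dm = np.convolve(d, np.ones(5) / 5, mode='same')  # moving average
    out = d.copy()
    for i in range(4, n - 4):
        rows, rhs = [], []
        for j in range(i - 2, i + 3):
            s = (j - i) * dx
            # Average of p over x_{j-2},...,x_{j+2} for
            # p(x) = a0 + a1 (x - x_i) + a2 (x - x_i)^2:
            rows.append([1.0, s, s**2 + 2.0 * dx**2])
            rhs.append(dm[j])
        coef, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs),
                                   rcond=None)
        out[i] = coef[0]              # denoised value a0 = p(x_i)
    return out
\end{verbatim}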
There are a few possible choices for this fitting: (i) Least-Squares Fitting (LS): find $a_0, a_1$ and $a_2$ to minimize the functional $ F( a_0, a_1, a_2 )=\sum_{j\ {\rm near}\ i} (p(x_j)- d_j)^2$; (ii) Moving-Average Fitting (MA): find $a_0, a_1$ and $a_2$ such that the local average of the fitted polynomial matches the local average of the data, $ {1}/{5} \sum_{l=0,\pm1, \pm2} p(x_{j+l}) = \tilde{d_j},$ for $j=i, i\pm 1$ (or another set of $3$ grid points near $x_i$). The polynomial generated by LS may not represent the underlying true dynamics. Moving-average fitting is better at keeping the underlying dynamics; however, the linear system solved to determine $a_0,a_1,a_2$ may be ill-conditioned. We propose to use (iii) Least-Squares Moving Average (LSMA): find $a_0, a_1$ and $a_2$ to minimize the functional \[G( a_0, a_1, a_2 )= \sum_{j=i,i\pm1, i\pm2} \left\{\left[\frac{1}{5} \sum_{l=0,\pm1, \pm2} p(x_{j+l})\right] - \tilde{d_j}\right\}^2.\] The condition number of this linear system tends to be better than that of MA, because $j$ is chosen from a larger set of indices. This LSMA denoising method preserves the approximation order of the data and can easily be incorporated into numerical PDE techniques. MA fitting and LSMA are similar to the non-oscillatory polynomial reconstruction from cell averages, which is a key step in high-resolution shock capturing schemes; see, e.g., \cite{barth1990higher,ENO87,hu1999weighted}. The quadratic polynomials computed by the methods above are locally third-order approximations of the underlying function. We prove that, if the given data are sampled from a third-order approximation to a smooth function, then LSMA will keep the same order of approximation. The theorem can easily be generalized to any higher order; we keep the third order to be consistent with the experiments in this paper. \begin{theorem} If data are given as a $3$rd-order approximation to a smooth function, with or without additive noise, then denoising the data (to obtain a piecewise quadratic function) with the Least-Squares Moving Average (LSMA) method will keep the same order of accuracy to the function. \end{theorem} \begin{proof} Let $f(x)$ be the smooth function. The proof proceeds by comparing the quadratic polynomial with the Taylor expansion of $f(x)$ at a grid point $x_{i_0}$; see, e.g., \cite{LSTZ07}. Let $p(x)=a_0+a_1(x-x_{i_0})+a_2(x-x_{i_0})^2$ be the quadratic polynomial to be determined near $x_{i_0}$. The least-squares method solves the linear system $A^T(Ac-b)=0$ for the coefficient vector $c=[a_0, a_1,a_2]^T $, where $A$ is the $5\times 3$ matrix whose rows are $[1, \frac15 \sum_{j=0,\pm 1, \pm 2}(x_{i+j}-x_{i_0}), \frac15 \sum_{j=0,\pm 1, \pm 2}(x_{i+j}-x_{i_0})^2], $ for $i=i_0-2, \cdots, i_0+2$, and $b=[\tilde{d}_{i_0-2},\cdots, \tilde{d}_{i_0+2}]^T.$ By assumption, we have $$\tilde{d_i} = f(x_{i_0})+f'(x_{i_0})\frac15 \sum_{j=0,\pm 1, \pm 2}(x_{i+j}-x_{i_0})+ \frac12 f''(x_{i_0})\frac15 \sum_{j=0,\pm 1, \pm 2}(x_{i+j}-x_{i_0})^2+O(\Delta x^3), $$ for any grid point $x_i$ near $x_{i_0}$, i.e., $|x_i-x_{i_0}|= O(\Delta x)$. Let $s=[f(x_{i_0}), f'(x_{i_0}), \frac12 f''(x_{i_0})]^T$.
We have $$A^T(Ac-b)= HB^T\{BH(c-s)+O(\Delta x^3)\},$$ where $H$ is the $3\times 3$ diagonal matrix ${\rm diag}\{1, \Delta x, \Delta x^2\} $, and $$B=A H^{-1}=\left [ \begin{array}{lcr} &\vdots & \\ 1 & \frac15 \sum_{j=0,\pm 1, \pm 2}\frac{x_{i+j}-x_{i_0}}{\Delta x} & \frac15 \sum_{j=0,\pm 1, \pm 2}\frac{(x_{i+j}-x_{i_0})^2}{\Delta x^2} \\ &\vdots & \end{array} \right ], \quad i=i_0-2,\cdots,i_0+2. $$ Therefore $H(c-s)= (B^TB)^{-1}B^T\cdot O(\Delta x^3)$. Since $B$ is independent of $\Delta x$, we have $|p(x)-f(x)|= O(\Delta x^3)$ for all $x$ such that $|x-x_{i_0}|= O(\Delta x)$. \end{proof} \subsection{IDENT experiments for noisy data}\label{subsecnoisyexperiment} We next present numerical experiments with noisy data. We say $P$ percent Gaussian noise is added to the noise-free data $\{u_i^n : i=1,\ldots,N_1 \text{ and } n=1,\ldots,N_2\}$ if the observed data are $\{\widetilde u_i^n\}$, where $\widetilde u_i^n = u_i^n + \xi_i^n$ and $\xi_i^n \sim \mathcal{N}(0,\sigma^2)$ with $\sigma = \frac{P}{100} \sqrt{\sum_{i=1}^{N_1} \sum_{n=1}^{N_2} |u_i^n|^2}/\sqrt{N_1 N_2}$. \begin{figure} \centering \begin{tabular}{ccc} (a) Given data & (b) Coherence pattern & (c) Result from Lasso \\ \includegraphics[width = 2.05in]{Figure/Noisy_noD_TruB_Given.pdf} & \includegraphics[width = 2.05in]{Figure/Noisy_noD_TruB_Coh.png} & \includegraphics[width = 2.05in]{Figure/Noisy_noD_TruB_L1v2.pdf} \end{tabular} \caption{Burger's equation in \eqref{E:burger} with $8\%$ Gaussian noise. (a) Given noisy data. (b) Coherence pattern of the feature matrix. (c) The normalized coefficient magnitudes from Lasso, which fail to identify the correct term $uu_x$. }\label{Fig-BurgerExactDemoNoise8} \end{figure} Our first experiment is on Burger's equation in \eqref{E:burger} with $8\%$ Gaussian noise. Data are sampled from the analytic solution with $\Delta x = 1/56$ and $\Delta t = 0.004$ for $t \in [0,0.05]$, and then $8\%$ Gaussian noise is added. For comparison, we do not denoise the given data but directly apply IDENT. Figure \ref{Fig-BurgerExactDemoNoise8} (a) shows the noisy given data, (b) shows the coherence pattern, and (c) shows the normalized coefficient magnitudes from Lasso. The NSR for Lasso defined in \eqref{eqnsr} is 3.04. Lasso fails to include the correct set of terms, and thus TEE identifies the wrong equation $u_t=-0.59u^2$ as the solution. \begin{center} \begin{tabular}{ | c | c | c || c |c|c| } \hline Active terms & Coefficients & TEE & Active terms & Coefficients & TEE \\ \hline $1$ & $-0.27$ & $94.20$ & $[1 \ u ]$ & $[-0.27 \ 1.17]$ & $99.15$ \\ $u$ & $ 1.17$ & $99.36$ & $[1 \ u^2 ]$ & $[0.18\ -0.83]$ & $94.04$ \\ \textcolor{red}{$u^2$} & \textcolor{red}{$ -0.59$} & \textcolor{red}{$94.03$} & $[u \ u^2]$ & $[1.17\ -0.59]$ & $98.85$ \\ $[1 \ u \ u^2]$ & $[0.19 \ 1.17 \ -0.84]$ & $98.81$ & & & \\ \hline \end{tabular} \end{center} \begin{figure}[h!] \centering \begin{tabular}{ccc} (a) Given data & (b) Coherence pattern & (c) Result from Lasso \\ \includegraphics[width = 2.05in]{Figure/Noisy_LSMA_TruB_Given.pdf} & \includegraphics[width = 2.05in]{Figure/Noisy_LSMA_TruB_Coh.png} & \includegraphics[width = 2.05in]{Figure/Noisy_LSMA_TruB_L1v2.pdf} \end{tabular} \caption{Burger's equation in \eqref{E:burger} with $8\%$ Gaussian noise as in Figure \ref{Fig-BurgerExactDemoNoise8}. (a) The data after LSMA denoising. (b) Coherence pattern of $\widehat{F}$. (c) The normalized coefficient magnitudes from Lasso identify $1$, $u$ and $uu_x$, which include the correct term $u u_x$.
}\label{Fig-BurgerExactDemoNoise8_LSMA} \end{figure} For the same data, Figure \ref{Fig-BurgerExactDemoNoise8_LSMA} and the table below show the results when LSMA denoising is applied. After denoising, the given data are noticeably smoother: compare Figure \ref{Fig-BurgerExactDemoNoise8_LSMA} (a) with Figure \ref{Fig-BurgerExactDemoNoise8} (a). The Lasso result shows a great improvement in Figure \ref{Fig-BurgerExactDemoNoise8_LSMA} (c). With the correct terms included in Step 2, TEE determines the PDE with the correct feature: $u_t = -0.92 u u_x$. \begin{center} \begin{tabular}{ | c | c | c | c |c|c| } \hline Active terms & Coefficients & TEE & Active terms & Coefficients & TEE \\ \hline $1$ & $-0.25$ & $87.10$ & $[1 \ u ]$ & $[-0.25 \ 0.27]$ & $87.94$ \\ $u$ & $ 0.27$ & $87.89$ & $[1 \ u u_x ]$ & $[-0.22\ -0.92]$ & $29.5412$ \\ \textcolor{red}{$uu_x$} & \textcolor{red}{$ -0.92$} & \textcolor{red}{$29.5409$} & $[u \ uu_x]$ & $[0.07\ -0.92]$ & $29.81$ \\ $[1 \ u \ u u_x]$ & $[-0.22 \ 0.07 \ -0.92]$ & $29.80$ & & &\\ \hline \end{tabular} \end{center} In the next set of experiments, we explore different levels of noise for denoising+IDENT. In Figure \ref{Fig-BurgerExactVersusNoise}, we experiment on Burger's equation \eqref{E:burger}, with its analytic solution sampled in the same way as above, while the noise level increases from $0$ to $30\%$. For each noise level, we (i) generate data with $100$ sets of random noise, (ii) denoise by LS and LSMA, respectively, for comparison, and then (iii) run IDENT. The parameter $\tau$ is chosen as $10\%$ of the largest coefficient magnitude. Figure \ref{Fig-BurgerExactVersusNoise} (a) shows how often wrong results are found, computed as the average ratio between the wrong coefficients and all computed coefficients: $\textstyle \sum_{j \in \widehat{\Lambda}\setminus \Lambda} |\widehat{\mathbf{a}}_j|/\|\widehat{\mathbf{a}}\|_{1}$, where $\Lambda$ and $\widehat{\Lambda}$ are the exact support and the identified support, respectively. Each error bar represents the standard deviation of the results over the 100 trials. The green curves, denoised by LSMA, show the most stable results even as the noise level increases. Figure \ref{Fig-BurgerExactVersusNoise} (b) shows the recovered coefficient of $uu_x$, whose true value is $-1$. Notice that the LSMA+IDENT results (green points) are closer to $-1$, while the others find wrong coefficients more often. In general, denoising the given data with LSMA improves the results significantly. \begin{figure} \centering \begin{tabular}{cc} (a) Ratio of wrong coefficients versus noise & (b) Coefficient of $uu_x$, if $uu_x$ is identified \\ \includegraphics[width = 2.5in]{Figure/Noisy_NVary_TruB_Ratio.pdf} & \includegraphics[width = 2.5in]{Figure/Noisy_NVary_TruB_Coeff6.pdf} \\ \end{tabular} \begin{tabular}{ | c |c|c|c|c|c|c|c| c|} \hline Added noise in \% & 0 & 4 & 8 & 12 & 16 & 20 & 24 & 28 \\ \hline Added noise in new NSR \eqref{eqnsr} & 0.04 & 2.18& 3.09& 3.33& 3.40& 3.31& 3.31 & 3.23 \\ \hline \end{tabular} \caption{Burger's equation \eqref{E:burger} with increasing noise levels. (a) The average ratio between the identified wrong coefficients and all identified coefficients over $100$ trials. (b) The recovered coefficient of $u u_x$ by IDENT. Denoising the given data with LSMA significantly improves the result.
The table shows the new NSR \eqref{eqnsr} corresponding to the noise level given in percentage.} \label{Fig-BurgerExactVersusNoise} \end{figure} Figure \ref{Fig-BurgerDiffVersusNoise} shows results for Burger's equation with diffusion \eqref{E:burger_diff} with varying noise levels. The given data are sampled in the same way as in Figure \ref{Fig-BurgerDiffDemo}, while the noise level increases from $0$ to $0.12\%$. (a) shows the average ratio between the wrong coefficients and the total coefficients. (b) and (c) show the recovered coefficients of $uu_x$ and $u_{xx}$, respectively. Again, LSMA shows the best performance. \begin{figure} \centering \begin{tabular}{ccc} (a) Ratio of wrong coefficients & (b) Coefficient of $uu_x$ & (c) Coefficient of $u_{xx}$ \\ \includegraphics[width = 2.05in]{Figure/Noisy_NVary_BD_Ratio.pdf} & \includegraphics[width = 2.05in]{Figure/Noisy_NVary_BD_Coeff6.pdf} & \includegraphics[width = 2.05in]{Figure/Noisy_NVary_BD_Coeff7.pdf} \\ \end{tabular} \begin{tabular}{ | c | c | c | c |c|c|c|c| } \hline Added noise in \% & 0 & 0.02& 0.04 & 0.06 & 0.08& 0.10 & 0.12 \\ \hline Added noise in new NSR \eqref{eqnsr} & 0.05& 2.02& 4.04& 6.06& 8.08& 10.10& 12.13 \\ \hline \end{tabular} \caption{Burger's equation with diffusion in \eqref{E:burger_diff} with varying noise levels. (a) The average ratio between the identified wrong coefficients and all identified coefficients over $100$ trials. (b) and (c) the computed coefficients of $u u_x$ and $u_{xx}$, respectively, by IDENT. While the noise level in percentage seems small, the new NSR reflects the severity of the noise for PDEs with higher-order derivatives. } \label{Fig-BurgerDiffVersusNoise} \end{figure} For both Figures \ref{Fig-BurgerExactVersusNoise} and \ref{Fig-BurgerDiffVersusNoise}, we present the new NSR defined in \eqref{eqnsr}. This clearly shows that noise affects different PDEs in different ways. Burger's equation \eqref{E:burger} only has first-order derivatives, while Burger's equation with diffusion \eqref{E:burger_diff} has a second-order derivative. This seemingly small difference makes a big impact on the NSR and on the identification. While in Figure \ref{Fig-BurgerExactVersusNoise} the noise level goes up to $30 \%$, the corresponding new NSR varies only from 0 to less than 3.5. In Figure \ref{Fig-BurgerDiffVersusNoise}, the noise level only varies from 0 to 0.12 percent; however, this corresponds to the new NSR varying from 0 to above 12. The level of the new NSR characterizes the difficulty of identification using IDENT (Step 2, Lasso), since having a higher-order term affects Lasso negatively, especially in the presence of noise. \subsection{Downsampling effects and IDENT}\label{subsec:down} In applications, data are often collected on a coarse grid to save the expense of sensors. We explore the effect of downsampling in data collection in this section. Consider an $r$th-order PDE. Simulating its solution with a $q$th-order method on a fine grid with time step $\delta t $ and spatial spacing $\delta x$ gives rise to the error $O(\delta t + \delta x^q)$. Suppose data are downsampled by a factor of $C_t$ in time and $C_x$ in space, such that data are sampled with spacing $\Delta t = C_t \delta t$ and $\Delta x = C_x \delta x$. Our error formula in \eqref{E:enoise} depends crucially on the downsampling factors $C_t$ and $C_x$. Each term is affected by downsampling differently.
\begin{itemize} \setlength\itemsep{-0.02cm} \item{The term $\Delta t + \Delta x^{p+1-r}$ arises from the approximation of the time and spatial derivatives. It increases as the downsampling factors $C_t$ and $C_x$ increase.} \item{The term $\frac{\delta t + \delta x^q}{\Delta t} + \frac{\delta t + \delta x^q}{\Delta x^r}$ arises from the error in data generation. It decreases as the downsampling factors $C_t$ and $C_x$ increase. } \item{The term $\frac{\sigma}{\Delta t} + \frac{\sigma}{\Delta x^r}$ arises from the measurement noise. It decreases as the downsampling factors $C_t$ and $C_x$ increase.} \end{itemize} % Therefore, downsampling may positively affect the identification, depending on the balance among these three terms. As a numerical example, we consider Burger's equation in \eqref{E:burger} with different downsampling factors. The analytic solution is evaluated on the grid with spacing $\delta x = 1/1024$ and $\delta t = 0.001$ for $t \in [0,0.05]$. After evaluating the analytic solution, we generate $100$ sets of random noise and then downsample the noisy data with spacing $\Delta x = C_x \delta x$ and $\Delta t = C_t \delta t$, where $C_x = C_t = 1,2,2^2,2^3,2^4$ and $2^5$. We run IDENT on the downsampled noisy data, denoised by LS and LSMA, respectively. % Figure \ref{Fig-BurgerExactVersusDownsample} displays the ratio of wrong coefficients by IDENT versus $\log_2 C_x$ in the presence of $5\%$ or $10\%$ Gaussian noise. We observe that increasing the downsampling factors can positively affect the result until they become too large. LSMA again gives the best performance. \begin{figure} \centering \begin{tabular}{cc} (a) Ratio of wrong coefficients with $5\%$ noise & (b) Ratio of wrong coefficients with $10\%$ noise \\ \includegraphics[width = 3.3in]{Figure/DSVary_TruB_Noise5.pdf} & \includegraphics[width = 3.3in]{Figure/DSVary_TruB_Noise10.pdf} \\ \end{tabular} \caption{Burger's equation in \eqref{E:burger} with various downsampling factors. (a) and (b) show the average ratio between the identified wrong coefficients and all identified coefficients over $100$ trials versus $\log_2(\text{downsampling factor})$, in the presence of $5\%$ (left) and $10\%$ (right) noise, respectively. Increasing the downsampling factors can positively affect the result until they become too large.} \label{Fig-BurgerExactVersusDownsample} \end{figure} \section{Varying coefficients and Base Element Expansion}\label{sec:varying} In this section, we consider PDEs with varying coefficients, i.e., coefficients $a_j(x)$ varying in space. As illustrated in \eqref{E:L}, we can easily generalize the IDENT set-up to PDEs with varying coefficients by expanding the coefficients in terms of finite element bases and solving group Lasso with $L>1$. Due to the increasing number of coefficients, the complexity of the problem increases with $L$. In order to design a stable algorithm, we propose to let $L$ grow before TEE is applied. We refer to this extra procedure as Base Element Expansion (BEE). From the given discrete data $ \{u_{i}^n | i=1, \dots, N_1 \text{ and } n= 1, \dots, N_2\}$, we first compute numerical approximations of $u_t,u_x,u_{xx}$, etc., then apply BEE to gradually increase $L$ until the recovered coefficients become stable. For each fixed $L$, we form the feature matrix $\widehat{F}$ according to \eqref{E:general_F} and solve group Lasso with the balancing parameter $\lambda$ to obtain $\widehat{\mathbf{a}}_{\text{G-Lasso}}(\lambda)$.
We record the normalized block magnitudes from group Lasso as $L$ increases: \[ \text{ BEE procedure}:= \left\{\|\widehat{F}[j]\|_{L^1}\left\| \sum_{l=1}^L \frac{\widehat{\mathbf{a}}_{\text{G-Lasso}}(\lambda)_{j,l}}{\|\widehat{F}[j,l]\|_\infty} \phi_l \right\|_{L^1}\right\}_{j=1,\ldots,N_3} \text{ versus } L. \] The main idea of BEE is based on the convergence of the finite element approximation \eqref{E:approxError}: the more basis functions are used, the more accurate the approximation is. In the BEE procedure, the normalized block magnitudes reach a plateau as $L$ increases, i.e., candidate features can be selected by thresholding according to \eqref{algthresholding} when $L$ is sufficiently large. With this added BEE procedure, IDENT continues to Step 3, TEE, to refine the selection. In the following, we present various numerical experiments for PDEs with varying coefficients using IDENT with BEE. For the first set of experiments, in Figures \ref{Fig-7VaryNoise0}, \ref{Fig-7VaryNoise0p2NoDenoising} and \ref{Fig-7VaryNoise0p2MALS}, we assume only one coefficient is known a priori to vary in $x$. For the second set of experiments, in Figure \ref{Fig-47VaryDownsample}, we assume two coefficients are known a priori to vary in $x$; the final experiment, in Figure \ref{Fig-AllVaryDownsample4}, assumes all coefficients are free to vary without any a priori information. \begin{figure} \centering \begin{tabular}{ccc} (a) Given data & (b) BEE & (c) Group Lasso, when $L=20$ \\ \includegraphics[width = 2.05in]{Figure/VC_noN_TruB_Given.pdf} & \includegraphics[width = 2.05in]{Figure/VC_noN_TruB_blickVsL.pdf} & \includegraphics[width = 2.05in]{Figure/VC_noN_TruB_L20L1.pdf}\\ (d) TEE in $\log_{10}$ scale vs. $L$& (e) $\widehat{c}(x)$ vs. $c(x)$ & (f) $\|c(x)-\widehat{c}(x)\|_{L^1}$ vs. $L$ \\ \includegraphics[width = 2.05in]{Figure/VC_noN_TruB_TEEvsL.pdf} & \includegraphics[width = 2.05in]{Figure/VC_noN_TruB_L20Coeff67.pdf} & \includegraphics[width = 2.05in]{Figure/VC_noN_TruB_Coeff67Err.pdf}\\ \end{tabular} \caption{Burger's equation with a varying diffusion coefficient \eqref{E:BurgerDiffVary}, where data are downsampled by a factor of $4$. (a) The given data. (b) BEE as $L$ increases from $1$ to $30$. (c) An example of the magnitudes of coefficients from group Lasso when $L=20$. (d) TEE versus $L$, for all subsets of coefficients of $\{uu_x \ u_{xx} \}$. (e) Recovered diffusion coefficient $\widehat{c}(x) = \sum_{l=1}^L \widehat{a}_{7,l}\phi_l(x)$ when $L=20$ (blue), compared with the true diffusion coefficient $c(x)$ (red). (f) The error $\|c(x)-\widehat{c}(x)\|_{L^1}$ as $L$ increases from $1$ to $30$.} \label{Fig-7VaryNoise0} \end{figure} The first experiment is on Burger's equation with a varying diffusion coefficient: \begin{align} & u_t + \left( \frac{u^2}{2}\right)_x =c(x)u_{xx}, \text{ where } c(x) = 0.05+0.2 \sin \pi x \label{E:BurgerDiffVary} \\ &\ x \in [0,1], \ u(x,0)= \sin(4\pi x)+0.5\sin(8 \pi x) \text{ and } u(0,t) = u(1,t) = 0. \nonumber \end{align} The given data, shown in Figure \ref{Fig-7VaryNoise0} (a), are numerically simulated by a first-order method with spacing $\delta x = 1/256$ and $\delta t = (\delta x)^2/2$ for $t \in [0,0.05]$. Data are downsampled by a factor of $4$ in time and space. There is no measurement noise. Our objective is to identify the correct features $u u_x$ and $u_{xx}$, and to recover the constant coefficient $-1$ and the varying diffusion coefficient $c(x)$.
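A schematic of the BEE loop used in these experiments is sketched below. Here \texttt{feature\_matrix} refers to the assembly sketch in subsection \ref{ssec:setup}, \texttt{group\_lasso} stands for any block-sparse solver such as the ADMM solver of Step 2 (it is an assumed helper, not a library call), and the plain block $2$-norms recorded here are a simplified proxy for the normalized magnitudes in \eqref{algthresholding}:
\begin{verbatim}
import numpy as np

def bee_magnitudes(u, ux, uxx, x, b_hat, group_lasso, lam, L_max=30):
    """Base Element Expansion: for L = 1,...,L_max, assemble the block
    feature matrix, solve group Lasso, and record a block magnitude
    for each of the 10 features. Features whose magnitudes plateau at
    a nonzero level as L grows are kept for TEE.
    `group_lasso(F, b, lam)` is an assumed block-sparse solver
    returning a coefficient vector of length 10*L."""
    history = []
    for L in range(1, L_max + 1):
        F = feature_matrix(u, ux, uxx, x, L)
        Finf = F / np.abs(F).max(axis=0)     # column-normalized matrix
        z = group_lasso(Finf, b_hat, lam)
        blocks = z.reshape(10, L)            # one row per feature
        history.append(np.linalg.norm(blocks, axis=1))
    return np.array(history)                 # L_max-by-10 magnitudes
\end{verbatim}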
After we expand the diffusion coefficient with $L$ finite element bases, the vector to be identified can be written as $\mathbf{a} = [a_1 \ldots a_6 \ a_{7,1} \ \ldots \ a_{7,L} \ a_8 \ldots a_{10}]^T$, where $c(x) \approx \sum_{l=1}^L a_{7,l}\phi_l(x)$. \begin{figure}[t!] \centering \begin{tabular}{ccc} (a) BEE & (b) TEE vs. $L$ & (c) $\widehat{c}(x)$, when $L=20$ \\ \includegraphics[width = 2.05in]{Figure/VC_N02_TruB_blickVsL.pdf} & \includegraphics[width = 2.05in]{Figure/VC_N02_TruB_TEEvsL.pdf} & \includegraphics[width = 2.05in]{Figure/VC_N02_TruB_L20Coeff67.pdf} \\ \end{tabular} \caption{Equation \eqref{E:BurgerDiffVary}, where data are downsampled by a factor of $4$ and $0.2\%$ measurement noise is added. No denoising is applied. (a) BEE as $L$ increases from $1$ to $30$. (b) TEE versus $L$, for all subsets of the four terms selected in (a). The correct support $[uu_x \ u_{xx}]$ is identified with the lowest TEE when $L \ge 7$. (c) Recovered diffusion coefficient $\widehat{c}(x)$ when $L=20$ (blue), compared with the true diffusion coefficient $c(x)$ (red). } \label{Fig-7VaryNoise0p2NoDenoising} \end{figure} \begin{figure}[h!] \centering \begin{tabular}{ccc} (a) BEE & (b) TEE vs. $L$ & (c) $\widehat{c}(x)$, when $L=20$ \\ \includegraphics[width = 2.05in]{Figure/VC_N02LSMA_TruB_blickVsL.pdf} & \includegraphics[width = 2.05in]{Figure/VC_N02LSMA_TruB_TEEvsL.pdf} & \includegraphics[width = 2.05in]{Figure/VC_N02LSMA_TruB_L20Coeff67.pdf} \\ \end{tabular} \caption{The same experiment as Figure \ref{Fig-7VaryNoise0p2NoDenoising}, but IDENT+BEE is applied with LSMA denoising. (a) BEE as $L$ increases from $1$ to $30$. (b) TEE versus $L$, for all subsets of coefficients identified in (a). The correct support $[uu_x \ u_{xx}]$ is identified since it gives rise to the lowest TEE when $L \ge 19$. (c) The recovered diffusion coefficient when $L=20$, compared with the true diffusion coefficient (red), which shows a clear improvement compared to Figure \ref{Fig-7VaryNoise0p2NoDenoising} (c). } \label{Fig-7VaryNoise0p2MALS} \end{figure} Figure \ref{Fig-7VaryNoise0} (b) presents BEE as $L$ increases from $1$ to $30$. This graph clearly shows that BEE stabilizes when $L \ge 5$. (c) is an example of the group Lasso result, the normalized block magnitudes, when $L = 20$. The magnitudes of $uu_x$ and $u_{xx}$ are significantly larger than the others, so these two terms are picked for TEE in Step 3. Figure \ref{Fig-7VaryNoise0} (d) presents TEE of $uu_x$, $u_{xx}$ and $[uu_x \ u_{xx}]$ in $\log_{10}$ scale, for different values of $L$. The correct set, $[uu_x \ u_{xx}]$, is identified as the recovered feature set with the smallest TEE. The coefficients $[\widehat{a}_6 \ \widehat{a}_{7,1} \ldots \widehat{a}_{7,L}]$ are computed by least squares, and (e) displays the recovered diffusion coefficient $\widehat{c}(x) = \sum_{l=1}^L \widehat{a}_{7,l}\phi_l(x)$ when $L=20$, compared with the true coefficient $c(x) = 0.05+0.2 \sin \pi x$ given in \eqref{E:BurgerDiffVary}. Figure \ref{Fig-7VaryNoise0} (f) shows the error $\|c(x)-\widehat{c}(x)\|_{L^1}$ as $L$ increases from $1$ to $30$. The error decreases as $L$ increases, yet does not converge to $0$ due to the errors arising from data generation and the finite-difference approximations of $u_t,u_x$ and $u_{xx}$. For the same equation, $0.2\%$ noise is added in the next experiments, presented in Figures \ref{Fig-7VaryNoise0p2NoDenoising} and \ref{Fig-7VaryNoise0p2MALS}. Figure \ref{Fig-7VaryNoise0p2NoDenoising} presents the result without any denoising.
(a) shows BEE as $L$ increases from $1$ to $30$, where the magnitudes of $u$, $uu_x$, $u_{xx}$, and $u_xu_{xx}$ are not negligible for $L \ge 20$. These terms are picked for TEE, and Figure \ref{Fig-7VaryNoise0p2NoDenoising} (b) shows TEE versus $L$. The correct support $[uu_x \ u_{xx}]$ is identified with the lowest TEE when $L \ge 7$. The computed diffusion coefficient $\widehat{c}(x)$ is compared to the true one in (c), with error $\|c(x)-\widehat{c}(x)\|_{L^1} \approx 0.019$. Even for the data with noise, IDENT+BEE without any denoising gives a good identification of the general form of the PDE. However, the approximation of the varying coefficient can be improved if LSMA denoising is applied to the data, as discussed in Section \ref{sec:noise}. Figure \ref{Fig-7VaryNoise0p2MALS} presents the same experiment with LSMA denoising. In (a), BEE picks $u$, $uu_x$, and $u_{xx}$ for TEE. Notice that the coefficient of $u_x u_{xx}$ almost vanishes after denoising, compared to Figure \ref{Fig-7VaryNoise0p2NoDenoising}. (b) shows TEE versus $L$, where the correct support $[uu_x \ u_{xx}]$ gives the lowest TEE when $L \ge 19$. The recovered diffusion coefficient when $L=20$ is shown in (c), which yields the error $\|c(x)-\widehat{c}(x)\|_{L^1} \approx 0.008$. In comparison with the results in Figure \ref{Fig-7VaryNoise0p2NoDenoising} without denoising, LSMA reduces the error of the recovered diffusion coefficient from $0.019$ to $0.008$.
\begin{figure}[h] \centering \begin{tabular}{ccc} (a) Numerical solution & (b) $\widehat{b}(x)$, $\widehat{c}(x)$ vs. $b(x)$, $c(x)$ & (c) $\widehat{b}(x)$, $\widehat{c}(x)$ vs. $b(x)$, $c(x)$ \\ \includegraphics[width = 2.05in]{Figure/VC_2cof_Given.pdf} & \includegraphics[width = 2.05in]{Figure/VC_2cof_noN_L20Coeff47.pdf} & \includegraphics[width = 2.05in]{Figure/VC_2cof_nodown_L20Coeff47.pdf} \\ \end{tabular} \caption{Equation \eqref{E:47Vary} with varying convection and diffusion coefficients. (a) The numerical solution of \eqref{E:47Vary}. (b) With data downsampled by a factor of $4$ in time and space, the recovered coefficient $\widehat{b}(x)$ of $u_x$ is not accurate near $x=1$. The downsampling rate is too high near $x=1$, so that details of the solution are lost. (c) The same experiment without any downsampling; the recovered coefficients $\widehat{b}(x)$ and $\widehat{c}(x)$ are more accurate than in (b). } \label{Fig-47VaryDownsample} \end{figure}
In Figure \ref{Fig-47VaryDownsample}, we experiment on the following PDE with two varying coefficients: \begin{align} & u_t =b(x)u_x + c(x)u_{xx}, \text{ where } b(x)=-2x \text{ and } c(x) = 0.05+0.2 \sin \pi x, \label{E:47Vary} \\ & x \in [0,1], \ u(x,0)= \sin(4\pi x)+0.5\sin(8 \pi x) \text{ and } u(0,t) = u(1,t) = 0. \nonumber \end{align} The given data are simulated by a first-order method with spacing $\delta x = 1/256$ and $\delta t = (\delta x)^2/2$ for $t \in [0,0.05]$. The vector to be identified is $\mathbf{a} = [a_1 \ldots a_3 \ a_{4,1} \ \ldots \ a_{4,L} \ a_5 \ a_6 \ a_{7,1} \ \ldots \ a_{7,L} \ a_8 \ldots a_{10}]^T$ where $b(x) \approx \sum_{l=1}^L a_{4,l}\phi_l(x)$ and $c(x) \approx \sum_{l=1}^L a_{7,l}\phi_l(x)$. Figure \ref{Fig-47VaryDownsample} (a) shows the numerical solution of \eqref{E:47Vary}. In (b), the given data are downsampled by a factor of $4$ in time and space, and in (c) the data are not downsampled. BEE and TEE successfully identify the correct features.
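Once the support is identified, the varying coefficients are recovered by least squares over the finite element expansion. The following is a minimal sketch of this step, assuming piecewise-linear hat functions for the basis $\phi_l$; all helper names are hypothetical.
\begin{verbatim}
import numpy as np

def hat_basis(x, L):
    """(L, len(x)) piecewise-linear hat functions on a uniform grid of
    [0,1]; the concrete finite element basis is our assumption."""
    knots = np.linspace(0.0, 1.0, L)
    h = knots[1] - knots[0]
    return np.maximum(0.0, 1.0 - np.abs(x[None, :] - knots[:, None]) / h)

def recover_coefficients(u_t, u_x, u_xx, x_pts, L=20):
    """Least squares for u_t = b(x) u_x + c(x) u_xx with b, c expanded
    in L hat functions; u_t, u_x, u_xx are finite-difference features
    flattened over the space-time grid, x_pts their spatial locations."""
    phi = hat_basis(x_pts, L)                    # (L, n)
    A = np.hstack([(u_x * phi).T,                # columns phi_l(x) u_x
                   (u_xx * phi).T])              # columns phi_l(x) u_xx
    coef, *_ = np.linalg.lstsq(A, u_t, rcond=None)
    b_hat = lambda xq: coef[:L] @ hat_basis(xq, L)
    c_hat = lambda xq: coef[L:] @ hat_basis(xq, L)
    return b_hat, c_hat
\end{verbatim}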
Figure \ref{Fig-47VaryDownsample} (b) plots both the recovered coefficients $\widehat{b}(x)$, $\widehat{c}(x)$ and the true coefficients $b(x)$ and $c(x)$ when data are downsampled. Notice that the coefficient $\widehat{b}(x)$ of $u_x$ is not accurate when $x$ is close to $1$. The result is improved in (c), where data are not downsampled. Avoiding downsampling helps to keep details of the solution around $x=1$ and reduces the finite-difference approximation errors. Our final experiment is on Equation \eqref{E:BurgerDiffVary}, but all coefficients are allowed to vary in $x$. The numerical solution is simulated in the same way as in Figure \ref{Fig-7VaryNoise0}, and the given data are downsampled by a factor of $4$ in time and space. After all coefficients are expanded in terms of $L$ finite element bases, the vector to be identified is $\mathbf{a} = \{a_{k,l}\}_{k=1,\ldots,10,\ l=1,\ldots,L}$ where $-1 = b(x) \approx \sum_{l=1}^L a_{6,l}\phi_l(x)$ and $c(x) \approx \sum_{l=1}^L a_{7,l}\phi_l(x)$. Figure \ref{Fig-AllVaryDownsample4} (a) shows BEE, (b) shows the group Lasso result, and (c) shows TEE. TEE identifies the correct support $[uu_x \ u_{xx}]$ since it yields the smallest error. The coefficients $[\widehat{a}_{6,1} \ \ldots \ \widehat{a}_{6,L} \ \widehat{a}_{7,1} \ldots \widehat{a}_{7,L}]$ are computed by least squares. Figure \ref{Fig-AllVaryDownsample4} (d) displays the computed coefficients $\widehat{b}(x) = \sum_{l=1}^L \widehat{a}_{6,l}\phi_l(x)$ and $\widehat{c}(x) = \sum_{l=1}^L \widehat{a}_{7,l}\phi_l(x)$ when $L=20$, and (e) shows the coefficient recovery errors $\|-1-\widehat{b}(x)\|_{L^1}$ and $\|c(x)-\widehat{c}(x)\|_{L^1}$ as $L$ increases from $1$ to $30$. IDENT with BEE successfully identifies the correct terms even when all coefficients are free to vary in space. The accuracy of the recovered coefficients could be improved further if the data were simulated and sampled on a finer grid.
\begin{figure}[h] \centering \begin{tabular}{cc} (a) BEE & (b) Group Lasso when $L=20$ \\ \includegraphics[width = 2.05in]{Figure/VC_allvary_blickVsL.pdf} & \includegraphics[width = 2.05in]{Figure/VC_allvary_L20L1.pdf}\\ \end{tabular} \begin{tabular}{ccc} (c) TEE in $\log_{10}$ scale vs. $L$& (d) $\widehat{b}(x)$, $\widehat{c}(x)$ vs. $b(x)$, $c(x)$ & (e) coefficient errors vs. $L$ \\ \includegraphics[width = 2.05in]{Figure/VC_allvary_TEEvsL.pdf} & \includegraphics[width = 2.05in]{Figure/VC_allvary_L20Coeff67.pdf} & \includegraphics[width = 2.05in]{Figure/VC_allvary_Coeff67Err.pdf}\\ \end{tabular} \caption{Equation \eqref{E:BurgerDiffVary}, where all coefficients are free to vary with respect to $x$ and no a priori information is given. Data are downsampled by a factor of $4$ in time and space. } \label{Fig-AllVaryDownsample4} \end{figure}
\section{Concluding remarks}\label{sec:summary} We proposed a new method to identify PDEs from a given set of time-dependent data, with techniques from numerical PDEs and fundamental ideas of convergence. Assuming that the PDE is spanned by a few active terms in a prescribed dictionary, we used finite differences, such as the ENO scheme, to form an empirical version of the dictionary, and utilized $L^1$ minimization for efficiency. The Time Evolution Error (TEE) was proposed as an effective tool to pick the correct set of terms. Starting with a first set of basic experiments, we considered noisy data, downsampling effects, and PDEs with varying coefficients.
In establishing the Lasso recovery theory for PDE identification, we proposed a new noise-to-signal ratio (\ref{eqnsr}), which measures the noise level more accurately in this setting. We derived an error formula in (\ref{E:enoise}) and analyzed the effects of noise and downsampling. A new order-preserving denoising method called LSMA was proposed in Subsection \ref{subsec:lsma} to aid the identification with noisy data. IDENT can be applied to PDEs with varying coefficients, and the BEE procedure helps to stabilize the results and reduce the computational cost.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The last decades have brought enormous technological progress and innovation. Two main factors that are undoubtedly key to this development are (i)~\emph{hardware advancement} and (ii)~\emph{algorithm advancement}. Moore's Law, the prediction made by Gordon Moore in 1965 \cite{Moore65} that the number of components per integrated circuit doubles every year, has proven astonishingly accurate for several decades. Given such an exponential improvement on the hardware side, one is tempted to overlook the progress made on the algorithmic side. This paper aims to compare the impact of hardware advancement and algorithm advancement based on a genuine problem, the propositional satisfiability problem (SAT). This problem is well suited for such a comparison since it is one of the first problems for which progress in solving has been measured regularly through competitions~\cite{JarvisaloBRS12}. Also, a standardized instance format was established very early. By focusing on this problem, the comparison allows us to fathom the SAT and CP communities' contribution to the overall progress. Of course, the advancements in hardware and algorithms cannot be separated entirely. Targeted algorithm engineering can make use of new hardware features~\cite{BornebuschWilleDrechsler17a,ChuHarwoodStuckey09a,FichteMantheyStecklina20a,JarvisaloHeuleBiere12}, and hardware development can be guided by the specific demands of modern algorithms. We are well aware that this can quickly end up in comparing apples and oranges. Nevertheless, we think that by carefully setting up the experiment and choosing hardware and algorithms, it still allows us to draw some conclusions on the impact of the individual components. We base the general setup of the comparison on a \emph{Time Leap Challenge}, where virtual teams compete. Team~SW\xspace uses new solvers on old hardware; Team~HW\xspace uses old solvers on new hardware. The time between ``old'' and ``new'' spans about two decades. Which team can solve more instances? Depending on the outcome, one can compare the impact of hardware advancement and algorithm advancement. The idea for this time leap challenge for SAT solvers was inspired by a thought experiment on algorithms in mixed-integer linear programming (MILP), suggested by Sebastian Stiller~\cite{Stiller15a}. In the early 1990s, the dominant complete method for SAT solving was the \emph{DPLL Algorithm} (Davis-Putnam-Logemann-Loveland \cite{DavisPutnam60,DavisLogemannLoveland62}), which combines backtracking search with Boolean constraint propagation~\cite{ZabihMcallester88}. However, in the late 1990s, the \emph{CDCL solvers} (Conflict-Driven Clause Learning) took over. They extended the basic DPLL framework with new methods, including clause learning~\cite{MarquessilvaSakallah96}, lazy data structures like watched literals~\cite{MoskewiczEtAl01}, backjumping~\cite{MarquessilvaSakallah96,MoskewiczEtAl01}, and dynamic branching heuristics~\cite{MoskewiczEtAl01}; the combination of these methods resulted in a significant performance boost, often referred to as the ``CDCL Revolution''.
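For readers unfamiliar with the baseline, the following is a textbook sketch of plain DPLL (unit propagation plus chronological backtracking over a branching literal); it is a teaching illustration, not the code of any competition solver.
\begin{verbatim}
def dpll(clauses, assignment=frozenset()):
    """Textbook DPLL: clauses are sets of nonzero ints, where -v encodes
    the negation of variable v; returns a satisfying set of literals or
    None. A teaching sketch, not a competition solver."""
    clauses = [frozenset(c) for c in clauses]
    while True:                                   # unit propagation
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        lit = next(iter(unit))
        assignment = assignment | {lit}
        reduced = []
        for c in clauses:
            if lit in c:
                continue                          # clause satisfied
            c = c - {-lit}                        # drop falsified literal
            if not c:
                return None                       # conflict: empty clause
            reduced.append(c)
        clauses = reduced
    if not clauses:
        return assignment                         # all clauses satisfied
    lit = next(iter(clauses[0]))                  # naive branching choice
    for choice in (lit, -lit):                    # chronological backtracking
        result = dpll(clauses + [frozenset([choice])], assignment)
        if result is not None:
            return result
    return None

# Example: (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))           # frozenset({1, 2, 3})
\end{verbatim}
CDCL extends this loop with clause learning from conflicts, backjumping, restarts, and lazy data structures, which is what the solvers discussed below implement.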
Although the CDCL paradigm still predominates in today's SAT solving, several significant improvements have been made over the last two decades, including efficient preprocessing \cite{EenBiere05} and inprocessing \cite{JarvisaloHeuleBiere12}, aggressive clause deletion~\cite{AudemardSimon09}, fast restarts \cite{LubySinclairZuckermann93}, lightweight component caching \cite{PipatsrisawatDarwiche07}, implication queue sorting~\cite{LewisSchubertBecker05}, and new branching heuristics~\cite{LiangGaneschPaupartCzarnecki16}. \subsection{Experimental Setting} For our Time Leap Challenge, Team~HW\xspace (old solvers on new hardware) is composed of the solvers \solver{Grasp} (1996), \solver{zChaff} (2001), and \solver{siege} (2003) running on a computer from 2019 with an Intel Xeon Silver 4112 CPU at 2.60GHz base frequency and 128GB RAM. Team~SW\xspace (new solvers on old hardware) is composed of the solvers \solver{MapleSat19} (2019), \solver{CaDiCal} (2019), and \solver{Glucose} (2016) running on a computer from 1999 with a Pentium~III processor at 467MHz frequency and 1.5GB RAM. An essential question for setting up the experiment was the choice of a suitable set of benchmark instances. On the one hand, the instances should not be too challenging, so that they are not entirely out of reach for old solvers or old hardware; on the other hand, the instances should still be challenging enough to provide interesting results. We settled on the benchmark set~\textit{set-asp-gauss}~\cite{HoosKaufmannSchaub13a}, which provides a reasonably good compromise, as it contains a large variety of instances, offers a suitable range of instance hardness, is free of duplicates, is reproducible, and is publicly available. We used a timeout of 900 seconds, which is the default for SAT competitions. Right at the beginning, we state a clear disclaimer. While a theoretical challenge is easy to design, a practical comparison can rarely be comprehensive and complete. About 20 years of evolution increase the practical search space by orders of magnitude. There are many possibilities to combine hardware, software, benchmarks, and solvers. In particular, there might be solvers that are still available but that we missed during our research. Still, we provide clear guidelines on how we selected the teams and give extensive details below. Our results are reproducible within this setting, and the conclusions provide a general picture. However, they might not generalize to other benchmark sets or to solvers we might have missed. This is, however, a usual situation in many experiments with combinatorial solving, as there is no good theoretical understanding of the practical effects~\cite{Nordstrom15a}. Still, we aimed to put the concept of a time leap challenge, known as a thought experiment from popular science, into a practical scientific context. \subsection{Results} Table~\ref{table:summary} gives a summary of our results (we provide more details in Section~\ref{sec:results}). We see that both teams perform in a similar range with a slight advantage for Team~SW\xspace.
\begin{table}[t] \centering \[ \begin{array}{@{}r@{\hskip 0.5em}c@{\hskip 0.5em}c@{\hskip 0.5em}c@{\hskip 0.5em}c@{\hskip 0.5em}c@{\hskip 0.5em}c@{}} \toprule & \solver{Grasp} & \solver{zChaff} & \solver{siege\_v3} & \solver{Glucose} & \solver{CaDiCal} & \solver{Maple} \\ & (1996) & (2001) & (2003) & (2016) & (2019) & (2019) \\[1.8em] \hbox{old~HW (1999)} & 73 & 48 & 37 & \tikzmark{startS}106 & 98 & 77\tikzmark{endS} \\[1.5em] \hbox{new~HW (2019)} &\tikzmark{startH} 76 & 71 & 93 \tikzmark{endH}& 188 & 190 & 195 \\[0.5em] \bottomrule \end{array} \] \begin{tikzpicture}[remember picture,overlay] \foreach \Val in {S,H} { \draw[rounded corners,black,thick] ([shift={(-1.0\tabcolsep,-1.5ex)}]pic cs:start\Val) node[above right,xshift=2.6em,yshift=1.4em]{\small \sffamily Team \Val{}W} rectangle ([shift={(1.5\tabcolsep,2ex)}]pic cs:end\Val); } \end{tikzpicture} \caption{Summary of experimental results} \label{table:summary} \end{table} \nocite{LiangGaneschPaupartCzarnecki16,LubySinclairZuckermann93,EenBiere05, LewisSchubertBecker05,PipatsrisawatDarwiche07,AudemardSimon09, GomesSelmanKautz98,ZabihMcallester88,JarvisaloHeuleBiere12} \subsection{Related Work} Knuth~\cite{Knuth15a} provides an overview of various aspects of SAT solving, including commented implementations of algorithms from several epochs of SAT solving. His implementations comprise a DPLL solver (\solver{SAT10}), a DPLL look-ahead solver (\solver{SAT11}), and a CDCL solver (\solver{SAT13}), as well as a preprocessor (\solver{SAT12}). Since all these solvers are implemented uniformly, without special implementation or hardware tricks, they provide an excellent comparison of the algorithmic advancement of solver techniques. We therefore included, for comparison, the results of Knuth's solvers on the same benchmark set and hardware platform as the time leap challenge. Mitchell~\cite{Mitchell05} provides an overview of techniques, implementations, and algorithmic advances as of 2005, looking back over 15 years. He already mentioned that the success of SAT solving is due to three factors: improved algorithms, improved implementation techniques, and increased machine capacity. However, Mitchell's work does not provide evaluations of any actual practical effects at the time. Kohlhase~\cite{Kohlhase19a} recently published work on collecting and preserving old theorem provers as cultural artifacts and history in Artificial Intelligence.\footnote{The Theorem Prover Museum is available online at \url{https://theoremprover-museum.github.io/}} For an overview of the techniques in CDCL-based solvers, we refer the reader to introductory literature such as a chapter in the Handbook of Knowledge Representation~\cite{GomesKautzSabharwalSelman08}, and chapters on the history of modern SAT solving~\cite{FrancoMartin09a} and on CDCL solvers~\cite{Marques-SilvaLynceMalik09a} in the Handbook of Satisfiability~\cite{BiereHeuleMaarenWalsh09}. Katebi, Sakallah, and Marques-Silva~\cite{KatebiSakallahSilva11a,SakallahSilva11a} considered various techniques of modern SAT solvers from an empirical viewpoint. They designed experiments to evaluate the factors and the aggregation of different SAT enhancements that contribute to today's practical success of modern solvers. Work on targeted algorithm engineering for SAT solvers is extensive.
Just to name a few examples, there is work on optimizing memory footprints for the architecture~\cite{BornebuschWilleDrechsler17a}, on cache-aware implementations~\cite{ChuHarwoodStuckey09a}, on using huge pages~\cite{FichteMantheyStecklina20a}, on benefiting from parallel solving~\cite{IserBalyoSinz19a}, and on employing inprocessing. Inprocessing particularly takes advantage of modern hardware, as a modern CPU can execute many more instructions in the time it takes to access bytes in memory~\cite{HennessyPatterson11a,MahapatraVenkatrao99a}. Very recently, Audemard, Paulevé, and Simon~\cite{AudemardPauleveSimon20a} published a heritage system for SAT solvers. It allows for compiling, archiving, and running almost all released SAT solvers and is based on Docker, GitHub, and Zenodo. While they aim for archivability, our work provides an actual experiment incorporating software and hardware advances. We hope that their system allows for long-term preservation and that, if there is no major change in computer architecture, one can repeat our time leap challenge in another decade. \section{The Arena: Designing the Time Leap Challenge} To run a proper challenge, we design an arena by selecting a standard benchmark set and several contestants out of a vast space of possibilities. We aim for the oldest reasonable hardware on which we can still run modern benchmark sets and solvers. In turn, this requires setting up a modern operating system on old hardware. To make it a time leap challenge, we are interested in solvers and hardware from similar generations, i.e., a preferably small time frame from which both originate. The physical effort restricts us to considering only two time frames in the following. We take modern hardware and solvers from 2019 and old hardware from around 2000 and solvers from 2001/2002. Following academic ideas by Stallman~\cite{Stallman85a}, we focus on benchmark sets and solvers that are publicly available. Throughout the experimental work, we follow standard guidelines for benchmarking~\cite{KouweAndriesseBos18a}. In the course of this section, we elaborate on various technical and organizational obstacles. Setting up a time leap challenge is also somewhat of an archaeological challenge. In theory, a variety of competitions have been organized in the past, the competition results give a broad picture of benchmark instances and solvers, and old hardware and operating systems should still be widely available. In practice, neither open source, nor version control systems, nor public platforms to host software projects, such as SourceForge\footnote{\url{https://en.wikipedia.org/wiki/SourceForge}}, bitbucket, github, or gitlab, were popular in the community around the millennium. Publicly funded data libraries such as Zenodo~\cite{Nielsen19a} were also established much later. While the culture of storing text in libraries dates back to Alexandria and the first librarian Zenodotus in 280~BC, searching for datasets and source codes from 20 years ago feels like digging through a burnt library. Enthusiasts maintained datasets and source codes from early competitions. Sometimes source codes were kept secret~\cite{GoldbergNovikov03}. Some links redirect to grabbed domains, or people moved and their webpages with them. Sometimes binaries show up in private collections or the Internet Archive~\cite{Kahle20a}. However, it turned out that some of them do not run, as the libraries on which they depend are not available on modern Linux or Unix distributions.
\smallskip Below we report and explain the details of the selection process. \paragraph{Instance Format.} Johnson and Trick suggested a uniform input format description in 1993, which is still used as the standard for SAT input instances~\cite{JohnsonTrick93a} (a minimal example is shown at the end of this subsection). The standardized input format and backward compatibility substantially simplified our selection process. \subsection{Selecting a Suitable Benchmark Set} For the benchmark set, we aim at a larger set, say of a cardinality ranging from 100 to~300. We are interested in a safe and stable choice of instances, since we run a wide variety of experiments with preferably more than 10 solvers, resulting in months of running time. Hence, we aim for a reasonable state-of-the-art benchmark setting. We prefer instances that (i)~are publicly available, (ii)~contain a good selection of domains, including instances of industrial background, random instances, and combinatorial instances, and (iii)~highlight differences between modern solvers. We summarize the runtime and the number of solved instances during our instance selection process in Table~\ref{tab:selection}. For an initial selection, we ran the instances only with the solver \solver{Glucose}~\cite{glucose421}, which showed robust performance in many earlier experiments that we carried out. \begin{table} \centering \def\arraystretch{1.2}% \begin{tabular}{@{}c@{\hskip 1em}c@{\hskip 1em}r@{\hskip 1em}r@{\hskip 1em}r@{\hskip 1em}r@{\hskip 1em}r@{}} \toprule benchmark & solver & \# & TO & ERR & $t[h]$ & avg[s]\\ \midrule DIMACS2 & \solver{Glucose} & 225 & 15 & 1 & 0.34 & 5.46\\ SATLIB &\solver{Glucose} & 43892 & 15 & 6399 & 4.45 & 0.36\\ set-asp-gauss & \solver{Glucose} & 189 & 11 & 0 & 4.50 & 85.71 \\ \bottomrule \end{tabular} \medskip \caption{Runtime of a modern solver on modern hardware on selected benchmark sets. \# refers to the number of solved instances, TO to the number of instances on which the solver timed out, ERR to the number of instances on which the solver reported an input error, $t[h]$ to the total running time on the solved instances in hours, and avg[s] to the average running time per instance.} \label{tab:selection} \end{table} \paragraph{Available Instances.} The first available benchmark instances, \set{DIMACS-2}, date back to 1992 and the 2nd \mbox{DIMACS} Challenge 1992--1993 on NP-hard problems, which also considered SAT as a problem~\cite{TrickChvatalCook93a}. The 241 instances are still well maintained and downloadable\footnote{See: \url{http://archive.dimacs.rutgers.edu/pub/challenge/sat/benchmarks/}}. Note that the 1st SAT Competition already took place in 1992~\cite{BuningBuro93a}. However, its instances are not publicly available. Over time, researchers collected benchmarks such as \set{SATLIB}~\cite{Hoos00a}, which counts more than 50,000 instances in total. The instances are still available on an old webpage by the collector.\footnote{See: \url{https://www.cs.ubc.ca/~hoos/SATLIB/benchm.html}} A subset of these instances was also used for the SAT Competition 2002. However, those instances are not available from the SAT Competition website due to an abandoned domain. Instances from the annual SAT competitions from 2002 to 2019\footnote{The webpage~\url{http://www.satcompetition.org/} gives a broad overview of the results and details of the competitions since 2002.} follow stricter rules, and detailed reports are available~\cite{JarvisaloBerreRoussel12a}.
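All of the instance sets above use the DIMACS CNF format mentioned at the beginning of this subsection. For illustration, a tiny instance of our own, encoding $(x_1 \lor x_2) \land (\lnot x_1 \lor x_2)$, reads as follows:
\begin{verbatim}
c example.cnf with 2 variables and 2 clauses
p cnf 2 2
1 2 0
-1 2 0
\end{verbatim}
Comment lines start with \texttt{c}, the problem line \texttt{p cnf} states the number of variables and clauses, and each clause is a list of nonzero integers terminated by \texttt{0}, where a negative integer denotes a negated variable.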
There are plenty of tracks, thousands of instances, and many of the more modern instances are enormous in size. A popular benchmark set with various instances from SAT competitions until 2013 and from various fields is the benchmark set \set{set-asp-gauss}~\cite{HoosKaufmannSchaub13a}. The set is a composition of representative benchmarks from a variety of sources. It has been widely used as a robust selection for tuning solvers in the past and was obtained by classifying the practical hardness of the instances from the SAT Competition 2009 and the SAT Challenge 2012 and then selecting instances by sampling with a Gaussian probability distribution~\cite{HoosKaufmannSchaub13a}. \paragraph{Initial Evaluations.} In order to gather initial insights, we ran all available solvers on our cluster. The hardware for the benchmark selection process consisted of a cluster of RHEL~7.7 Linux machines equipped with two Intel Xeon E5-2680v3 CPUs of 12 physical cores each running at 2.50GHz, which we enforced by performance governors. The machines are equipped with 64GB main memory, of which 60.5GB are freely available to programs. We compare wall clock time and the number of timeouts. However, we avoid I/O access during solving whenever possible, i.e., we load instances into RAM before we start solving. We run at most four solvers on one node, set a timeout of 900 seconds, and limit the available RAM to~8GB per instance and solver. We summarize our initial evaluation of the early benchmark sets in Table~\ref{tab:selection}. The DIMACS-2 instances turned out to be very easy for modern solvers. For example, the solver Glucose solved almost all instances in less than one second; only five large instances (par32-X.cnf) of a parity learning problem remained unsolved within 900 seconds. The SATLIB instances are more challenging but still fairly easy for modern solvers. The SAT Competition 2002--2019 instances provide a broad selection. Since the results are still publicly available, we refrained from rerunning these sets. The runtime results on the benchmark set \set{set-asp-gauss} revealed that modern solvers can solve many of its instances. However, the instances are still challenging, as the overall runtimes are reasonably long. Old solvers are still able to solve plenty of the instances on modern hardware. The benchmark set consists of 200 instances in total. \paragraph{Decision.} After running the instances, we picked one existing benchmark set. Since the set \set{DIMACS-2} contains almost only easy instances, we rejected it right away. As the \set{SATLIB} set also contains mainly easy instances, it is not very challenging for modern solvers either. Further, the contained benchmarks have a strong bias towards handcrafted and random instances. The SAT 2002--2019 instances contain very interesting sets. However, some of the more modern instances are very large, and we figured that it is impossible to transfer and run them on old hardware. After reviewing the initial results and sampling memory requirements from earlier SAT competitions, we decided to use the benchmark set \set{set-asp-gauss}~\cite{HoosKaufmannSchaub13a}, which provides a reasonably good compromise. It contains a large variety of instances, offers a suitable range of instance hardness, is free of duplicates, is reproducible, and is publicly available. \subsection{Selecting Solvers} In the following, we describe the selection process of SAT solvers for our challenge.
In order to foster reproducibility and favor open source, we focus on publicly available solvers (binary or source code). Note that modern SAT solving also includes various parallel algorithms. Since wide parallel computation is unavailable on old hardware, we restrict ourselves to sequential solvers. Further, we consider only solvers that are, vaguely speaking, descendants of the DPLL~\cite{DavisPutnam60,DavisLogemannLoveland62} algorithm, i.e., CDCL. These solvers are often referred to as solvers implementing complete and systematic search. However, restarts and deletion might affect completeness under certain conditions in practice~\cite{Marques-SilvaLynceMalik09a}. To our knowledge, CDCL-based solvers with various additional techniques on top, which even extend the underlying proof system, are still the prevailing paradigm for SAT solvers. However, today, some solvers use strong proof techniques such as the division rule in cutting planes~\cite{ElffersNordstrom18a,GochtNordstromYehudayoff19a} or Gaussian elimination~\cite{Soos10a,Soos18a}. \paragraph{Researching for Solvers.} The 1st SAT Competition~\cite{BuningBuro93a} and the 2nd \mbox{DIMACS} Challenge~\cite{TrickChvatalCook93a} took place around 1992. However, no online resources detailing the solvers or their source codes are available. The earliest public collection of solvers that is still available online\footnote{See: \url{https://www.cs.ubc.ca/~hoos/SATLIB/solvers.html}} is the SATLIB Solver Collection~\cite{Hoos00b}. The collection contains DPLL-based implementations as well as stochastic local search solvers. DPLL-based implementations in the collection are \solver{Grasp}~\cite{MarquessilvaSakallah96}, \solver{NTAB}~\cite{CrawfordAuton93a}, \solver{POSIT}~\cite{Freeman95a}, various versions of \solver{REL\_SAT}~\cite{BayardoSchrag97a,BayardoPehoushek00}, which are also available on github\footnote{See: \url{https://github.com/roberto-bayardo/relsat}}, two versions of \solver{SATO}~\cite{Zhang97a}, and four versions of \solver{Satz}~\cite{LiAnbulagan97a}. Further, we asked colleagues for the source code of old solvers and received an even older version of \solver{Grasp} from~1996~\cite{Marques-Silva20a}. The era of CDCL solvers started in 2001~\cite{MoskewiczEtAl01}. In that era, successful solvers such as \solver{BerkMin}~\cite{GoldbergNovikov03}, \solver{siege}~\cite{Ryan03a}, and \solver{zChaff}~\cite{FuMahajanMalik04a} emerged. \solver{Siege}~\cite{Ryan03a} is publicly available as binaries in three versions from 2003 to 2004. We contacted colleagues about the source code of \solver{siege}, but the author retired, and the sources seem to be lost. For \solver{zChaff}~\cite{FuMahajanMalik04a}, even the source code is publicly available, in four versions from 2001 to 2007. Binaries of \solver{BerkMin} showed up in a backup of experiments on SAT solvers from earlier works. We contacted the authors about the source code but received no answer. A famous solver in this line is \solver{MiniSat}, which is available online\footnote{See: \url{http://minisat.se/MiniSat.html}} in various versions~\cite{EenSorensson08,EenSorensson04a,SorenssonEen05}. The development of \solver{MiniSat} started around 2003~\cite{EenSorensson04a} with the intention of creating a compact, readable, and efficient solver for the community. The earliest version online is from 2005, and the best-known and very popular version~2.2 is from 2008.
Another popular SAT solver is \solver{Glucose}~\cite{AudemardSimon12a}, which was developed to aggressively remove clauses that are not helpful during clause learning of the CDCL procedure. This results in an incomplete algorithm, as keeping learnt clauses is essential for completeness. We consider the version \solver{Glucose} syrup~4.2.1~\cite{glucose421}. A very popular, successful, and recent solver is \solver{Lingeling}~\cite{Biere17a}, which won several SAT competitions and the prize for the most innovative solver~\cite{BalyoBiereIser16a} in 2015. Two medalists of the SAT 2019 Race were \solver{CaDiCaL}~1.0.3~\cite{Biere19a} and a descendant of the solver \solver{MapleSAT}~\cite{LiangGaneschPaupartCzarnecki16}, namely \solver{MapleLCMDistChronoBTDL-v3 (MapleSat19)}~\cite{MapleLCMDistChronoBTDL}. \paragraph{Testing the Solvers.} In order to benchmark a solver, we first need to compile it or run the binary on a modern operating system, as otherwise there is no chance to get the solver running on modern hardware. First, we considered all solvers from the SATLIB collection. We were able to compile and successfully run the solvers \solver{Grasp}, \solver{Relsat}, \solver{Satz}, and \solver{SATO}. However, we had to modify the source codes and build files so that they would compile, due to stricter interpretations of language standards in modern compilers. Since the solvers were originally designed for 32bit Linux, we compiled the solvers on 32bit Linux and used them later on 64bit Linux via compatibility layers. While we were also successful in compiling solvers on 64bit systems, the 64bit binaries would often solve fewer instances or result in many segfaults. We suspect compatibility issues, as the developers of the old solvers either could not anticipate the datatype sizes of a future architecture or implemented sloppy memory management. All versions of the solver \solver{siege}, which were available as binaries, still ran on a modern Linux using the 32bit compatibility mode. We were successful in building all versions of the solver~\solver{zChaff}, both on 32bit as well as 64bit architectures. Unfortunately, the solver BerkMin does not run on modern or fairly recent Linux distributions. It turns out that the binary was compiled with an old gcc and linked against an old version of glibc, which we discovered in an old Red Hat Enterprise Linux, but we were unable to integrate it into a modern Linux distribution. We found that all modern solvers were well maintained and still compiled on 32 and 64bit Linux distributions as well as on a 64bit version of NetBSD. \paragraph{Final Teams.} In order to compare the theoretical advances in SAT solving between DPLL and CDCL from an abstract perspective and from the hand of a single programmer, we picked the implementations by Donald Knuth~\cite{Knuth15a}. The implementations represent particular time periods, more precisely, a DPLL solver (\solver{SAT10}), a DPLL look-ahead solver (\solver{SAT11}), and a CDCL solver (\solver{SAT13}), as well as a preprocessor (\solver{SAT12}). We still tested the old solvers \solver{Relsat}, \solver{Satz}, and \solver{SATO}, which resulted in fewer than~20 solved instances on our modern hardware for the best solver among them (SATO). Since it is theoretically well known that CDCL can be significantly faster than DPLL~\cite{PipatsrisawatDarwiche09a,Nordstrom15a}, this abstract comparison is already covered by Knuth's solvers.
As there has already been work on the technological advances of various techniques in DPLL and CDCL solvers, we focus on the more modern CDCL solvers for both teams. However, since the solver \solver{Grasp} decides a considerable number of instances and already implements conflict learning, we include \solver{Grasp} in Team~HW\xspace. Then, there are three solvers left for a team of solvers from about~20 years ago (Team~HW\xspace), namely, \solver{zChaff} (2001), \solver{siege} (2003), and an early version of \solver{MiniSat} (2005). We decided to include the earliest version of \solver{zChaff} (2001.2.17) in Team~HW\xspace, since the numbers of solved instances did not differ much between the 2001 and 2004 versions on our reference hardware. We preferred to include version~3 of the solver \solver{siege} (2003), as it solved about~12 more instances than version~1 (2001) on our modern reference hardware. We discarded \solver{MiniSat} as the youngest of the older solvers. We picked \solver{CaDiCaL} 1.0.3~\cite{Biere19a} and \solver{MapleLCMDistChronoBTDL-v3 (MapleSat19)}~\cite{MapleLCMDistChronoBTDL} for Team~SW\xspace (new solvers on old hardware) due to their good performance in the SAT 2019 Race. \solver{MapleSat19} won the SAT 2019 Race, and \solver{CaDiCal} scored second place. Since the slightly older solver \solver{Glucose} syrup~4.2.1~\cite{glucose421} solved about ten more instances than the solver \solver{Lingeling} 7d5db72~\cite{Biere17a} on our modern reference hardware, we also picked \solver{Glucose} for Team~SW\xspace. \subsection{Selecting the Environment: Operating System and Compiler} Since we are interested in comparing the teams, new solvers on old hardware versus old solvers on new hardware, we think it is only fair to also include advancements in kernel architecture, compilers, and operating systems in the consideration for new solvers. In any case, it is not possible to obtain ancient Linux or Unix distributions due to missing source code mirrors, and it is not possible to run such distributions on modern hardware due to the lack of modern chipset drivers in ancient kernels. Due to long-term support of hardware, we decided to favor Debian~10 codename buster (July 2019)~\cite{Carter19a} and to try NetBSD~9 (Feb. 2020)~\cite{NetBSD-www-team20a} as operating systems. We ran the experiments on Linux kernel version 4.19.0-8-686-pae. We use gcc~8.3.0 on Debian and NetBSD. Our modern hardware at the university was equipped with Linux Mint~19 codename Tara, kernel version 4.15.0-91, and gcc compiler version~7.5.0-3. \subsection{Selecting the Hardware} To have a wide variety of hardware, we started to gather old hardware from friends and colleagues. We collected ten systems over different generations, namely, systems containing a Pentium~II (1998), a Pentium~III (1999), an Ultra Sparc IIe (2001), a Pentium~IV (2002), a Pentium IV Prescott (2004), a Core2 Duo (2007), an i5 Nehalem (2009), a Xeon Haswell (2013), a Xeon Skylake (2017), and an i7 Icelake (2019). A colleague prepared a SPARCstation II (1995) and a SPARCstation Voyager (1995) for us. \paragraph{Technical Restrictions.} The selection of a benchmark set and operating systems restricted the space of possibilities for the old hardware. Ideally, we are interested in both the oldest possible and the youngest possible hardware.
In more detail, modern Linux distributions such as Debian~10 still support all x86-based (IA-32) i686 processors, including various AMD, Intel, and VIA processors. However, the i686 architecture limits experiments to Pentium~II processors (1997) or later~\cite{JacksonSchwarzMorris19a}. BSD distributions such as NetBSD~9 still support the Sparc64 architecture, which in theory allows running systems with SPARC64 (1995) and UltraSPARC IIe (1999) processors. We were able to run NetBSD~9 on a system with an Ultra Sparc~IIe, namely, the Sun Netra X1 from about 2000/2001. Since for some solvers we only had access to Linux or Solaris binaries, and since we were unable to set up Debian~10 or Solaris on the Netra system in reasonable time due to the required setup via serial LOM interface and network boot, we discarded the Sun system from our final hardware selection. It is well known that modern operating systems and SAT solvers are very memory-demanding~\cite{FichteMantheySchidler20a}, which requires at least 1GB of total RAM in the system. Since the L2 cache controllers of the Pentium~II only allow the use of 512MB of RAM and we could not get access to a system with a Pentium Pro processor, our oldest possible system (1999) contained a Pentium~III processor running at 467MHz, equipped with 1.5GB RAM. Hence, we picked this system to run the solvers of Team~SW\xspace. While the most modern CPU architecture we had access to was an i7 Icelake (2019), we decided to prefer the system running a Xeon Skylake due to its much larger caches, which are usually beneficial for SAT solving. Moreover, the modern system with the Xeon Skylake was bought in 2019 for dedicated benchmarking, while the i7 was just a small-form-factor barebone desktop computer for which we feared that high permanent load over months might significantly degrade performance due to overheating. The system for Team~HW\xspace then contained two Intel Xeon Silver 4112 CPUs (Skylake architecture) with a base frequency of 2.60GHz, equipped with 128GB RAM. We ran the experiments at the maximum frequency of 3.00GHz. Since the Netra X1 from 2000 was equipped with 2GB RAM and NetBSD still allowed running all source-code-based solvers, even the very modern ones, the Sun system serves as a point of reference. \subsection{The Final Stage: Experimental Setting and Limitations} We compare wall clock time and the number of timeouts. However, we avoid I/O access during solving whenever possible, i.e., we load instances into RAM if a network file system is involved and store instances uncompressed. We set a timeout of 900 seconds and limited the available RAM to 512MB per instance and solver. We also tested some solvers with the resident set size restricted to 1GB RAM and observed only a very small difference. Since Intel hardware around 2002 rarely had more than 512MB RAM available, we went for the 512MB setup. We follow standard guidelines for benchmarking~\cite{KouweAndriesseBos18a}. Note that we do not validate the correctness of the solver outputs. We set and enforce resource limits using the tool \textit{runsolver}~\cite{Roussel11a}.
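For illustration only, the kind of limits we enforce can be sketched in a few lines of Python; the actual experiments use \textit{runsolver}, and the solver path below is a placeholder.
\begin{verbatim}
import resource
import subprocess

def run_solver(solver, instance, timeout_s=900, mem_mb=512):
    """Run a SAT solver under a wall-clock timeout and an address-space
    limit; a sketch of the limits we enforce, not the runsolver tool."""
    def limit():                         # applied in the child process
        bytes_ = mem_mb * 1024 * 1024
        resource.setrlimit(resource.RLIMIT_AS, (bytes_, bytes_))
    try:
        proc = subprocess.run([solver, instance], preexec_fn=limit,
                              capture_output=True, timeout=timeout_s)
        return proc.returncode           # 10 = SAT, 20 = UNSAT by convention
    except subprocess.TimeoutExpired:
        return None                      # counted as a timeout

# run_solver("./cadical", "instance.cnf")
\end{verbatim}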
\begin{table} \centering \begin{tabular}{@{}c@{\hskip 1em}l@{\hskip 1em}c@{\hskip 1em}r@{\hskip 1em}H@{\hskip 1em}r@{}} \toprule & Solver & Year/Generation & \texttt{HW99} & Old2 & \texttt{HW19}\\ \midrule \hspace{1em} & \solver{MapleSat19} & 2019 & \multirow{3}{*}{\rotatebox[origin=c]{90}{{\tiny Team~SW\xspace}}~~} \textbf{77} & 72 & 195\\ & \solver{CaDiCal} & 2019 & \textbf{98} & & 190\\ & \solver{Glucose} & 2016 & \textbf{106} & 109 & 188\\ \midrule & \solver{vbest} & & \textbf{124} & -- & 198\\ & sum & & 281 & -- & 573\\ & avg (\%) & & 46.8 & -- & 95.5\\ \midrule \midrule & \solver{siege\_v3} & 2003 & 37 & -- & \multirow{3}{*}{\rotatebox[origin=c]{90}{{\tiny Team~HW\xspace}}~} \textbf{93}\\ & \solver{zChaff} & 2001 & 48 & 46 & \textbf{71} \\ & \solver{Grasp} & 1996 & 73 & -- & \textbf{76} \\ \midrule & \solver{vbest} & & 87 & -- & \textbf{124}\\ & sum & & 158 & -- & 240\\ & avg (\%) & & 26.3 & -- & 40.0\\ \midrule \midrule \multirow{6}{*}{\rotatebox[origin=c]{90}{Knuth}} & \solver{SAT13+12} & CDCL+P & 31 & & 104 \\ & \solver{SAT13} & CDCL & 31 & & 98 \\ & \solver{SAT11+12} & LH+P & 8 & & 15 \\ & \solver{SAT11} & LH & 15 & & 20 \\ & \solver{SAT10+12} & DPLL+P& 4 & & 45 \\ & \solver{SAT10} & DPLL & 6 & & 4 \\ \midrule \midrule \multirow{8}{*}{\rotatebox[origin=c]{90}{Other Solvers}} & \solver{Lingeling} & 2019 & \textbf{70} & & 179\\ & \solver{Lingeling-aqw-27d9fd4} & 2013 & \textbf{87} & & 186\\ & \solver{Lingeling-276} & 2011 & \textbf{83} & & 177\\ & \solver{MiniSat} & 2008 & 84 & 84 & 178\\ & \solver{siege\_v4} & 2004 & 45 & -- & 93\\ & \solver{siege\_v1} & 2003 & 33 & -- & 81\\ & \solver{sato} & 2000 & 15 & -- & 19\\ & \solver{satz} & 1998 & 7 & 4 & 9\\ \bottomrule \end{tabular} \caption{Overview of the number of solved instances for the various solvers on our old and new hardware. \texttt{HW99} represents the number of solved instances on the old hardware, and \texttt{HW19} the number of solved instances on the new hardware. \solver{vbest} represents the virtual best solver of a group, which we obtain by taking all instances that have been solved by any of the solvers in the group listed above. } \label{tab:results} \end{table} \begin{figure}[t] \centering \resizebox{1\columnwidth}{!}{% \includegraphics{cactus_wall.pdf} } \caption{Runtime for the SAT solvers on all considered instances. The x-axis refers to the number of instances, and the y-axis depicts the runtime sorted in ascending order for each solver individually. vbest refers to the virtual best solver, i.e., we take the union over the solved instances for each team and consider the minimum runtime for each instance. In the legend, $[X]$ refers to a number of~$X$ solved instances. \texttt{HW19} refers to the new hardware, and \texttt{HW99} refers to the old hardware. \texttt{SAT19} refers to a modern solver on modern hardware, which one can consider as a potential baseline.} \label{fig:cactus} \end{figure} \section{The Trophies} \label{sec:results} Table~\ref{tab:results} gives an overview of the number of solved instances for each solver and the two hardware generations. Figure~\ref{fig:cactus} illustrates the runtime of the selected solvers and hardware as a cactus plot. Our results and the gathered source codes are all publicly available~\cite{FichteHecherSzeider20a}. Note that we report only on the two Intel-based hardware generations in this table. The results on the Ultra Sparc~IIe system look very similar; usually, a few more instances were solved.
Detailed data can be found in the supplemental material. \subsection{Results} When we consider the number of solved instances on the hardware from 2019, \solver{MapleSat19} solves 195 instances. Recall that Team~HW\xspace consists of the old solvers on modern hardware. They solve~93 instances (\solver{siege\_v3}), 76 instances (\solver{Grasp}), and 71 instances (\solver{zChaff}). On average, they solve about~80 instances (40\% of the instances) with a standard deviation of about 12. However, the virtual best solver (\solver{vbest}) for Team~HW\xspace solves 124 instances, i.e., about 62\% of the instances. The virtual best solver is the virtual solver that we obtain by taking the union of the instances solved by the three solvers and keeping, for each instance, the best runtime. Team~SW\xspace consists of the new solvers on old hardware. They solve 77 instances (\solver{MapleSat19}), 98 instances (\solver{CaDiCal}), and 106 instances (\solver{Glucose}). On average, they solve about 94 instances (46.8\% of the instances) with a standard deviation of 15. Their virtual best solver (\solver{vbest}) also solves 124 instances, i.e., about 62\% of the instances. When considering the results of the solvers \solver{MapleSat19}, \solver{CaDiCal}, and \solver{Glucose} on modern hardware, they solve 191 instances on average with a very low standard deviation of~3.6 instances. When considering the results of the solvers \solver{siege\_v3}, \solver{zChaff}, and \solver{Grasp} on old hardware, they solve on average about~53 instances (26\% of the instances) with a standard deviation of about 18. \subsection{Discussion of the Results} \paragraph{Comparing the Teams.} The solver \solver{MapleSat19}, which is the best solver from the 2019 SAT Race, solves, as expected, the highest number of instances on the new hardware. We are not surprised that neither Team~SW\xspace nor Team~HW\xspace nor their virtual best solvers get anywhere close to this result. In view of Table~\ref{tab:results} and Figure~\ref{fig:cactus}, there are plenty of ways to compare the two teams. One can carry out (i)~an individual comparison by the best (vbest), worst, or average solver, or even consider the individual solvers in direct comparison to each other, but one could also (ii)~consider the virtual best solver for each team. If we choose Method~(i) and individually compare the solvers, Team~SW\xspace clearly wins for the measures best, worst, and average solver. We can also carry out a one-by-one comparison and compare the solvers from each team individually with the solvers from the other team. Then, we take the number of solved instances for each solver~X from Team~SW\xspace against each solver~Y from Team~HW\xspace, and we give X a point if it solves more instances than~Y, or give a point to Y in the opposite case. Then, \solver{Glucose} obtains 3 points (because it solves more instances than \solver{siege\_v3}, \solver{zChaff}, and \solver{Grasp}), \solver{CaDiCal} obtains 3 points, and \solver{MapleSat19} obtains 2 points (it solves more instances than \solver{zChaff} and \solver{Grasp}, but fewer than \solver{siege\_v3}), which totals 8 points for Team~SW\xspace. In comparison, Team~HW\xspace receives 0 points for \solver{zChaff}, 0 points for \solver{Grasp}, and 1 point for \solver{siege\_v3}, which totals 1 point. Hence, Team~SW\xspace also wins. Nevertheless, if we consider the virtual best solvers, Team~HW\xspace performs as well as Team~SW\xspace.
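The one-by-one scoring can be checked directly from the counts in Table~\ref{tab:results}:
\begin{verbatim}
# Pairwise scoring from the solved-instance counts in Table 2:
# Team SW on the 1999 hardware, Team HW on the 2019 hardware.
team_sw = {"Glucose": 106, "CaDiCal": 98, "MapleSat19": 77}
team_hw = {"Grasp": 76, "zChaff": 71, "siege_v3": 93}

points_sw = sum(x > y for x in team_sw.values() for y in team_hw.values())
points_hw = sum(y > x for x in team_sw.values() for y in team_hw.values())
print(points_sw, points_hw)  # 8 1 -- Team SW wins 8 of the 9 pairings
\end{verbatim}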
\paragraph{Notable Observations.} We found it surprising that the winner of the 2019 SAT Race (\solver{MapleSat19/HW99}) solves fewer instances than the best solver (\solver{siege\_v3/HW19}) of Team~HW\xspace. We currently do not have a good explanation for why \solver{MapleSat19} solves so few instances on the old hardware, namely 21 fewer than \solver{CaDiCal} and 29 fewer than \solver{Glucose}. Since we observed a similar behavior with the latest implementation of Lingeling but not with CaDiCal, which also implements inprocessing techniques, we suspect that the advanced data structures in the solvers, the learning and restarting policies, and strong tuning towards modern hardware might be contributing factors. We found it interesting that the old solvers \solver{siege\_v3}, \solver{zChaff}, and \solver{Grasp} still solve a considerable number of instances on the new hardware. In particular, the solver \solver{siege\_v3} seems to benefit substantially from the new hardware, while \solver{Grasp} gains almost no benefit from it. When we consider the implementations by Knuth, it is particularly remarkable that the DPLL solver with preprocessing on new hardware, with 45 solved instances, overtakes the CDCL solver on old hardware, which solves 31 instances with or without preprocessing. \subsection{Summary} When reviewing the results, we believe that our test setting revealed that both Team~SW\xspace and Team~HW\xspace perform in a similar range. If we compare individually, Team~SW\xspace wins, which is also well visible in the cactus plot in Figure~\ref{fig:cactus}. However, if we consider the virtual best solvers, Team~HW\xspace performs equally well. This leaves us with the conclusion that the last decades have brought enormous technological progress and innovation for SAT solving, and that the two main factors, (i)~\emph{hardware advancement} and (ii)~\emph{algorithm advancement}, both have a considerable influence. \section{Conclusion} We compare the impact of hardware and algorithm advancement on a genuine problem, namely, the propositional satisfiability problem (SAT). We describe in detail the decisions and challenges in turning a thought experiment into an actual experiment between old and new solvers on new and old hardware with a time difference of about two decades. Our experiment's outcome confirms that modern algorithms have a strong influence on the performance of solvers, even when they run on old hardware. Nonetheless, solving significantly profits from technological advancement in hardware development, and there is no clear winner between Team~SW\xspace (new solvers on old hardware) and Team~HW\xspace (old solvers on new hardware) in our time leap challenge. Overall, both teams perform in a similar range with a slight advantage for Team~SW\xspace (new solvers on old hardware), which leads us to the conclusion that both hardware and software advances in science and industry have a mutual influence on modern solving. Hence, algorithm advancements are at least as important for the field of SAT solving as hardware advancements. Further, algorithm engineering is becoming more important. During our research, we noticed that long-term reproducibility highly depends on available source code or static binaries with few dependencies. Further, it turned out to be helpful if the setup of a solver requires few additional system tools and few dependencies on external libraries.
Dependencies within the operating system and the source codes were usually not the problem; rather, architectural dependencies would prevent running the solvers. From our archaeological investigations, we suggest avoiding any external system in the setup of future long-term experiments, i.e., tight dependencies on kernel versions or on software containers such as Docker. Still, one uniform shared system for the entire community, such as the SAT heritage project~\cite{AudemardPauleveSimon20a}, might prove helpful if it is also adopted by competition organizers. Further, we think that public data libraries would be beneficial for understanding long-term advancements, rather than just source code repositories of private companies or university webpages. One could post an open call and repeat the experiment with any solver. However, we believe that this would probably challenge developers of modern solvers to optimize their implementations for old hardware, which would result in a distorted picture for old solvers. Hence, we do not primarily intend to repeat the experiments in the near future~\cite{Munroe19a}. We hope that our work stimulates others to also set up time leap challenges in their fields, such as for stochastic SAT solvers, CSP solvers, MaxSAT solvers, and ILP solvers. \subsection*{Acknowledgements} We would like to thank several colleagues. Dave Mitchell and Jo{\~a}o Marques-Silva supported us with source codes from their old disks or mailboxes. Uwe Pretzsch and Siegmar Schoene helped to organize old hardware, and Toni Pisjak maintained the modern benchmarking system that we used.
{ "redpajama_set_name": "RedPajamaArXiv" }